The BD Veritor System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Outside the U.S., a negative test is presumptive and it is recommended that these results be confirmed by viral culture or a molecular assay cleared for diagnostic use in the country of use. FDA has not cleared this device for use outside of the U.S. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.
The BD Veritor™ Flu A+B test is an immunochromatographic assay for the qualitative detection of influenza A and B viral antigens in respiratory specimens. The patient specimen is added to a reaction tube prefilled with RV Reagent C, gently mixed, and then added to the test device. RV Reagent C contains mucolytic agents that break down mucus in the patient specimen, thereby exposing viral antigens and enhancing detection in the assay device. Processed specimens are expressed through a filter tip into a single sample well on the BD Veritor™ Flu A+B test device.
After addition to the test device, any influenza A or influenza B viral antigens present in the specimen bind to anti-influenza antibodies conjugated to detector particles on the BD Veritor™ Flu A+B test strip. The antigen-conjugate complexes migrate across the test strip to the reaction area and are captured by a line of antibody striped on the membrane. The BD Veritor™ Flu A+B test devices are designed with five spatially distinct zones including positive and negative control line positions, separate test line positions for the target analytes, and a background zone. The test lines for the target analytes are labeled on the test device as 'A' for the Flu A position and 'B' for the Flu B position. The onboard positive control ensures the sample has flowed correctly and is indicated on the test device as 'C'. Two of the five distinct zones on the test device are not labeled. These two zones are an onboard negative control line and an assay background zone. The onboard negative control zone addresses non-specific signal generation. The remaining zone is used to measure the assay background.
The BD Veritor™ Flu A+B assay incorporates an active negative control feature in each test to identify and compensate for sample-related, nonspecific signal generation. The BD Veritor™ System Reader uses a proprietary algorithm that subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant test line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant test line signal is below the cutoff, the specimen scores as negative. Use of the active negative control feature allows the BD Veritor™ System Reader to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal.
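To make the reader's interpretation step concrete, the sketch below shows negative-control subtraction followed by a cutoff comparison. It is a minimal illustration of the approach described above, not BD's proprietary algorithm; the zone field names, cutoff values, and the `interpret` function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StripSignals:
    """Hypothetical intensity readings from the five zones of a Veritor-style
    test strip. Field names, units, and values are illustrative only."""
    flu_a: float          # signal at the 'A' test line
    flu_b: float          # signal at the 'B' test line
    pos_control: float    # signal at the 'C' (positive control) line
    neg_control: float    # signal at the unlabeled negative control line
    background: float     # signal in the background zone

# Hypothetical cutoffs; the actual assay cutoffs are proprietary.
FLU_A_CUTOFF = 10.0
FLU_B_CUTOFF = 10.0
MIN_CONTROL_SIGNAL = 5.0

def interpret(strip: StripSignals) -> dict:
    """Illustrative interpretation: subtract nonspecific (negative-control)
    signal from each test line and compare the corrected signal to a cutoff."""
    if strip.pos_control < MIN_CONTROL_SIGNAL:
        # Insufficient flow: the onboard positive control did not develop.
        return {"valid": False, "flu_a": None, "flu_b": None}

    corrected_a = strip.flu_a - strip.neg_control
    corrected_b = strip.flu_b - strip.neg_control

    return {
        "valid": True,
        "flu_a": "positive" if corrected_a >= FLU_A_CUTOFF else "negative",
        "flu_b": "positive" if corrected_b >= FLU_B_CUTOFF else "negative",
    }

# Illustrative reading (all numbers hypothetical): Flu A positive, Flu B negative.
print(interpret(StripSignals(flu_a=25.0, flu_b=3.0, pos_control=12.0,
                             neg_control=2.0, background=1.0)))
```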
The following is an analysis of the provided text, focusing on acceptance criteria and the study supporting device performance:
Acceptance Criteria and Reported Device Performance
The provided document is a 510(k) premarket notification for a CLIA-waived kit, specifically for a labeling change to add strain reactivity data. It does not contain detailed acceptance criteria for the initial device's performance or a full study report proving those criteria were met for the initial clearance. However, it does reference "Performance characteristics for influenza A and B were established during January through March of 2011".
Given that specific performance values (sensitivity, specificity, etc.) and explicit acceptance criteria are not presented in this document for the initial device clearance, I cannot create a table of acceptance criteria and reported device performance.
The new information being added to the labeling is strain reactivity data for specific influenza strains. This isn't a performance claim against a general population but rather a demonstration of the device's ability to detect particular viral subtypes.
Regarding the studies mentioned in the document:
This document describes an amendment to an already cleared device (BD Veritor System for Rapid Detection of Flu A + B CLIA waived Kit). The primary focus of this specific submission (K160161) is to add additional strain reactivity data to the labeling, not to re-evaluate the device's overall clinical performance.
The document states: "The labeling has been changed to reflect the addition of strain reactivity data for the following strains: A/Northern Pintail/Washington/40964/2014 (H5N2) and A/Gyrfalcon/Washington/41088-6/2014 (H5N8)." It explicitly notes that "Additions made to the labeling to add additional strain testing did not change the intended use of the device or the fundamental scientific technology."
Therefore, the "study" described or referenced in this particular document primarily pertains to:
- Strain Reactivity Testing: Testing the existing device with specific novel influenza strains (H5N2 and H5N8) to confirm detection and add this information to the labeling. Details of this specific testing (sample size, ground truth, etc.) are not provided in this document.
The document also refers to the original performance characteristics established in 2011: "Performance characteristics for influenza A and B were established during January through March of 2011 when influenza viruses A/2009 H1N1, A/H3N2, B/Victoria lineage were the predominant influenza viruses in circulation..." However, the details of that study are not provided here.
Based on the provided text, I can only address aspects relevant to the type of information requested, indicating when details are absent or not applicable to this specific submission.
- A table of acceptance criteria and the reported device performance
- Not provided in this document. This document focuses on supplemental strain reactivity data for an already cleared device, not the initial clinical performance metrics and acceptance criteria.
- Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- For the new strain reactivity data (H5N2, H5N8): Not specified in this document.
- For the original performance characteristics (2011): Not specified in this document. The document mentions "January through March of 2011" and predominant influenza viruses, suggesting a prospective or retrospective clinical study, but no details on sample size or data origin (country) are given here.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., a radiologist with 10 years of experience)
- Not specified in this document. For influenza rapid tests, ground truth for clinical studies would typically be established by viral culture or a molecular assay, not human expert consensus like in imaging. For strain reactivity, it would be based on positive controls of the specific strains.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable/Not specified. Adjudication methods are typically used when human interpretation of a diagnostic is the "ground truth" or part of the comparison. For assays like this, the reference method (e.g., PCR, viral culture) provides the definitive result.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
- Not applicable. This device is a rapid chromatographic immunoassay, not an AI-powered diagnostic that assists human readers. It's a standalone test with an optical reader.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, this is a standalone device. The BD Veritor™ System Reader uses a proprietary algorithm to interpret the test strip results. The text describes this algorithm: "The BD Veritor™ System Reader uses a proprietary algorithm that subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant test line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant test line signal is below the cutoff, the specimen scores as negative." This is an automated interpretation without human intervention in the result reading process.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not explicitly stated in this document for the original studies. For influenza diagnostics, the common ground truth methods are:
- Viral Culture: Considered the "gold standard" for live virus detection.
- FDA-cleared molecular assay (e.g., PCR): Highly sensitive and specific.
- The "Indications for Use" section and "Intended Use" section state: "A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay." This strongly implies that these methods were used as the reference standard (ground truth) for the original performance evaluation.
- For the new strain reactivity, the ground truth would be the known presence or absence of the specific influenza strains in controlled samples.
- The sample size for the training set
- Not specified in this document. The document discusses clinical performance characteristics and new strain reactivity, not the development of a machine learning model with distinct training/test sets in the AI sense. The "proprietary algorithm" for reading the strip would have been developed and "trained" on a dataset of test strip results, but details are not provided.
- How the ground truth for the training set was established
- Not specified in this document. As mentioned above, this isn't an AI/ML development context where human experts label data for an algorithm. The "ground truth" for calibrating the reader's algorithm would be derived from known positive and negative samples, likely confirmed by a reference method like viral culture or PCR.
§ 866.3328 Influenza virus antigen detection test system.
(a) Identification. An influenza virus antigen detection test system is a device intended for the qualitative detection of influenza viral antigens directly from clinical specimens in patients with signs and symptoms of respiratory infection. The test aids in the diagnosis of influenza infection and provides epidemiological information on influenza. Due to the propensity of the virus to mutate, new strains emerge over time which may potentially affect the performance of these devices. Because influenza is highly contagious and may lead to an acute respiratory tract infection causing severe illness and even death, the accuracy of these devices has serious public health implications.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) The device's sensitivity and specificity performance characteristics or positive percent agreement and negative percent agreement, for each specimen type claimed in the intended use of the device, must meet one of the following two minimum clinical performance criteria:
(i) For devices evaluated as compared to an FDA-cleared nucleic acid based-test or other currently appropriate and FDA accepted comparator method other than correctly performed viral culture method:
(A) The positive percent agreement estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 80 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 70 percent.
(B) The negative percent agreement estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 95 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 90 percent.
(ii) For devices evaluated as compared to correctly performed viral culture method as the comparator method:
(A) The sensitivity estimate for the device when testing for influenza A must be at the point estimate of at least 90 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 80 percent. The sensitivity estimate for the device when testing for influenza B must be at the point estimate of at least 80 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 70 percent.
(B) The specificity estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 95 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 90 percent.
(2) When performing testing to demonstrate the device meets the requirements in paragraph (b)(1) of this section, a currently appropriate and FDA accepted comparator method must be used to establish assay performance in clinical studies.
(3) Annual analytical reactivity testing of the device must be performed with contemporary influenza strains. This annual analytical reactivity testing must meet the following criteria:
(i) The appropriate strains to be tested will be identified by FDA in consultation with the Centers for Disease Control and Prevention (CDC) and sourced from CDC or an FDA-designated source. If the annual strains are not available from CDC, FDA will identify an alternative source for obtaining the requisite strains.
(ii) The testing must be conducted according to a standardized protocol considered and determined by FDA to be acceptable and appropriate.
(iii) By July 31 of each calendar year, the results of the last 3 years of annual analytical reactivity testing must be included as part of the device's labeling. If a device has not been on the market long enough for 3 years of annual analytical reactivity testing to have been conducted since the device received marketing authorization from FDA, then the results of every annual analytical reactivity testing since the device received marketing authorization from FDA must be included. The results must be presented as part of the device's labeling in a tabular format, which includes the detailed information for each virus tested as described in the certificate of authentication, either by:
(A) Placing the results directly in the device's § 809.10(b) of this chapter compliant labeling that physically accompanies the device in a separate section of the labeling where the analytical reactivity testing data can be found; or
(B) In the device's label or in other labeling that physically accompanies the device, prominently providing a hyperlink to the manufacturer's public Web site where the analytical reactivity testing data can be found. The manufacturer's home page, as well as the primary part of the manufacturer's Web site that discusses the device, must provide a prominently placed hyperlink to the Web page containing this information and must allow unrestricted viewing access.
(4) If one of the actions listed at section 564(b)(1)(A)-(D) of the Federal Food, Drug, and Cosmetic Act occurs with respect to an influenza viral strain, or if the Secretary of Health and Human Services (HHS) determines, under section 319(a) of the Public Health Service Act, that a disease or disorder presents a public health emergency, or that a public health emergency otherwise exists, with respect to an influenza viral strain:
(i) Within 30 days from the date that FDA notifies manufacturers that characterized viral samples are available for test evaluation, the manufacturer must have testing performed on the device with those viral samples in accordance with a standardized protocol considered and determined by FDA to be acceptable and appropriate. The procedure and location of testing may depend on the nature of the emerging virus.
(ii) Within 60 days from the date that FDA notifies manufacturers that characterized viral samples are available for test evaluation and continuing until 3 years from that date, the results of the influenza emergency analytical reactivity testing, including the detailed information for the virus tested as described in the certificate of authentication, must be included as part of the device's labeling in a tabular format, either by:
(A) Placing the results directly in the device's § 809.10(b) of this chapter compliant labeling that physically accompanies the device in a separate section of the labeling where analytical reactivity testing data can be found, but separate from the annual analytical reactivity testing results; or
(B) In a section of the device's label or in other labeling that physically accompanies the device, prominently providing a hyperlink to the manufacturer's public Web site where the analytical reactivity testing data can be found. The manufacturer's home page, as well as the primary part of the manufacturer's Web site that discusses the device, must provide a prominently placed hyperlink to the Web page containing this information and must allow unrestricted viewing access.
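As an illustration of how the minimum clinical performance criteria in paragraph (b)(1)(i) above can be checked once agreement counts are available, here is a short Python sketch. It is an assumption-laden example: the Wilson score interval is used for the 95 percent confidence bound (one common choice; the regulation does not prescribe a specific interval method), and the counts in the usage example are made up.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the two-sided 95% Wilson score confidence interval
    for a binomial proportion (one common interval choice; others exist)."""
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

def meets_b1i_criteria(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Check the 866.3328(b)(1)(i) thresholds for a device compared to an
    FDA-cleared nucleic acid based test: PPA point estimate >= 0.80 with a
    95% CI lower bound >= 0.70, and NPA point estimate >= 0.95 with a
    95% CI lower bound >= 0.90."""
    ppa = tp / (tp + fn)          # positive percent agreement
    npa = tn / (tn + fp)          # negative percent agreement
    ppa_lb = wilson_lower_bound(tp, tp + fn)
    npa_lb = wilson_lower_bound(tn, tn + fp)
    return {
        "ppa": ppa, "ppa_lower_95": ppa_lb,
        "npa": npa, "npa_lower_95": npa_lb,
        "ppa_ok": ppa >= 0.80 and ppa_lb >= 0.70,
        "npa_ok": npa >= 0.95 and npa_lb >= 0.90,
    }

# Made-up agreement counts, not from the submission: 85/100 positives and
# 380/390 negatives agree with the comparator, which passes both criteria.
print(meets_b1i_criteria(tp=85, fn=15, tn=380, fp=10))
```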