
Search Results

Found 14 results

510(k) Data Aggregation

    K Number
    K250398
    Date Cleared
    2025-07-03

    (141 days)

    Product Code
    PSZ
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K250398
    Trade/Device Name: Innovita Flu A/B Antigen Rapid Test
    Regulation Number: 21 CFR 866.3328
    Classification: Class II

    Product Code: PSZ | CFR #: 21 CFR 866.3328
    Intended Use

    The Innovita Flu A/B Antigen Rapid Test is a rapid chromatographic immunoassay intended for the qualitative detection and differentiation of influenza A and B viral nucleoprotein antigens directly from nasopharyngeal swabs from patients with signs and symptoms of respiratory infection.

    The test is intended for use as an aid in the differential diagnosis of acute influenza A and B viral infections. The test is not intended to detect influenza C antigens. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions.

    Performance characteristics for influenza A were established during the December 2023 through July 2024 influenza season, when influenza A/H1N1pdm09, influenza A/H3N2, and influenza B/Victoria viruses were the predominant influenza viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.

    If an infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The Innovita Flu A/B Antigen Rapid Test is a double antibody sandwich immunoassay-based test. The test device consists of the specimen zone and the test zone. The specimen zone contains monoclonal antibody against the Flu A/Flu B antigen labeled with latex microspheres and chicken IgY antibody conjugated with latex microspheres. The test line contains the other monoclonal antibody against Flu A/Flu B antigen. The control line contains rabbit anti-chicken IgY antibody.

    After the specimen is applied to the specimen well of the device, antigen in the specimen forms an immune complex with the binding reagent in the specimen zone. The complex then migrates to the test zone. Each test line in the test zone contains antibody specific to the corresponding pathogen. If the concentration of the specific antigen in the specimen is above the LoD for Flu A or Flu B, it is captured at the Flu A or Flu B line and forms a red-purple line; if the concentration is below the LoD, no red-purple line forms. The test also contains an internal control system: a red-purple control line (C) should always appear after the test is completed, and absence of the control line indicates an invalid result. The product contents are listed below.

    | Contents | Amount | Description |
    |---|---|---|
    | Test cassette | 25 | Each sealed foil pouch contains one test device and one desiccant |
    | Extraction diluent | 25 | Vials with 500 microliters of solution, mainly composed of Tris-HCl buffer (pH 8.4), NaCl, and Triton X-100 |
    | Swab | 25 | Nasopharyngeal swab |
    | Influenza A Positive Control | 1 | Swab coated with non-infectious recombinant influenza A antigen |
    | Influenza B Positive Control | 1 | Swab coated with non-infectious recombinant influenza B antigen |
    | Negative Control | 1 | Swab contains inactivated Staphylococcus aureus |
    | Package Insert | 1 | |
    | Quick Reference | 1 | |
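
    The visual read-out described in the Device Description above reduces to a simple decision rule. The following is a minimal, hypothetical sketch of that logic; the function and its inputs are illustrative and not part of the 510(k) submission.

```python
# Minimal, hypothetical sketch of the visual read-out logic described in the
# Device Description (illustrative only, not part of the 510(k) submission).
def interpret_cassette(flu_a_line: bool, flu_b_line: bool, control_line: bool) -> str:
    if not control_line:
        # No red-purple control line (C): internal control failed.
        return "INVALID"
    if flu_a_line and flu_b_line:
        return "FLU A AND FLU B POSITIVE"
    if flu_a_line:
        return "FLU A POSITIVE"
    if flu_b_line:
        return "FLU B POSITIVE"
    # Test lines only form when antigen is above the LoD, so absence of both
    # test lines is reported as a presumptive negative.
    return "NEGATIVE (PRESUMPTIVE)"

print(interpret_cassette(flu_a_line=True, flu_b_line=False, control_line=True))
```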
    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance for Innovita Flu A/B Antigen Rapid Test

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state pre-defined acceptance criteria for the clinical study's Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA). However, the reported device performance is presented. For analytical performance (LoD, inclusivity, cross-reactivity, interfering substances), the acceptance criterion is generally 100% agreement or no interference/cross-reactivity as implied by the reported results.

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Influenza A) | Reported Device Performance (Influenza B) |
    |---|---|---|---|
    | Analytical Performance | | | |
    | Limit of Detection (LoD) | Detect with high positivity rate (e.g., ≥95%) | 96.67% - 100% at specified LoDs | 98.33% - 100% at specified LoDs |
    | Inclusivity | 100% detection at near-LoD concentrations | 100% detection for all tested strains | 100% detection for all tested strains |
    | Specificity / Cross-Reactivity | No false positives | 100% (no cross-reactivity for 49 tested organisms) | 100% (no cross-reactivity for 49 tested organisms) |
    | Microbial Interference | No interference | 100% (no interference for 49 tested organisms) | 100% (no interference for 49 tested organisms) |
    | Interfering Substances | No false positives/negatives | 100% (no interference for 30 tested substances) | 100% (no interference for 30 tested substances) |
    | Biotin Interference | No interference | No interference up to 4000 ng/mL | No interference up to 4000 ng/mL |
    | Precision/Reproducibility | 100% agreement between expected and read result | 100% agreement | 100% agreement |
    | Clinical Performance | | | |
    | Positive Percent Agreement (PPA) | Not explicitly stated; typically a high percentage (e.g., >80%) | 85.7% (95% CI: 80.6%-89.5%) | 85.7% (95% CI: 72.2%-93.3%) |
    | Negative Percent Agreement (NPA) | Not explicitly stated; typically a high percentage (e.g., >95%) | 99.5% (95% CI: 98.8%-99.8%) | 100% (95% CI: 99.6%-100%) |
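
    The PPA and NPA values above are simple agreement proportions with 95% confidence intervals (most likely score-type intervals, although the CI method is not stated in the excerpt). As a rough illustration of how such figures are computed, here is a minimal sketch using a Wilson score interval; the counts are illustrative placeholders, since the 510(k) summary reports only the percentages, not the underlying 2x2 table.

```python
import math

def agreement_with_wilson_ci(n_agree: int, n_total: int, z: float = 1.96):
    """Agreement proportion (PPA or NPA) with a Wilson score 95% CI.

    PPA: device-positive among comparator-positive specimens.
    NPA: device-negative among comparator-negative specimens.
    """
    p = n_agree / n_total
    denom = 1 + z**2 / n_total
    centre = (p + z**2 / (2 * n_total)) / denom
    half = z * math.sqrt(p * (1 - p) / n_total + z**2 / (4 * n_total**2)) / denom
    return p, centre - half, centre + half

# Illustrative placeholder counts only; the summary reports percentages,
# not the underlying 2x2 table.
ppa, lo, hi = agreement_with_wilson_ci(n_agree=240, n_total=280)
print(f"PPA = {ppa:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```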

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size (Clinical Study): 1101 evaluable nasopharyngeal swab specimens.
    • Data Provenance:
      • Country of Origin: U.S.
      • Study Type: Prospective clinical study. Specimens were collected from subjects with flu-like symptoms during the 2023-2024 influenza season.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The ground truth for the clinical test set was established using an "FDA-cleared influenza A and B molecular assay" as the comparator method. The document does not specify the number of human experts involved in establishing this ground truth or their qualifications. The ground truth relies on the performance characteristics of the molecular assay itself.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method (e.g., 2+1, 3+1) for the clinical test set results. The comparison appears to be a direct one-to-one comparison between the Innovita Flu A/B Antigen Rapid Test result and the result from the FDA-cleared molecular assay.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done comparing human readers with AI vs. without AI assistance. This device is a rapid diagnostic test (RDT) with visual interpretation. The precision study involved "untrained operators" to assess reproducibility, but it was not a comparative effectiveness study with or without AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    Yes, this study primarily assesses the standalone performance of the rapid diagnostic test device. While human operators interpret the visual results, the performance metrics (PPA, NPA) are based on the device's output compared to the ground truth, without an explicit human-in-the-loop component being evaluated for its improvement with AI, as there is no AI component mentioned for interpretation.

    7. Type of Ground Truth Used

    The ground truth for the clinical study was: Comparator Method (FDA-cleared influenza A and B molecular assay).

    8. Sample Size for the Training Set

    The document does not explicitly state a sample size for a "training set." Rapid diagnostic tests typically do not involve machine learning algorithms that require a distinct training set in the same way an AI/ML device would. The development of the assay (e.g., antibody selection, optimization) is a different process from training a machine learning model.

    9. How the Ground Truth for the Training Set Was Established

    Since a "training set" in the context of machine learning is not explicitly mentioned or applicable for this type of rapid diagnostic test, the method for establishing its ground truth is not applicable/not described. The analytical studies (LoD, inclusivity, specificity) use laboratory-prepared, characterized samples with known viral concentrations or presence/absence, serving as the "ground truth" for those specific analytical evaluations.


    K Number
    K241188
    Date Cleared
    2025-04-18

    (354 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Trade/Device Name: Acucy Influenza A&B Test with the Acucy 2 System
    Regulation Number: 21 CFR 866.3328
    "...the qualitative detection of influenza A and B nucleoprotein antigens..."
    "...professional and healthcare professional in vitro diagnostic use only..."

    Intended Use

    The Acucy Influenza A&B Test is a rapid chromatographic immunoassay for the qualitative detection and differentiation of influenza A and B viral nucleoprotein antigens directly from anterior nasal and nasopharyngeal swabs from patients with signs and symptoms of respiratory infection. The test is intended for use with the Acucy or Acucy 2 Reader as an aid in the diagnosis of influenza A and B viral infections. The test is not intended for the detection of influenza C viruses. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions.

    Performance characteristics for influenza A were established during the 2017-2018 influenza season when influenza A/H3N2 and A/H1N1pdm09 were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.

    If an infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The Acucy Influenza A&B Test allows for the differential detection of influenza A and influenza B antigens, when used with the Acucy 2 Reader. The patient sample is placed in the Extraction Buffer vial, during which time the virus particles in the sample are disrupted, exposing internal viral nucleoproteins. After disruption, the sample is dispensed into the Test Cassette sample well. From the sample well, the sample migrates along the membrane surface. If influenza A or B viral antigens are present, they will form a complex with mouse monoclonal antibodies to influenza A and/or B nucleoproteins conjugated to colloidal gold. The complex will then be bound by a rat anti-influenza A and/or mouse anti-influenza B antibody coated on the nitrocellulose membrane.

    Depending upon the operator's choice, the Test Cassette is either placed inside the Acucy 2 Reader for automatically timed development mode (WALK AWAY Mode) or placed on the counter or bench top for a manually timed development and then placed into Acucy 2 Reader to be scanned (READ NOW Mode).

    The Acucy 2 Reader will scan the Test Cassette and measure the absorbance intensity by processing the results using method-specific algorithms. The Acucy 2 Reader will display the test results POS (+), NEG (-), or INVALID on the screen. The results can also be automatically printed on the optional Printer if this option is selected.
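
    Conceptually, the reader's POS (+)/NEG (-)/INVALID output is a threshold comparison on the measured line intensities. The sketch below is an assumption-laden illustration, not the method-specific algorithm itself; the cut-off values shown are the analytical cut-offs cited later in this record (6.4 mABS for the Flu A line, 5.4 mABS for the Flu B line), while the function name and validity check are hypothetical.

```python
# Hypothetical illustration of a threshold-based reader decision. The cut-off
# values are the analytical cut-offs cited later in this record (6.4 mABS for
# the Flu A line, 5.4 mABS for the Flu B line); the function name and the
# validity check are assumptions, not the method-specific algorithm itself.
FLU_A_CUTOFF_MABS = 6.4
FLU_B_CUTOFF_MABS = 5.4

def reader_decision(flu_a_mabs: float, flu_b_mabs: float, control_ok: bool) -> dict:
    if not control_ok:
        return {"Flu A": "INVALID", "Flu B": "INVALID"}
    return {
        "Flu A": "POS (+)" if flu_a_mabs > FLU_A_CUTOFF_MABS else "NEG (-)",
        "Flu B": "POS (+)" if flu_b_mabs > FLU_B_CUTOFF_MABS else "NEG (-)",
    }

print(reader_decision(flu_a_mabs=12.0, flu_b_mabs=2.1, control_ok=True))
```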

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the studies performed for the Acucy Influenza A&B Test with the Acucy 2 System, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" for each study, but rather presents the results and the implication that these results demonstrate the equivalence and performance of the device. For the purpose of this table, I will infer the implicit acceptance criteria from the expected outcomes and the conclusion that the device is "substantially equivalent."

    | Performance Metric | Implicit Acceptance Criteria (Inferred) | Reported Device Performance |
    |---|---|---|
    | Within-Laboratory Repeatability (Acucy) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu AB HN: 100% (80/80); Negative: 100% (80/80) |
    | Within-Laboratory Repeatability (Acucy 2) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu AB HN: 100% (80/80); Negative: 100% (80/80) |
    | Instrument-to-Instrument Precision | All positive samples detect as positive (100% agreement); all negative samples detect as negative (100% agreement). | Flu A M (2x LoD): 75/75 (Pass); Flu A L (0.95x LoD): 75/75 (Pass); Flu A HN (0.05x LoD): 0/75 (Pass - expected negative); Flu B M (2x LoD): 75/75 (Pass); Flu B L (0.95x LoD): 75/75 (Pass); Flu B HN (0.05x LoD): 0/75 (Pass - expected negative); Negative: 0/75 (Pass - expected negative) |
    | Test Mode Equivalency | All positive samples detect as positive; all negative samples detect as negative; results are equivalent between READ NOW and WALK AWAY modes. | Flu A+/B-: 20/20 POS Flu A, 20/20 NEG Flu B for both READ NOW and WALK AWAY modes; Flu A-/B+: 20/20 NEG Flu A, 20/20 POS Flu B for both modes; Flu A-/B- Negative: 20/20 NEG Flu A, 20/20 NEG Flu B for both modes |
    | Limit of Detection (LoD) | Acucy 2 LoD should be equivalent to Acucy LoD (e.g., ≥95% detection rate at the lowest concentration). | Influenza A/Michigan strain: LoD 2.82E+02 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for both devices A & B); Influenza A/Singapore strain: LoD 3.16E+03 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for both devices A & B); Influenza B/Phuket strain: LoD 2.09E+02 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for Device A) and 4.17E+02 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for Device B); Influenza B/Colorado strain: LoD 2.82E+02 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for Device A) and 7.05E+02 TCID50/mL (Acucy: 20/20, Acucy 2: 20/20 for Device B) |
    | Analytical Cutoff (LoB) | All blank samples should be negative (0 mABS) and the cutoff values should be consistent with the predicate device. | All blank samples showed 0 mABS. Analytical cut-off values for Acucy 2 were set to match the previously established cut-offs of 6.4 mABS for the Flu A line and 5.4 mABS for the Flu B line (from the predicate Acucy system). |
    | Cross Contamination | No cross-contamination (high-titer positives detect as positive, negatives detect as negative). | Flu A High Positive: 30/30 (Pass); Flu B High Positive: 30/30 (Pass); Negative: 60/60 (Pass) |
    | Method Comparison (Acucy vs. Acucy 2) | High Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) compared to the Acucy Reader (close to 100%). | Influenza A: PPA 100% (30/30), NPA 98.3% (59/60); Influenza B: PPA 100% (30/30), NPA 100% (60/60) |
    | Flex Studies | All hazards and sources of potential error are controlled. | All tests showed expected results, indicating that the device performs correctly under the various "flex" conditions (temperature, humidity, vibrations, lighting, air draft, altitude, non-level position, cassette read window contamination, movement in WALK AWAY mode, test cassette movement/vertical incubation, reader drawer positioning). Conclusion: all hazards controlled through design and labeling mitigations. |
    | External Multi-Site Reproducibility (Acucy) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 98.9% (89/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Negative: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 98.9% (89/90); Negative: 100% (90/90) |
    | External Multi-Site Reproducibility (Acucy 2) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 100% (90/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 100% (90/90); Influenza A & B Negative: 100% (90/90) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Precision Study (Repeatability & Instrument-to-Instrument): These studies primarily used contrived samples (prepared in the laboratory by spiking virus into clinical matrix) rather than naturally occurring patient samples.
      • Repeatability: 80 replicates per panel member (Flu A MP, LP, HN; Flu B MP, LP, HN; Negative) for both Acucy and Acucy 2. Total of 7 x 80 = 560 tests per device (Acucy and Acucy 2).
      • Instrument-to-Instrument Precision: 75 replicates per panel member (7 panel members). Total of 7 x 75 = 525 tests per device.
      • Data Provenance: Laboratory-generated, in vitro data. The origin of the clinical matrix used for preparing contrived samples is described as "nasal swab samples... collected from healthy donors and confirmed Flu negative by PCR" for the LoD studies, and was likely similar for the precision studies. No specific country of origin is mentioned, but data for FDA submissions is typically generated in the US or under comparable quality systems. The retrospective/prospective distinction does not strictly apply, since the samples were contrived and tested in the laboratory rather than collected from patients for this study.
    • Test Mode Equivalency: 20 replicates each of contrived positive Flu A, 20 replicates of contrived positive Flu B, and 20 Flu A and Flu B negative samples. Total of 60 tests (3 x 20 replicates). Data provenance is laboratory-generated/contrived.
    • Limit of Detection (LoD):
      • Range Finding: 5 replicates per concentration for multiple strains and dilutions (as shown in Table 5).
      • Confirmation Testing: 20 replicates per concentration for established LoD.
      • Data Provenance: Contrived samples using pooled negative clinical matrix from healthy donors (confirmed Flu negative by PCR). Laboratory-generated, in vitro data.
    • Analytical Cutoff Study: 60 replicates of a blank sample per lot. Total of 2 lots, so 120 tests. Data provenance is laboratory-generated/contrived.
    • Cross-Contamination Study: 30 high titer Flu A positive, 30 high titer Flu B positive, and 60 negative samples. Total of 120 tests. Data provenance is laboratory-generated/contrived.
    • Method Comparison (Acucy Reader vs. Acucy 2 Reader):
      • Test Set: 30 PCR-confirmed Flu A positive clinical samples, 30 PCR-confirmed Flu B positive clinical samples, and 30 Flu A and Flu B negative clinical samples.
      • Total N for Flu A analysis: 30 Flu A positive + (30 Flu B positive + 30 double negative) = 90 samples.
      • Total N for Flu B analysis: 30 Flu B positive + (30 Flu A positive + 30 double negative) = 90 samples.
      • Data Provenance: Clinical samples (retrospective, given they are PCR-confirmed and a specific count is provided). No country of origin is explicitly stated.
    • CLIA Waiver Studies (Flex Studies): 5 replicates for each flex condition (Negative, Low Positive Flu A, Low Positive Flu B). Number of flex conditions is not explicitly totaled but over 10 types are listed. Data provenance is laboratory-generated/contrived.
    • Reproducibility Studies (External Multi-Site):
      • Acucy System: Panel of 7 samples (Flu A HN, LP, MP; Flu B HN, LP, MP; Negative), tested by two operators per site at 3 sites over 5 non-consecutive days. Assuming 3 replicates per operator per day (typical for such studies, though not explicitly stated), this works out to 30 replicates per sample type per site and 90 replicates per sample type across the 3 sites; for Flu A or Flu B, 4 sample types × 90 replicates = 360 tests (see the arithmetic sketch after this list).
      • Acucy 2 System: Same design as above: 90 replicates per sample type (Flu A HN, LP, MP; Flu B HN, LP, MP; Influenza A & B Negative) across the 3 sites, again 4 sample types × 90 replicates = 360 tests per analyte.
      • Data Provenance: Contrived samples (negative, high negative, low positive, moderate positive) with coded, randomized, and masked conditions. Tested at 3 "point-of-care (POC) sites" for Acucy and 3 "laboratory sites" for Acucy 2. This suggests real-world testing environments, but with contrived samples.
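
    To make the replicate arithmetic in the reproducibility bullets easier to follow, here is a small sketch of the count reconstruction under the stated assumption of 3 replicates per operator per day (an assumption, as noted above, not an explicit figure from the document).

```python
# Reconstruction of the reproducibility counts under the assumption noted
# above (3 replicates per operator per day is assumed, not stated).
operators_per_site = 2
sites = 3
days = 5
replicates_per_run = 3          # assumption
sample_types_per_analyte = 4    # HN, LP, MP, Negative

per_site = operators_per_site * days * replicates_per_run       # 30 per sample type per site
across_sites = per_site * sites                                  # 90 per sample type overall
total_per_analyte = across_sites * sample_types_per_analyte      # 360 tests per analyte

print(per_site, across_sites, total_per_analyte)  # 30 90 360
```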

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    For most analytical studies (precision, LoD, analytical cutoff, cross-contamination, flex studies), the ground truth is established by known concentrations of spiked viral material in a controlled laboratory setting. Therefore, dedicated "experts" for ground truth adjudication in these cases are not applicable in the same way as for clinical studies.

    For the Method Comparison study (Acucy Reader vs. Acucy 2 Reader), the ground truth for the clinical samples was established by PCR confirmation. The document does not specify the number of experts or their qualifications for interpreting these PCR results, but PCR results are generally considered a high standard for viral detection.

    For the Reproducibility Studies, the ground truth for the test panel was established by the known composition of the contrived samples (e.g., negative, high negative, low positive, moderate positive).

    4. Adjudication Method for the Test Set

    • Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Adjudication is inherently by known input concentration or known sample composition. There isn't an "adjudication method" in the sense of multiple human reviewers; rather, it's a comparison to the predefined true state of the contrived sample.
    • Method Comparison Study: The ground truth for clinical samples was established by PCR confirmed results. The device's results were compared against these PCR results. There is no mention of human expert adjudication (e.g., 2+1 or 3+1 consensus) for the PCR results themselves or for resolving discrepancies between the device and PCR.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There is no MRMC comparative effectiveness study described in this document.

    • The device is an automated reader for a rapid chromatographic immunoassay. It does not appear to involve human interpretation of images or complex data that would typically benefit from AI assistance in the way an MRMC study evaluates.
    • The study focuses on the performance of the device (Acucy 2 System) only compared to a predicate device (Acucy System) and against laboratory-defined ground truths. There's no "human-in-the-loop" aspect being evaluated in terms of improved human reader performance with AI.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the studies presented are primarily standalone performance studies for the Acucy 2 System. The device (Acucy 2 Reader) automatically scans the test cassette and processes results using "method-specific algorithms" (Page 6). The output is "POS (+), NEG (-), or INVALID" displayed on the screen. The entire workflow described (from sample application to reader result) represents the standalone performance of the device and its embedded algorithms.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used varied depending on the study:

    • Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Ground truth was based on known concentrations of spiked viral material in contrived samples. For negative controls, it was the absence of the target virus.
    • Method Comparison Study: Ground truth for clinical samples was established by PCR confirmation.

    8. The sample size for the training set

    The document does not explicitly describe a training set or its sample size. The reported studies are primarily verification and validation studies to demonstrate performance and equivalence of the Acucy 2 System compared to the predicate Acucy System. For medical devices, especially immunoassay readers, algorithms are often developed and locked down before these validation studies are performed. If machine learning or AI was used in the algorithm development, the training data would precede these clearance studies and is typically not fully disclosed in a 510(k) summary unless directly relevant to a specific "software change" or unique characteristic being validated.

    9. How the ground truth for the training set was established

    As no training set is explicitly mentioned, the method for establishing its ground truth is also not specified in this document. If algorithmic development involved a training phase, it's highly probable that contrived samples with known viral concentrations and PCR-confirmed clinical samples with known outcomes would have been utilized for this purpose.


    K Number
    K232434
    Manufacturer
    Date Cleared
    2023-12-05

    (116 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Device Name: BD Veritor System for Rapid Detection of Flu A+B CLIA-Waived Kit
    Regulation Number: 21 CFR 866.3328

    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive, and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Performance characteristics for influenza A and B were established during January through March of 2011 when influenza viruses A/2009 H1N1, A/H3N2, B/Victoria lineage, and B/Yamagata lineage were the predominant influenza viruses in circulation according to the Morbidity and Mortality Weekly Report from the CDC entitled "Update: Influenza Activity-United States, 2010-2011 Season, and Composition of the 2011-2012 Influenza Vaccine." Performance characteristics may vary against other emerging influenza viruses.

    If infection with a novel influenza virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to the state or local health department for testing. Virus culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The BD Veritor™ System for Rapid Detection of Flu A+B CLIA Waived Kit is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid-based method.

    BD Veritor™ System Flu A+B test devices are interpreted by a BD Veritor™ Plus Analyzer. When using the BD Veritor™ Plus Analyzer, workflow steps depend on the selected operational mode and the Analyzer configuration settings. In Analyze Now mode, the instrument evaluates assay devices after manual timing of their development. In Walk Away mode, devices are inserted immediately after application of the specimen, and timing of assay development and analysis is automated.

    The BD Veritor™ System Flu A+B CLIA-Waived Kit is an immuno-chromatographic assay for detection of influenza A and B viral antigens in samples processed from respiratory specimens. The viral antigens detected by the BD Flu A+B test are nucleoprotein, not hemagglutinin (HA) or neuraminidase (NA) proteins. Flu viruses are prone to minor point mutations (i.e., antigenic drift) in either one or both of the surface proteins (i.e., HA or NA). The BD Flu A+B test is not affected by antigenic drift or shift because it detects the highly conserved nucleoprotein of the influenza viruses. To perform the test, the patient specimen swab is treated in a supplied reaction tube prefilled with a lysing agent that serves to expose the target viral antigens, and then expressed through a filter tip into the sample well on a BD Veritor™ Flu A+B test device. Any influenza A or influenza B viral antigens present in the specimen bind to anti-influenza antibodies conjugated to colloidal gold micro-particles on the BD Veritor™ Flu A+B test strip. The antigen-conjugate complex then migrates across the test strip to the capture zone and reacts with either Anti-Flu A or Anti-Flu B antibodies that are immobilized on the two test lines on the membrane.

    The BD Flu A+B test device shown in Figure 1 is designed with five spatially distinct zones including positive and negative control line positions, separate test line positions for the target analytes, and a background zone. The test lines for the target analytes are labeled on the test device as 'A' for flu A position, and 'B' for flu B position. The onboard positive control ensures the sample has flowed correctly and is indicated on the test device as 'C'. Two of the five distinct zones on the test device are not labeled. These two zones are an onboard negative control line and an assay background zone. The active negative control feature in each test identifies and compensates for specimen-related, nonspecific signal generation. The remaining zone is used to measure the assay background.

    The BD Veritor™ Plus Analyzer is a digital immunoassay instrument that uses a reflectance-based measurement method and applies assay specific algorithms to determine the presence or absence of the target analyte. The Analyzer supports the use of different assays by reading an assay-specific barcode on the test device. Depending on the configuration chosen by the operator, the instrument communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.

    In the case of the Flu A + B test, the BD Veritor™ Plus Analyzer subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative. Use of the active negative control feature allows the BD Veritor™ Plus Analyzer to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal. The measurement of the assay background zone is an important factor during test interpretation as the reflectance is compared to that of the control and test zones. A background area that is white to light pink indicates the device has performed correctly.
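
    The interpretation logic described above amounts to a background-corrected threshold test. The sketch below is a hypothetical illustration of that idea, not BD's actual algorithm; the function names and cutoff values are placeholders, since the assay cutoffs are not disclosed in this excerpt.

```python
# Hypothetical sketch of the background-subtraction logic described above.
# Names and cutoff values are placeholders; the actual assay cutoffs are not
# disclosed in this 510(k) excerpt.
def score_line(test_line_signal: float, negative_control_signal: float,
               cutoff: float) -> str:
    corrected = test_line_signal - negative_control_signal  # remove nonspecific signal
    return "POSITIVE" if corrected > cutoff else "NEGATIVE"

def veritor_flu_result(flu_a_signal: float, flu_b_signal: float,
                       negative_control_signal: float,
                       positive_control_ok: bool, background_ok: bool,
                       cutoff_a: float = 10.0, cutoff_b: float = 10.0) -> dict:
    # The onboard positive control and the background zone must both be
    # acceptable for the device read to be valid.
    if not (positive_control_ok and background_ok):
        return {"Flu A": "INVALID", "Flu B": "INVALID"}
    return {
        "Flu A": score_line(flu_a_signal, negative_control_signal, cutoff_a),
        "Flu B": score_line(flu_b_signal, negative_control_signal, cutoff_b),
    }

print(veritor_flu_result(25.0, 4.0, 3.0, positive_control_ok=True, background_ok=True))
```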

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for modifications to the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit. The current submission (K232434) focuses on changes to the BD Veritor™ Plus Analyzer, not the assay itself. Therefore, the details about acceptance criteria and clinical performance studies relate to the predicate device (K223016) and the assay's original clearance (K180438), as the modifications in K232434 do not impact the assay's analytical or clinical performance.

    Here's a breakdown based on the provided input:

    1. Table of Acceptance Criteria and Reported Device Performance

    The current submission (K232434) is for modifications to the analyzer, not a new or modified assay. Therefore, it does not present new acceptance criteria or device performance data for the assay's diagnostic accuracy. Instead, it relies on the previously established performance of the predicate device (K223016) which itself relies on the performance established in K180438. The acceptance criteria for the current submission are related to the safety and electromagnetic compatibility of the modified analyzer.

    | Acceptance Criteria (for Analyzer Modifications in K232434) | Reported Device Performance (for Analyzer Modifications in K232434) |
    |---|---|
    | Compliance with Safety Requirements for Electrical Equipment (IEC 61010-1:2010, IEC 61010-1:2010/AMD 1:2016, IEC 61010-2-101:2018) | Demonstrated compliance with the specified standards |
    | Compliance with Electromagnetic Compatibility and Electrical Safety (EN IEC 61326-1:2020, EN IEC 61326-2-6:2021, EN 60601-1-2:2015 + A1:2021 [equivalent to ANSI AAMI IEC 60601-1-2:2014 including AMD 1:2021]) | Demonstrated compliance with the specified standards; no EMI or ESD susceptibility observed during compliance testing. Analyzer functionalities remained the same, and operations and performance were not impacted. |

    Performance Characteristics for Influenza A and B (as established for K180438 / K223016 and referenced here):

    The document mentions that performance characteristics for influenza A and B were established during January through March of 2011. While specific sensitivity and specificity values are not provided in this document excerpt, the "Indications for Use" and "Intended Use" sections clearly state that the device is for "direct and qualitative detection of influenza A and B viral nucleoprotein antigens."

    2. Sample Size Used for the Test Set and Data Provenance

    The provided text does not include the specific sample size for the test set used to establish the clinical performance of the Flu A+B assay, nor does it explicitly state the data provenance (e.g., retrospective or prospective studies) in detail, beyond mentioning:

    • "Performance characteristics for influenza A and B were established during January through March of 2011"
    • This period corresponded to when "influenza viruses A/2009 H1N1, A/H3N2, B/Victoria lineage, and B/Yamagata lineage were the predominant influenza viruses in circulation" in the United States, according to CDC reports. This implies real-world, clinical data from symptomatic patients in the US during that influenza season was used.

    Since the current submission is for analyzer modifications and explicitly states, "Clinical Performance: Clinical performance testing was not required because the changes made to the Analyzer do not have an impact on the assay-specific clinical performance," this information would be found in the original submission (K180438) that cleared the assay.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    This information is not provided in the given text. It would be part of the detailed clinical study report from the original assay clearance (K180438).

    4. Adjudication Method for the Test Set

    This information is not provided in the given text. It would be part of the detailed clinical study report from the original assay clearance (K180438).

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    This is not applicable to the BD Veritor System. This device is a rapid chromatographic immunoassay read by a digital analyzer (BD Veritor™ Plus Analyzer), not an AI system designed to assist human readers in interpreting complex images or data. The analyzer automatically interprets the test results based on its algorithms and provides a positive, negative, or invalid result, rather than providing input to a human reader for their interpretation.

    6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done

    Yes, in essence, the BD Veritor™ Plus Analyzer operates as a standalone algorithm without human-in-the-loop performance for result interpretation. The text states:

    • "The Analyzer is a digital immunoassay instrument that uses a reflectance-based measurement method and applies assay specific algorithms to determine the presence or absence of the target analyte."
    • "The BD Veritor™ Plus Analyzer subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative."
    • "Use of the active negative control feature allows the BD Veritor™ Plus Analyzer to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal."

    This clearly describes an automated interpretation process by the device's algorithms.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The document states, "A negative test is presumptive, and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay." This implies that viral culture or an FDA-cleared influenza A and B molecular assay served as the reference standard (ground truth) for establishing performance characteristics during the original clinical studies.

    8. The Sample Size for the Training Set

    This information is not provided in the given text, and it's less relevant for an immunoassay where "training data" for a machine learning model might not be explicitly defined in the same way as for complex AI algorithms. For immunoassays, the "training" involves optimizing the assay reagents and conditions, and setting cutoff values, which are then validated with clinical samples.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the given text. As mentioned above, the concept of a "training set ground truth" might not apply directly in the traditional sense for this type of immunoassay. Instead, the ground truth for establishing performance (and implicitly for setting cutoffs) would be established using the reference methods mentioned in point 7.


    K Number
    K223016
    Manufacturer
    Date Cleared
    2023-01-27

    (120 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Device Name: BD Veritor System for Rapid Detection of Flu A+B CLIA-Waived Kit
    Regulation Number: 21 CFR 866.3328
    "...Detecting Influenza A, B, and C Virus Antigens..."

    Intended Use

    The BD Veritor System for Rapid Detection of Flu A+B CLIA waived assay is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid-based method.

    BD Veritor™ System Flu A+B test devices are interpreted by a BD Veritor™ Plus Analyzer. When using the BD Veritor™ Plus Analyzer, workflow steps depend on the selected operational mode and the Analyzer configuration settings. In Analyze Now mode, the instrument evaluates assay devices after manual timing of their development. In Walk Away mode, devices are inserted immediately after application of the specimen, and timing of assay development and analysis is automated. Depending on the configuration chosen by the operator, the instrument communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.

    AI/ML Overview

    This document (K223016) is a 510(k) Premarket Notification for a modified version of the BD Veritor System for Rapid Detection of Flu A+B CLIA-Waived Kit. The core assay (the rapid immunoassay for Influenza A and B detection) remains unchanged from previous clearances (K180438 and earlier). The modifications focus on the accompanying instrument, the BD Veritor™ Plus Analyzer.

    Therefore, the document explicitly states "There have been no changes to the analytical performance of the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit since the assay was last cleared in K180438. The modifications to the Analyzer do not have an impact on the assay-specific analytical performance. " and "There have been no changes to the clinical performance of the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit since the assay was last cleared in K180438. The modifications to the Analyzer do not have an impact on the assay-specific clinical performance."

    This means that no new performance studies (analytical or clinical) were conducted for the assay itself as part of this specific 510(k) submission. The acceptance criteria and performance data for the assay would refer to the studies presented in the previous 510(k)s, most recently K180438.

    The testing performed for this K223016 submission focuses solely on the modifications made to the BD Veritor™ Plus Analyzer:

    1. Overvoltage Protection Circuitry: Testing confirms that the added circuitry does not affect the function of the trigger board (recognition of a cartridge and USB connection).
    2. Extended Lifetime: Verification that the Analyzer performs up to 10,000 cycles (tests) within current specifications (increased from 3,500 tests).
    3. InfoWiFi Module Functionality: Verification of the general functionalities of the new InfoWiFi module, which provides wireless communication capabilities.

    Since the provided document does not contain the details of the analytical and clinical performance studies for the assay, and instead refers to previous submissions, I cannot extract the specific acceptance criteria and detailed study data for the assay itself from this text.

    Assuming the request is for the acceptance criteria and study proving the changes to the analyzer meet the criteria, the following applies:

    1. A table of acceptance criteria and the reported device performance:

    | Feature Modified | Acceptance Criteria (Implicit from Testing Purpose) | Reported Device Performance (as stated in document) |
    |---|---|---|
    | Analyzer Trigger Board (Overvoltage Protection) | The added overvoltage protection circuitry must not affect the function of the trigger board, meaning it must still properly recognize an inserted cartridge and maintain USB connection functionality. | Testing confirmed that the added overvoltage protection circuitry does not affect the function of the trigger board (recognition of a cartridge and the USB connection). |
    | Analyzer Lifetime | The Analyzer must perform up to 10,000 cycles (tests) while maintaining current specifications (an increase from the previous 3,500-test specified lifetime). | Testing confirmed that the Analyzer performs up to 10,000 cycles within the current specifications. |
    | InfoWiFi Module | The InfoWiFi module must operate according to its intended design, providing general functionalities such as wireless communication capability (same functional features as InfoScan, plus wireless). | Testing verified the general functionalities of the InfoWiFi module. |

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not explicitly stated in the document for any of the new tests. The document refers to "testing was performed."
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). This would typically be non-clinical, in-house verification testing by the manufacturer.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable as this is a device modification verification, not a clinical study requiring expert ground truth for diagnostic accuracy. The "truth" here is engineering functionality.

    4. Adjudication method for the test set:

    • Not applicable. This is not a study requiring adjudication of diagnostic results.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No. This type of study is for evaluating human reader performance with or without AI assistance. The modifications here are to the hardware of an automated test reader, not an AI diagnostic algorithm.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • The BD Veritor System itself operates as a standalone diagnostic device interpreted by the analyzer. However, the algorithm for interpreting the test results (analyzing line intensity) was established in previous submissions (K180438) and is stated as "Same" and "Original" in the comparison table, meaning it was not changed or re-evaluated in this submission. The testing done for this 510(k) relates to the hardware modifications of the analyzer, not a new or modified interpretation algorithm.

    7. The type of ground truth used:

    • For the hardware modifications, the "ground truth" is defined by engineering specifications and expected functionality. For example, for "overvoltage protection," the ground truth is "does the circuit protect against overvoltage without impairing core function?" For "lifetime," the ground truth is "does the device successfully complete 10,000 tests?" For "InfoWiFi," the ground truth is "does it perform the specified wireless functionalities?"

    8. The sample size for the training set:

    • Not applicable. This is hardware verification, not a machine learning model.

    9. How the ground truth for the training set was established:

    • Not applicable.

    K Number
    K192719
    Date Cleared
    2020-04-03

    (190 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K192719
    Trade/Device Name: OSOM ULTRA PLUS FLU A&B Test
    Regulation Number: 21 CFR 866.3328
    Intended Use

    The OSOM® ULTRA PLUS FLU A&B Test is an in vitro rapid diagnostic immunochromatographic assay intended for the qualitative detection of influenza type A and type B nucleoprotein antigens directly from nasal and nasopharyngeal swab specimens from patients with signs and symptoms of respiratory infection.

    It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. This test is not intended for the detection of influenza C viruses.

    A negative test result is presumptive, and it is recommended these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other patient management decisions.

    Performance characteristics for influenza A were established during the US 2018-2019 influenza season when A/H1N1pdm09 and influenza A/H3N2 were the predominant influenza A viruses in circulation, and the influenza B Yamagata and Victoria lineages were in co-circulation. When other influenza A or B viruses are emerging, performance characteristics may vary.

    If infection with a novel influenza virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The OSOM® ULTRA PLUS FLU A&B Test consists of a test stick that separately detects influenza A and B. The test procedure requires the solubilization of the nucleoproteins from a swab by mixing the swab in Extraction Buffer. The test stick is then placed in the sample mixture, which migrates along the membrane surface. If influenza A and/or B viral antigens are present in the sample, they will form a complex with mouse monoclonal IgG antibodies to influenza A and/or B nucleoproteins conjugated to colloidal gold. The complex will then be bound by a rat anti-influenza A and/or mouse anti-influenza B antibody coated on the nitrocellulose membrane. A pink to purple control line must appear in the control region of the stick for results to be valid. The appearance of a second and possibly a third light pink to purple line in the test line region indicates an A, B, or A and B positive result. A visible control line with no test line is a negative result.

    AI/ML Overview

    The Sekisui Diagnostics OSOM ULTRA PLUS FLU A&B Test is an in vitro rapid diagnostic immunochromatographic assay intended for the qualitative detection of influenza A and B nucleoprotein antigens from nasal and nasopharyngeal swab specimens.

    Here's an analysis of the acceptance criteria and study data:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria in terms of numerical thresholds for sensitivity and specificity. Instead, the study results are presented as the device's performance characteristics. Regulatory bodies typically expect high sensitivity and specificity for such diagnostic tests. For the purpose of summarization, we will extract the reported clinical performance as the device's demonstrated performance.

    | Performance Characteristic | Target (Implicit/Assumed for FDA Clearance) | Reported Device Performance (Influenza A) | Reported Device Performance (Influenza B) |
    |---|---|---|---|
    | Sensitivity | High (e.g., typically >80%) | 90.3% (95% CI: 87.0%-92.8%) | 88.0% (95% CI: 81.8%-92.3%) |
    | Specificity | High (e.g., typically >95%) | 96.7% (95% CI: 95.5%-97.6%) | 99.2% (95% CI: 98.6%-99.6%) |
    | Invalid Rate | Low (e.g., 95%) | 98.9% - 100% (for different categories) | 100% (for different categories) |
    | Analytical Sensitivity (LoD) | Defined for specific strains | A/H1N1: 7.1x10¹ TCID50/mL; A/H3N2: 2.2x10⁵ CEID50/mL | B/Victoria: 3.5x10³ TCID50/mL; B/Yamagata: 1.6x10² TCID50/mL |
    | Analytical Reactivity (Detection of various strains) | 100% detection of tested strains at LoD | Detected all 16 tested influenza A strains | Detected all 8 tested influenza B strains |
    | Analytical Specificity (Cross-Reactivity) | 0% cross-reactivity with non-target organisms and human DNA | No cross-reactivity with 41 tested organisms/DNA | No cross-reactivity with 41 tested organisms/DNA |
    | Interfering Substances | No interference at specified concentrations | No interference observed for 26 tested substances | No interference observed for 26 tested substances |
    | Competitive Interference | No competitive interference | No competitive interference observed | No competitive interference observed |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Prospective study: 1210 evaluable prospective samples.
      • Banked samples: 316 evaluable banked samples.
      • Total evaluable samples for performance evaluation: 1526 samples (1210 prospective + 316 banked).
    • Data Provenance:
      • Country of Origin: United States. The prospective study was conducted at 21 point-of-care (POC) sites across the United States.
      • Retrospective or Prospective: The primary clinical study was prospective, collecting samples from January 2019 to May 2019. This prospective dataset was supplemented with 317 banked samples (retrospective) collected from previous influenza seasons due to atypically low prevalence of influenza B in the prospective study period.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number of experts used or their specific qualifications for establishing the ground truth. However, the ground truth was established by "an FDA-cleared molecular test" (a reference method) and further "another FDA-cleared molecular test for discrepant analysis." This implies that the ground truth was determined by validated laboratory methods rather than direct expert interpretation of the rapid test results. The operators at the CLIA waived sites were "untrained operators with no laboratory training or experience," but they were performing the device test, not establishing ground truth.

    4. Adjudication Method for the Test Set

    The adjudication method used was a "discrepant analysis" involving a second FDA-cleared molecular test; a brief bookkeeping sketch follows the examples below.

    • For Influenza A: If the OSOM ULTRA PLUS FLU A&B Test results differed from the initial FDA-cleared molecular comparator method, a second FDA-cleared molecular test was used for adjudication. For instance, out of 37 false positive specimens, Flu A was detected in 23 using the second molecular test. For 39 false negative specimens, Flu A was not detected in 7 using the second molecular test.
    • For Influenza B: Similarly, Flu B was detected in 3 of 11 false positive specimens using a second FDA-cleared molecular test, and not detected in 2 of 18 false negative specimens using the second test.
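
    A minimal sketch of the bookkeeping behind a discrepant analysis like the one described above, assuming the simple rule that only device/comparator disagreements are re-tested with a second molecular assay. The data layout and function name are hypothetical; the sketch only illustrates how footnoted counts such as "23 of 37 false positives confirmed" would be tallied, not how the sponsor actually processed the data.

```python
from collections import Counter

def tally_discrepants(records):
    """Count how many device/comparator disagreements the second
    molecular test does or does not confirm.

    records: iterable of (device_result, comparator_result, second_test_result),
    each "pos"/"neg"; second_test_result may be None when no retest was run.
    Hypothetical data layout for illustration only.
    """
    counts = Counter()
    for device, comparator, second in records:
        if device == comparator:
            continue  # concordant results are not re-tested
        kind = "false_positive" if device == "pos" else "false_negative"
        if second is None:
            counts[(kind, "not_retested")] += 1
        else:
            call = "second_test_pos" if second == "pos" else "second_test_neg"
            counts[(kind, call)] += 1
    return counts

# Example: one apparent false positive that the second molecular test confirms.
print(tally_discrepants([("pos", "neg", "pos"), ("neg", "pos", "neg")]))
```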

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text in the context of comparing human readers with and without AI assistance. This device is a rapid diagnostic immunochromatographic assay, which is read visually by a single operator (or potentially multiple operators for reproducibility studies, but not in a comparative effectiveness study against AI). The study involved "untrained operators" at POC sites, but their performance was compared against a molecular reference method, not against an AI.

    6. If a Standalone (i.e. algorithm only, without human-in-the-loop performance) was done

    This question is not applicable as the device is a manual, visually interpreted rapid diagnostic test, not an AI-powered algorithm. The device's performance is its standalone performance without human input beyond sample application and visual interpretation. The performance metrics presented (sensitivity, specificity) reflect the device's ability to accurately detect the antigen when operated by intended users (untrained operators).

    7. The Type of Ground Truth Used

    The ground truth for the clinical performance study was established using:

    • An FDA-cleared molecular test (as the primary comparator method).
    • Another FDA-cleared molecular test for discrepant analysis.

    This indicates a highly reliable laboratory-based ground truth, often considered superior to expert consensus for objective biological markers like viral antigens.

    8. The Sample Size for the Training Set

    The document does not provide information on a training set sample size. This is expected for laboratory diagnostic devices where development involves analytical studies (LoD, reactivity, specificity) to optimize the assay, followed by clinical validation. 'Training set' is a term primarily used in machine learning and AI development, which does not apply directly to this type of traditional in vitro diagnostic device validation.

    9. How the Ground Truth for the Training Set Was Established

    As there is no mention of a "training set" in the context of AI or machine learning for this device, information on how its ground truth was established is not applicable/provided. The analytical studies (LoD, reactivity, specificity, interference) involve preparing samples with known concentrations of analytes, where the "ground truth" is determined by the preparation method itself and confirmed by standard laboratory techniques.


    K Number
    K191514
    Manufacturer
    Date Cleared
    2020-02-18

    (256 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    New Jersey 08873

    Re: K191514

    Trade/Device Name: CareStart Flu A&B Plus
    Regulation Number: 21 CFR 866.3328
    Common Name: Influenza virus antigen detection test system
    Device Class: Class II under 21 CFR 866.3328
    Intended Use

    The CareStart™ Flu A&B Plus is an in vitro rapid immunochromatographic assay for the qualitative detection of influenza virus type A and B nucleoprotein antigens directly from nasopharyngeal swab specimens of symptomatic patients.

    The test is intended for use as an aid in the rapid differential diagnosis of acute influenza type A and B viral infections. This test is intended to distinguish between influenza type A and/or B virus in a single test. This test is not intended to detect influenza type C viral antigens. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative results do not preclude influenza virus infections and should not be used as the basis for treatment or other patient management decisions.

    Performance characteristics for influenza A and B were established during the 2018-2019 influenza season when influenza A/H3N2, A/H1N1pdm09, and B/Victoria were the predominant influenza viruses in circulation. When other influenza viruses are emerging, performance characteristics may vary.

    If infection with a novel influenza virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to the state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The CareStart™ Flu A&B Plus test is an immunochromatographic assay for detection of extracted influenza type A and B virus nucleoprotein antigens in nasopharyngeal specimens.

    Nasopharyngeal swabs require a sample preparation step in which the sample is eluted and washed off into the extraction buffer solution. Extracted swab sample is added to the sample well of the test device to initiate the test. When the swab sample migrates in the test strip, influenza A or B viral antigens bind to anti-influenza antibodies conjugated to indicator particles in the test strip forming an immune complex. The immune complex is then captured by each test line and control line on the membrane as it migrates through the strip.

    Test results are interpreted at 10 minutes. The presence of two colored lines, a purple-colored line in the control region "C" and a red-colored line in the influenza A test region "A", indicates influenza A positive. The presence of two colored lines, a purple-colored line in the control region "C" and a blue-colored line in the influenza B test region "B", indicates influenza B positive. The presence of three colored lines, a purple-colored line in the control region "C", a red-colored line in the influenza A test region "A", and a blue-colored line in the influenza B test region "B", indicates an influenza A and B dual positive result. The absence of a line in both the influenza A and B test regions with a purple-colored line in the control region "C" indicates a negative result. No appearance of a purple-colored line in the control region "C" indicates an invalid test.
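
    The interpretation rules above amount to a small decision table. The sketch below encodes them directly as stated; it assumes the three line calls (control "C", influenza A "A", influenza B "B") have already been made visually and is an illustration of the read-out logic, not vendor software.

```python
def interpret_carestart(control: bool, line_a: bool, line_b: bool) -> str:
    """Map the presence/absence of the C, A, and B lines to a test result,
    following the interpretation rules stated in the device description."""
    if not control:
        return "INVALID"           # no purple control line -> invalid test
    if line_a and line_b:
        return "FLU A+B POSITIVE"  # red A line and blue B line with control
    if line_a:
        return "FLU A POSITIVE"
    if line_b:
        return "FLU B POSITIVE"
    return "NEGATIVE"              # control line only

assert interpret_carestart(True, False, False) == "NEGATIVE"
assert interpret_carestart(False, True, True) == "INVALID"
```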

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Device Name: CareStart™ Flu A&B Plus (Influenza Virus Antigen Detection Test System)

    General Acceptance Criteria (Implied by FDA 510(k) Clearance):
    The primary acceptance criterion for a 510(k) submission is that the device is substantially equivalent to a legally marketed predicate device. This is demonstrated by showing that the new device has the same intended use and technological characteristics, and that its performance is at least as safe and effective as the predicate device. Specific performance criteria are established through analytical and clinical studies.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria values (e.g., "PPA must be >X%"). Instead, the performance estimates are presented, and the implication is that these results demonstrate substantial equivalence to the predicate device. For the purpose of this analysis, I will treat the reported performance as the "met criteria" that led to substantial equivalence.

    | Performance Metric | Acceptance Criteria (Implied/Demonstrated) | Reported Device Performance |
    |---|---|---|
    | Clinical Performance (Influenza A) | Sufficiently high PPA and NPA compared to molecular assay | PPA: 79.9% (95% CI: 75.7% – 83.7%); NPA: 98.4% (95% CI: 97.0% – 99.2%) |
    | Clinical Performance (Influenza B) | Sufficiently high PPA and NPA compared to molecular assay | PPA: 88.2% (95% CI: 65.7% – 96.7%) (prospective); PPA: 96.6% (95% CI: 91.5% – 98.7%) (retrospective); NPA: 100.0% (95% CI: 99.6% – 100.0%) (prospective); NPA: 97.8% (95% CI: 88.4% – 99.6%) (retrospective) |
    | Analytical Sensitivity (LoD) | Detection of virus strains at specified concentrations | All tested strains detected at concentrations ranging from $2.0 \times 10^{5.2}$ to $1.6 \times 10^{6.4}$ for influenza A and $2.0 \times 10^{5.5}$ to $1.6 \times 10^{6.4}$ for influenza B, with 95-100% reactivity. |
    | Reactivity (Inclusivity) | Detection of various influenza A and B strains | All 15 influenza A and 10 influenza B strains detected in 3/3 replicates at specified concentrations. |
    | Analytical Specificity (Cross-Reactivity/Interference) | No false positives (cross-reactivity); no interference with positive samples | No false positives with 31 bacteria and 15 non-influenza viruses. No interference with influenza A/B positive samples. Biotin concentrations >500 ng/mL can cause false negative influenza A results (caveat). |
    | Reproducibility | High agreement across sites, operators, and days | 100.0% overall agreement for all sample categories across 3 sites, 3 operators, and 5 days. |
    | Lot-to-Lot Precision | Consistent results across different reagent lots | 100.0% agreement across 3 reagent lots for all sample categories. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Test Set:

      • Prospective Clinical Study: 944 evaluable nasopharyngeal swab specimens.
        • Provenance: Collected from symptomatic patients during the 2018-2019 influenza season at 10 Point-of-Care investigational sites throughout the U.S. (prospective, U.S. origin).
      • Retrospective Study (supplemental for Influenza B): 162 swab samples prepared from archived respiratory specimens.
        • Provenance: Archived respiratory specimens from patients with influenza-like symptoms, confirmed positive or negative by an FDA-cleared molecular assay. Samples were distributed among four investigational sites (retrospective, likely U.S. origin, as these samples supplemented the U.S. prospective study).
    • Analytical Sensitivity (LoD): 20 replicates per virus strain (8 strains tested).

    • Analytical Sensitivity (Reactivity/Inclusivity): 3 replicates per virus strain (25 strains tested).

    • Analytical Specificity (Cross-Reactivity/Interference): 3 replicates per organism/virus, both with and without influenza viruses (31 bacteria, 15 non-influenza viruses, 30 interfering substances).

    • Reproducibility Study: 7 sample categories tested in a blinded manner by 3 operators at 3 sites on 5 non-consecutive days (7 samples * 3 operators * 3 sites * 5 days = 315 tests, per virus type if applicable). Counted as 135/135 for each category overall.

    • Lot-to-Lot Precision: 7 sample categories tested across 3 reagent lots (counted as 27/27 for each category overall).


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not specify the number or qualifications of experts for establishing ground truth.

    • Clinical Study Ground Truth: The ground truth for the clinical studies (prospective and retrospective) was established using an FDA-cleared influenza A and B molecular assay (comparator method). For discrepant results in the prospective study, an alternative FDA-cleared molecular assay was used for investigation. These are laboratory-based, highly sensitive and specific molecular tests for influenza, which are considered a gold standard for pathogen detection.

    • Analytical Studies Ground Truth: For analytical studies (LoD, Reactivity, Cross-Reactivity, Interference, Reproducibility, Lot-to-Lot Precision), the ground truth was inherent to the carefully prepared samples (e.g., known virus concentrations, known interfering substances).


    4. Adjudication Method for the Test Set

    • Clinical Studies: For the prospective clinical study, discrepancies between the CareStart™ Flu A&B Plus results and the initial molecular comparator assay results were investigated using an alternative FDA-cleared molecular influenza A/B assay. The results of this secondary testing were captured in footnotes but not included in the primary calculations of the performance estimates. In effect, the alternative assay served as a discrepant-resolution step used to understand disagreements rather than to adjust the reported performance estimates.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs without AI assistance

    • Not applicable. This device is an in vitro rapid immunochromatographic assay (a rapid diagnostic test kit) and does not involve AI assistance or human readers interpreting AI outputs. The results are interpreted visually by a human or using a simple instrument (like the predicate, though the proposed device is visually interpreted). Therefore, an MRMC study related to AI assistance for human readers was not performed.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, in essence, the "device performance" in the clinical and analytical studies represents the standalone performance of the assay. The CareStart™ Flu A&B Plus itself is a rapid test that produces visual results. The clinical studies compare the device's output (positive/negative for Flu A/B) directly against a molecular truth, effectively testing its "standalone" diagnostic capabilities without a human interpretation model beyond reading a test line. The detection format is "Visual determination of presence or absence of colored line indicators" for the proposed device, indicating it functions as a standalone diagnostic tool without requiring an additional human interpretation step or an algorithm to read the result in a complex way.

    7. The type of ground truth used

    • Molecular Assay: The primary ground truth for clinical performance evaluation (prospective and retrospective studies) was an FDA-cleared influenza A and B molecular assay. An alternative FDA-cleared molecular assay was used to investigate discrepant results.
    • Known Concentrations/Absence of Analytes: For analytical studies (LoD, Reactivity, Cross-Reactivity, Interference, Reproducibility, Lot-to-Lot Precision), the ground truth was based on samples with known concentrations of specific virus strains or known absence of target analytes/presence of interfering substances.

    8. The sample size for the training set

    The document does not explicitly describe a separate "training set" in the context of machine learning or AI. As a rapid diagnostic test, the device's development typically involves iterative design and testing using various lab-prepared and clinical samples, which implicitly serve as a development/training phase. However, there isn't a formally described training set as one might see for an AI algorithm. The performance evaluation presented focuses on the validation of the device.


    9. How the ground truth for the training set was established

    As there is no explicitly defined "training set" in the provided document, the method for establishing its ground truth is not described. The analytical and clinical studies described serve as validation of the final device. The ground truth for those validation studies was established through FDA-cleared molecular assays or by controlled preparation of samples with known viral content.


    K Number
    K182001
    Date Cleared
    2018-12-17

    (144 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K182001

    Trade/Device Name: Acucy Influenza A&B Test with the Acucy System
    Regulation Number: 21 CFR 866.3328
    Classification

    Trade Name: Acucy™ Influenza A&B Test with the Acucy™ System
    Classification of Device: 21 CFR 866.3328

    Intended Use

    The Acucy™ Influenza A&B Test for the rapid qualitative detection of influenza A and B is composed of a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients that is automatically analyzed on the Acucy Reader. The Acucy Influenza A&B Test is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single Test Cassette. The test is intended for use with the Acucy System as an aid in the diagnosis of influenza A and B viral infections. The test is not intended for the detection of influenza C viruses. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions.

    Device Description

    The Acucy™ Influenza A&B Test is a lateral flow immunochromatographic assay in the sandwich immunoassay format. The Acucy Influenza A&B Test consists of a Test Cassette that detects and differentiates influenza A and influenza B viral antigens from a patient sample. The test sample, a nasal swab or nasopharyngeal swab, is processed to extract nucleoproteins by mixing the swab in Acucy Influenza A&B Extraction Buffer. The mixture is then added to the sample well of the Test Cassette. From there, the sample migrates along the membrane surface. If influenza A or B viral antigens are present, they form a complex with mouse monoclonal antibodies to influenza A and/or B nucleoproteins conjugated to colloidal gold. The complex is then bound by a rat anti-influenza A and/or mouse anti-influenza B antibody coated on the nitrocellulose membrane. The Acucy Reader is an optoelectronic instrument that uses a reflectance-based measurement method to evaluate the line signal intensities in the results window of the Test Cassette. The Reader scans the Test Cassette and measures the absorbance intensity by processing the results using method-specific algorithms. The Acucy Reader displays the test results POS (+), NEG (-), or INVALID on the screen. The results can also be automatically printed on the Acucy Printer if this option is selected.

    AI/ML Overview

    The document describes the performance of the Acucy™ Influenza A&B Test with the Acucy™ System, which is a rapid in-vitro diagnostic test. Here's a breakdown of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly list "acceptance criteria" for sensitivity and specificity in a separate table. Instead, it presents the achieved performance, which implicitly serves as the successful outcome of the clinical study. The performance is compared against a Composite Reference.

    Here's a table summarizing the reported device performance for the combined nasal and nasopharyngeal swab samples from the clinical study, which are the primary results for overall performance:

    | Metric | Influenza A Performance | Influenza B Performance |
    |---|---|---|
    | Sensitivity | 96.4% (95% CI: 93.1% - 98.2%) | 82.3% (95% CI: 75.6% - 87.4%) |
    | Specificity | 96.0% (95% CI: 94.4% - 97.2%) | 98.1% (95% CI: 96.9% - 98.8%) |

    Other "Acceptance Criteria" implicitly met through other studies:

    | Study Category | Acceptance Criteria (Implicit from Results) | Reported Device Performance |
    |---|---|---|
    | Reproducibility | High agreement across sites, operators, and days for various concentrations. | Overall percent agreement for influenza A samples (High Negative, Low Positive, Moderate Positive, True Negative) ranged from 98.9% to 100%. Overall percent agreement for influenza B samples (High Negative, Low Positive, Moderate Positive, True Negative) ranged from 98.9% to 100%. |
    | Limit of Detection (LoD) | Consistently positive results >95% of the time at specified concentrations. | LoD for Influenza A (H1N1pdm09): 1.41 x 10^1 TCID50/mL. LoD for Influenza A (H3N2): 7.06 x 10^1 TCID50/mL. LoD for Influenza B (Victoria): 2.35 x 10^1 TCID50/mL. LoD for Influenza B (Yamagata): 3.40 x 10^1 TCID50/mL. |
    | Analytical Reactivity | All tested influenza A strains yield positive A and negative B results; all influenza B strains yield positive B and negative A results. | All 28 tested influenza A and B strains at or near LoD met this criterion. |
    | Analytical Specificity / Cross-Reactivity | No cross-reactivity with tested organisms; no interference with influenza A or B detection from microorganisms or human DNA. | All 41 tested bacterial, viral, and fungal organisms and human DNA showed no cross-reactivity or interference. |
    | Interfering Substances | No interference observed with common respiratory substances. | No interference observed for any of the 20+ tested substances at specified concentrations. |
    | Performance Near Cutoff | Untrained users can accurately interpret and perform the test at and below the LoD. | Agreement for low positive and high negative samples ranged from 96.83% to 100% across multiple sites and operators. |
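
    The LoD criterion quoted in the table above (consistently positive results more than 95% of the time at the claimed concentration) is usually confirmed by testing a fixed number of replicates at the candidate concentration and checking the hit rate, as sketched below. The replicate count of 20 in the example is a common convention and an assumption here, not a figure taken from this record.

```python
def meets_lod_criterion(replicate_calls: list, min_hit_rate: float = 0.95) -> bool:
    """Return True if the positivity rate at a candidate concentration
    meets the hit-rate criterion (e.g., at least 95% of replicates positive)."""
    if not replicate_calls:
        return False
    hit_rate = sum(replicate_calls) / len(replicate_calls)
    return hit_rate >= min_hit_rate

# Hypothetical confirmation run: 19 of 20 replicates positive -> 95%, criterion met.
print(meets_lod_criterion([True] * 19 + [False]))
```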

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set:
      • Clinical Performance Study: 1003 evaluable nasal or nasopharyngeal swab samples were included in the primary analysis. A total of 1053 subjects were enrolled initially.
      • Near Cutoff Study: A panel of 84 samples was tested at each of the three CLIA-waived sites, totaling 252 tests.
      • Reproducibility Study: Data for each sample type (e.g., Flu A High Negative) involved 30 replicates per site for 3 sites, totaling 90 replicates per sample type.
    • Data Provenance:
      • Country of Origin: The clinical study was conducted at sixteen investigational sites across the U.S. ("across the U.S.").
      • Retrospective or Prospective: The primary clinical study was prospective, conducted during the 2017-2018 influenza season. The "Assay Cutoff" section mentions a clinical dataset comprised of 1252 "prospectively and retrospectively collected samples" was used for ROC analysis, indicating a mix for cutoff optimization, but the main performance evaluation uses prospective data. The Near Cutoff Study was also prospective, conducted during a "normal testing day" over "non-consecutive days."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number or qualifications of experts for establishing ground truth for the clinical test set. Instead, it states that the ground truth was established by a "Composite Reference" consisting of:

    • Two FDA-cleared molecular influenza A&B assays
    • Cell culture

    A sample was considered positive or negative if two or three of these comparative reference methods agreed. This implies an objective, laboratory-based ground truth rather than review by a panel of human experts.

    4. Adjudication Method for the Test Set

    The adjudication method for the clinical test set was a Composite Reference approach.

    • "A sample was considered positive for influenza A or influenza B by the Composite Reference if two or three of the comparative reference methods gave a positive result."
    • "A sample was considered negative for influenza A or influenza B by the composite reference if two or three of the comparative reference methods gave a negative result."

    This is a majority-vote rule requiring agreement of at least two of the three reference methods, which provides a robust ground truth.
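
    A minimal sketch of the composite-reference rule described above, assuming each of the three comparator methods (two molecular assays and cell culture) returns a simple "pos"/"neg" call; this is an illustration of the voting logic, not the study's actual analysis code.

```python
def composite_reference(molecular_1: str, molecular_2: str, culture: str) -> str:
    """Composite reference call: positive or negative when at least two of the
    three comparator methods (two molecular assays and cell culture) agree."""
    calls = [molecular_1, molecular_2, culture]
    if calls.count("pos") >= 2:
        return "pos"
    if calls.count("neg") >= 2:
        return "neg"
    return "indeterminate"  # e.g., missing or invalid comparator results

print(composite_reference("pos", "neg", "pos"))  # -> "pos"
```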

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    This document describes an in vitro diagnostic device (a rapid test and its reading system), not an AI-assisted diagnostic imaging device for human readers. Therefore, an MRMC comparative effectiveness study involving human readers and AI assistance is not applicable and was not performed. The "Acucy Reader" is an optoelectronic instrument that automatically processes the lateral flow test, so human interpretation of the test line is not the primary mechanism of reading.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the primary clinical performance evaluation and all analytical studies (LoD, reactivity, specificity, interfering substances) represent standalone performance of the device (Acucy™ Influenza A&B Test with the Acucy™ System). The Acucy Reader is an automated system that "scans the Test Cassette and measures the absorbance intensity by processing the results using method-specific algorithms" and "displays the test results POS (+), NEG (-), or INVALID on the screen." This is purely algorithm/device driven, without human interpretation of the test lines themselves.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used for the clinical performance evaluation was a Composite Reference based on laboratory methods:

    • Two FDA-cleared molecular influenza A&B assays
    • Cell culture

    This is a robust and objective ground truth commonly used for infectious disease diagnostics.

    8. The Sample Size for the Training Set

    The document describes a 510(k) submission for a diagnostic test kit involving a lateral flow immunoassay and an optical reader. This is a traditional IVD device, not an AI/ML model that typically requires a large "training set" in the machine learning sense.

    However, the "Assay Cutoff" section mentions:

    • Initial determination using "contrived influenza A samples, influenza B samples, and negative samples prepared in clinical nasal matrix," tested in "replicates of 60 with two lots of reagents (a total of 360 test results)." This internal testing likely contributed to the initial algorithm development or parameter setting.
    • "To validate the primary cutoff values, a clinical dataset comprised of 1252 prospectively and retrospectively collected samples was tested with the Acucy Influenza A&B Test, and Receiver Operator Characteristic (ROC) analysis was performed to determine the optimal values for sensitivity and specificity." This dataset of 1252 samples was used for optimizing and validating the assay cutoffs, which is analogous to a development/validation set in an ML context, though not a "training set" in the iterative learning sense.

    9. How the Ground Truth for the Training Set Was Established

    For the 1252 samples used for ROC analysis and cutoff adjustment:

    • The document implies that these samples were "clinical samples," and it's highly probable their true status (positive/negative for Flu A/B) would have been determined by similar reference methods (molecular assays, cell culture) as used for the main clinical study, or other highly accurate laboratory methods.
    • For the "contrived samples" used for initial cutoff determination, the ground truth was known by design (i.e., whether the sample was spiked with influenza virus and at what concentration).

    K Number
    K182157
    Device Name
    BioSign Flu A+B
    Date Cleared
    2018-09-18

    (40 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Junction, New Jersey 08852

    Re: K182157

    Trade/Device Name: BioSign Flu A+B Regulation Number: 21 CFR 866.3328

    Intended Use

    Not Found

    Device Description

    Not Found

    AI/ML Overview

    The provided text does not contain information about acceptance criteria, device performance, study design, or ground truth establishment. The document is an FDA 510(k) clearance letter for the BioSign Flu A+B device, indicating that the device has been found substantially equivalent to a legally marketed predicate device; it covers regulatory information but not the technical data from performance studies.

    The acceptance criteria and the study demonstrating that the device meets them therefore cannot be described from this record.


    K Number
    K181853
    Date Cleared
    2018-08-08

    (28 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K181853

    Trade/Device Name: Alere BinaxNOW Influenza A & B Card 2, Alere Reader Regulation Number: 21 CFR 866.3328
    Test Analyzer

    CLASSIFICATION NAME: Influenza virus antigen detection test system (per 21 CFR 866.3328)

    Intended Use

    The Alere BinaxNOW Influenza A & B Card 2 is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasopharyngeal (NP) swab and nasal swab specimens. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. Negative test results are presumptive and should be confirmed by cell culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. Alere BinaxNOW Influenza A & B Card 2 must be read by the Alere Reader.

    Performance characteristics for influenza A were established during the 2015-2016 influenza season, when influenza A/H3N2 and A/H1N1 pandemic viruses were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.

    If infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent Influenza viruses and sent to state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The Alere BinaxNOW® Influenza A & B Card 2 is an immunochromatographic membrane assay that detects influenza type A and B nucleoprotein antigens in respiratory specimens. Influenza specific antibodies and a control antibody are immobilized onto a membrane support as three distinct lines and combined with other reagents/pads to construct a test strip. The test strip is mounted inside a cardboard, book-shaped hinged test card.

    Swab specimens require a sample preparation step, in which the sample is eluted off the swab into Elution Solution. Sample is added to the top of the test strip, and the test card is closed. Test results are interpreted at 15 minutes based on the presence or absence of Sample Lines. Alere BinaxNOW® Influenza A & B Card 2 test results must be read by the Alere™ Reader.

    The Alere™ Reader is an easy-to-use benchtop instrument that can be used near the patient and in laboratory settings to interpret, capture, and transmit test results. The Alere™ Reader is a camera-based instrument that detects the presence and identity of the Alere BinaxNOW® Influenza A & B Card 2 assay, analyzes the intensity of the test and control lines, and displays the results (positive, negative, or invalid) on a display screen. The screen serves as the user interface, informing the user how to operate the Reader and displaying test results, including any errors. Data can be retrieved and downloaded by the operator at any time after testing and uploaded to the hospital LIS/LIM system, if desired. Operator ID and Subject ID can be entered manually or via the provided barcode scanner. An external printer can be attached via USB to the Alere™ Reader to print test results.

    AI/ML Overview

    A table of the acceptance criteria and the reported device performance is not explicitly provided in the text. However, the study aims to demonstrate that a software modification to the Alere™ Reader does not negatively impact the performance of the Alere BinaxNOW® Influenza A & B Card 2.

    The study compared the performance of the Alere BinaxNOW® Influenza A & B Card 2 with the new software modification to the legally marketed predicate device (K173502). The core of the study involved testing clinical samples and comparing the results from the modified device with the predicate device.

    Here's an analysis of the provided information concerning the study and ground truth:

    1. Table of Acceptance Criteria and Reported Device Performance: This information is not directly presented as a table in the provided text. The submission describes a software modification to mitigate false positive results and states that the new device was compared to the predicate. To fully describe the acceptance criteria, one would typically need access to the original 510(k) submission (K173502) for the predicate device, as the current submission is a "Special 510(k)" focused on a software change, implying that the goal is to maintain the performance of the predicate. The text states that the study was performed to show that the modified software does not negatively impact performance and that there is no statistically significant performance difference between the modified software and the predicate.

    2. Sample Size used for the test set and the data provenance:

      • The text states: "An evaluation was performed using 123 positive and 115 negative clinical samples..." This indicates a total of 238 clinical samples were used in the evaluation.
      • Data Provenance: Not explicitly stated (e.g., country of origin). The data is from retrospective clinical samples as they were "clinical samples that were previously tested" with the predicate device.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not specified in the provided text.

    4. Adjudication method for the test set: Not specified in the provided text.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not applicable. The Alere Reader is an automated instrument, and the study compares two versions of the software in the instrument, not human readers with or without AI assistance. The reader performs the interpretation, so there's no human reading component for comparison in this context.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Yes, the evaluation described is a standalone performance of the algorithm within the Alere™ Reader. The device reads and interprets the test result without human intervention for the reading step (the user interface displays the result, but the reading itself is automated).

    7. The type of ground truth used: The ground truth for the clinical samples was based on "previously tested" results with the predicate device, implying a comparison to the established performance of the predicate. For the initial predicate device (K173502), the ground truth for influenza detection would typically be established by cell culture or an FDA-cleared influenza A and B molecular assay, as stated in the Indications for Use: "Negative test results are presumptive and should be confirmed by cell culture or an FDA-cleared influenza A and B molecular assay."

    8. The sample size for the training set: Not specified. The submission describes a modification to existing software, implying the initial training would have occurred for the predicate device's software. This particular submission does not detail creation of a new training set.

    9. How the ground truth for the training set was established: Not specified, as training set details are not provided in this document focused on a software modification. For the original validation of the predicate reader, the ground truth for training would likely have been established using well-characterized samples (e.g., confirmed by cell culture or molecular assay).


    K Number
    K180438
    Manufacturer
    Date Cleared
    2018-03-20

    (28 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Name: BD Veritor System for Rapid Detection of Flu A + B CLIA Waived Kit
    Regulation Number: 21 CFR 866.3328
    Device Classification: 21 CFR 866.3328

    Intended Use

    The BD Veritor System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Outside the U.S., a negative test is presumptive and it is recommended that these results be confirmed by viral culture or a molecular assay cleared for diagnostic use in the country of use. FDA has not cleared this device for use outside of the U.S. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Veritor System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid based method. All BD Veritor System Flu A+B test devices are interpreted by a BD Veritor System Instrument, either a BD Veritor Reader or BD Veritor Plus Analyzer.

    The BD Veritor Flu A+B test is an immuno-chromatographic assay for detection of influenza A and B viral antigens in samples processed from respiratory specimens. The viral antigens detected by the BD Flu A+B test are nucleoprotein, not hemagglutinin (HA) or neuraminidase (NA) proteins. Flu viruses are prone to minor point mutations (i.e., antigenic drift) in either one or both of the surface proteins (i.e., HA or NA). The BD Flu A+B test is not affected by antigenic drift or shift because it detects the highly conserved nucleoprotein of the influenza viruses. To perform the test, the patient specimen swab is treated in a supplied reaction tube prefilled with a lysing agent that serves to expose the target viral antigens, and then expressed through a filter tip into the sample well on a BD Veritor Flu A+B test device. Any influenza A or influenza B viral antigens present in the specimen bind to anti-influenza antibodies conjugated to colloidal gold micro-particles on the Veritor Flu A+B test strip. The antigen-conjugate complex then migrates across the test strip to the capture zone and reacts with either anti-Flu A or anti-Flu B antibodies that are immobilized on the two test lines on the membrane.

    The BD Flu A+B test device shown in Figure 1 is designed with five spatially-distinct zones including positive and negative control line positions, separate test line positions for the target analytes, and a background zone. The test lines for the target analytes are labeled on the test device as 'A' for flu A position, and 'B' for flu B position. The onboard positive control ensures the sample has flowed correctly and is indicated on the test device as 'C'. Two of the five distinct zones on the test device are not labeled. These two zones are an onboard negative control line and an assay background zone. The active negative control feature in each test identifies and compensates for specimen-related, nonspecific signal generation. The remaining zone is used to measure the assay background.

    The Veritor System is made up of assay kits with analyte specific reagents and an optoelectronic interpretation instrument.

    The BD Veritor System instruments use a reflectance-based measurement method and apply assay-specific algorithms to determine the presence or absence of the target analyte. In the case of the Flu A+B test, the BD Veritor System instruments subtract nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative. Use of the active negative control feature allows the BD Veritor System instruments to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal. The measurement of the assay background zone is an important factor during test interpretation, as its reflectance is compared to that of the control and test zones. A background area that is white to light pink indicates the device has performed correctly. Sample preparation is the same for use with both instruments, and both can utilize the same kit components. Neither instrument requires calibration.
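
    The reader logic described above (subtract the nonspecific signal measured at the negative control line from each test line, then compare the corrected signal to a preset assay cutoff) can be summarized as in the sketch below. This is a paraphrase of the stated principle, not BD's firmware; the cutoff values, the validity check, and the function names are placeholders.

```python
def score_line(test_line_signal: float,
               negative_control_signal: float,
               cutoff: float) -> str:
    """Subtract the nonspecific (negative-control) signal from a test-line signal
    and call the analyte positive if the corrected signal exceeds the assay cutoff."""
    corrected = test_line_signal - negative_control_signal
    return "POS" if corrected > cutoff else "NEG"

def interpret_veritor(flu_a_signal: float, flu_b_signal: float,
                      negative_control_signal: float,
                      positive_control_ok: bool,
                      cutoff_a: float = 0.10, cutoff_b: float = 0.10) -> dict:
    """Illustrative interpretation: require a valid positive control, then score
    the Flu A and Flu B lines against placeholder cutoffs."""
    if not positive_control_ok:
        return {"result": "INVALID"}
    return {
        "Flu A": score_line(flu_a_signal, negative_control_signal, cutoff_a),
        "Flu B": score_line(flu_b_signal, negative_control_signal, cutoff_b),
    }

print(interpret_veritor(0.35, 0.05, 0.08, positive_control_ok=True))
```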

    The Veritor Reader and the Veritor Plus Analyzer use the functional components and decision algorithm in the firmware. The BD Veritor Plus Analyzer has the flexibility of an optional bar code scanning module and cellular connectivity designed to facilitate record keeping as well as the addition of a "Walk Away" work flow mode. Depending on the configuration chosen by the operator, the Veritor Plus Analyzer communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the BD Veritor System for Rapid Detection of Flu A + B CLIA Waived Kit, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" numerical targets. Instead, it presents performance data compared to a reference method (PCR) which implies these are the achieved performance metrics considered acceptable for substantial equivalence. The key performance indicators are Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) for influenza A and B.
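
    As a point of reference (these are standard definitions, not taken from the submission), PPA and NPA are agreement measures computed against the comparator rather than against confirmed disease status. With $TP$, $FP$, $FN$, and $TN$ denoting the device-versus-reference-PCR cross-tabulation counts, $\mathrm{PPA} = TP/(TP+FN)$ and $\mathrm{NPA} = TN/(TN+FP)$; they are calculated like sensitivity and specificity but are named differently because the reference is another test rather than established truth.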

    | Performance Metric | Acceptance Criteria (Implied) | Reported Performance (All Swabs - All Sites) |
    |---|---|---|
    | Influenza A PPA | Not explicitly stated | 83.6% (95% CI: 76.1%, 89.1%) |
    | Influenza A NPA | Not explicitly stated | 97.5% (95% CI: 95.7%, 98.5%) |
    | Influenza B PPA | Not explicitly stated | 81.3% (95% CI: 71.1%, 88.5%) |
    | Influenza B NPA | Not explicitly stated | 98.2% (95% CI: 95.7%, 99.3%) |

    2. Sample Size and Data Provenance for the Test Set

    The reported performance data in the table (PPA and NPA) are derived from a test set with the following characteristics:

    • Sample Size for Influenza A: 736 total samples (226 PCR positive, 510 PCR negative).
    • Sample Size for Influenza B: 736 total samples (171 PCR positive, 565 PCR negative).
    • Data Provenance: The document states the performance characteristics were established "during January through March of 2011" and summarizes data "across all age groups, clinical testing sites and sample types." This indicates a prospective clinical study in which samples were collected from symptomatic patients during the influenza season. The country of origin is not explicitly stated for the "All Sites" data, but given that this is an FDA submission, it is highly likely to include data from the United States.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not mention the use of human experts to establish ground truth for the primary clinical performance data. The reference method for ground truth was a Molecular Assay (PCR).

    4. Adjudication Method for the Test Set

    Not applicable, as the ground truth was established by a molecular assay (PCR), not expert consensus.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not describe an MRMC comparative effectiveness study involving human readers with and without AI assistance. The device is a rapid chromatographic immunoassay interpreted by an instrument (BD Veritor Reader or Veritor Plus Analyzer), not an AI-assisted diagnostic for human readers.

    The device itself is an automated system for interpreting rapid tests, not a tool to assist human readers in interpreting complex images or data.

    6. Standalone (Algorithm Only) Performance

    Yes, the study focuses on the standalone performance of the BD Veritor System (the rapid immunoassay device interpreted by the Veritor Reader or Veritor Plus Analyzer). The reported PPA and NPA values represent the performance of the device itself against a reference standard (PCR) without human interpretation.

    The "Principle of the Test" section explains: "All BD Veritor System Flu A+B test devices are interpreted by a BD Veritor System Instrument, either a BD Veritor Reader or BD Veritor Plus Analyzer." The instrument's algorithms make the determination.

    7. Type of Ground Truth Used

    The ground truth used for the clinical performance evaluation was PCR (Polymerase Chain Reaction), which is a molecular assay for detecting influenza A and B. It is referred to as "Reference PCR" in the performance tables.

    8. Sample Size for the Training Set

    The document does not provide specific details on the sample size used for the training set of the device's inherent algorithms or cutoff thresholds. It mentions that "performance characteristics for influenza A and B were established during January through March of 2011," implying a dataset used for development and validation. For the comparison between Veritor Reader and Veritor Plus Analyzer, the following samples were assessed:

    • 102 Flu A-/B- samples
    • 52 Flu A+ samples
    • 52 Flu B+ samples

    These samples were used to confirm equivalency between the interpreting instruments, not necessarily as a "training set" for the assay itself.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly describe how the ground truth for any "training set" was established. However, given the context, it's highly probable that if a training set was used for algorithm development, the ground truth would also have been established by a highly sensitive and specific reference method like PCR or viral culture, similar to how the ground truth for the test set was established.

