
510(k) Data Aggregation

    K Number: K092223
    Manufacturer:
    Date Cleared: 2009-08-12 (20 days)
    Product Code:
    Regulation Number: 866.3328
    Reference & Predicate Devices:
    Device Name: MODIFICATION TO: BINAXNOW INFLUENZA A & B TEST

    Intended Use

    The BinaxNOW® Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasopharyngeal (NP) swab, nasal swab, and nasal wash/aspirate specimens. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. Negative test results should be confirmed by cell culture.

    Device Description

    The BinaxNOW® Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A & B nucleoprotein antigens in nasopharyngeal (NP) swab, nasal swab, and nasal wash/aspirate specimens. These antibodies and a control antibody are immobilized onto a membrane support as three distinct lines and combined with other reagents/pads to construct a test strip. This test strip is mounted inside a cardboard, book-shaped hinged test device. Swab specimens require a sample preparation step, in which the sample is eluted off the swab into elution solution or transport media. Nasal wash/aspirate samples require no preparation. Sample is added to the top of the test strip and the test device is closed. Test results are interpreted at 15 minutes based on the presence or absence of pink-to-purple colored Sample Lines. The blue Control Line turns pink in a valid assay.

    AI/ML Overview

    Here's an analysis of the BinaxNOW® Influenza A & B Test, detailing its acceptance criteria and the supporting studies:

    Acceptance Criteria and Device Performance for BinaxNOW® Influenza A & B Test

    The provided document describes a 510(k) submission to expand the claims of the BinaxNOW® Influenza A & B Test. While explicit "acceptance criteria" in a numerical target format are not directly stated, the document presents detailed performance data from clinical and analytical studies. The implied acceptance criteria are that the device demonstrates adequate sensitivity and specificity for the detection of influenza A and B antigens in various sample types, comparable to the reference standard (cell culture/DFA). The analytical studies further establish the device's limit of detection, reactivity to various strains, and cross-reactivity.

    1. Table of Acceptance Criteria (Implied) and Reported Device Performance

    Given the nature of the submission (expansion of claims for an existing device), the "acceptance criteria" are implicitly set by regulatory expectations for diagnostics and comparison to the predicate device and reference methods. The reported performance is directly from the clinical studies presented.

    Clinical Performance vs. Cell Culture/DFA (Prospective Study)

    | Sample Type | Analyte | Implied Acceptance Criterion | Reported % Sensitivity (95% CI) | Reported % Specificity (95% CI) |
    |---|---|---|---|---|
    | NP Swab | Flu A | Good Sensitivity/Specificity | 77% (65-86%) | 99% (97-100%) |
    | Nasal Swab | Flu A | Good Sensitivity/Specificity | 83% (74-90%) | 96% (93-98%) |
    | Overall (Flu A) | Flu A | Good Sensitivity/Specificity | 81% (74-86%) | 97% (96-98%) |
    | NP Swab | Flu B | Good Sensitivity/Specificity | 50% (9-91%) | 100% (99-100%) |
    | Nasal Swab | Flu B | Good Sensitivity/Specificity | 69% (39-90%) | 100% (98-100%) |
    | Overall (Flu B) | Flu B | Good Sensitivity/Specificity | 65% (39-85%) | 100% (99-100%) |

    Clinical Performance vs. Cell Culture/DFA (Retrospective Study)

    | Sample Type | Analyte | Implied Acceptance Criterion | Reported % Sensitivity (95% CI) | Reported % Specificity (95% CI) |
    |---|---|---|---|---|
    | NP Swab | Flu A | Good Sensitivity/Specificity | 70% (50-86%) | 90% (81-95%) |
    | Wash/Aspirate | Flu A | Good Sensitivity/Specificity | 89% (78-96%) | 95% (89-98%) |
    | Overall (Flu A) | Flu A | Good Sensitivity/Specificity | 83% (73-90%) | 93% (88-96%) |
    | NP Swab | Flu B | Good Sensitivity/Specificity | N/A (0/0) | 98% (93-100%) |
    | Wash/Aspirate | Flu B | Good Sensitivity/Specificity | 53% (27-78%) | 94% (89-97%) |
    | Overall (Flu B) | Flu B | Good Sensitivity/Specificity | 53% (27-78%) | 96% (92-98%) |

    Analytical Sensitivity (Limit of Detection - LOD)

    | Analyte | Implied Acceptance Criterion | Reported LOD | Reported % Detected at LOD |
    |---|---|---|---|
    | Flu A/Beijing | Identify concentration for 95% detection | $1.03 \times 10^2$ ng/ml | 96% (23/24) |
    | Flu B/Harbin | Identify concentration for 95% detection | $6.05 \times 10^1$ ng/ml | 96% (23/24) |
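    The LOD here is the lowest concentration detected in at least 95% of replicates (23/24 = 96% meets that bar). A small sketch of that selection rule, with a hypothetical dilution panel (concentrations and hit counts are illustrative, not the study's raw data):

```python
# Hypothetical LOD panel: (detected replicates, total replicates) at each
# concentration in ng/ml -- illustrative values, not the study's raw data
panel = {
    2.06e2: (24, 24),
    1.03e2: (23, 24),  # 96% detection, meets the >=95% criterion
    5.15e1: (18, 24),  # 75% detection, fails the criterion
}

def lod(panel: dict[float, tuple[int, int]], threshold: float = 0.95) -> float:
    """Lowest concentration whose detection rate meets the threshold."""
    qualifying = [c for c, (hits, n) in panel.items() if hits / n >= threshold]
    return min(qualifying)

print(f"LOD = {lod(panel):.2e} ng/ml")  # LOD = 1.03e+02 ng/ml
```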

    Reactivity Testing

    | Analyte | Implied Acceptance Criterion | Reported Performance |
    |---|---|---|
    | Various Flu A strains | Detect at specified concentrations | Positive at $10^2-10^6$ CEID50/ml, $10^2-10^5$ TCID50/ml, or $10^4-10^5$ EID50/ml |
    | Various Flu B strains | Detect at specified concentrations | Positive at $10^2-10^6$ CEID50/ml |
    | Swine-lineage Flu A (H1N1) | Detect at specified concentrations | Positive at $5.63 \times 10^4$ TCID50/ml or $1.0 \times 10^5$ TCID50/ml |

    Analytical Specificity (Cross Reactivity)

    | Agents Tested | Implied Acceptance Criterion | Reported Performance |
    |---|---|---|
    | 36 commensal & pathogenic microorganisms | No cross-reactivity | All microorganisms tested negative at $10^5-10^6$ TCID/ml (viruses), $10^7-10^8$ organisms/ml (bacteria), and $10^8$ organisms/ml (yeast) |

    Interfering Substances

    | Interfering Substances | Implied Acceptance Criterion | Reported Performance |
    |---|---|---|
    | Various OTC drugs, blood | No interference with test interpretation | No interference found for listed substances at specified concentrations. Whole blood (1%) interfered with Flu A LOD positive samples, but not negative results. |

    Transport Media

    | Transport Media | Implied Acceptance Criterion | Reported Performance |
    |---|---|---|
    | Various media | No impact on test performance | Media alone tested negative; media inoculated with LOD-level Flu A & B tested positive on the appropriate test line. Sucrose-Phosphate Buffer may not be suitable. |

    Reproducibility

    | Performance Aspect | Implied Acceptance Criterion | Reported Performance |
    |---|---|---|
    | Overall agreement with expected results | High agreement | 97% (242/250) agreement |
    | Differences (within run, between run, between sites) | No significant differences | No significant differences observed |
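    The 97% overall agreement figure is 242 of 250 determinations. A quick sketch of that arithmetic with a normal-approximation 95% confidence interval (the submission does not state which interval method, if any, was applied):

```python
import math

# Overall agreement across operators, runs, and sites: 242 of 250
# determinations matched the expected result (reproducibility study)
agree, total = 242, 250
rate = agree / total  # 0.968, reported as 97%

# Normal-approximation (Wald) 95% CI for the agreement proportion
half = 1.96 * math.sqrt(rate * (1 - rate) / total)
print(f"Agreement: {agree}/{total} = {rate:.1%} "
      f"(95% CI {rate - half:.1%} to {rate + half:.1%})")
```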

    Study Details:

    2. Sample Size Used for the Test Set and Data Provenance:

    • Clinical Studies (Prospective):
      • Sample Size: 846 prospective specimens.
      • Data Provenance: Not explicitly stated, but the mention of "patients presenting with influenza-like symptoms" and demographic breakdown (male/female, pediatric/adult) suggests a general clinical population. No specific country is mentioned, implying it could be multi-site within the US or a general US population. The study is prospective.
    • Clinical Studies (Retrospective):
      • Sample Size: 293 retrospective frozen clinical samples.
      • Data Provenance: Clinical samples collected from symptomatic patients at multiple physician offices, clinics, and hospitals located in the Southern, Northeastern, and Midwestern regions of the United States, and from one hospital in Sweden. The study is retrospective.
    • Analytical Sensitivity:
      • Sample Size: 24 determinations per concentration level (12 operators x 2 devices).
      • Data Provenance: Not specified, likely internal lab studies.

    3. Number of Experts and Qualifications for Ground Truth (Clinical Studies):

    • The ground truth for the clinical studies was established by Cell Culture / DFA (Direct Fluorescent Antibody assay). This is a laboratory-based method.
    • Number of Experts: The document does not specify the number of human experts involved in interpreting the cell culture or DFA results, nor their specific qualifications. It is assumed that trained laboratory personnel performed these reference tests.

    4. Adjudication Method for the Test Set:

    • The document implies that the BinaxNOW® test results were directly compared to the Cell Culture/DFA results. There is no mention of a separate adjudication method (e.g., 2+1, 3+1 consensus) for the test set itself, as the Cell Culture/DFA is treated as the definitive ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done comparing human reader performance with and without AI assistance. The device is a rapid diagnostic kit, not an AI-powered image analysis system interpreted by human readers. The clinical studies evaluate the device's performance against a reference standard. The "operators" mentioned in the analytical sensitivity section are performing the device test, not interpreting complex medical images.

    6. Standalone Performance:

    • Yes, standalone performance was done. The entire clinical and analytical performance sections evaluate the device (algorithm/test kit) in a standalone manner against a reference standard (Cell Culture/DFA) or known concentrations/strains. There is no human-in-the-loop component being evaluated in the reported performance. The "interpretation" of the BinaxNOW test results is based on visible lines, which is a direct reading of the device's output.

    7. Type of Ground Truth Used:

    • Clinical Studies: The type of ground truth used was Cell Culture / DFA. This is a laboratory-based diagnostic method considered a gold standard for influenza detection at the time of the study.
    • Analytical Studies: The ground truth for analytical sensitivity, reactivity, and specificity used known concentrations of inactivated viruses, specific influenza strains, or panels of other microorganisms at known concentrations.

    8. Sample Size for the Training Set:

    • The document describes a 510(k) for an existing device (BinaxNOW® Influenza A & B Test; K062109). This type of submission typically focuses on validation and verification of the device's performance, not on the explicit "training" of an algorithm in the machine learning sense.
    • Therefore, there is no identifiable "training set" sample size in the context of an algorithm or AI. The immunoassay technology relies on pre-designed antibodies, not a trained computational model.

    9. How the Ground Truth for the Training Set Was Established:

    • As there is no explicit "training set" in the context of an AI/algorithm, this question is not applicable to this device submission. The immunoassay is developed and validated through laboratory methods (antibody selection, antigen-antibody binding studies) rather than machine learning training.