Found 3 results

510(k) Data Aggregation

    K Number: K062109
    Manufacturer:
    Date Cleared: 2006-11-09 (108 days)
    Product Code:
    Regulation Number: 866.3328
    Reference & Predicate Devices:
    Intended Use

    The BinaxNOW® Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasopharyngeal (NP) swab, nasal swab, and nasal wash/aspirate specimens. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. Negative results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other management decisions.

    Device Description

    The BinaxNOW® Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A & B nucleoprotein antigens in respiratory specimens. These antibodies and a control antibody are immobilized onto a membrane support as three distinct lines and combined with other reagents/pads to construct a test strip. This test strip is mounted inside a cardboard, book-shaped hinged test device. Swab specimens require a sample preparation step, in which the sample is eluted off the swab into elution solution, saline, or transport media. Nasal wash/aspirate samples require no preparation. Sample is added to the top of the test strip and the test device is closed. Test results are interpreted at 15 minutes based on the presence or absence of pink-to-purple colored Sample Lines. The blue Control Line turns pink in a valid assay.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the BinaxNOW® Influenza A & B Test, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for sensitivity and specificity are not explicitly stated as pre-defined targets within the provided text. Instead, the document presents the observed performance of the device against the reference method (cell culture/DFA) in various clinical studies. The substantial equivalence is established by comparing this observed performance to the predicate device, the BD Directigen™ Flu A+B Test (though detailed performance for the predicate is not provided in this summary).

    For the purpose of this analysis, we will present the reported device performance from the prospective and retrospective clinical studies.

    BinaxNOW® Influenza A & B Test Performance vs. Cell Culture/DFA (Prospective Study)

    Target      | Sample Type | Reported Sensitivity (95% CI) | Reported Specificity (95% CI)
    Influenza A | NP Swab     | 77% (65-86%)                  | 99% (97-100%)
    Influenza A | Nasal Swab  | 83% (74-90%)                  | 96% (93-98%)
    Influenza A | Overall     | 81% (74-86%)                  | 97% (96-98%)
    Influenza B | NP Swab     | 50% (9-91%)                   | 100% (99-100%)
    Influenza B | Nasal Swab  | 69% (39-90%)                  | 100% (98-100%)
    Influenza B | Overall     | 65% (39-85%)                  | 100% (99-100%)
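
    For context on how figures of this kind are produced: each sensitivity or specificity value is a simple proportion from a 2x2 table of device results against the reference method, with a binomial confidence interval around it. The Python sketch below illustrates the arithmetic using hypothetical counts (the 510(k) summary reports only the resulting percentages) and a Wilson score interval, which is one common choice; the summary does not state which interval method was actually used.

    ```python
    # A minimal sketch, not the method used in the 510(k) study: sensitivity and
    # specificity as proportions from a 2x2 table, each with a Wilson score 95% CI.
    # The counts below are hypothetical placeholders.
    from math import sqrt

    def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score confidence interval for a binomial proportion."""
        p = successes / total
        denom = 1 + z**2 / total
        center = (p + z**2 / (2 * total)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
        return center - half, center + half

    # Hypothetical counts: device result vs. cell culture/DFA reference
    true_pos, false_neg = 40, 10      # among 50 culture-positive specimens
    true_neg, false_pos = 190, 10     # among 200 culture-negative specimens

    sens = true_pos / (true_pos + false_neg)
    spec = true_neg / (true_neg + false_pos)
    sens_lo, sens_hi = wilson_ci(true_pos, true_pos + false_neg)
    spec_lo, spec_hi = wilson_ci(true_neg, true_neg + false_pos)

    print(f"Sensitivity: {sens:.0%} (95% CI {sens_lo:.0%}-{sens_hi:.0%})")
    print(f"Specificity: {spec:.0%} (95% CI {spec_lo:.0%}-{spec_hi:.0%})")
    ```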

    BinaxNOW® Influenza A & B Test Performance vs. Cell Culture/DFA (Retrospective Study)

    Target      | Sample Type   | Reported Sensitivity (95% CI) | Reported Specificity (95% CI)
    Influenza A | NP Swab       | 70% (50-86%)                  | 90% (81-95%)
    Influenza A | Wash/Aspirate | 89% (78-96%)                  | 95% (89-98%)
    Influenza A | Overall       | 83% (73-90%)                  | 93% (88-96%)
    Influenza B | NP Swab       | N/A (0/0 positive)            | 98% (93-100%)
    Influenza B | Wash/Aspirate | 53% (27-78%)                  | 94% (89-97%)
    Influenza B | Overall       | 53% (27-78%)                  | 96% (92-98%)

    Analytical Sensitivity (Limit of Detection - LOD)

    Influenza Strain    | Concentration (ng/ml) | # Detected | % Detected
    Flu A/Beijing (LOD) | 1.03 x 10^2           | 23/24      | 96%
    Flu B/Harbin (LOD)  | 6.05 x 10^1           | 23/24      | 96%
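
    The LOD rows above report 23 of 24 replicates detected (96%) at the stated concentration. For assays of this type the LOD is conventionally the lowest concentration detected in roughly 95% of replicates, although the summary does not state the exact criterion or any interval method. The stdlib-only sketch below computes the hit rate and an exact (Clopper-Pearson) lower confidence bound for those counts as an illustration.

    ```python
    # A stdlib-only sketch (assumptions for illustration): hit rate for 23/24
    # detected replicates and an exact (Clopper-Pearson) lower 95% bound. The
    # summary reports only the counts; this interval method is not stated there.
    from math import comb

    def binom_sf(k: int, n: int, p: float) -> float:
        """P(X >= k) for X ~ Binomial(n, p)."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def clopper_pearson_lower(k: int, n: int, alpha: float = 0.05) -> float:
        """Lower bound of the exact two-sided (1 - alpha) CI, found by bisection."""
        if k == 0:
            return 0.0
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if binom_sf(k, n, mid) < alpha / 2:
                lo = mid   # root (where the upper tail equals alpha/2) lies above mid
            else:
                hi = mid
        return hi

    detected, replicates = 23, 24
    print(f"Detection rate at LOD: {detected / replicates:.1%}")   # 95.8%
    print(f"Exact 95% CI lower bound: {clopper_pearson_lower(detected, replicates):.1%}")
    ```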

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Prospective Study Test Set:

      • Total Specimens: 846
      • Provenance: Multi-center; specimens were tested at a "central testing laboratory outside the US during the 2004 respiratory season and at three US trial sites during the 2005-2006 respiratory season." The data are prospective.
      • Patient demographics: 44% male, 54% female, 54% pediatric (<18 years), 46% adult (≥18 years).
      • Sample types: Nasopharyngeal (NP) swabs, nasal swabs.
    • Retrospective Study Test Set:

      • Total Specimens: 293
      • Provenance: Collected from symptomatic patients at multiple physician offices, clinics, and hospitals in the Southern, Northeastern, and Midwestern regions of the United States, and from one hospital in Sweden. Data is retrospective (frozen clinical samples).
      • Patient demographics: 53% male, 47% female, 62% pediatric (<18 years), 38% adult (≥18 years).
      • Sample types: Nasal wash/aspirate (61%), NP swabs (39%).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    The document does not explicitly state the number or qualifications of experts used to establish the ground truth. It refers to "Cell Culture / DFA" as the reference method. In a typical clinical setting, cell culture and Direct Fluorescent Antibody (DFA) testing would be performed and interpreted by trained laboratory professionals, such as medical technologists or microbiologists. Specific expert qualifications (e.g., years of experience, board certification) are not provided.

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method for disagreements or indeterminate results between different readers or between the device and the ground truth. The "ground truth" (Cell Culture/DFA) is treated as the definitive reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    No MRMC comparative effectiveness study is mentioned, as this device is a rapid diagnostic test (immunochromatographic assay) and not an AI-assisted diagnostic tool that would typically involve human readers interpreting images. The closest mention of human involvement in interpretation is in the analytical sensitivity study, where "Twelve (12) different operators each interpreted 2 devices run at each concentration," which is an operator variability assessment, not an MRMC study for diagnostic improvement.

    6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Assessed

    This device is an in vitro diagnostic test, meaning its performance is inherently standalone in the sense that the test itself (reagents, membrane, etc.) produces the result. Human interpretation is required to read the pink-to-purple colored Sample Lines and the blue Control Line. However, there is no "algorithm" in the modern AI sense described. The "standalone" performance here refers to the device's ability to detect antigens in specimens without comparison to human interpretation of the same device output; rather, its output is compared to a gold standard (cell culture). The clinical study data presented (sensitivity and specificity) can be considered the standalone performance of the device as interpreted by an operator.

    7. The Type of Ground Truth Used

    The type of ground truth used is Cell Culture / DFA (Direct Fluorescent Antibody testing). This is a common and accepted laboratory reference method for influenza virus detection.

    8. The Sample Size for the Training Set

    The document does not explicitly mention a "training set" in the context of device development or algorithm training. Since this is an immunochromatographic assay and not an AI/ML-based device, there isn't a traditional "training set" as understood in machine learning. The clinical and analytical studies serve to validate the device's performance against established methods.

    9. How the Ground Truth for the Training Set Was Established

    As there is no traditional "training set" for an AI/ML algorithm, this question is not directly applicable. The "ground truth" for the performance evaluation (test sets) was established using Cell Culture/DFA, which are established laboratory techniques performed by trained personnel.

    K Number: K053126
    Manufacturer:
    Date Cleared: 2005-11-30 (23 days)
    Product Code:
    Regulation Number: 866.3328
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    The BinaxNOW® Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasopharyngeal swab and nasal wash/aspirate specimens. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. Negative test results should be confirmed by culture.

    Device Description

    The BinaxNOW® Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A & B nucleoprotein antigens in nasopharyngeal specimens. These antibodies and a control antibody are immobilized onto a membrane support as three distinct lines and combined with other reagents/pads to construct a test strip. This test strip is mounted inside a cardboard, book-shaped hinged test device.

    Swab specimens require a sample preparation step, in which the sample is eluted off the swab into elution solution or transport media. Nasal wash/aspirate samples require no preparation. Sample is added to the top of the test strip and the test device is closed. Test results are interpreted at 15 minutes based on the presence of pink-to-purple colored Sample Lines. The blue Control Line turns pink in a valid assay.

    AI/ML Overview

    Here's an analysis of the provided text regarding the BinaxNOW® Influenza A & B Test, focusing on acceptance criteria, study details, and data provenance:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the BinaxNOW® Influenza A & B Test are implicitly derived from its comparison to a clinical reference method (cell culture/DFA) and to its predicate devices (Binax NOW® Flu A Test and Binax NOW® Flu B Test). The study aimed to demonstrate acceptable sensitivity and specificity.

    Metric (vs. Culture/DFA)     | Acceptance Criteria (Implied)  | Reported Device Performance
    Influenza A - Sensitivity    | High relative to clinical need | 75% (3/4)
    Influenza A - Specificity    | High                           | 100% (110/110)
    Influenza B - Sensitivity    | High relative to clinical need | 50% (1/2)
    Influenza B - Specificity    | High                           | 100% (112/112)

    Metric (vs. NOW® Flu A Test) | Acceptance Criteria (Implied)  | Reported Device Performance
    Influenza A - Sensitivity    | High                           | 100%
    Influenza A - Specificity    | High                           | 96%

    Metric (vs. NOW® Flu B Test) | Acceptance Criteria (Implied)  | Reported Device Performance
    Influenza B - Sensitivity    | High                           | 93%
    Influenza B - Specificity    | High                           | 97%

    Note: The document does not explicitly state numerical acceptance criteria in the typical "must achieve X% sensitivity and Y% specificity" format. Instead, the performance is presented to demonstrate substantial equivalence to established methods and predicate devices. The clinical study against culture/DFA has very small numbers of positive cases, leading to wide confidence intervals and potentially lower apparent sensitivity. The studies against the predicate Binax NOW® Flu A and B tests show much stronger performance, suggesting the extended claim focuses on maintaining equivalence to those previously cleared devices.
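
    To make the note above concrete, the sketch below uses SciPy's exact (Clopper-Pearson) interval to show how wide the uncertainty is around estimates based on 3/4 and 1/2 positives. The interval method is an assumption for illustration; the summary does not state how its confidence intervals were computed.

    ```python
    # Illustration only (assumes SciPy is available): exact Clopper-Pearson 95%
    # intervals around 3/4 and 1/2 show how little information so few
    # culture-positive specimens provide.
    from scipy.stats import binomtest

    for label, k, n in [("Flu A sensitivity", 3, 4), ("Flu B sensitivity", 1, 2)]:
        ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
        print(f"{label}: {k}/{n} = {k / n:.0%}, exact 95% CI {ci.low:.0%}-{ci.high:.0%}")
        # roughly 19%-99% for 3/4 and 1%-99% for 1/2
    ```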

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Study (BinaxNOW® Influenza A & B Test Performance vs. Cell Culture / DFA):
      • Sample Size: 114 specimens (113 NP swab, 1 wash/aspirate).
      • Data Provenance: Prospective study conducted in 2004 outside the US. Specimens collected from children (<18 years) and adults (≥18 years) presenting with influenza-like symptoms.
    • Clinical Study (BinaxNOW® Influenza A & B Test Performance vs. Binax NOW® Flu A and Flu B Tests):
      • Sample Size: 306 retrospective frozen clinical samples for Flu A comparison; 303 retrospective frozen clinical samples for Flu B comparison.
      • Data Provenance: Retrospective frozen clinical samples. Collected from symptomatic patients at multiple physician offices, clinics, and hospitals in the Southern, Northeastern, and Midwestern regions of the United States, and one hospital in Sweden.
    • Clinical Study (Binax NOW® Flu A and Flu B Test Performance vs. Cell Culture - for predicate devices):
      • Sample Size: 373 prospective clinical samples.
      • Data Provenance: Multi-center prospective study conducted during the 2002 Flu season at physician offices and clinics in the Western and mid-Atlantic United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number or qualifications of experts used to establish the ground truth.

    • For the prospective study comparing the BinaxNOW® combined test to cell culture/DFA, "cell culture and/or DFA" serves as the reference standard (ground truth). It is assumed these reference methods were performed by qualified laboratory personnel, but no specifics are given.
    • For the retrospective study comparing the BinaxNOW® combined test to the individual NOW® Flu A and Flu B Tests, those individual tests are treated as the reference standard.
    • For the predicate device studies, "cell culture" served as the reference.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method for the test set. It mentions that "Test results are interpreted at 15 minutes based on the presence of pink-to-purple colored Sample Lines. The blue Control Line turns pink in a valid assay." This suggests a single interpretation per device, with no multi-reader adjudication described. For the analytical sensitivity study, 12 different operators interpreted devices to determine the LOD.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    No MRMC comparative effectiveness study was done. This device is an in vitro diagnostic (IVD) rapid immunoassay, not an AI-assisted diagnostic tool for interpretation by human readers. The output is a visual presence or absence of a line on a test strip.

    6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Assessed

    Yes, the performance data presented are for the device (the BinaxNOW® Influenza A & B Test) as a standalone diagnostic tool. It is a rapid immunoassay that outputs a visual result, which is then interpreted by a human, but the performance metrics provided (sensitivity, specificity) reflect the device's ability to detect the antigen itself, not the human interpreter's performance. The analytical studies (analytical sensitivity, reactivity testing, analytical specificity) are also standalone performance evaluations of the device.

    7. The Type of Ground Truth Used

    • Clinical Study (BinaxNOW® Influenza A & B Test Performance vs. Cell Culture / DFA): Cell Culture and/or Direct Fluorescent Antibody (DFA). This is a laboratory-based reference standard.
    • Clinical Study (BinaxNOW® Influenza A & B Test Performance vs. Binax NOW® Flu A and Flu B Tests): The individual Binax NOW® Flu A and Flu B Tests (predicate devices) were used as the reference standard.
    • Clinical Study (Binax NOW® Flu A and Flu B Test Performance vs. Cell Culture - for predicate devices): Cell Culture.

    8. The Sample Size for the Training Set

    The document does not explicitly mention a "training set" in the context of device development (e.g., machine learning training). As a rapid immunoassay, this device relies on biological interactions (antibody-antigen binding) rather than a trained algorithm. The various analytical and clinical studies serve to validate its performance.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no mention of a "training set" in the machine learning sense for this immunochromatographic device. The development and optimization of such assays would involve extensive in-house testing using characterized positive and negative samples, but these are not typically referred to as a "training set" in the regulatory context for IVDs.

    K Number: K041049
    Manufacturer:
    Date Cleared: 2004-08-10 (110 days)
    Product Code:
    Regulation Number: 866.3328
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    The BinaxNOW® Influenza A & B Test is an in vitro immunochromatographic assay for the qualitative detection of influenza A and B nucleoprotein antigens in nasopharyngeal (NP) swab and nasal wash/aspirate specimens. It is intended to aid in the rapid differential diagnosis of influenza A and B viral infections. Negative test results should be confirmed by culture.

    Device Description

    The BinaxNOW® Influenza A & B Test is an immunochromatographic membrane assay that uses highly sensitive monoclonal antibodies to detect influenza type A & B nucleoprotein antigens in nasopharyngeal specimens. These antibodies and a control antibody are immobilized onto a membrane support as three distinct lines and combined with other reagents/pads to construct a test strip. This test strip is mounted inside a cardboard, book-shaped hinged test device. Swab specimens require a sample preparation step, in which the sample is eluted off the swab into elution solution or transport media. Nasal wash/aspirate samples require no preparation. Sample is added to the top of the test strip and the test device is closed. Test results are interpreted at 15 minutes based on the presence or absence of pink-to-purple colored Sample Lines. The blue Control Line turns pink in a valid assay.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria in terms of numerical thresholds for sensitivity and specificity. Instead, it demonstrates performance by comparing the new BinaxNOW® Influenza A & B Test to existing predicate devices (Binax NOW® Flu A Test and Binax NOW® Flu B Test) and viral culture. The key indication of "acceptance" is the determination of "substantial equivalence" to the predicate devices by the FDA.

    Based on the performance data presented, here's a summary:

    Performance Metric | Target/Comparison for "Acceptance" | Reported Device Performance (BinaxNOW® Influenza A & B Test)
    Clinical Performance | Equivalent to individual NOW® Flu A and Flu B Tests | Vs. NOW® Flu A Test (for Influenza A): sensitivity 100%, specificity 96%. Vs. NOW® Flu B Test (for Influenza B): sensitivity 93%, specificity 97%.
    Clinical Performance (historical) | Compared to viral culture (historical data from original A & B tests, 2002 study) | Flu A sensitivity: 82% (nasal wash), 78% (NP swab). Flu B sensitivity: 71% (nasal wash), 58% (NP swab). Specificity: 92% to 97% (washes and swabs).
    Analytical Sensitivity | Equivalent to individual NOW® Flu A and Flu B Tests | LOD for Flu A/Beijing: 1.03 x 10^2 ng/ml; LOD for Flu B/Harbin: 6.05 x 10^1 ng/ml. "Cutoff" sample detection rates comparable to predicate devices (50% for Flu A; 46% vs. 10% for Flu B).
    Reactivity | Positive detection for common influenza A and B strains | Positive detection for 7 live influenza A strains and 5 live influenza B strains at various concentrations.
    Analytical Specificity / Cross-Reactivity | No cross-reactivity with common respiratory microorganisms | No cross-reactivity with 27 bacteria, 8 viruses, and 1 yeast.
    Interfering Substances | No interference with test interpretation | No interference with listed substances at specified concentrations (except 1% whole blood interfering with Flu A LOD negative samples).
    Transport Media | No impact on test performance | Media alone tested negative; media inoculated with LOD levels tested positive.
    Reproducibility | High agreement between runs, operators, and sites | 97% agreement with expected test results across multiple runs, operators, and 3 sites.
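
    For context on the reproducibility row above, an overall percent-agreement figure like 97% is simply the fraction of device readings that match the expected (known) result, pooled across runs, operators, and sites. The sketch below illustrates the arithmetic with a hypothetical, abbreviated panel; the actual panel composition and counts are not given in the summary.

    ```python
    # A minimal sketch of the percent-agreement arithmetic behind a reproducibility
    # figure: count readings that match the expected result, pooled over sites,
    # operators, and runs. The panel below is hypothetical.
    from collections import namedtuple

    Reading = namedtuple("Reading", ["site", "operator", "expected", "observed"])

    readings = [
        Reading("site 1", "op 1", "Flu A positive", "Flu A positive"),
        Reading("site 1", "op 2", "Flu B positive", "Flu B positive"),
        Reading("site 2", "op 1", "negative",       "negative"),
        Reading("site 2", "op 2", "Flu A positive", "negative"),   # one discordant read
        Reading("site 3", "op 1", "Flu B positive", "Flu B positive"),
        # ...in practice, many replicates per site/operator/concentration level
    ]

    agree = sum(r.observed == r.expected for r in readings)
    print(f"Overall agreement: {agree}/{len(readings)} = {agree / len(readings):.0%}")
    ```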

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Clinical Sample Comparison (BinaxNOW® A & B vs. individual NOW® A & B tests):

      • Influenza A comparison: 306 retrospective frozen clinical samples.
      • Influenza B comparison: 303 retrospective frozen clinical samples.
      • Data Provenance: Retrospective frozen clinical samples collected from symptomatic patients at multiple physician offices, clinics, and hospitals in the Southern, Northeastern, and Midwestern regions of the United States, and one hospital in Sweden.
    • Original Multi-site Prospective Clinical Study (comparing individual NOW® Flu A & B Tests to viral culture, 2002):

      • 191 nasal wash specimens
      • 182 nasopharyngeal (NP) swab specimens
      • Data Provenance: Multi-center prospective study during the 2002 flu season at physician offices and clinics located in the United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number or qualifications of experts used to establish the ground truth for the clinical comparison directly assessing the BinaxNOW® Influenza A & B Test.

    • For the comparison against the predicate devices: The predicate devices (individual NOW® Flu A and NOW® Flu B Tests) were used as the reference standard (ground truth). The ground truth for these predicate devices themselves would have been established historically (likely via viral culture).

    • For the historical 2002 multi-site prospective clinical study: The ground truth was viral culture, which is considered an objective laboratory method, not reliant on expert interpretation of the rapid test results.

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method for the clinical test sets in terms of resolving discrepancies between readers or between the device and ground truth. The comparisons are presented as direct measures against a reference standard (predicate devices or viral culture).

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    • No, an MRMC comparative effectiveness study was not done as described in the context of assistance from AI.
    • This device is a rapid diagnostic test (immunochromatographic assay), not an AI-powered diagnostic imaging or interpretation system. The "readers" for the analytical sensitivity experiment were "operators" interpreting the device results, not expert diagnosticians being assisted by AI.

    6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Assessed

    • Yes, standalone performance was assessed in the sense applicable to a rapid diagnostic test. The results (sensitivity, specificity) presented for the BinaxNOW® Influenza A & B Test were generated by operators reading the test strip directly, with no interpretive assistance beyond the basic instructions for reading the pink-to-purple lines.
    • The "Analytical Sensitivity Comparison" section involved 12 different operators interpreting devices for LOD and cutoff levels. This is a form of standalone performance evaluation for the test itself.

    7. The Type of Ground Truth Used

    • Clinical Sample Comparison (BinaxNOW® A & B vs. individual NOW® A & B tests): The ground truth was the result from the predicate devices (Binax NOW® Flu A Test and Binax NOW® Flu B Test). This implies that the predicate devices were considered the accepted standard for influenza detection in these samples.
    • Original Multi-site Prospective Clinical Study (of individual NOW® Flu A & B Tests): The ground truth was viral culture. Viral culture is generally considered a gold standard for influenza diagnosis.

    8. The Sample Size for the Training Set

    The document does not specify a training set in the context of machine learning or algorithm development. This device is an immunochromatographic assay, which is a chemical and biological test, not typically "trained" in the way an AI algorithm is. The "development" of the test would involve optimization of its biological components and chemical reactions.

    9. How the Ground Truth for the Training Set Was Established

    As there is no mention of a training set for an algorithm, this question is not applicable. The development of the immunochromatographic assay relies on chemical and biological principles and optimization, not on a "training set" with established ground truth in the AI sense.
