Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K120049
    Date Cleared
    2012-03-23

    (77 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Reference Devices: K053146, K092698

    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasopharyngeal wash/aspirates of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Flu A+B test is a chromatographic assay to qualitatively detect influenza A and B viral antigens in samples processed from respiratory specimens. The processed specimen is added to the test device where influenza A or influenza B viral antigens bind to anti-influenza antibodies conjugated to detector particles on the A+B test strip. The antigen-conjugate complex migrates across the test strip to the reaction area and is captured by an antibody line on the membrane. Results are interpreted by the BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip, and applies specific algorithms to determine the presence or absence of any target analyte(s). A liquid crystal display (LCD) on the instrument communicates the results to the operator.
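The reader's actual decision algorithm is proprietary and not described in the document; the sketch below only illustrates the general shape of a threshold-based, two-analyte lateral-flow readout. All function names, signal conventions, and cutoff values are hypothetical.

```python
# Hypothetical sketch of a reflectance-based lateral-flow readout.
# All thresholds and signal conventions are illustrative, not BD's algorithm.

def interpret_strip(control_signal, flu_a_signal, flu_b_signal,
                    control_cutoff=0.20, analyte_cutoff=0.10):
    """Classify a Flu A+B test strip from three line-signal intensities.

    Signals are assumed to be normalized reflectance drops at each line
    position (higher = darker line). Returns one of: 'invalid',
    'negative', 'flu A positive', 'flu B positive', 'flu A+B positive'.
    """
    # Without a sufficiently strong control line, the run is invalid.
    if control_signal < control_cutoff:
        return "invalid"
    a = flu_a_signal >= analyte_cutoff
    b = flu_b_signal >= analyte_cutoff
    if a and b:
        return "flu A+B positive"
    if a:
        return "flu A positive"
    if b:
        return "flu B positive"
    return "negative"
```

In a real reader the cutoffs would be set during device development from characterized samples, which is consistent with the document's later remark that algorithm development happened internally rather than on a separate machine-learning training set.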

    AI/ML Overview
    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" as predefined thresholds. It does, however, present the performance characteristics of the BD Veritor™ System for Rapid Detection of Flu A+B observed in clinical studies; the performance columns reflect the PPA (positive percent agreement) and NPA (negative percent agreement) values achieved by the device.

    | Performance Metric | Acceptance Criteria (Implicit) | Reported Performance (Prospective Data) | Reported Performance (Retrospective Data) |
    |---|---|---|---|
    | Influenza A PPA | Not explicitly stated; high PPA/NPA generally expected for diagnostic devices | 83.0% (95% CI 78.0%–87.0%) | 92.1% (95% CI 82.7%–96.6%) |
    | Influenza A NPA | Not explicitly stated; high PPA/NPA generally expected for diagnostic devices | 97.6% (95% CI 96.6%–98.3%) | 98.9% (95% CI 96.2%–99.7%) |
    | Influenza B PPA | Not explicitly stated; high PPA/NPA generally expected for diagnostic devices | 81.3% (95% CI 72.1%–88.0%) | 74.0% (95% CI 58.9%–85.4%) |
    | Influenza B NPA | Not explicitly stated; high PPA/NPA generally expected for diagnostic devices | 99.8% (95% CI 99.4%–99.9%) | 99.0% (95% CI 96.6%–99.7%) |

    Note: While specific acceptance criteria are not called out, the FDA's clearance indicates that the observed performance was deemed acceptable for the intended use.
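For readers reproducing metrics of this kind: PPA and NPA are computed from a 2×2 table against the reference method, and 510(k) summaries typically report 95% score (Wilson) confidence intervals. A minimal sketch, with illustrative counts rather than the study's actual 2×2 data:

```python
# PPA/NPA with 95% Wilson score confidence intervals, as commonly reported
# in 510(k) summaries. The counts below are illustrative, not the study's.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def ppa_npa(tp, fn, tn, fp):
    """Positive/negative percent agreement vs. the reference method (PCR)."""
    ppa = tp / (tp + fn)          # device+ among reference-positives
    npa = tn / (tn + fp)          # device- among reference-negatives
    return ppa, wilson_ci(tp, tp + fn), npa, wilson_ci(tn, tn + fp)

# Illustrative 2x2: 83/100 PCR-positives detected; 970/1000 negatives agree.
ppa, ppa_ci, npa, npa_ci = ppa_npa(tp=83, fn=17, tn=970, fp=30)
```

With these made-up counts the sketch yields a PPA of 83.0% with an interval in the same ballpark as the prospective Influenza A row, which is why score intervals of this form are a reasonable guess for how the reported CIs were derived.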

    2. Sample Size Used for the Test Set and Data Provenance:
    • Prospective Test Set:
      • Sample Size: 1471 evaluable clinical specimens (from an initial 1502 collected)
      • Data Provenance: Multi-center clinical studies conducted at two U.S. trial sites and one Hong Kong trial site during the 2010-2011 respiratory season.
    • Retrospective Test Set:
      • Sample Size: 249 evaluable retrospective specimens (from an initial 263 collected)
      • Data Provenance: Retrospective specimens, likely from banked samples, but specific origin beyond "retrospective" is not detailed.
    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    The document does not mention the use of human experts to establish the ground truth for the clinical test sets. The ground truth was established by an "FDA-cleared influenza A and B molecular assay (PCR)."

    4. Adjudication Method for the Test Set:

    Not applicable, as the ground truth was established by a laboratory assay (PCR) and not by expert review requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No, an MRMC comparative effectiveness study was not conducted. This study focuses on the standalone performance of the BD Veritor™ System compared to PCR, not on how human readers' performance improves with or without the device.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance):

    Yes, the study primarily assessed the standalone performance of the BD Veritor™ System for Rapid Detection of Flu A+B test. The device uses an "opto-electronic reader" to interpret results and apply algorithms, reporting a positive, negative, or invalid result on an LCD screen. This means the interpretation is automated by the device, making it a standalone assessment.

    7. The Type of Ground Truth Used:

    The ground truth used for the clinical studies (both prospective and retrospective) was an FDA-cleared Influenza A and B molecular assay (PCR).

    8. The Sample Size for the Training Set:

    The document does not explicitly state a separate training set size for the device's algorithms. The performance data presented is for the evaluation of the final device. For in-vitro diagnostic devices like this, the "training" (development and optimization) often happens with internal validation sets and analytical studies (like LOD, specificity, cross-reactivity) rather than a distinctly separated "training set" in the machine learning sense, before being tested on independent clinical cohorts.

    9. How the Ground Truth for the Training Set Was Established:

    As a distinct "training set" is not explicitly mentioned for algorithmic development in the provided document, the method for establishing its ground truth is also not described. However, for any internal development and validation, the ground truth would typically be established using confirmed reference methods, similar to the PCR used for the clinical evaluation. Analytical studies (LOD, specificity, cross-reactivity) involved known concentrations of viral strains and specific microorganisms, which serve as a form of ground truth for characterizing the device's analytical capabilities.


    K Number
    K112277
    Date Cleared
    2011-10-28

    (80 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Reference Devices: K053146, K092698

    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasopharyngeal and nasal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Flu A+B test is a chromatographic assay to qualitatively detect influenza A and B viral antigens in samples processed from respiratory specimens. The processed specimen is added to the test device where influenza A or influenza B viral antigens bind to anti-influenza antibodies conjugated to detector particles on the A+B test strip. The antigen-conjugate complex migrates across the test strip to the reaction area and is captured by an antibody line on the membrane. Results are interpreted by the BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip, and applies specific algorithms to determine the presence or absence of any target analyte(s). A liquid crystal display (LCD) on the instrument communicates the results to the operator.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance (Clinical Performance - All Swabs)

    | Criterion (Implicit, based on study results presented) | Influenza A PPA | Influenza A NPA | Influenza B PPA | Influenza B NPA |
    |---|---|---|---|---|
    | Clinical PPA (United States) | 78.7% (95% CI 71.6%–84.4%) | N/A | 74.3% (95% CI 65.0%–81.8%) | N/A |
    | Clinical NPA (United States) | N/A | 97.8% (95% CI 95.7%–98.9%) | N/A | 99.5% (95% CI 98.3%–99.9%) |
    | Clinical PPA (Japan) | 94.4% (95% CI 86.4%–97.8%) | N/A | 91.4% (95% CI 82.5%–96.0%) | N/A |
    | Clinical NPA (Japan) | N/A | 96.7% (95% CI 92.4%–98.6%) | N/A | 94.7% (95% CI 89.9%–97.3%) |

    Note: The document doesn't explicitly state numerical acceptance thresholds for PPA and NPA; rather, it presents the clinical study results as evidence of performance. The implicit acceptance criterion is that these performance characteristics are adequate for the intended use and compare favorably to the predicate device, whose own performance is not detailed here.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Clinical Performance: A total of 736 prospective specimens (515 in the U.S. and 221 in Japan).
    • Data Provenance: Prospective, collected at five U.S. trial sites and eight Japan trial sites during the 2010-2011 respiratory season.

    3. Number of Experts Used to Establish Ground Truth and Qualifications of Experts

    • The document does not specify the number of experts used or their qualifications for establishing the ground truth. The ground truth was established by Polymerase Chain Reaction (PCR), which is a molecular assay, not an expert panel.

    4. Adjudication Method for the Test Set

    • The document does not describe an adjudication method for the test set in the context of human interpretation. The comparison is between the device's results and a reference PCR method. However, for discordant results (PCR positive, BD Veritor negative), a second swab specimen from the same patient (the reference method specimen) was tested with the BD Veritor assay to investigate the discrepancy. This isn't a traditional adjudication with human readers, but a re-testing mechanism.
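The re-testing mechanism described above can be sketched as a simple rule. The function name, argument names, and result labels here are hypothetical, not the study protocol's actual wording:

```python
# Sketch of the discordant-investigation rule described above: when PCR is
# positive and the device is negative, the paired reference-method swab is
# re-tested on the device. Names are hypothetical; not BD's protocol code.

def investigate_discordant(device_result, pcr_result, retest_reference_swab):
    """Return the re-test outcome for a PCR-positive/device-negative
    discordant specimen, or None when no re-test is triggered.

    retest_reference_swab: callable that runs the assay on the second
    (reference-method) swab and returns 'positive' or 'negative'.
    """
    if pcr_result == "positive" and device_result == "negative":
        return retest_reference_swab()
    return None
```

Note the asymmetry: only PCR-positive/device-negative pairs trigger the re-test, which matches the document's description of this as an investigation of discrepancies rather than a result-changing adjudication.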

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This device is an automated diagnostic test (an optical reader interprets the test strip), not an AI system designed to assist human readers. Thus, there is no "human readers improve with AI vs without AI assistance" effect size.

    6. If a Standalone Study Was Done

    • Yes, a standalone study was performed. The clinical performance data (PPA, NPA) directly reflects the algorithm's performance (the BD Veritor system's interpretation) without human intervention in the result determination once the sample is loaded.

    7. The Type of Ground Truth Used

    • The ground truth used was an FDA-cleared Influenza A and B molecular assay (PCR).

    8. The Sample Size for the Training Set

    • The document does not explicitly separate or mention a training set sample size for the device's algorithms. The clinical study samples (736 specimens) are identified as the test set for performance evaluation. For a point-of-care immunoassay with an optical reader, the algorithms are typically developed and validated internally during the device's development phase, rather than through external "training sets" in the typical machine learning sense for image analysis.

    9. How the Ground Truth for the Training Set Was Established

    • As a training set is not explicitly referred to in the document, how its ground truth was established is not detailed. However, for diagnostic devices, the development of algorithms would typically involve internal validation using characterized samples with known influenza status, likely determined by methods such as viral culture or validated molecular assays.
