510(k) Data Aggregation

    K Number: K121797
    Date Cleared: 2012-09-07 (80 days)
    Product Code:
    Regulation Number: 866.3328
    Reference & Predicate Devices: N/A
    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasopharyngeal wash/aspirate and swab samples in transport media from symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Flu A+B test is a chromatographic assay to qualitatively detect influenza A and B viral antigens in samples processed from respiratory specimens. The processed specimen is added to the test device, where influenza A or influenza B viral antigens bind to anti-influenza antibodies conjugated to detector particles on the A+B test strip. The antigen-conjugate complex migrates across the test strip to the reaction area and is captured by an antibody line on the membrane. Results are interpreted by the BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip and applies specific algorithms to determine the presence or absence of any target analyte(s). A liquid crystal display (LCD) on the instrument communicates the results to the operator.
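
    To make the reader's decision step more concrete, here is a minimal sketch of a threshold-based interpretation of the three line signals (control, Flu A, Flu B). The cutoff values, function name, and result labels are hypothetical illustrations; the actual Veritor Reader algorithms are proprietary and not described in the document.

    ```python
    # Minimal sketch of a reflectance-based lateral-flow readout. The cutoff
    # values below are hypothetical; the actual BD Veritor Reader algorithm is
    # not disclosed in the 510(k) summary.

    CONTROL_CUTOFF = 50.0   # minimum control-line signal for a valid test (assumed)
    ANALYTE_CUTOFF = 20.0   # minimum analyte-line signal to call a positive (assumed)

    def interpret_strip(control: float, flu_a: float, flu_b: float) -> str:
        """Return a qualitative result from three line-signal intensities."""
        if control < CONTROL_CUTOFF:
            return "INVALID"  # no control line: the test cannot be interpreted
        flu_a_pos = flu_a >= ANALYTE_CUTOFF
        flu_b_pos = flu_b >= ANALYTE_CUTOFF
        if flu_a_pos and flu_b_pos:
            return "FLU A POSITIVE / FLU B POSITIVE"
        if flu_a_pos:
            return "FLU A POSITIVE"
        if flu_b_pos:
            return "FLU B POSITIVE"
        return "NEGATIVE"

    print(interpret_strip(control=120.0, flu_a=35.0, flu_b=5.0))  # -> FLU A POSITIVE
    ```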

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the BD Veritor™ System for Rapid Detection of Flu A+B, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not state explicit numerical acceptance criteria in the way submissions for imaging AI devices often do. Instead, the clinical performance metrics below serve as the primary indicators of the device's capability, with an FDA-cleared molecular assay (PCR) as the reference method and de facto gold standard against which accuracy is measured. Implicitly, these reported performance values were deemed acceptable for FDA clearance.

    | Metric | Acceptance Criteria (Implicit from Context) | Reported Device Performance (NP Swab in Transport Media - U.S. and Japan Combined) |
    |---|---|---|
    | Influenza A: Positive Percent Agreement (PPA) / Sensitivity | Adequate agreement with FDA-cleared molecular assay (PCR) | 81.3% (95% CI: 70.0%, 88.9%) |
    | Influenza A: Negative Percent Agreement (NPA) / Specificity | Adequate agreement with FDA-cleared molecular assay (PCR) | 97.4% (95% CI: 94.4%, 98.8%) |
    | Influenza B: Positive Percent Agreement (PPA) / Sensitivity | Adequate agreement with FDA-cleared molecular assay (PCR) | 85.6% (95% CI: 76.8%, 91.4%) |
    | Influenza B: Negative Percent Agreement (NPA) / Specificity | Adequate agreement with FDA-cleared molecular assay (PCR) | 99.0% (95% CI: 96.5%, 99.7%) |
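
    For context on how such figures are derived, the sketch below computes PPA and NPA from a 2x2 agreement table against the PCR reference, with a Wilson score confidence interval. The counts are hypothetical (the summary reports only percentages and intervals, not the underlying 2x2 counts), and the Wilson method is one common choice; the document does not state which interval method was used.

    ```python
    import math

    def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score confidence interval (default ~95%) for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # Hypothetical 2x2 agreement counts against the PCR reference (not from the document).
    tp, fn = 40, 10    # device positive / negative among PCR-positive specimens
    tn, fp = 180, 5    # device negative / positive among PCR-negative specimens

    ppa = tp / (tp + fn)   # positive percent agreement ("sensitivity" vs. PCR)
    npa = tn / (tn + fp)   # negative percent agreement ("specificity" vs. PCR)

    lo, hi = wilson_ci(tp, tp + fn)
    print(f"PPA = {ppa:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
    lo, hi = wilson_ci(tn, tn + fp)
    print(f"NPA = {npa:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
    ```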

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size:
      • Combined U.S. and Japan (NP Swab in Transport Media): 292 samples (the sum of the evaluable U.S. and Japan specimens below; see the consistency check after this list)
      • U.S. Only (NP Swab in Transport Media): 217 prospective specimens (201 evaluable after exclusions)
      • Japan Only (NP Swab in Transport Media): 93 prospective specimens (91 evaluable after exclusions)
    • Data Provenance:
      • Countries of Origin: United States (six clinical trial sites) and Japan (five clinical sites).
      • Retrospective or Prospective: The clinical studies were prospective. This is explicitly stated: "A total of 217 prospective specimens were evaluated..." for the U.S. study, and "A total of 93 prospective specimens were evaluated..." for the Japan study.
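
    As a quick sanity check on the specimen accounting above, the snippet below confirms that the 292 combined evaluable samples equal the sum of the evaluable U.S. (201) and Japan (91) specimens and reports the number excluded at each site. The variable names are illustrative only.

    ```python
    # Consistency check on the evaluable-specimen accounting reported above.
    us_enrolled, us_evaluable = 217, 201
    japan_enrolled, japan_evaluable = 93, 91
    combined_evaluable = 292  # combined U.S. + Japan figure from the summary

    assert us_evaluable + japan_evaluable == combined_evaluable
    print(f"Excluded: U.S. = {us_enrolled - us_evaluable}, Japan = {japan_enrolled - japan_evaluable}")
    ```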

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • The document does not explicitly state the number of experts used to establish the ground truth or their qualifications.
    • The ground truth was established by an "FDA-cleared influenza A and B molecular assay (PCR)." This means the ground truth was based on the results of a validated laboratory test, not on human expert consensus or interpretation of images. Therefore, the concept of "experts establishing ground truth" in the traditional sense (e.g., radiologists, pathologists) is not directly applicable here. The expertise lies in the validated PCR method.

    4. Adjudication Method for the Test Set

    • The document does not describe an adjudication method for the test set.
    • Since the ground truth was derived from an FDA-cleared molecular assay (PCR), human adjudication as typically understood (e.g., 2+1, 3+1 consensus reads) would not be part of establishing the ground truth for this type of device. The PCR result is the definitive reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC comparative effectiveness study was not performed.
    • This study evaluates a standalone diagnostic device (an in-vitro diagnostic test) directly against a reference laboratory method (PCR), not a device designed to assist human readers or interpreters. Therefore, the concept of human readers improving with or without AI assistance is not applicable to this submission.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone performance study was done.
    • The BD Veritor™ System for Rapid Detection of Flu A+B test is interpreted by the "BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip, and applies specific algorithms to determine the presence or absence of any target analyte(s)." The performance data (PPA, NPA) directly reflects the output of this automated system as compared to the PCR reference, making it a standalone evaluation.

    7. The Type of Ground Truth Used

    • The ground truth used was an FDA-cleared influenza A and B molecular assay (PCR). This is a laboratory-based, objective measure, often considered a highly accurate gold standard for viral detection.

    8. The Sample Size for the Training Set

    • The document does not explicitly state a separate "training set" sample size for the device's algorithm.
    • For in-vitro diagnostics (IVDs) like this, algorithm development typically involves laboratory-based studies (e.g., analytical sensitivity, specificity, cross-reactivity, interfering substances, reproducibility) using characterized samples and controlled experiments, rather than a distinct "training set" of patient data in the way a machine learning algorithm might be trained. The summary provides details on:
      • Analytical Sensitivity (LOD): 7 influenza strains (4 Flu A, 3 Flu B), each tested with 20-60 replicates (see the LOD sketch after this list).
      • Analytical Specificity: 52 influenza viral strains tested in triplicate.
      • Cross Reactivity: 51 microorganisms tested in triplicate.
      • Reproducibility: 30 simulated influenza A or B samples tested at three sites.
    • The "clinical performance data" presented are effectively the test set data, used to evaluate the final, developed device.

    9. How the Ground Truth for the Training Set Was Established

    • As noted above, the concept of a "training set" in the machine learning sense is not explicitly presented for this IVD. Instead, the device's detection algorithms and parameters would have been developed and refined using controlled laboratory studies.
      • For analytical sensitivity (LOD), the ground truth for determining the lowest detectable concentration for each strain would have been based on establishing known concentrations of the target virus.
      • For analytical specificity and cross-reactivity, the ground truth would have been based on the known presence or absence of specific influenza strains or other microorganisms in the tested samples.
    • These laboratory studies are intended to establish the scientific validity and performance of the detection mechanism and its associated algorithm.