Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K121797
    Date Cleared
    2012-09-07

    (80 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name :

    BD VERITOR(TM) SYSTEM FOR RAPID DETECTION OF FLU A+B

    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasopharyngeal wash/aspirate and swab samples in transport media from symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Flu A+B test is a chromatographic assay to qualitatively detect influenza A and B viral antigens in samples processed from respiratory specimens. The processed specimen is added to the test device, where influenza A or influenza B viral antigens bind to anti-influenza antibodies conjugated to detector particles on the A+B test strip. The antigen-conjugate complex migrates across the test strip to the reaction area and is captured by an antibody line on the membrane. Results are interpreted by the BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip and applies specific algorithms to determine the presence or absence of any target analyte(s). A liquid crystal display (LCD) on the instrument communicates the results to the operator.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the BD Veritor™ System for Rapid Detection of Flu A+B, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't state explicit numerical acceptance targets the way a submission for, say, an imaging AI device might. Instead, performance metrics from the clinical studies are presented as the primary indicators of the device's capability, with an FDA-cleared molecular assay (PCR) serving as the de facto gold standard against which the device's accuracy is measured. Implicitly, these reported performance values are what was deemed acceptable for FDA clearance.

    Reported device performance (NP swab in transport media, U.S. and Japan combined). For each metric, the implicit acceptance criterion was adequate agreement with an FDA-cleared molecular assay (PCR).

    Influenza A:
    • Positive Percent Agreement (PPA) / Sensitivity: 81.3% (95% CI: 70.0%, 88.9%)
    • Negative Percent Agreement (NPA) / Specificity: 97.4% (95% CI: 94.4%, 98.8%)

    Influenza B:
    • Positive Percent Agreement (PPA) / Sensitivity: 85.6% (95% CI: 76.8%, 91.4%)
    • Negative Percent Agreement (NPA) / Specificity: 99.0% (95% CI: 96.5%, 99.7%)
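    Each point estimate above is paired with a 95% confidence interval. As a rough illustration of how such an interval can be computed for a percent-agreement statistic, here is a Wilson score interval sketch in Python; the counts are hypothetical, and the document does not say which interval method the sponsor actually used:

```python
from math import sqrt

def agreement_ci(agree, total, z=1.96):
    """Percent agreement with a Wilson score confidence interval.

    agree: number of concordant calls (e.g. device-positive among PCR-positive)
    total: number of reference-positive (or reference-negative) samples
    z:     normal quantile; 1.96 gives a 95% interval
    """
    p = agree / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return p, center - half, center + half

# Hypothetical counts, not taken from the study:
ppa, low, high = agreement_ci(65, 80)
print(f"PPA {ppa:.1%} (95% CI: {low:.1%}, {high:.1%})")
```

    Note that exact (Clopper-Pearson) intervals are also common in IVD submissions and give slightly different bounds.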

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size:
      • Combined U.S. and Japan (NP Swab in Transport Media): 292 samples
      • U.S. Only (NP Swab in Transport Media): 217 prospective specimens (201 evaluable after exclusions)
      • Japan Only (NP Swab in Transport Media): 93 prospective specimens (91 evaluable after exclusions)
    • Data Provenance:
      • Countries of Origin: United States (six clinical trial sites) and Japan (five clinical sites).
      • Retrospective or Prospective: The clinical studies were prospective. This is explicitly stated: "A total of 217 prospective specimens were evaluated..." for the U.S. study, and "A total of 93 prospective specimens were evaluated..." for the Japan study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • The document does not explicitly state the number of experts used to establish the ground truth or their qualifications.
    • The ground truth was established by an "FDA-cleared influenza A and B molecular assay (PCR)." This means the ground truth was based on the results of a validated laboratory test, not on human expert consensus or interpretation of images. Therefore, the concept of "experts establishing ground truth" in the traditional sense (e.g., radiologists, pathologists) is not directly applicable here. The expertise lies in the validated PCR method.

    4. Adjudication Method for the Test Set

    • The document does not describe an adjudication method for the test set.
    • Since the ground truth was derived from an FDA-cleared molecular assay (PCR), human adjudication as typically understood (e.g., 2+1, 3+1 consensus reads) would not be part of establishing the ground truth for this type of device. The PCR result is the definitive reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC comparative effectiveness study was not performed.
    • This study evaluates a standalone diagnostic device (an in-vitro diagnostic test) directly against a reference laboratory method (PCR), not a device designed to assist human readers or interpreters. Therefore, the concept of human readers improving with or without AI assistance is not applicable to this submission.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone performance study was done.
    • The BD Veritor™ System for Rapid Detection of Flu A+B test is interpreted by the "BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip, and applies specific algorithms to determine the presence or absence of any target analyte(s)." The performance data (PPA, NPA) directly reflects the output of this automated system as compared to the PCR reference, making it a standalone evaluation.
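    The document describes the Reader's interpretation only at a high level (reflectance-derived line intensities plus "specific algorithms"). Purely as an illustration of threshold-style line classification, here is a toy sketch; every name and cutoff below is hypothetical and is not BD's actual algorithm:

```python
def interpret_strip(control, flu_a, flu_b, cutoff=0.15):
    """Toy interpretation of normalized line-signal intensities.

    control/flu_a/flu_b are hypothetical reflectance-derived intensities
    in [0, 1]; cutoff is an invented positivity threshold.
    """
    if control < cutoff:
        # No valid control line: the test cannot be interpreted.
        return {"valid": False}
    return {
        "valid": True,
        "flu_a_positive": flu_a >= cutoff,
        "flu_b_positive": flu_b >= cutoff,
    }

print(interpret_strip(0.8, 0.4, 0.05))
# {'valid': True, 'flu_a_positive': True, 'flu_b_positive': False}
```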

    7. The Type of Ground Truth Used

    • The ground truth used was an FDA-cleared influenza A and B molecular assay (PCR). This is a laboratory-based, objective measure, often considered a highly accurate gold standard for viral detection.

    8. The Sample Size for the Training Set

    • The document does not explicitly state a separate "training set" sample size for the device's algorithm.
    • For in-vitro diagnostics (IVDs) like this, algorithm development typically involves laboratory-based studies (e.g., analytical sensitivity, specificity, cross-reactivity, interfering substances, reproducibility) using characterized samples and controlled experiments, rather than a distinct "training set" of patient data in the way a machine learning algorithm might be trained. The summary provides details on:
      • Analytical Sensitivity (LOD): Used 7 influenza strains (4 Flu A, 3 Flu B) tested with 20-60 replicates each.
      • Analytical Specificity: 52 influenza viral strains tested in triplicate.
      • Cross Reactivity: 51 microorganisms tested in triplicate.
      • Reproducibility: 30 simulated influenza A or B samples tested at three sites.
    • The "clinical performance data" presented are effectively the test set data, used to evaluate the final, developed device.

    9. How the Ground Truth for the Training Set Was Established

    • As noted above, the concept of a "training set" in the machine learning sense is not explicitly presented for this IVD. Instead, the device's detection algorithms and parameters would have been developed and refined using controlled laboratory studies.
      • For analytical sensitivity (LOD), the ground truth for determining the lowest detectable concentration for each strain would have been based on establishing known concentrations of the target virus.
      • For analytical specificity and cross-reactivity, the ground truth would have been based on the known presence or absence of specific influenza strains or other microorganisms in the tested samples.
    • These rigorous laboratory studies ensure the underlying scientific validity and robust performance of the detection mechanism and its associated algorithm.

    K Number
    K112277
    Date Cleared
    2011-10-28

    (80 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    BD VERITOR(TM) SYSTEM FOR RAPID DETECTION OF FLU A+B

    Intended Use

    The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasopharyngeal and nasal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.

    Device Description

    The BD Flu A+B test is a chromatographic assay to qualitatively detect influenza A and B viral antigens in samples processed from respiratory specimens. The processed specimen is added to the test device, where influenza A or influenza B viral antigens bind to anti-influenza antibodies conjugated to detector particles on the A+B test strip. The antigen-conjugate complex migrates across the test strip to the reaction area and is captured by an antibody line on the membrane. Results are interpreted by the BD Veritor™ System Reader, a portable electronic device which uses a reflectance-based measurement method to evaluate the line signal intensities on the assay test strip and applies specific algorithms to determine the presence or absence of any target analyte(s). A liquid crystal display (LCD) on the instrument communicates the results to the operator.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance (Clinical Performance - All Swabs)

    Acceptance criteria are implicit in the study results presented. Reported clinical performance (all swabs):

    United States:
    • Influenza A: PPA 78.7% (95% C.I. 71.6%, 84.4%); NPA 97.8% (95% C.I. 95.7%, 98.9%)
    • Influenza B: PPA 74.3% (95% C.I. 65%, 81.8%); NPA 99.5% (95% C.I. 98.3%, 99.9%)

    Japan:
    • Influenza A: PPA 94.4% (95% C.I. 86.4%, 97.8%); NPA 96.7% (95% C.I. 92.4%, 98.6%)
    • Influenza B: PPA 91.4% (95% C.I. 82.5%, 96%); NPA 94.7% (95% C.I. 89.9%, 97.3%)

    Note: The document does not state explicit numerical acceptance thresholds for PPA and NPA; it presents the clinical study results as the evidence of performance. The implicit acceptance is that these performance characteristics are adequate for the intended use and compare favorably to the predicate device, whose own performance is not detailed here.
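    PPA and NPA fall out of a simple cross-tabulation of device calls against the PCR reference. A minimal sketch with made-up paired results (not the study data):

```python
def percent_agreement(device, reference):
    """PPA and NPA of device calls vs. a reference method.

    device, reference: equal-length lists of booleans (True = positive).
    """
    ref_pos = [d for d, r in zip(device, reference) if r]
    ref_neg = [d for d, r in zip(device, reference) if not r]
    ppa = sum(ref_pos) / len(ref_pos)                 # device+ among reference+
    npa = sum(not d for d in ref_neg) / len(ref_neg)  # device- among reference-
    return ppa, npa

# Made-up paired calls for illustration only:
device    = [True, True, False, False, True, False]
reference = [True, True, True,  False, False, False]
ppa, npa = percent_agreement(device, reference)
print(f"PPA {ppa:.1%}, NPA {npa:.1%}")  # PPA 66.7%, NPA 66.7%
```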

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Clinical Performance: A total of 736 prospective specimens (515 in the U.S. and 221 in Japan).
    • Data Provenance: Prospective, collected at five U.S. trial sites and eight Japan trial sites during the 2010-2011 respiratory season.

    3. Number of Experts Used to Establish Ground Truth and Qualifications of Experts

    • The document does not specify the number of experts used or their qualifications for establishing the ground truth. The ground truth was established by Polymerase Chain Reaction (PCR), which is a molecular assay, not an expert panel.

    4. Adjudication Method for the Test Set

    • The document does not describe an adjudication method for the test set in the context of human interpretation. The comparison is between the device's results and a reference PCR method. However, for discordant results (PCR positive, BD Veritor negative), a second swab specimen from the same patient (the reference method specimen) was tested with the BD Veritor assay to investigate the discrepancy. This isn't a traditional adjudication with human readers, but a re-testing mechanism.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This device is an automated diagnostic test (an optical reader interprets the test strip), not an AI system designed to assist human readers. Thus, there is no "human readers improve with AI vs without AI assistance" effect size.

    6. If a Standalone Study Was Done

    • Yes, a standalone study was performed. The clinical performance data (PPA, NPA) directly reflects the algorithm's performance (the BD Veritor system's interpretation) without human intervention in the result determination once the sample is loaded.

    7. The Type of Ground Truth Used

    • The ground truth used was an FDA-cleared Influenza A and B molecular assay (PCR).

    8. The Sample Size for the Training Set

    • The document does not explicitly separate or mention a training set sample size for the device's algorithms. The clinical study samples (736 specimens) are identified as the test set for performance evaluation. For a point-of-care immunoassay with an optical reader, the algorithms are typically developed and validated internally during the device's development phase, rather than through external "training sets" in the typical machine learning sense for image analysis.

    9. How the Ground Truth for the Training Set Was Established

    • As a training set is not explicitly referred to in the document, how its ground truth was established is not detailed. However, for diagnostic devices, the development of algorithms would typically involve internal validation using characterized samples with known influenza status, likely determined by methods such as viral culture or validated molecular assays.
