510(k) Data Aggregation
(238 days)
The cobas® SARS-CoV-2 & Influenza A/B Nucleic acid test for use on the cobas® Liat® System (cobas® SARS-CoV-2 & Influenza A/B) is an automated rapid multiplex real-time, reverse transcriptase polymerase chain reaction (RT-PCR) test intended for the simultaneous qualitative detection and differentiation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), influenza A, and influenza B virus nucleic acid in nasopharyngeal swab (NPS) and anterior nasal swab (ANS) specimens from individuals with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory tract infection due to SARS-CoV-2 and influenza can be similar.
cobas® SARS-CoV-2 & Influenza A/B is intended for use as an aid in the differential diagnosis of SARS-CoV-2, influenza A, and/or influenza B infection if used in conjunction with other clinical and epidemiological information, and laboratory findings. SARS-CoV-2, influenza A and influenza B viral nucleic acid are generally detectable in NPS and ANS specimens during the acute phase of infection.
Positive results do not rule out co-infection with other organisms. The agent(s) detected by the cobas SARS-CoV-2 & Influenza A/B may not be the definite cause of disease.
Negative results do not preclude SARS-CoV-2, influenza A, and/or influenza B infection. The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
The cobas® SARS-CoV-2 & Influenza A/B assay uses real-time reverse transcriptase polymerase chain reaction (RT-PCR) technology to rapidly (in approximately 20 minutes) detect and differentiate SARS-CoV-2, influenza A, and influenza B viruses from nasopharyngeal and nasal swabs. The automation, small footprint, and easy-to-use interface of the cobas® Liat® System enable this test to be performed at the point of care (POC) or in a clinical laboratory setting.
The provided text describes the acceptance criteria and the study that demonstrates the device meets those criteria for the "cobas SARS-CoV-2 & Influenza A/B Nucleic acid test for use on the cobas Liat System". This is a diagnostic test, not an AI/ML device, so many of the requested elements (such as "number of experts used to establish ground truth" or "multi reader multi case comparative effectiveness study") are either not applicable or are described differently than they would be for an AI/ML product. However, I will extract and present the information per the prompt's structure, noting where information is N/A or conceptually different due to the nature of the device.
Device Name: cobas® SARS-CoV-2 & Influenza A/B Nucleic acid test for use on the cobas® Liat® System
Device Type: Automated rapid multiplex real-time, reverse transcriptase polymerase chain reaction (RT-PCR) test for qualitative detection and differentiation of SARS-CoV-2, influenza A, and influenza B virus nucleic acid.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" as a separate, pre-defined table. Instead, the performance metrics, such as Limit of Detection (LoD), inclusivity, cross-reactivity, and clinical performance (PPA, NPA), serve as the de facto acceptance criteria. The reported device performance is presented against these metrics.
Implicit Acceptance Criteria and Reported Performance for Key Metrics:
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Analytical Sensitivity (LoD) | - SARS-CoV-2 (WHO Standard): Lowest detectable concentration where ≥95% of replicates give "SARS-CoV-2 Detected". | SARS-CoV-2 (WHO Standard): LoD determined at 62.5 IU/mL (100% hit rate at 62.5 IU/mL). SARS-CoV-2 (USA-WA1/2020 strain): LoD determined at 0.012 TCID50/mL (12 copies/mL) with 100% hit rate. Influenza A: LoD 2×10⁻² - 2×10⁻³ TCID50/mL depending on strain. Influenza B: LoD 2×10⁻³ - 4×10⁻³ TCID50/mL depending on strain. |
| Reactivity/Inclusivity | - Ability to detect various strains/variants of SARS-CoV-2, Influenza A, and Influenza B at specified concentrations. | SARS-CoV-2: Detected 16 isolates/variants at various concentrations (e.g., 5.0E+00 to 3.60E+01 copies/mL). In silico analysis predicted >99.9% detection of known sequences. Influenza A/B: Detected 28 Influenza A and 15 Influenza B strains at tested concentrations (e.g., 2.0x10⁻² to 4.0x10² CEID50/mL or TCID50/mL). In silico analysis predicted detection of all recorded circulating strains as of Jan 2023. |
| Cross-Reactivity (Exclusivity) | - No false positive results from a panel of potentially cross-reacting microorganisms. | No false positive results observed for SARS-CoV-2, Influenza A, or Influenza B when tested against 36 common microorganisms (viruses, bacteria, fungi) and human genomic DNA (at high concentrations: e.g., 1.00E+05 units/mL for viruses, 1.00E+06 units/mL for bacteria, 1.00E+04 copies/mL for human DNA). |
| Microbial Interference | - No false negative results in the presence of potentially interfering microorganisms at 3x LoD concentrations of target viruses. | No interference observed with the detection of SARS-CoV-2, influenza A, or influenza B, except for SARS-CoV-1 (SARS Coronavirus). SARS-CoV-1 at 1.00E+05 pfu/mL interfered with SARS-CoV-2 detection (3x LoD SARS-CoV-2 not detected), but not influenza A/B detection. At 1.00E+04 pfu/mL, SARS-CoV-1 did not interfere with SARS-CoV-2 detection. The likelihood of co-infection with SARS-CoV-1 is considered remote as the last confirmed case was in 2004. |
| Endogenous/Exogenous Interference | - No interference from common substances found in respiratory specimens (e.g., mucin, blood, nasal sprays, antibiotics) with target detection at ~3x LoD. | No interference observed from a panel of 10 potential interferents (e.g., Mucin, Blood, Nasal sprays, Corticosteroids, Zicam, Cepacol, Bactroban, Relenza, Tamiflu, Tobramycin) at specified physiologically relevant concentrations. |
| Competitive Inhibition | - Ability to detect target viruses at low concentrations (~3x LoD) even in the presence of other panel targets at high concentrations. | 3x LoD of SARS-CoV-2 was detected in presence of high Influenza A and B levels. 3x LoD of Influenza A was detected in presence of high Influenza B and SARS-CoV-2 levels. 3x LoD of Influenza B was detected in presence of high Influenza A and SARS-CoV-2 levels. Note: High SARS-CoV-2 levels (Ct < 16) were noted to inhibit Influenza A/B detection. The assay is concluded to detect coinfection at determined concentrations. |
| Matrix Equivalency | - Correct detection of viral targets in different acceptable collection and transport media (UTM, M4RT, Saline). | The assay correctly detected targets (SARS-CoV-2, Infl. A, Infl. B) in all tested matrices (UTM, M4RT, Saline) at 2x LoD (≥95% hit rate) and 5x LoD (100% hit rate) for positive samples, and 0% false positives for negative samples. |
| Clinical Performance (NPS - Prospective) | - High Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) compared to comparator methods. | SARS-CoV-2: PPA 95.3% (101/106; 95% CI: 89.4% - 98.0%), NPA 99.4% (507/510; 95% CI: 98.3% - 99.8%). Influenza A: PPA 94.7% (18/19; 95% CI: 75.4% - 99.1%), NPA 99.7% (595/597; 95% CI: 98.8% – 99.9%). Influenza B: PPA not calculable (no fresh positive specimens), NPA 100.0% (616/616; 95% CI: 99.4% - 100.0%). |
| Clinical Performance (NPS - Retrospective) | - High PPA and NPA for influenza A and B compared to comparator methods. | Influenza A: PPA 97.7% (43/44; 95% CI: 88.2% - 99.6%), NPA 99.2% (131/132; 95% CI: 95.8% – 99.9%). Influenza B: PPA 100.0% (22/22; 95% CI: 85.1% - 100.0%), NPA 100.0% (151/151; 95% CI: 97.5% - 100.0%). |
| Clinical Performance (NS - Prospective) | - High PPA and NPA compared to comparator methods. | SARS-CoV-2: PPA 96.3% (105/109; 95% CI: 90.9% - 98.6%), NPA 99.2% (503/507; 95% CI: 98.0% - 99.7%). Influenza A: PPA 100.0% (20/20; 95% CI: 83.9% - 100.0%), NPA 99.8% (595/596; 95% CI: 99.1% - 100.0%). Influenza B: PPA not calculable (no fresh positive specimens), NPA 100.0% (616/616; 95% CI: 99.4% - 100.0%). |
| Clinical Performance (NS - Retrospective) | - High PPA and NPA for influenza A and B compared to comparator methods. | Influenza A: PPA 97.2% (35/36; 95% CI: 85.8% - 99.5%), NPA 100.0% (150/150; 95% CI: 97.5% - 100.0%). Influenza B: PPA 100.0% (32/32; 95% CI: 89.3% - 100.0%), NPA 100.0% (154/154; 95% CI: 97.6% - 100.0%). |
| Reproducibility | - Consistent results across operators, sites, days, analyzers, and reagent lots for negative, low positive, and moderate positive samples. | SARS-CoV-2: Overall Hit Rate: Negative 100.0%, Low Positive 98.9%, Moderate Positive 99.6%. Low %CV for Ct values (3.0-3.5%). Influenza A: Overall Hit Rate: Negative 100.0%, Low Positive 98.5%, Moderate Positive 100.0%. Low %CV for Ct values (2.5-2.9%). Influenza B: Overall Hit Rate: Negative 100.0%, Low Positive 100.0%, Moderate Positive 99.6%. Low %CV for Ct values (3.1-3.6%). |
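The PPA/NPA confidence intervals reported above are consistent with the Wilson score interval for a binomial proportion (an assumption on my part; the submission does not name the interval method). A minimal Python sketch that reproduces the SARS-CoV-2 prospective NPS figures from the table:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def agreement(concordant, comparator_total):
    """Percent agreement (PPA or NPA) with its Wilson 95% CI, all in percent."""
    lo, hi = wilson_ci(concordant, comparator_total)
    return (100 * concordant / comparator_total, 100 * lo, 100 * hi)

# SARS-CoV-2, prospective NPS arm: PPA 101/106, NPA 507/510
ppa, ppa_lo, ppa_hi = agreement(101, 106)
npa, npa_lo, npa_hi = agreement(507, 510)
```

With these inputs, `agreement(101, 106)` yields a PPA of 95.3% with a 95% CI of 89.4% to 98.0%, and `agreement(507, 510)` yields an NPA of 99.4% (98.3% to 99.8%), matching the reported values.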
2. Sample Size Used for the Test Set and Data Provenance
The "test set" here refers to the clinical performance evaluation.
- Prospective Clinical Specimens:
- Total Evaluable Subjects: 640 symptomatic individuals.
- Specimen Type: Nasopharyngeal Swab (NPS) and Nasal Swab (NS).
- NPS Evaluable Samples: 616 for SARS-CoV-2, 792 for Influenza A, and 789 for Influenza B.
- NS Evaluable Samples: 616 for SARS-CoV-2, 802 for Influenza A, and 802 for Influenza B.
- Data Provenance: Freshly collected at 10 point-of-care healthcare facilities (e.g., emergency rooms, outpatient clinics, and physician offices) in the United States during February-June 2022. This is prospective data.
- NS Collection: Comprised of healthcare provider-collected (n=325, 50.8%) or self-collected swabs (n=315, 49.2%) with healthcare provider instructions.
- Retrospective Clinical Specimens (to supplement prospective data):
- Total Evaluable Retrospective Samples: NPS (n=178) and NS (n=190).
- NPS Evaluable Samples (after exclusions): 176 for Influenza A, 173 for Influenza B.
- NS Evaluable Samples (after exclusions): 186 for Influenza A and Influenza B.
- Data Provenance: Frozen positive and negative NPS and NS specimens. Prospectively obtained during the 2013-2014, 2014-2015, and 2019-2020 flu seasons and during the COVID-19 pandemic (March–June 2021). Distributed to 4 of the 10 sites for testing. This is retrospective data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This is a molecular diagnostic test (RT-PCR), not an AI/ML product that relies on expert review of images or signals for ground truth. Therefore, the concept of "experts" establishing ground truth in the same way as an AI/ML study (e.g., radiologists annotating images) is not directly applicable.
Instead, the ground truth for the clinical performance evaluation was established by comparator methods, which are highly sensitive, FDA-authorized laboratory-based RT-PCR assays. These laboratory tests are themselves considered the "gold standard" for nucleic acid detection of these viruses. The clinical samples were tested by these established methods, and those results served as the reference for comparison with the investigational device's performance.
- SARS-CoV-2 Ground Truth: Results from three highly sensitive FDA-authorized laboratory-based RT-PCR EUA assays (composite comparator method).
- Influenza A/B Ground Truth: Results from an acceptable molecular comparator for influenza (specific assay not named but implied to be a laboratory-based molecular test).
The qualifications of the personnel performing these comparator tests would be standard laboratory technicians with appropriate training and certifications for molecular diagnostics.
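The document states that a composite of three FDA-authorized RT-PCR assays served as the SARS-CoV-2 comparator, but it does not spell out how the individual results were combined. A minimal sketch assuming a simple majority rule across the three assays (this 2-of-3 rule is an illustrative assumption, not the confirmed algorithm):

```python
def composite_result(assay_calls):
    """Combine qualitative calls from several reference assays into one
    composite comparator result.

    assay_calls: list of per-assay results, e.g.
    ["Detected", "Not Detected", "Detected"].
    The majority rule used here is a hypothetical convention; the
    submission does not describe the exact composite algorithm.
    """
    positives = sum(1 for call in assay_calls if call == "Detected")
    return "Detected" if positives > len(assay_calls) / 2 else "Not Detected"
```

For example, `composite_result(["Detected", "Detected", "Not Detected"])` would return `"Detected"` under this assumed rule.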
4. Adjudication Method for the Test Set
The document does not explicitly describe an "adjudication method" in the context of resolving discordant results between the investigational device and the comparator method. However, it notes:
- "All discordant SARS-CoV-2 results showed late Ct values, which are indicative of NPS specimens from individuals with viral loads near or below the limit of detection of both cobas® SARS-CoV-2 & Influenza A/B and the composite comparator methods." This implies that while there was no formal "adjudication" (like a third expert review in imaging), the characteristics of the discordant samples (low viral load indicated by late Ct values) provided an explanation for the discrepancies. For molecular tests, samples near LoD are inherently challenging.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done
N/A. This is a standalone diagnostic test performed by an instrument, not an AI/ML algorithm intended to assist human readers (like radiologists). Therefore, an MRMC comparative effectiveness study involving human readers is not relevant.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
Yes, the primary performance evaluation of the cobas® SARS-CoV-2 & Influenza A/B test is a standalone (algorithm only) performance evaluation. The device, through its RT-PCR technology and automated analysis, generates a qualitative "Detected" or "Not Detected" result for each target (SARS-CoV-2, Influenza A, Influenza B). This performance is directly compared to established lab-based molecular methods (the ground truth), demonstrating its accuracy without human interpretation or intervention in the diagnostic call itself, beyond sample preparation and loading.
7. The Type of Ground Truth Used
The ground truth used was molecular diagnostic test results from highly sensitive, FDA-authorized, laboratory-based RT-PCR assays, which are considered the gold standard for detecting specific viral nucleic acids. This is closest to outcome data in the sense that a positive result indicates the presence of the pathogen, which informs the clinical diagnosis of infection. It is not expert consensus (no subjective interpretation is involved) and not pathology (histology-based), but rather an objective molecular assay result.
8. The Sample Size for the Training Set
This document describes the validation of a molecular diagnostic assay, not an AI/ML model, so "training set" is not a direct concept here. The development of such an assay involves stages of internal optimization and calibration (loosely analogous to training) using analytical samples and potentially some clinical samples, but these are generally not partitioned into formal "training" and "test" sets as they would be in AI/ML validation.
The document does not specify a distinct "training set" sample size. The focus is on the performance evaluation using specific clinical and analytical test sets as detailed above.
9. How the Ground Truth for the Training Set Was Established
As noted above, a formal "training set" with designated ground truth establishment as for an AI/ML algorithm is not applicable to traditional molecular diagnostic assay development. The principles for establishing the performance characteristics and optimizing the assay (the closest analogue to training) would rely on:
- Analytical Standards: Known concentrations of purified viral RNA/DNA or inactivated virus.
- Reference Materials: WHO International Standards, characterized viral isolates.
- Comparative Testing: Using established, gold-standard molecular reference assays to characterize samples during development and optimization.
- Internal Verification: Through extensive analytical and pre-clinical studies to determine parameters like LoD, inclusivity, cross-reactivity, precision, etc., which guide the assay's design and final performance characteristics.
The "ground truth" during this development phase would be established by the known characteristics of the analytical samples and the results from established reference methods used for comparison.