510(k) Data Aggregation
(272 days)
Dual Track
The BIOFIRE SPOTFIRE Respiratory/Sore Throat (R/ST) Panel Mini is an automated multiplexed polymerase chain reaction (PCR) test intended for use with the BIOFIRE SPOTFIRE System for the simultaneous, qualitative detection and identification of multiple respiratory viral and bacterial nucleic acids in nasopharyngeal swab (NPS) or anterior nasal swab (ANS) specimens obtained from individuals with signs and symptoms of respiratory tract infection, including COVID-19 (Respiratory menu), or in throat swab (TS) specimens from individuals with signs and symptoms of pharyngitis (Sore Throat menu).
The following analytes are identified and differentiated using the BIOFIRE SPOTFIRE R/ST Panel Mini:
Respiratory Menu
Viruses
- Coronavirus SARS-CoV-2
- Human rhinovirus
- Influenza A virus
- Influenza B virus
- Respiratory syncytial virus
Sore Throat Menu
Viruses
- Human rhinovirus
- Influenza A virus
- Influenza B virus
- Respiratory syncytial virus
Bacteria
- Streptococcus pyogenes (group A Strep)
Nucleic acids from the viral and bacterial organisms identified by this test are generally detectable in NPS/ANS/TS specimens during the acute phase of infection. The detection and identification of specific viral and bacterial nucleic acids from individuals exhibiting signs and symptoms of respiratory infection and/or pharyngitis is indicative of the presence of the identified microorganism and aids in diagnosis if used in conjunction with other clinical and epidemiological information, and laboratory findings. The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
Negative results in the setting of a respiratory illness and/or pharyngitis may be due to infection with pathogens that are not detected by this test, or a respiratory tract infection that may not be detected by an NPS, ANS, or TS specimen. Positive results do not rule out co-infection with other organisms. The agent(s) detected by the BIOFIRE SPOTFIRE R/ST Panel Mini may not be the definite cause of disease.
Additional laboratory testing (e.g., bacterial and viral culture, immunofluorescence, and radiography) may be necessary when evaluating a patient with possible respiratory tract infection and/or pharyngitis.
The BIOFIRE SPOTFIRE R/ST Panel Mini (SPOTFIRE R/ST Panel Mini) simultaneously identifies 5 different respiratory viral pathogens in nasopharyngeal swabs (NPS) or anterior nasal swabs (ANS), or 5 different viral and bacterial pharyngitis pathogens in throat swabs (TS) from individuals with signs and symptoms of respiratory tract infections or pharyngitis, respectively (see Table 1). The SPOTFIRE R/ST Panel Mini is compatible with the BIOFIRE® SPOTFIRE® System, a polymerase chain reaction (PCR)-based in vitro diagnostic system for infectious disease testing. The BIOFIRE SPOTFIRE System Software executes the SPOTFIRE R/ST Panel Mini test and interprets and reports the test results. The SPOTFIRE R/ST Panel Mini was designed to be used in CLIA-waived environments.
A test is initiated by loading Hydration Solution into the hydration solution injection port of the SPOTFIRE R/ST Panel Mini pouch, loading the NPS, ANS, or TS specimen, mixed with the provided Sample Buffer, into the sample injection port, and placing the pouch in the SPOTFIRE System. The pouch contains all the reagents required for specimen testing and analysis in a freeze-dried format; the addition of Hydration Solution and Sample/Buffer Mix rehydrates the reagents. After the pouch is prepared, the SPOTFIRE System Software guides the user through placing the pouch into the instrument, scanning the pouch barcode, entering the sample identification, and initiating the run.
Nucleic acid extraction occurs within the SPOTFIRE R/ST Panel Mini pouch using mechanical and chemical lysis followed by purification using standard magnetic-bead technology. After extracting and purifying nucleic acids from the unprocessed sample, the SPOTFIRE System performs a nested multiplex PCR executed in two stages. In the first stage, the system performs a single, large-volume, highly multiplexed reverse transcription PCR (RT-PCR) reaction. The first-stage PCR products are then diluted and combined with a fresh, primer-free master mix and a fluorescent double-stranded DNA binding dye (LC Green® Plus, BioFire Diagnostics), and the solution is distributed to each well of the array. Array wells contain sets of primers designed to amplify sequences internal to the PCR products generated during the first stage. The second-stage, or nested, PCR is performed in singleplex fashion in each well of the array. At the conclusion of the second-stage PCR, the array is interrogated by melt curve analysis for the detection of signature amplicons denoting the presence of specific targets. A digital camera positioned in front of the array captures fluorescent images of the PCR reactions, and software interprets the data.
The SPOTFIRE System Software automatically interprets the results of each DNA melt curve analysis and combines the data with the results of the internal pouch controls to provide a test result for each organism on the SPOTFIRE R/ST Panel Mini.
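For intuition, this style of melt-curve calling can be sketched in a few lines: a signature amplicon appears as a peak in the negative derivative of fluorescence with respect to temperature (−dF/dT), and a well is called positive when that peak falls inside the target's expected melting-temperature window. The sketch below is a minimal illustration, not BioFire's proprietary algorithm; the temperature window, peak-height threshold, and synthetic melt curve are all assumptions.

```python
# Minimal sketch of melt-curve calling (illustrative only, not BioFire's
# proprietary algorithm). A well is called positive when the peak of the
# negative derivative -dF/dT falls inside the expected Tm window.
import numpy as np

def call_melt_curve(temps, fluorescence, tm_window, min_peak_height=0.1):
    """Return True if a melt peak of sufficient height falls in tm_window."""
    deriv = -np.gradient(fluorescence, temps)  # fluorescence drops at the Tm
    peak = int(np.argmax(deriv))
    lo, hi = tm_window
    return deriv[peak] >= min_peak_height and lo <= temps[peak] <= hi

# Synthetic example: a sigmoidal melt transition centered at 82 deg C.
temps = np.linspace(60.0, 95.0, 351)
fluor = 1.0 / (1.0 + np.exp((temps - 82.0) / 0.8))
print(call_melt_curve(temps, fluor, tm_window=(80.0, 84.0)))  # True
```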
The FDA 510(k) clearance letter details the acceptance criteria and the study demonstrating that the BIOFIRE SPOTFIRE Respiratory/Sore Throat Panel Mini meets them, specifically for the addition of Anterior Nasal Swabs (ANS) as a sample type for the Respiratory Menu.
Here's the breakdown:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance metrics, Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA), in the clinical study. The device is deemed to meet them when the lower bound of the 95% Confidence Interval (95% CI) for each metric exceeds an acceptable threshold; the document does not state those thresholds explicitly, but the observed high performance and the clearance itself imply they were met.
| SPOTFIRE R/ST Panel Mini R Menu Analyte | Performance Metric | Reported Performance (Prospective Study) |
|---|---|---|
| Coronavirus SARS-CoV-2 | PPA | 96.2% (95% CI: 87.0-98.9%) |
| Coronavirus SARS-CoV-2 | NPA | 99.6% (95% CI: 98.8-99.9%) |
| Human rhinovirus | PPA | 95.7% (95% CI: 92.2-97.6%) |
| Human rhinovirus | NPA | 95.0% (95% CI: 92.9-96.5%) |
| Influenza A virus | PPA | 94.3% (95% CI: 84.6-98.1%) |
| Influenza A virus | NPA | 100% (95% CI: 99.5-100%) |
| Influenza B virus | PPA | 100% (95% CI: 77.2-100%) |
| Influenza B virus | NPA | 100% (95% CI: 99.5-100%) |
| Respiratory syncytial virus | PPA | 95.0% (95% CI: 83.5-98.6%) |
| Respiratory syncytial virus | NPA | 99.9% (95% CI: 99.3-100%) |
Archived Specimen Performance for Influenza B virus:
| Analyte | Performance Metric | Reported Performance (Archived Study) |
|---|---|---|
| Influenza B virus | PPA | 100% (95% CI: 90.1-100%) |
| Influenza B virus | NPA | 100% (95% CI: 98.2-100%) |
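The summary does not state which interval method produced these 95% CIs, but the Wilson score interval, a common choice for diagnostic agreement statistics, reproduces the SARS-CoV-2 PPA row of the prospective table above if the underlying counts were 50 of 52 (an assumed count, used here purely for illustration):

```python
# Hedged sketch: Wilson score 95% CI for an agreement proportion. The
# counts 50/52 are an assumption that happens to reproduce the reported
# SARS-CoV-2 PPA of 96.2% (95% CI: 87.0-98.9%).
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

lo, hi = wilson_ci(50, 52)
print(f"PPA = {50/52:.1%}, 95% CI: {lo:.1%}-{hi:.1%}")  # 96.2%, 87.0%-98.9%
```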
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Clinical Performance (Prospective Study): 797 specimens (out of 820 initially enrolled, 23 excluded).
- Archived Specimen Testing: 241 specimens for Influenza B virus (35 positive, 206 negative).
- Data Provenance:
- Country of Origin: US (prospective multi-center study at five geographically distinct urgent care or emergency department study sites).
- Retrospective/Prospective: The study was primarily prospective, conducted from March 2024 to February 2025. This was supplemented with archived specimens for Influenza B due to low prevalence in the prospective study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number or qualifications of experts used to establish the ground truth. It states that the performance was evaluated by comparing the test results with those from a "commercially available FDA-cleared multiplexed respiratory pathogen panel." This suggests that the ground truth was established by the results of this comparator method, which themselves would have been validated. No human expert interpretation of the comparator method is described.
4. Adjudication Method for the Test Set
The document mentions "Investigations of discrepant results are summarized in the footnotes." These footnotes indicate that for discrepant cases (e.g., false positives, false negatives), "additional molecular methods" were used to re-test the specimens. This implies a form of post-hoc adjudication using a more definitive or orthogonal molecular method to resolve discrepancies between the SPOTFIRE R/ST Panel Mini and the initial comparator.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No. This study is for a diagnostic PCR test, not an AI-assisted imaging or interpretation device. Therefore, an MRMC study and analysis of human reader improvement with AI assistance are not applicable.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes. The study evaluates the performance of the BIOFIRE SPOTFIRE R/ST Panel Mini as a standalone diagnostic device. The results are automatically interpreted and reported by the system software, with no human interpretation step in the primary analysis flow. The study compares the device's output directly to the comparator method.
7. The Type of Ground Truth Used
The primary ground truth was established by a commercially available FDA-cleared multiplexed respiratory pathogen panel. For discrepant results, "additional molecular methods" were used for confirmatory testing, indicating a molecular gold standard approach.
8. The Sample Size for the Training Set
The document does not provide details about a training set size. This notice is a 510(k) clearance for a PCR-based in vitro diagnostic test, not a machine learning or AI algorithm in the traditional sense that requires distinct training and test sets in the same manner. The "test set" described is the clinical validation cohort for demonstrating performance. PCR assays are generally developed and optimized through laboratory analytical studies, not typically "trained" on large datasets in the way an AI model would be.
9. How the Ground Truth for the Training Set Was Established
As noted above, the concept of a "training set ground truth" is not applicable in this context, as the device is a PCR assay. The development and optimization of such assays involve different molecular and analytical validation processes to ensure specificity and sensitivity.
(211 days)
Dual Track
The Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigen directly in anterior nasal swab (ANS) samples from individuals with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory viral infection due to SARS-CoV-2 and influenza can be similar.
All negative results are presumptive and should be confirmed with a molecular assay, if necessary, for patient management. Negative results do not rule out infection with influenza or SARS-CoV-2 and should not be used as the sole basis for treatment or patient management decisions.
Positive results do not rule out bacterial infection or co-infection with other viruses.
The Nano-Check™ Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay intended for in vitro rapid, simultaneous qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigen directly from anterior nasal swab specimens.
The assay kit consists of 25 test cassette devices, 25 reagent tubes, 25 ampules containing extraction buffer, 25 anterior nasal specimen collection swabs, one positive control swab, one negative control swab, one Instructions for Use, and one Quick Reference Instruction. An external positive control swab contains noninfectious influenza A, influenza B, and SARS-CoV-2 antigens dried onto the swab and an external negative control swab contains noninfectious blank universal viral transport media dried on the swab. The kit should be stored at 2°C - 30°C.
Device Acceptance Criteria and Performance Study: Nano-Check Influenza+COVID-19 Dual Test
The Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay for the qualitative detection and differentiation of influenza A, influenza B, and SARS-CoV-2 antigens in anterior nasal swab samples. The device's acceptance criteria and performance were established through extensive analytical and clinical studies.
1. Table of Acceptance Criteria and Reported Device Performance
The following table summarizes the key acceptance criteria and the performance achieved by the Nano-Check Influenza+COVID-19 Dual Test based on the provided 510(k) summary. Given that this is a qualitative assay, the primary performance metrics are Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) in clinical studies, and various measures of agreement/detection rates in analytical studies.
| Performance Metric Category | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Clinical: SARS-CoV-2 | PPA ≥ 80% (typical for antigen tests); NPA ≥ 95% | PPA: 87.6% (95% CI: 83.0%-91.0%); NPA: 99.8% (95% CI: 99.5%-99.9%) |
| Clinical: Influenza A | PPA ≥ 80%; NPA ≥ 95% | PPA: 86.9% (95% CI: 83.6%-89.6%); NPA: 99.6% (95% CI: 99.1%-99.8%) |
| Clinical: Influenza B | PPA ≥ 80%; NPA ≥ 95% | PPA: 86.8% (95% CI: 79.4%-91.9%); NPA: 99.7% (95% CI: 99.4%-99.9%) |
| Analytical: Precision (within-lab) | 100% agreement for TN, HN, LP, and MP levels (true negative, high negative, low positive, moderate positive) across runs/operators | 100% agreement at all levels (SARS-CoV-2, Flu A, Flu B) per operator per run |
| Analytical: Precision (between-lot) | Consistent results across lots, especially for moderate and high positives | C90 levels: agreement ranged from 83.3% to 100%; 3× LoD levels: 100% agreement |
| Analytical: Reproducibility (multi-site, multi-operator) | High agreement across sites and operators for all sample types (TN, HN, LP, MP) | TN 100%; HN: COVID 100%, Flu A 100%, Flu B 99.4%; LP: COVID 100%, Flu A 99.4%, Flu B 100%; MP: COVID 100%, Flu A 100%, Flu B 100% |
| Analytical: Cross-reactivity / microbial interference | No cross-reactivity or interference at tested concentrations | None observed with 50 pathogens (bacteria, fungi, viruses) or negative matrix |
| Analytical: Endogenous/exogenous interference | No interference from common substances at tested concentrations | No interference from nasal sprays, pain relievers, hand sanitizers, or other biological substances, except hand sanitizer lotion, which caused a false-negative influenza B result at 15% w/v |
| Analytical: Limit of detection (LoD) | Specific LoD values per virus strain | SARS-CoV-2: 1.95×10² to 1.27×10⁴ TCID₅₀/mL (strain dependent); Influenza A: 2.8×10³ TCID₅₀/mL to 1.4×10⁵ CEID₅₀/mL (strain dependent); Influenza B: 1.04×10² TCID₅₀/mL to 2.25×10⁵ CEID₅₀/mL (strain dependent); WHO Standard SARS-CoV-2: 667 IU/mL |
| Analytical: Reactivity (inclusivity) | 100% detection of various strains at specified concentrations | 100% detection (3/3 replicates) for 14 SARS-CoV-2, 31 Flu A, and 16 Flu B strains at the specified concentrations |
| Analytical: High-dose hook effect | No false negatives at high concentrations | No high-dose hook effect observed up to 3.89×10⁴ TCID₅₀/mL (SARS-CoV-2), 2.8×10⁸ CEID₅₀/mL (Flu A), and 1.8×10⁷ TCID₅₀/mL (Flu B) |
| Analytical: Competitive interference | No interference between targets in co-infection scenarios | None observed between SARS-CoV-2, influenza A, and influenza B at high/low titer combinations |
| Analytical: Specimen stability | Stable results under specified storage conditions/times | Nasal swab samples stable for up to 48 hours at −20°C, 2-8°C, 23.5°C, and 30°C |
| External controls | 100% agreement with expected results for positive/negative controls | 100% agreement for all three lots of positive and negative external controls |
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Study Test Set Sample Size: A total of 1,969 subjects were enrolled in the clinical study.
- Data Provenance: The data was collected from a multi-center, prospective clinical study in the U.S. between November 2022 and February 2025.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The device being reviewed is an in vitro diagnostic (IVD) test for antigen detection. For such devices, the "ground truth" in clinical performance studies is typically established by a highly sensitive and specific molecular assay (RT-PCR), rather than by human experts interpreting images or signals from the test device itself.
- In this case, the ground truth for the clinical test set was established using an FDA-cleared RT-PCR method as the comparator.
- The document does not specify the number of experts directly involved in establishing the RT-PCR ground truth or their qualifications beyond stating it was performed at a "reference laboratory as per the cleared instruction for use." This implies that qualified laboratory personnel, adhering to standardized RT-PCR protocols, established the ground truth.
4. Adjudication Method for the Test Set
Adjudication methods (e.g., 2+1, 3+1) are typically used in studies involving human interpretation (e.g., radiology reads) where discrepancies between readers need to be resolved. Since the Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunoassay interpreted visually by an operator, and the ground truth was established by an RT-PCR molecular assay, no explicit adjudication method for the test set is described or implied in the provided text. The comparison was directly between the device's visual results and the RT-PCR results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. This type of study (MRMC) is generally conducted for imaging AI devices to evaluate the impact of AI assistance on human reader performance. The Nano-Check Influenza+COVID-19 Dual Test is an in vitro diagnostic device for antigen detection, not an imaging AI device where human readers interpret complex images. Therefore, the concept of "human readers improve with AI vs without AI assistance" is not applicable to this device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the performance presented for the Nano-Check Influenza+COVID-19 Dual Test in the clinical study is essentially standalone performance in the context of a rapid diagnostic test. While the test is visually interpreted by an operator, the performance metrics (PPA and NPA) are derived from the direct output of the device compared to the RT-PCR reference. There is no complex "algorithm" separate from the physical test strips' chemical reaction and visual readout. The operator simply reads the result displayed by the device. The "human-in-the-loop" here is the visual interpretation of a clear positive/negative line, not a complex decision-making process aided by AI.
7. The Type of Ground Truth Used
The type of ground truth used for the clinical performance study was an FDA-cleared molecular assay (RT-PCR method). This is a highly sensitive and specific laboratory-based test considered the gold standard for detecting viral nucleic acids, making it appropriate for establishing true positive and true negative cases of infection.
8. The Sample Size for the Training Set
The provided document describes the performance data for the test set (clinical study and analytical validation). It does not specify a separate training set sample size. This is expected because the Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunoassay, not a machine learning or AI model that requires a distinct training phase with a labeled dataset. The development and optimization of such assays rely on biochemical and immunological principles, followed by rigorous analytical and clinical validation.
9. How the Ground Truth for the Training Set Was Established
As noted above, there isn't a "training set" in the machine learning sense for this type of IVD device. The development of the assay (e.g., selecting antibodies, optimizing reagents) would involve internal R&D studies, using characterized viral samples and clinical specimens, but these are part of the development process rather than a formal "training set" with ground truth establishment for an AI algorithm. The performance data presented is from the validation against established reference methods.
(181 days)
Dual Track
The BD Veritor™ System for SARS-CoV-2 is a chromatographic digital immunoassay for the rapid, qualitative detection of SARS-CoV-2 nucleocapsid protein antigens directly in anterior nasal swab specimens from individuals with signs and symptoms of upper respiratory infection (i.e., symptomatic). The test is intended for use as an aid in the diagnosis of SARS-CoV-2 infections (COVID-19) in symptomatic individuals when either: tested at least twice over three days with at least 48 hours between tests; or when tested once, and negative by the BD Veritor™ System for SARS-CoV-2 and followed up with a molecular test.
A negative test result is presumptive and does not preclude SARS-CoV-2 infection; it is recommended these results be confirmed by a molecular SARS-CoV-2 assay.
Positive results do not rule out co-infection with other bacteria or viruses and should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
Performance characteristics for SARS-CoV-2 were established between April 2024 and August 2024 when SARS-CoV-2 Omicron was the predominant SARS-CoV-2 variant in circulation. Performance characteristics may vary with newly emerging SARS-CoV-2 virus variants.
The BD Veritor™ System for SARS-CoV-2 is a rapid (approximately 15 minutes) chromatographic digital immunoassay for the direct detection of the presence or absence of SARS-CoV-2 antigens in anterior nasal swab specimens taken from patients with signs and symptoms of upper respiratory infection (i.e., symptomatic) who are suspected of COVID-19 by their healthcare provider. The test is intended for use with an opto-electronic interpretation instrument, the BD Veritor™ Plus Analyzer Instrument and is not interpreted visually.
- When specimens are processed and added to the test device, SARS‑CoV‑2 antigens present in the specimen bind to biotinylated antibodies and antibodies conjugated to detector particles in the test strip.
- The biotinylated antibody‑antigen‑conjugate complexes migrate across the test strip to the reaction area and are captured by a line of streptavidin bound on the membrane.
- A positive result is determined by the BD Veritor™ Plus Analyzer when antigen‑conjugate is deposited at the Test "T" position and a control conjugate is deposited at the Control "C" position on the assay device.
- The instrument analyzes and corrects for non‑specific binding and detects positives not recognized by the unaided eye to provide an objective result.
Procedures to evaluate test devices depend on the BD Veritor™ Plus Analyzer workflow configuration chosen. In Analyze Now mode, the instrument evaluates assay devices after manual timing of their development. In Walk Away mode, devices are inserted immediately after application of the specimen, and timing of assay development and analysis is automated. Additionally, connection of a BD Veritor™ Plus Analyzer to a printer or IT system is possible if desired. Additional result documentation capabilities are possible with the integration of a BD Veritor™ barcode scanning enabled module.
The Analyzer uses a proprietary algorithm that subtracts the nonspecific signal at the negative control line from the signal present at the test line. If the resultant test line signal is above a preselected cutoff, the specimen is scored as positive. If the resultant test line signal is below or equal to the cutoff, the specimen is scored as negative. Use of the active negative control feature allows the BD Veritor™ Plus Analyzer to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal. The Analyzer measures the amount of light reflected from various zones along the assay strip. The measurement of the assay background zone is an important factor during the test interpretation as the reflectance value is compared to that of the control and test zones.
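As a rough illustration of the interpretation logic just described, the sketch below subtracts the nonspecific negative-control signal from the test-line signal and compares the difference to a preselected cutoff. The numeric cutoff, the control-line validity check, and the reflectance values are hypothetical; the Analyzer's actual algorithm is proprietary.

```python
# Illustrative sketch of background-corrected cutoff interpretation.
# All thresholds and signal values below are hypothetical.

def interpret(test_line, neg_control_line, control_line,
              cutoff=0.05, control_min=0.20):
    if control_line < control_min:             # procedural control must appear
        return "INVALID"
    corrected = test_line - neg_control_line   # remove nonspecific binding
    return "POSITIVE" if corrected > cutoff else "NEGATIVE"

print(interpret(test_line=0.12, neg_control_line=0.03, control_line=0.60))
# POSITIVE: corrected signal 0.09 exceeds the 0.05 cutoff
```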
The provided FDA 510(k) clearance letter and summary describe the BD Veritor System for SARS-CoV-2. Here's an analysis of the acceptance criteria and the study that proves the device meets those criteria:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as distinct numerical targets in the document. However, based on the clinical study results and FDA clearance, the implicit acceptance criteria for clinical performance are related to the confidence intervals for Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA). The reported device performance is presented below:
| Performance Metric | Implicit Acceptance Criteria (based on reported 95% CI) | Reported Device Performance |
|---|---|---|
| PPA | Lower bound of 95% CI > 77.2% | 84.0% (95% CI: 77.2%-89.1%) |
| NPA | Lower bound of 95% CI > 99.0% | 99.7% (95% CI: 99.0%-99.9%) |
Note: The document does not explicitly state numerical acceptance thresholds for PPA and NPA (e.g., "PPA must be > 80%"). Therefore, the "Implicit Acceptance Criteria" are inferred from the demonstrated performance and the fact that the device received clearance. The FDA typically evaluates these metrics within acceptable ranges for diagnostic tests.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 1,032 direct anterior nasal swabs.
- Data Provenance: The samples were prospectively collected from individual symptomatic patients in 15 geographically diverse areas across the United States between April and August 2024.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts used to establish the ground truth. The ground truth was established by an FDA-cleared SARS-CoV-2 RT-PCR test. For the false positive/negative re-testing, it broadly states "a second RT-PCR method," implying multiple tests might have been performed to confirm results without specifying expert involvement in interpreting these specific results beyond the RT-PCR outcome itself.
4. Adjudication Method for the Test Set
The primary ground truth for the clinical study was established by an FDA-cleared SARS-CoV-2 RT-PCR test without explicit mention of expert adjudication for every case. However, there was a form of adjudication for discordant results:
- False Positive Adjudication: The three BD Veritor System for SARS-CoV-2 false positive results were retested with a second RT-PCR method and were confirmed negative. This suggests a method where initial discrepancies against the reference method were independently verified.
- False Negative Adjudication: The 23 BD Veritor System for SARS-CoV-2 false negative results were retested with a second RT-PCR method in which 14 were confirmed positive and 9 were negative.
This indicates a process of re-testing or confirmation for discordant results, which serves as a form of adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance (or similar comparative effectiveness of human readers with vs. without the device) was not explicitly mentioned or described in the provided document. The BD Veritor System for SARS-CoV-2 uses an instrument (BD Veritor™ Plus Analyzer) for interpretation, replacing visual interpretation with an automated read. The comparison is between the device's performance and a reference RT-PCR, not between human readers with and without assistance from the device.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone study was done. The entire clinical performance study (Table 9 and 11) is a standalone study, as it evaluates the performance of the BD Veritor System for SARS-CoV-2 (algorithm/device only) compared to a reference RT-PCR without human interpretation of the lateral flow assay itself. The BD Veritor™ Plus Analyzer instrument is explicitly stated to read and interpret the results, and the device "is not interpreted visually."
7. Type of Ground Truth Used
The ground truth used for the clinical study was an FDA-cleared SARS-CoV-2 RT-PCR test (molecular test results).
8. Sample Size for the Training Set
The document does not specify a sample size for a training set. This submission is for a device, and the analytical and clinical studies described are for validation of the device's performance, not for developing or training an AI/ML algorithm in the context of a typical AI/ML development pipeline. The device uses a "proprietary algorithm" for signal subtraction and interpretation, but it's not presented as a machine learning model that requires a distinct training set in the typical sense.
9. How the Ground Truth for the Training Set Was Established
Since no specific training set and its ground truth establishment are discussed in the context of AI/ML model training, this information is not applicable/provided based on the document. The "proprietary algorithm" for the instrument is described in terms of processing reflectance data and applying a preselected cutoff, and its development process (including any data used for internal calibration or parameter setting) is not detailed here.
(134 days)
Dual Track
The BinaxNOW COVID-19 Ag Card is a lateral flow immunochromatographic assay for the rapid, qualitative detection of the SARS-CoV-2 nucleocapsid protein antigen directly in anterior nasal swab specimens from individuals with signs and symptoms of upper respiratory tract infection (i.e., symptomatic). The test is intended for use as an aid in the diagnosis of SARS-CoV-2 infections (COVID-19) in symptomatic individuals when either: tested at least twice over three days with at least 48 hours between tests; or when tested once, and negative by the BinaxNOW COVID-19 Ag Card and followed up with a molecular test.
A negative test is presumptive and does not preclude SARS-CoV-2 infection; it is recommended these results be confirmed by a molecular SARS-CoV-2 assay.
Positive results do not rule out co-infection with other bacteria or viruses and should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
The BinaxNOW COVID-19 Ag Card is an immunochromatographic membrane assay that uses antibodies to detect SARS-CoV-2 nucleocapsid protein from anterior nasal swab specimens. SARS-CoV-2 specific antibodies and a control antibody are immobilized onto a membrane support as two distinct lines and combined with other reagents/pads to construct a test strip. This test strip and a well to hold the swab specimen are mounted on opposite sides of a cardboard, book-shaped hinged test card.
To perform the test, an anterior nasal swab specimen is collected from the patient, 6 drops of extraction reagent from a dropper bottle are added to the top hole of the swab well. The patient sample is inserted into the test card through the bottom hole of the swab well, and firmly pushed upwards until the swab tip is visible through the top hole. The swab is rotated 3 times clockwise and the card is closed, bringing the extracted sample into contact with the test strip. Test results are interpreted visually at 15 minutes based on the presence or absence of visually detectable pink/purple colored lines. Results should not be read after 30 minutes.
The provided document is a 510(k) summary for the BinaxNOW COVID-19 Ag Card. It does not describe a study proving a device meets acceptance criteria in the manner typically associated with AI/ML-driven medical devices, which would involve measures like sensitivity, specificity, or AUC against a ground truth, often with human readers involved (MRMC studies).
Instead, this document describes the validation of an immunochromatographic assay (a rapid antigen test) for COVID-19. The "acceptance criteria" here are typically performance targets for analytical and clinical characteristics (e.g., Limit of Detection, cross-reactivity, Positive Percent Agreement, Negative Percent Agreement). The "study" refers to the analytical and clinical studies conducted to demonstrate these performance characteristics.
Therefore, the following response will interpret "acceptance criteria" as the performance benchmarks for a diagnostic assay and describe the validation studies for the BinaxNOW COVID-19 Ag Card based on the provided text.
Here's a breakdown of the information requested, interpreted in the context of a rapid antigen test (not an AI/ML device):
Acceptance Criteria and Device Performance for BinaxNOW COVID-19 Ag Card
The BinaxNOW COVID-19 Ag Card is a lateral flow immunochromatographic assay, not an AI/ML diagnostic device. Therefore, the "acceptance criteria" are based on the analytical and clinical performance characteristics typical for such an in-vitro diagnostic (IVD) device, rather than metrics like AUC, sensitivity/specificity of an AI algorithm, or human reader improvement with AI assistance.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric/Study | Performance Target (Implicit/Typical for IVDs) | Reported Device Performance |
|---|---|---|---|
| Analytical Performance | Limit of Detection (LoD) | Lowest virus concentration detected ≥ 95% of the time (e.g., 19/20 replicates positive) | USA-WA1/2020: 3.5 × 10³ TCID50/mL (70 TCID/swab); B.1.1.529 (Omicron): 1.6 × 10³ TCID50/mL (32.06 TCID/swab); WHO International Standard (NIBSC 21/368): 375 IU/mL (7.5 IU/swab), with 100% detection (20/20) at this concentration |
| Analytical Performance | Analytical Reactivity (Inclusivity) | Detection of various SARS-CoV-2 strains at specified concentrations (all 5 replicates positive at a given concentration) | Detected 19 different SARS-CoV-2 variants (Alpha, Beta, Delta, Gamma, Iota, Italy-INMI1, Kappa, Zeta, and Omicron variants including BA.2.3, BA.2.12.1, BA.2.75.5, BA.4.6, BA.5, BA.5.5, BF.5, BF.7, BQ.1, BQ.1.1, XBB, JN.1) at concentrations from 8.75 × 10² to 5.60 × 10⁴ TCID50/mL (or IU/mL for JN.1) |
| Analytical Performance | Analytical Specificity (Cross-Reactivity) & Microbial Interference | No cross-reactivity or interference with common respiratory pathogens/commensals | No cross-reactivity or interference observed with 28 tested microorganisms (9 bacteria, 17 viruses, 1 yeast, pooled human nasal wash, and 4 Coronavirus HKU1 clinical specimens); in silico analysis for P. jirovecii showed very low potential for cross-reactivity; possible susceptibility to SARS-CoV (due to homology) was noted but deemed of low clinical likelihood |
| Analytical Performance | High-Dose Hook Effect | No hook effect at high viral concentrations | No high-dose hook effect observed up to 1.4 × 10⁶ TCID50/mL |
| Analytical Performance | Interfering Substances | No interference from specified endogenous or exogenous substances (e.g., common nasal medications, blood, mucin) | No effect on test performance at the specified concentrations for 25 substances (e.g., throat lozenges, various nasal sprays, hand sanitizer, blood, mucin) |
| Analytical Performance | Reproducibility / Near the Cutoff | High agreement across sites for negative, low positive, moderate positive, and high negative samples | Moderate positive: 100% (135/135) overall agreement (95% CI: 97.2%-100.0%); low positive: 94.1% (127/135) (95% CI: 88.7%-97.0%); high negative: 99.2% (132/133) (95% CI: 95.9%-99.9%); true negative: 99.3% (134/135) (95% CI: 95.9%-99.9%) |
| Clinical Performance | Positive Percent Agreement (PPA) | High PPA against a molecular comparator (RT-PCR) in symptomatic individuals | Combined studies: 86.9% (186/214), 95% CI: 81.7%-90.8% (within 5 days of symptom onset); original study: 81.6% (71/87), 95% CI: 72.2%-88.4%; Omicron study: 90.6% (115/127), 95% CI: 84.2%-94.5% |
| Clinical Performance | Negative Percent Agreement (NPA) | High NPA against a molecular comparator (RT-PCR) in symptomatic individuals | Combined studies: 98.5% (384/390), 95% CI: 96.7%-99.3% (within 5 days of symptom onset); original study: 98.6% (205/208), 95% CI: 95.8%-99.5%; Omicron study: 98.4% (179/182), 95% CI: 95.3%-99.4% |
| Clinical Performance | Performance by Days Post Symptom Onset (DPSO) | Performance maintained within the specified window | PPA by DPSO: Day 0: 69.23% (Omicron); Day 1: 94.12% (original), 88.24% (Omicron); Day 2: 73.33% (original), 97.22% (Omicron); Day 3: 76.00% (original), 100.00% (Omicron); Day 4: 88.89% (original), 66.67% (Omicron); Day 5: 100.00% (both studies) |
| Clinical Performance | Invalid Rate | Low invalid rate | 0.68% overall (5/730) |
| User/Environmental Factors | Flex Studies (Robustness) | Device performs accurately under varied usage and environmental conditions | Demonstrated robustness to usage variation and environmental factors; direct exposure of the test strip to wet cleaning solutions or excessive glove powder may cause erroneous results, prompting specific instructions for use |
2. Sample Sizes and Data Provenance (Clinical Studies)
- Clinical Test Set Sample Size:
- Study 1 (Original): 295 evaluable subjects.
- Study 2 (Omicron): 309 evaluable subjects.
- Combined Clinical Data: 604 evaluable nasal swabs from symptomatic patients (within 5 days of symptom onset).
- Data Provenance: Clinical studies were conducted within the United States.
- Study 1: November 2020 through March 2021 (before the Delta and Omicron variants became dominant).
- Study 2: February 2022 to July 2022 (when Omicron and its variants were prevalent).
- Retrospective/Prospective: Both clinical studies were prospective.
3. Number of Experts and Qualifications for Ground Truth for Test Set
This type of diagnostic device (lateral flow immunoassay) does not typically utilize human experts in the same way an AI/ML device would for image interpretation or clinical diagnosis. For the BinaxNOW COVID-19 Ag Card, the ground truth for the clinical studies was established by a comparator molecular test (RT-PCR). The experts involved would be the laboratory personnel performing and interpreting the RT-PCR assays. Their specific qualifications are not detailed in this summary but are implicitly assumed to be standard for clinical laboratory professionals performing EUA-authorized RT-PCR tests.
4. Adjudication Method for the Test Set
Not applicable in the typical sense for an AI/ML study involving human interpretation. The comparator method (RT-PCR) serves as the reference standard. The document mentions for the serial testing study's composite comparator method that in cases of discordant RT-PCR results, a third RT-PCR test was performed, and the final result based on majority rule.
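The majority-rule composite comparator described above is easy to state precisely; a minimal sketch follows (the function and result labels are illustrative, not from the submission):

```python
# Minimal sketch of a composite comparator: two concordant RT-PCR results
# stand as-is; a discordant pair triggers a third RT-PCR, and the final
# call follows the majority.
from collections import Counter

def composite_result(pcr1, pcr2, pcr3=None):
    if pcr1 == pcr2:
        return pcr1
    if pcr3 is None:
        raise ValueError("discordant pair requires a third RT-PCR result")
    return Counter([pcr1, pcr2, pcr3]).most_common(1)[0][0]

print(composite_result("POS", "POS"))         # POS
print(composite_result("POS", "NEG", "POS"))  # POS (majority rule)
```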
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No. This is a rapid antigen test, not an AI/ML system where human readers would interpret results "with vs. without AI assistance." The test is visually read by the user, and its performance is assessed against a molecular gold standard.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)
This question is not applicable in the context of this device. The BinaxNOW COVID-19 Ag Card is a manually read, qualitative visual assay. There is no AI algorithm to evaluate for standalone performance. The "performance" tables provided in the document (PPA and NPA) essentially represent the "standalone" performance of the rapid antigen test itself when interpreted visually.
7. Type of Ground Truth Used
- For Clinical Studies: The primary ground truth for clinical performance (PPA, NPA) was an FDA Emergency Use Authorized real-time Polymerase Chain Reaction (RT-PCR) assay for the detection of SARS-CoV-2.
- For Serial Testing Study: A composite comparator method was used, involving at least two highly sensitive EUA RT-PCRs. If discordant, a third RT-PCR was performed, and the final result was based on majority rule.
- For Analytical Studies: Ground truth was established by known concentrations of heat-inactivated SARS-CoV-2 virus or WHO International Standard for SARS-CoV-2 Antigen (NIBSC 21/368) for LoD and inclusivity studies, and known presence/absence of specific microorganisms for cross-reactivity.
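For the LoD criterion (the lowest concentration detected at least 95% of the time), a common analysis, assumed here rather than documented in this summary, is to fit a probit model of hit rate versus log concentration across a dilution panel and solve for the 95% detection point. The panel counts below are synthetic:

```python
# Hedged sketch of probit LoD estimation on a made-up dilution panel:
# detection probability is modeled as Phi(a + b*log10(conc)) and the
# 95% LoD is the concentration where that probability reaches 0.95.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

conc = np.array([125.0, 250.0, 500.0, 1000.0, 2000.0])  # hypothetical TCID50/mL
n_rep = np.array([20, 20, 20, 20, 20])
n_hit = np.array([4, 9, 17, 20, 20])
x = np.log10(conc)

def neg_loglik(params):
    a, b = params
    p = norm.cdf(a + b * x).clip(1e-9, 1 - 1e-9)
    return -np.sum(n_hit * np.log(p) + (n_rep - n_hit) * np.log(1 - p))

a, b = minimize(neg_loglik, x0=[-5.0, 2.0], method="Nelder-Mead").x
lod95 = 10 ** ((norm.ppf(0.95) - a) / b)
print(f"Estimated 95% LoD ~ {lod95:.0f} TCID50/mL")  # roughly 8e2 here
```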
8. Sample Size for the Training Set
This information is not applicable for this type of IVD device. The BinaxNOW COVID-19 Ag Card is a laboratory-developed lateral flow assay, not an AI/ML model that is 'trained' on data. Its 'training' is the fundamental assay development and optimization process, not a computational training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable for this device type. The manufacturing process and quality control of the reagents and test strip govern its 'performance' characteristics, which are then analytically and clinically validated.
(214 days)
Dual Track
The SQA-iOw Sperm Quality Analyzer is an automated analyzer intended for in-vitro diagnostic use to determine the following parameters in semen:
Measured parameters:
- Sperm Concentration/ Total Sperm Concentration, millions/mL
- Motile Sperm Concentration (MSC), millions/mL
- Progressively Motile Sperm Concentration (PMSC), millions/mL (combines Rapidly and Slowly Progressive Motile Sperm Concentration, millions/mL)
- Normal Forms (% Normal Morphology), %
Derived parameters:
- Total Motility / Total Motile (PR + NP), %
- Progressive Motility (PR), % (combines Rapidly and Slowly Progressive, %)
- Non-Progressive (NP), %
- Immotile (IM), %
The SQA-iOw is intended for CLIA Waived settings. The SQA-iOw does not provide a comprehensive evaluation of a male's fertility status and is intended for in vitro use only.
The SQA-iOw Sperm Quality Analyzer is a PC-based analytical medical device that tests human semen samples. The device works with a computer application that manages the device, and information related to the patient, the sample, the test results and the facility.
After collection and preparation, 0.6 mL of semen sample is aspirated into a disposable SQA capillary sample delivery system and inserted into the SQA-iOw measurement chamber. The testing process takes approximately 75 seconds. The system performs an automatic self-test and auto-calibration upon start up, and checks device stability before each sample is run.
The SQA-iOw Sperm Quality Analyzer utilizes proprietary software code to both perform analysis of semen parameters and present those results on the user interface. This software is installed on a PC as a cloud-based application ("app") and is designed to perform all functions and features of the SQA-iO device, controlled by the user through a proprietary graphical user interface (GUI).
The SQA-iOw Sperm Quality Analyzer software analyzes semen parameters using signal processing technology. Sample testing is performed by capturing electrical signals as sperm moves through a light source in the SQA-iO optical block. These light disturbances are converted into electrical signals which are then analyzed by the SQA-iOw software. The SQA-iOw software applies proprietary algorithms to interpret and express these electrical signals and report them as various semen parameters.
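As a heavily simplified stand-in for this signal-processing idea, the sketch below counts transient peaks in a detector trace and converts the event count per sampled volume into a concentration. The threshold, sampled volume, and peak model are all hypothetical; the SQA-iOw's actual algorithms are proprietary and certainly more involved.

```python
# Illustrative stand-in only: cells crossing an optical beam produce
# transient disturbances in the detector signal; counting events per unit
# sampled volume gives a concentration estimate in millions per mL.
import numpy as np
from scipy.signal import find_peaks

def events_to_concentration(signal, sampled_volume_ml, min_height=0.5):
    """Count transient peaks and convert to millions of events per mL."""
    peaks, _ = find_peaks(signal, height=min_height, distance=5)
    return len(peaks) / sampled_volume_ml / 1e6

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, 5000)        # detector noise floor
trace[np.arange(100, 5000, 250)] += 1.0    # 20 synthetic crossing events
print(f"{events_to_concentration(trace, sampled_volume_ml=1e-6):.0f} M/mL")  # 20
```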
The SQA-iOw Sperm Quality Analyzer package provides the SQA-iOw device and USB cable. SQA disposable capillaries, cleaning kits and related testing supplies and test kits are supplied individually.
Here's a breakdown of the acceptance criteria and the study proving the SQA-iOw Sperm Quality Analyzer meets them, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance letter does not explicitly list predefined quantitative acceptance criteria in a dedicated table format. Instead, it describes two precision studies and a method comparison study, concluding that the results "met the acceptance criteria." For the method comparison, it refers to "Passing-Bablok regression" with "Slopes, y-intercepts, and correlation coefficients, along with the 95% confidence intervals, were reported." The implicit acceptance criteria are typically that these statistical measures fall within a pre-specified range demonstrating equivalence to the predicate device.
Given the information provided, we can infer the acceptance criteria for the parameters measured and the reported performance.
| Parameter Category | Test Type | Acceptance Criteria (Implicit from Conclusion) | Reported Device Performance (Summary) |
|---|---|---|---|
| Precision (control material) | Repeatability (within-run), between-day, between-operator, between-site, total imprecision | SD and %CV met the acceptance criteria (specific limits not provided in the extract) | All reported SDs and %CVs for Control Level 1, Level 2, and the Negative Control were low, indicating high precision; e.g., total %CV was 1.84% for Level 1 and 4.01% for Level 2, and total SD and %CV for the Negative Control were 0.00%. |
| Precision (native samples) | Repeatability (within-run), between-operator, total imprecision | SD and %CV met the acceptance criteria for all reported parameters (specific limits not provided in the extract) | Total %CV ranges: Sperm Concentration 1.5%-14.1%; MSC 0.0%-41.6%; PMSC 4.0%-173.2%; Morphology 6.5%-244.9%; Motility 4.2%-11.0%; Progressive Motility 6.1%-261.7%; Non-Progressive Motility 6.4%-76.7%; Immotile 1.8%-10.4%. The very high %CVs occurred for low-level samples; the stated conclusion that all results met acceptance criteria suggests this was considered acceptable given the clinical relevance of those low values. |
| Method comparison | Passing-Bablok regression: intercept, slope, correlation coefficient (with 95% CIs) | Slopes, y-intercepts, and correlation coefficients, with 95% confidence intervals, demonstrated clinical equivalence to the predicate device (specific ranges not provided in the extract) | Concentration: intercept 0.05 (-0.4799 to 0.2610), slope 0.98 (0.9718 to 0.9836), correlation 1.0 (0.9974 to 0.9982). Motility: intercept 2.1 (1.2174 to 3.0000), slope 0.9 (0.9189 to 0.9565), correlation 0.96 (0.9493 to 0.9659). Progressive Motility: intercept -0.7 (-1.4516 to 0.0000), slope 1.0 (0.9286 to 0.9677), correlation 1.0 (0.9683 to 0.9787). Non-Progressive Motility: intercept -0.3 (-1.0000 to 0.0000), slope 1.3 (1.2500 to 1.4000), correlation 0.7 (0.6944 to 0.7850). Immotile: intercept 4.0 (3.0417 to 5.0000), slope 0.9 (0.9200 to 0.9583), correlation 0.9 (0.9130 to 0.9411). Morphology: intercept -1.0 (-1.0000 to -0.0455), slope 1.0 (0.9091 to 1.0000), correlation 1.0 (0.9563 to 0.9706). MSC: intercept 0.3 (0.05708 to 0.5580), slope 0.9 (0.9344 to 0.9571), correlation 1.0 (0.9889 to 0.9925). PMSC: intercept -0.3 (-0.5450 to -0.0968), slope 0.9 (0.9149 to 0.9364), correlation 1.0 (0.9894 to 0.9929). |
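Passing-Bablok regression, the statistic used in the method comparison above, has a compact definition: the slope estimate is a shifted median of all pairwise slopes (robust to outliers and to measurement error in both methods), and the intercept is the median of y − slope·x. A sketch on synthetic data follows; the study's 380 donor samples are not reproduced here.

```python
# Compact sketch of Passing-Bablok regression on synthetic data.
import numpy as np

def passing_bablok(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx != 0 and dy / dx != -1:  # slopes of exactly -1 are dropped
                slopes.append(dy / dx)
    s = np.sort(slopes)
    k = int(np.sum(s < -1))                # offset keeps the estimate unbiased
    n = len(s)
    if n % 2:
        slope = s[(n + 1) // 2 + k - 1]
    else:
        slope = 0.5 * (s[n // 2 + k - 1] + s[n // 2 + k])
    return slope, float(np.median(y - slope * x))

rng = np.random.default_rng(1)
ref = rng.uniform(1, 100, 60)                   # predicate (SQA-V) results
new = 0.95 * ref + 0.5 + rng.normal(0, 2, 60)   # candidate device results
slope, intercept = passing_bablok(ref, new)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")  # near 0.95 / 0.5
```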
2. Sample Size and Data Provenance
- Sample Size for Test Set:
- CLIA Waived User Precision Study (Control Material): 270 measurements in total (9 users, 3 per site across 3 sites, × 3 control levels × 10 replicates per level, collected over 3 days per site).
- CLIA Waived User Precision Study (Native Samples): 216 measurements total (9 native semen samples x 2 replicates per sample x 3 users/site x 4 time points).
- Method Comparison Study: 380 donor semen samples.
- Data Provenance (Country of Origin and Retrospective/Prospective):
- The Method Comparison Study was conducted across "Three U.S. sites."
- The Precision studies were also multi-site, with the control material study having "3 sites". The native sample precision study was "across two sites."
- The data appears to be prospectively collected for the purpose of these studies, as detailed study designs are provided, including number of sites, users, days, replicates, and samples. The samples used in the method comparison were "donor semen samples."
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts:
- For the Method Comparison Study, there were "One or more TRAINED OPERATORS per site" (3 sites) who generated reference SQA-V results.
- Qualifications of Experts:
- The experts (TRAINED OPERATORS) were described as "fully trained and considered appropriate for generating reference SQA-V results." Their specific professional qualifications (e.g., medical technologists, clinical lab scientists) or years of experience are not explicitly stated.
4. Adjudication Method for the Test Set
- The document implies that the ground truth for the method comparison study was established by the "TRAINED OPERATORS" using the predicate device (SQA-V). There is no mention of an adjudication process (e.g., 2+1, 3+1 consensus) among multiple experts to establish a "true" ground truth beyond the output of the predicate device operated by trained users. The samples were assayed "in singleton and in a blinded fashion" using both methods, suggesting a direct comparison rather than multi-reader adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No explicit MRMC comparative effectiveness study was described in terms of human readers improving with AI vs. without AI assistance. The study compares the performance of a new device (SQA-iOw operated by waived users) against a predicate device (SQA-V operated by trained users). It's a method comparison for an automated device, not an AI-assisted human reader study.
6. Standalone (Algorithm Only) Performance
- The SQA-iOw is described as an "automated analyzer" that "utilizes proprietary software code to both perform analysis of semen parameters" and "applies proprietary algorithms to interpret and express these electrical signals and report them as various semen parameters." The performance measurements detailed (precision studies and method comparison) represent the standalone performance of the device/algorithm in processing samples and generating results for the specified semen parameters. There is no human-in-the-loop component in the measurement process itself.
7. Type of Ground Truth Used
- The ground truth for the Method Comparison Study was established using the results from the predicate device (SQA-V) operated by trained users. This serves as a "reference standard" or "comparative method" rather than an absolute ground truth such as pathology or outcomes data.
- For the Precision Studies, the ground truth is statistical variability around the mean measurements of control materials and native samples.
8. Sample Size for the Training Set
- The document does not provide information on the sample size used for the training set for the SQA-iOw's algorithms. The studies described are validation (test set) studies, not algorithm development or training data descriptions.
9. How Ground Truth for Training Set was Established
- The document does not provide information on how the ground truth for the training set was established, as it focuses on the validation studies. It only mentions that the device "applies proprietary algorithms" but not how these algorithms were developed or trained.
(176 days)
Dual Track
The cobas liat SARS-CoV-2 & Influenza A/B v2 nucleic acid test is an automated rapid multiplex real-time reverse transcription polymerase chain reaction (RT-PCR) test intended for the simultaneous qualitative detection and differentiation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), influenza A virus and influenza B virus nucleic acids in anterior nasal (nasal) and nasopharyngeal swab specimens from individuals exhibiting signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory tract infection due to SARS-CoV-2 and influenza can be similar. This test is intended to aid in the differential diagnosis of SARS-CoV-2, influenza A and influenza B infections in humans and is not intended to detect influenza C virus infections.
Nucleic acids from the viral organisms identified by this test are generally detectable in nasopharyngeal and nasal swab specimens during the acute phase of infection. The detection and identification of specific viral nucleic acids from individuals exhibiting signs and symptoms of respiratory tract infection are indicative of the presence of the identified virus, and aid in diagnosis if used in conjunction with other clinical and epidemiological information and laboratory findings.
The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions. Positive results do not rule out coinfection with other organisms. The organism(s) detected by the cobas liat SARS-CoV-2 & Influenza A/B v2 nucleic acid test may not be the definite cause of disease. Negative results do not preclude SARS-CoV-2, influenza A virus or influenza B virus infections.
The cobas liat SARS-CoV-2 & Influenza A/B v2 nucleic acid test is performed on the cobas liat analyzer, which automates and integrates sample purification, nucleic acid amplification, and detection of the target sequence in biological samples using real-time PCR assays. The assay targets both the ORF1a/b non-structural region and the membrane protein gene, which are unique to SARS-CoV-2; a well-conserved region of the matrix gene of influenza A (Flu A target); and the nonstructural protein 1 (NS1) gene of influenza B (Flu B target). An Internal Control (IC) is included to control for adequate processing of the target virus through all steps of the assay process and to monitor for inhibitors of the RT-PCR process.
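The internal-control logic described here amounts to a simple validity rule: a negative target call is reported only when the IC amplified, which guards against extraction failure or RT-PCR inhibition. A minimal sketch (field names and the override behavior are assumptions, not Roche's documented logic):

```python
# Minimal sketch of internal-control validity gating (assumed behavior).
def report(target_detected: bool, ic_detected: bool) -> str:
    if target_detected:
        return "DETECTED"  # target amplification itself shows the run worked
    return "NOT DETECTED" if ic_detected else "INVALID"

print(report(target_detected=False, ic_detected=False))  # INVALID
```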
This document describes the validation study for the cobas liat SARS-CoV-2 & Influenza A/B v2 nucleic acid test.
Here's an analysis of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a diagnostic test, the primary acceptance criteria revolve around analytical and clinical performance metrics like Limit of Detection, Inclusivity, Cross-Reactivity, Reproducibility, and Clinical Agreement (Positive Percent Agreement and Negative Percent Agreement). The document doesn't explicitly state "acceptance criteria" values in a separate table, but these are implied by the performance metrics reported and the general standards for diagnostic device clearance. I will extract the reported device performance from the provided text.
Performance Metric | Target Analyte | Specimen Type | Reported Performance (Value) | Implied Acceptance Criteria (Typically high for diagnostic tests) |
---|---|---|---|---|
Analytical Sensitivity (LoD) | SARS-CoV-2 | Co-spiked panels | 0.0350 TCID50/mL | Lowest detectable concentration for 95% positivity |
 | Influenza A | Co-spiked panels | 0.00325 TCID50/mL | Lowest detectable concentration for 95% positivity |
 | Influenza B | Co-spiked panels | 0.183 TCID50/mL | Lowest detectable concentration for 95% positivity |
Reactivity/Inclusivity | SARS-CoV-2 | Respective variants | 100% detection at 3x LoD | Detection of various strains/variants |
 | Influenza A | Respective variants | 100% detection at varying LoD (up to 12x) | Detection of various strains/variants |
 | Influenza B | Respective variants | 100% detection at 3x LoD | Detection of various strains/variants |
Cross-Reactivity/Microbial Interference | All targets | Various microorganisms | No cross-reactivity/interference | No false positives or interference from other common pathogens |
Competitive Inhibition | All targets | Co-spiked samples | No interference | Accurate detection of all targets even in co-infection |
Endogenous/Exogenous Interference | All targets | Various substances | No interference | Robust performance in presence of common respiratory interferents |
Reproducibility (Negative) | N/A | Negative samples | 100.0% agreement | High agreement for negative samples across sites, lots, and days |
Reproducibility (1x-2x LoD) | SARS-CoV-2 | Low positive samples | 100.0% agreement | High agreement for low positive samples |
 | Influenza A | Low positive samples | 99.6% agreement | High agreement for low positive samples |
 | Influenza B | Low positive samples | 99.6% agreement | High agreement for low positive samples |
Reproducibility (3x-5x LoD) | SARS-CoV-2 | Moderate positive samples | 100.0% agreement | High agreement for moderate positive samples |
 | Influenza A | Moderate positive samples | 100.0% agreement | High agreement for moderate positive samples |
 | Influenza B | Moderate positive samples | 100.0% agreement | High agreement for moderate positive samples |
Clinical Performance (PPA), Prospective | SARS-CoV-2 | NPS | 94.5% (95% CI 90.7-96.8) | High sensitivity (ability to detect true positives) |
 | SARS-CoV-2 | ANS | 96.7% (95% CI 93.4-98.4) | High sensitivity (ability to detect true positives) |
 | Influenza A | NPS | 100.0% (95% CI 93.4-100.0) | High sensitivity (ability to detect true positives) |
 | Influenza A | ANS | 100.0% (95% CI 93.2-100.0) | High sensitivity (ability to detect true positives) |
 | Influenza B | NPS | 100.0% (95% CI 85.1-100.0) | High sensitivity (ability to detect true positives) |
 | Influenza B | ANS | 100.0% (95% CI 86.2-100.0) | High sensitivity (ability to detect true positives) |
Clinical Performance (NPA), Prospective | SARS-CoV-2 | NPS | 97.6% (95% CI 96.7-98.3) | High specificity (ability to correctly identify true negatives) |
 | SARS-CoV-2 | ANS | 97.2% (95% CI 96.2-97.9) | High specificity (ability to correctly identify true negatives) |
 | Influenza A | NPS | 99.3% (95% CI 98.8-99.6) | High specificity (ability to correctly identify true negatives) |
 | Influenza A | ANS | 99.3% (95% CI 98.8-99.6) | High specificity (ability to correctly identify true negatives) |
 | Influenza B | NPS | 99.3% (95% CI 98.8-99.6) | High specificity (ability to correctly identify true negatives) |
 | Influenza B | ANS | 99.5% (95% CI 99.0-99.7) | High specificity (ability to correctly identify true negatives) |
Clinical Performance (PPA), Retrospective | Influenza B | NPS | 100.0% (95% CI 89.8-100.0) | High sensitivity (ability to detect true positives) |
 | Influenza B | ANS | 100.0% (95% CI 89.8-100.0) | High sensitivity (ability to detect true positives) |
Clinical Performance (NPA), Retrospective | Influenza B | NPS | 97.9% (95% CI 94.7-99.2) | High specificity (ability to correctly identify true negatives) |
 | Influenza B | ANS | 98.3% (95% CI 95.0-99.4) | High specificity (ability to correctly identify true negatives) |
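For reference, the PPA and NPA figures above are simple proportions from the 2×2 comparison against the comparator NAAT, and the quoted intervals are consistent with two-sided 95% Wilson score intervals, the conventional choice in such summaries. A minimal sketch follows; the 2×2 counts are hypothetical, since the document reports only percentages and intervals.

```python
# PPA/NPA with two-sided 95% Wilson score intervals, the usual presentation in
# 510(k) clinical agreement tables. The 2x2 counts below are hypothetical; the
# document reports only the resulting percentages and intervals.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

tp, fn, fp, tn = 54, 0, 11, 1598  # hypothetical counts vs. the comparator NAAT
lo, hi = wilson_ci(tp, tp + fn)
print(f"PPA {tp / (tp + fn):.1%} (95% CI {lo:.1%}-{hi:.1%})")
# -> PPA 100.0% (95% CI 93.4%-100.0%), matching the pattern reported above
lo, hi = wilson_ci(tn, tn + fp)
print(f"NPA {tn / (tn + fp):.1%} (95% CI {lo:.1%}-{hi:.1%})")
```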
2. Sample Sizes Used for the Test Set and Data Provenance
- Prospective Clinical Study:
  - Sample Size: 1729 symptomatic subjects enrolled.
    - 1705 evaluable NPS specimens for analysis (19 non-evaluable due to missing/invalid results, 5 due to handling).
    - 1706 evaluable ANS specimens for SARS-CoV-2 and influenza B analysis (22 non-evaluable due to missing/invalid results, 1 due to handling).
    - 1704 evaluable ANS specimens for influenza A analysis (2 additional found inconclusive for the comparator).
  - Data Provenance: Prospective; specimens collected between September 2023 and March 2024 at 14 point-of-care testing sites in the United States (US).
- Retrospective Clinical Study (Influenza B Supplement):
  - Sample Size: 223 archived NPS specimens and 206 archived ANS specimens (429 total).
    - One NPS sample pre-characterized as positive for influenza B was non-evaluable.
  - Data Provenance: Retrospective; frozen archived (Category III) specimens collected between 2019 and 2023 and distributed to 6 sites for testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
The document does not mention the use of experts to establish ground truth for the clinical test sets. For molecular diagnostic tests like this, the ground truth is typically established by comparing the investigational device's results against a highly accurate, accepted comparator method (another FDA-cleared Nucleic Acid Amplification Test specific for the target analytes). The expertise lies in the development and validation of these comparator methods, not in individual expert review of each sample for ground truth in this context.
4. Adjudication Method for the Test Set
The document describes discrepant result analysis for both prospective and retrospective clinical studies.
- For the prospective study, "discrepant NAAT results" are detailed for SARS-CoV-2 (NPS and ANS), Influenza A (NPS and ANS), and Influenza B (NPS and ANS).
- For the retrospective study, discrepant NAAT results are detailed for Influenza B (NPS and ANS).
The method appears to be:
- The cobas liat test result is compared to the FDA-cleared comparator NAAT result.
- When there is a discrepancy (e.g., cobas liat positive, comparator negative), the document states how many of those specimens were "positive" and how many were "negative" upon further investigation (reported as "discrepant NAAT results").
- For example: "Of 12 specimens negative on cobas® liat and positive on the comparator, 8 were positive and 4 were negative." This implies some form of re-testing or deeper analysis, most likely re-testing with the comparator or a third reference method, rather than a human expert consensus process.
Therefore, while no "2+1" or "3+1" expert adjudication method is described (as would be seen in imaging studies), there is a discrepant-resolution process based on additional NAAT results: discrepancies are not simply reported without follow-up, but are further investigated to confirm the original comparator status where possible.
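As a purely illustrative sketch of this bookkeeping, the quoted SARS-CoV-2 counts can be tallied directly; the document describes no statistical adjustment beyond reporting these resolved counts.

```python
# Illustrative tally of the discrepant-resolution process: each discrepant
# specimen is re-tested and the resolved result recorded. Counts reproduce the
# SARS-CoV-2 example quoted above; no further adjustment is described.
from collections import Counter

# (cobas liat result, comparator result, result after additional NAAT testing)
discrepants = [("neg", "pos", "pos")] * 8 + [("neg", "pos", "neg")] * 4

resolved = Counter(final for _liat, _comp, final in discrepants)
print(f"Of {len(discrepants)} liat-negative/comparator-positive specimens, "
      f"{resolved['pos']} were positive and {resolved['neg']} were negative "
      "on further NAAT testing.")
```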
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No. This is a standalone diagnostic test (RT-PCR), not an AI-assisted imaging device or a test that involves human "readers" interpreting results. Therefore, an MRMC comparative effectiveness study involving human readers and AI assistance is not applicable and was not performed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, implicitly. This is a fully automated RT-PCR test run on the cobas liat analyzer. The performance metrics (LoD, inclusivity, cross-reactivity, reproducibility, and clinical agreement) are measures of the device's performance on its own against established ground truth (comparator NAAT). While humans load samples and interpret the final digital result (positive/negative), the core detection and differentiation is algorithm-driven within the instrument, making its performance essentially "standalone" in the context of diagnostic accuracy.
7. The Type of Ground Truth Used
- Clinical Performance (Prospective and Retrospective): The ground truth for clinical sample testing was established by comparing the cobas liat results against an FDA-cleared Nucleic Acid Amplification Test (NAAT), which serves as the reference or "ground truth" method for molecular diagnostic assays. The document explicitly states: "PPA and NPA were determined by comparing the results of cobas® liat SARS-CoV-2 & Influenza A/B v2 to the results of an FDA-cleared Nucleic Acid Amplification Test (NAAT)." and "The comparator method was an acceptable FDA-cleared molecular assay."
- Analytical Studies (LoD, Inclusivity, Cross-Reactivity, Interference, Reproducibility): Ground truth was established by preparing precisely known concentrations of viral material (cultured or inactivated viruses) or specific microorganisms in controlled laboratory settings. For these studies, the "ground truth" is meticulously prepared and verified laboratory standards.
8. The Sample Size for the Training Set
The document does not specify a separate "training set" sample size. For an RT-PCR diagnostic platform, the "training" involves the fundamental biochemical and optical engineering, and the optimization of assay (reagent) design to achieve sensitivity and specificity. This is distinct from machine learning models that often require large, labeled datasets for "training." The analytical and clinical validation studies described here are verification and validation (V&V) studies, akin to a "test set" to prove the device's performance against its design specifications and clinical utility.
9. How the Ground Truth for the Training Set Was Established
Since no explicit "training set" for a machine learning algorithm is mentioned (as this is a molecular diagnostic test), this question is not directly applicable. However, the ground truth for assay development and optimization (which can be considered analogous to "training" in a broader sense of device development) would have been established through extensive laboratory work using:
- Highly characterized viral cultures or purified nucleic acids: Used to define target sequences, optimize primer/probe design, and determine initial analytical sensitivity.
- Spiked samples: Adding known quantities of targets or interferents to negative clinical matrices to mimic real-world conditions during early development.
- Early clinical samples: Used to refine assay performance and resolve initial issues prior to formal validation studies.
These processes ensure the assay correctly identifies the target nucleic acids.
(175 days)
Dual Track
The cobas liat SARS-CoV-2, Influenza A/B & RSV nucleic acid test is an automated rapid multiplex real-time reverse transcription polymerase chain reaction (RT-PCR) test intended for the simultaneous qualitative detection and differentiation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), influenza A virus, influenza B virus and respiratory syncytial virus (RSV) nucleic acids in anterior nasal (nasal) and nasopharyngeal swab specimens from individuals exhibiting signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory viral infection due to SARS-CoV-2, influenza and RSV can be similar. This test is intended to aid in the differential diagnosis of SARS-CoV-2, influenza A, influenza B, and RSV infections in humans and is not intended to detect influenza C virus infections.
Nucleic acids from the viral organisms identified by this test are generally detectable in nasopharyngeal and nasal swab specimens during the acute phase of infection. The detection and identification of specific viral nucleic acids from individuals exhibiting signs and symptoms of respiratory tract infection are indicative of the presence of the identified virus, and aid in diagnosis if used in conjunction with other clinical and epidemiological information, and laboratory findings.
The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions. Positive results do not rule out coinfection with other organisms. The organism(s) detected by the cobas liat SARS-CoV-2, Influenza A/B & RSV nucleic acid test may not be the definite cause of disease. Negative results do not preclude SARS-CoV-2, influenza A virus, influenza B virus, or RSV infections.
The cobas liat SARS-CoV-2, Influenza A/B & RSV nucleic acid test is performed on the cobas liat analyzer, which automates and integrates sample purification, nucleic acid amplification, and detection of the target sequence in biological samples using real-time PCR assays. The assay targets both the ORF1a/b non-structural region and the membrane protein gene, which are unique to SARS-CoV-2, as well as a well-conserved region of the matrix gene of influenza A (Flu A target), the nonstructural protein 1 (NS1) gene of influenza B (Flu B target), and the matrix gene of RSV (RSV target). An Internal Control (IC) is included to control for adequate processing of the target virus through all steps of the assay process and to monitor for the presence of inhibitors in the RT-PCR process.
The provided text describes the analytical and clinical performance evaluation of the cobas® liat SARS-CoV-2, Influenza A/B & RSV nucleic acid test, which is a real-time RT-PCR assay. The information mainly focuses on the performance characteristics required for FDA clearance (510(k)).
Here's a breakdown of the requested information based on the provided document:
Acceptance Criteria and Device Performance
The document does not explicitly present a table of "acceptance criteria" in a pass/fail format for clinical performance. Instead, it demonstrates the device's performance through various analytical studies and clinical agreement percentages relative to a comparator method. The acceptance for a 510(k) submission is typically that the device is "substantially equivalent" to a legally marketed predicate device, which implies demonstrating comparable performance characteristics.
The key performance metrics are the Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) in clinical studies. While there aren't explicit numeric acceptance criteria stated, the achieved performance values are presented as evidence of substantial equivalence.
Here’s a table summarizing the reported clinical device performance based on the prospective study (Table 13) and the retrospective study (Table 15). These are the metrics by which the device's clinical performance would be "accepted" as substantially equivalent.
Table of Reported Device Performance (Clinical)
Target | Specimen Type | PPA (%) (95% CI) | NPA (%) (95% CI) |
---|---|---|---|
SARS-CoV-2 | NPS | 94.5 (90.7-96.8) | 97.6 (96.7-98.3) |
SARS-CoV-2 | ANS | 96.7 (93.4-98.4) | 97.2 (96.2-97.9) |
Influenza A | NPS | 100.0 (93.4-100.0) | 99.3 (98.8-99.6) |
Influenza A | ANS | 100.0 (93.2-100.0) | 99.3 (98.8-99.6) |
Influenza B | NPS (Prospective) | 100.0 (85.1-100.0) | 99.3 (98.8-99.6) |
Influenza B | ANS (Prospective) | 100.0 (86.2-100.0) | 99.5 (99.0-99.7) |
Influenza B | NPS (Retrospective) | 100.0 (89.8-100.0) | 97.9 (94.7-99.2) |
Influenza B | ANS (Retrospective) | 100.0 (89.8-100.0) | 98.3 (95.0-99.4) |
RSV | NPS | 100.0 (94.8-100.0) | 99.0 (98.3-99.3) |
RSV | ANS | 97.5 (91.4-99.3) | 98.8 (98.2-99.3) |
Note on "Acceptance Criteria" for Analytical Performance: The document describes detailed analytical studies (LoD, inclusivity, cross-reactivity, interference, reproducibility), and the reported hit rates and concentrations demonstrate that the device met the internal analytical performance specifications, which are implicitly the "acceptance criteria" for these aspects. For instance, for LoD, the acceptance criterion is implied to be ≥95% hit rate at the determined concentration. For inclusivity, it's detection at or near 3x LoD. For cross-reactivity and interference, the acceptance criterion is no cross-reactivity/interference observed. The document states that "none of the organisms tested cross reacted or interfered" and that "substances... did not interfere," indicating successful meeting of these criteria. For reproducibility, the agreement percentages for positive and negative samples are above 99% for most categories.
Study Details
- Sample sizes used for the test set and the data provenance:
  - Prospective Clinical Study (Category I):
    - NPS specimens: 1729 enrolled subjects, yielding 1704 evaluable specimens for SARS-CoV-2, Flu A, and Flu B, and 1705 for RSV.
    - ANS specimens: 1729 enrolled subjects, yielding 1705 evaluable specimens for SARS-CoV-2 and Flu B, 1703 for Flu A, and 1706 for RSV.
    - Data Provenance: Fresh specimens, prospective, collected between September 2023 and March 2024 at 14 point-of-care testing sites in the United States (US).
  - Retrospective Clinical Study (Category III):
    - Specimens: Frozen archived clinical NPS (n=223) and ANS (n=206) specimens.
    - Data Provenance: Retrospective; collected between 2019 and 2023 and distributed to 6 sites for testing. Country of origin is not explicitly stated, but a US origin is implied by the overall context of a US FDA clearance (though not definitively stated for the retrospective part).
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  The ground truth for the clinical test set was established by comparing the results of the cobas® liat test to the results of an FDA-cleared Nucleic Acid Amplification Test (NAAT). The document does not specify the number of human experts, their qualifications, or their role in establishing this ground truth. The "ground truth" here is the result from the comparator NAAT, not human expert interpretation of images or clinical data.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  The document describes a discrepant analysis for cases where the cobas® liat test results differed from the comparator NAAT. For the prospective study, the discrepancy analysis shows how many of the discrepant results (e.g., cobas® liat negative, comparator positive) were ultimately confirmed as positive or negative by further investigation (implied to be by the comparator method or potentially a third method, though not explicitly detailed beyond "discrepant NAAT results"):
  - SARS-CoV-2 NPS: Of 12 cobas® liat-negative/comparator-positive specimens, 8 were positive and 4 negative. Of 35 cobas® liat-positive/comparator-negative specimens, 12 were positive and 23 negative.
  - Similar analyses are provided for the other targets and specimen types.
  This implies an adjudication method in which discrepant results were further investigated, likely with repeat testing or a confirmatory reference method; the specific "2+1" or "3+1" reader/expert adjudication model common in imaging studies is not applicable or described for this in vitro diagnostic (IVD) device.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  No, an MRMC comparative effectiveness study was not done. This type of study is primarily relevant for imaging devices that assist human readers (e.g., AI for radiology). The cobas® liat test is an automated molecular diagnostic test that directly detects nucleic acids, not an AI-assisted interpretation device for human "readers."
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
  Yes. The performance reported (PPA, NPA, and the analytical studies) represents the standalone performance of the cobas® liat device. It is an automated system in which the "algorithm" (the RT-PCR assay and its interpretation software) directly produces a qualitative result (Detected/Not Detected) without human-in-the-loop interpretation of the primary result. Human operators load samples and review results, but the analytical detection and differentiation itself is automated.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  The ground truth for both the prospective and retrospective clinical studies was the result of an FDA-cleared Nucleic Acid Amplification Test (NAAT), which served as the comparator method. For discrepant samples, further re-testing against the comparator or a reference method was performed for adjudication. This is a "reference standard" or "comparator method" type of ground truth.
- The sample size for the training set:
  The document does not specify the sample size for a "training set." A molecular diagnostic device of this type typically relies on analytical validation (LoD, inclusivity, specificity) and clinical validation through comparison to a reference method, rather than on a machine-learning model that requires explicit training data. The development process involves iterative optimization of primers, probes, and assay conditions, but this is not typically referred to as a "training set" in IVD submissions, especially for traditional PCR assays.
- How the ground truth for the training set was established:
  As no explicit "training set" for a machine-learning model is described, this question is not applicable based on the provided text. The "ground truth" in the context of traditional IVD development refers to the reliable identification of the target analytes in samples for analytical and clinical validation, typically through established reference methods or characterized materials.
(354 days)
Dual Track
The Acucy Influenza A&B Test is a rapid chromatographic immunoassay for the qualitative detection and differentiation of influenza A and B viral nucleoprotein antigens directly from anterior nasal and nasopharyngeal swabs from patients with signs and symptoms of respiratory infection. The test is intended for use with the Acucy or Acucy 2 Reader as an aid in the diagnosis of influenza A and B viral infections. The test is not intended for the detection of influenza C viruses. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions.
Performance characteristics for influenza A were established during the 2017-2018 influenza season when influenza A/H3N2 and A/H1N1pdm09 were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.
If an infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
The Acucy Influenza A&B Test allows for the differential detection of influenza A and influenza B antigens, when used with the Acucy 2 Reader. The patient sample is placed in the Extraction Buffer vial, during which time the virus particles in the sample are disrupted, exposing internal viral nucleoproteins. After disruption, the sample is dispensed into the Test Cassette sample well. From the sample well, the sample migrates along the membrane surface. If influenza A or B viral antigens are present, they will form a complex with mouse monoclonal antibodies to influenza A and/or B nucleoproteins conjugated to colloidal gold. The complex will then be bound by a rat anti-influenza A and/or mouse anti-influenza B antibody coated on the nitrocellulose membrane.
Depending upon the operator's choice, the Test Cassette is either placed inside the Acucy 2 Reader for automatically timed development mode (WALK AWAY Mode) or placed on the counter or bench top for a manually timed development and then placed into Acucy 2 Reader to be scanned (READ NOW Mode).
The Acucy 2 Reader will scan the Test Cassette and measure the absorbance intensity by processing the results using method-specific algorithms. The Acucy 2 Reader will display the test results POS (+), NEG (-), or INVALID on the screen. The results can also be automatically printed on the optional Printer if this option is selected.
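The reader's decision step can be pictured as a comparison of measured line intensities against fixed analytical cut-offs; the performance summary below reports cut-offs of 6.4 mABS for the Flu A line and 5.4 mABS for the Flu B line. A minimal sketch follows, where the control-line gate, function names, and boundary handling are assumptions (the actual "method-specific algorithms" are not disclosed).

```python
# Minimal sketch of threshold-based result calling for a lateral-flow reader.
# The 6.4/5.4 mABS cut-offs are the analytical cut-off values reported in this
# summary; the control-line gate, names, and boundary handling are assumptions.

FLU_A_CUTOFF_MABS = 6.4
FLU_B_CUTOFF_MABS = 5.4

def read_cassette(flu_a_mabs, flu_b_mabs, control_line_ok):
    """Classify measured test-line intensities (milli-absorbance) as POS/NEG."""
    if not control_line_ok:
        return {"Flu A": "INVALID", "Flu B": "INVALID"}
    return {
        "Flu A": "POS" if flu_a_mabs > FLU_A_CUTOFF_MABS else "NEG",
        "Flu B": "POS" if flu_b_mabs > FLU_B_CUTOFF_MABS else "NEG",
    }

print(read_cassette(12.8, 0.9, control_line_ok=True))
# -> {'Flu A': 'POS', 'Flu B': 'NEG'}
```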
Here's a breakdown of the acceptance criteria and the studies performed for the Acucy Influenza A&B Test with the Acucy 2 System, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" for each study, but rather presents the results and the implication that these results demonstrate the equivalence and performance of the device. For the purpose of this table, I will infer the implicit acceptance criteria from the expected outcomes and the conclusion that the device is "substantially equivalent."
Performance Metric | Implicit Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
Within-Laboratory Repeatability (Acucy) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu AB HN: 100% (80/80); Negative: 100% (80/80) |
Within-Laboratory Repeatability (Acucy 2) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu AB HN: 100% (80/80); Negative: 100% (80/80) |
Instrument-to-Instrument Precision | All positive samples detect as positive (100% agreement); all negative samples detect as negative (100% agreement). | Flu A M (2x LoD): 75/75 (Pass); Flu A L (0.95x LoD): 75/75 (Pass); Flu A HN (0.05x LoD): 0/75 (Pass, expected negative); Flu B M (2x LoD): 75/75 (Pass); Flu B L (0.95x LoD): 75/75 (Pass); Flu B HN (0.05x LoD): 0/75 (Pass, expected negative); Negative: 0/75 (Pass, expected negative) |
Test Mode Equivalency | All positive samples detect as positive; all negative samples detect as negative; results are equivalent between READ NOW and WALK AWAY modes. | Flu A+/B-: 20/20 POS Flu A, 20/20 NEG Flu B in both READ NOW and WALK AWAY modes; Flu A-/B+: 20/20 NEG Flu A, 20/20 POS Flu B in both modes; Flu A-/B- Negative: 20/20 NEG Flu A, 20/20 NEG Flu B in both modes |
Limit of Detection (LoD) | Acucy 2 LoD should be equivalent to Acucy LoD (e.g., ≥95% detection rate at the lowest detectable concentration). | Influenza A/Michigan strain: LoD 2.82E+02 TCID50/mL (Acucy: 20/20; Acucy 2: 20/20 for both devices A & B); Influenza A/Singapore strain: LoD 3.16E+03 TCID50/mL (Acucy: 20/20; Acucy 2: 20/20 for both devices A & B); Influenza B/Phuket strain: LoD 2.09E+02 TCID50/mL for Device A and 4.17E+02 TCID50/mL for Device B (Acucy: 20/20; Acucy 2: 20/20); Influenza B/Colorado strain: LoD 2.82E+02 TCID50/mL for Device A and 7.05E+02 TCID50/mL for Device B (Acucy: 20/20; Acucy 2: 20/20) |
Analytical Cutoff (LoB) | All blank samples should be negative (0 mABS) and the cut-off values should be consistent with the predicate device. | All blank samples showed 0 mABS. Analytical cut-off values for Acucy 2 were set to match the previously established cut-offs of 6.4 mABS for the Flu A line and 5.4 mABS for the Flu B line (from the predicate Acucy system). |
Cross Contamination | No cross-contamination (high-titer positives detect as positive, negatives detect as negative). | Flu A High Positive: 30/30 (Pass); Flu B High Positive: 30/30 (Pass); Negative: 60/60 (Pass) |
Method Comparison (Acucy vs. Acucy 2) | Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) relative to the Acucy Reader should be high (close to 100%). | Influenza A: PPA 100% (30/30), NPA 98.3% (59/60); Influenza B: PPA 100% (30/30), NPA 100% (60/60) |
Flex Studies | All hazards and sources of potential error are controlled. | All tests showed expected results, indicating correct performance under the various "flex" conditions (temperature, humidity, vibration, lighting, air draft, altitude, non-level positioning, cassette read-window contamination, movement in WALK AWAY mode, test cassette movement/vertical incubation, reader drawer positioning). Conclusion: all hazards controlled through design and labeling mitigations. |
External Multi-Site Reproducibility (Acucy) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 98.9% (89/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Negative: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 98.9% (89/90); Negative: 100% (90/90) |
External Multi-Site Reproducibility (Acucy 2) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 100% (90/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 100% (90/90); Influenza A & B Negative: 100% (90/90) |
2. Sample Size Used for the Test Set and Data Provenance
- Precision Study (Repeatability & Instrument-to-Instrument): These studies primarily used contrived samples (prepared in the laboratory by spiking virus into clinical matrix) rather than naturally occurring patient samples.
- Repeatability: 80 replicates per panel member (Flu A MP, LP, HN; Flu B MP, LP, HN; Negative) for both Acucy and Acucy 2. Total of 7 x 80 = 560 tests per device (Acucy and Acucy 2).
- Instrument-to-Instrument Precision: 75 replicates per panel member (7 panel members). Total of 7 x 75 = 525 tests per device.
- Data Provenance: Laboratory-generated, in vitro data. The clinical matrix used to prepare contrived samples is described as "nasal swab samples... collected from healthy donors and confirmed Flu negative by PCR" for the LoD studies, and was likely similar for the precision studies. No country of origin is stated, but data supporting FDA submissions are typically generated in the US or under comparable quality systems. These samples were prepared and tested in the laboratory rather than collected prospectively from patients.
- Test Mode Equivalency: 20 replicates each of contrived positive Flu A, 20 replicates of contrived positive Flu B, and 20 Flu A and Flu B negative samples. Total of 60 tests (3 x 20 replicates). Data provenance is laboratory-generated/contrived.
- Limit of Detection (LoD):
- Range Finding: 5 replicates per concentration for multiple strains and dilutions (as shown in Table 5).
- Confirmation Testing: 20 replicates per concentration for established LoD.
- Data Provenance: Contrived samples using pooled negative clinical matrix from healthy donors (confirmed Flu negative by PCR). Laboratory-generated, in vitro data.
- Analytical Cutoff Study: 60 replicates of a blank sample per lot. Total of 2 lots, so 120 tests. Data provenance is laboratory-generated/contrived.
- Cross-Contamination Study: 30 high titer Flu A positive, 30 high titer Flu B positive, and 60 negative samples. Total of 120 tests. Data provenance is laboratory-generated/contrived.
- Method Comparison (Acucy Reader vs. Acucy 2 Reader):
- Test Set: 30 PCR-confirmed Flu A positive clinical samples, 30 PCR-confirmed Flu B positive clinical samples, and 30 Flu A and Flu B negative clinical samples.
- Total N for Flu A analysis: 30 Flu A positive + (30 Flu B positive + 30 double negative) = 90 samples.
- Total N for Flu B analysis: 30 Flu B positive + (30 Flu A positive + 30 double negative) = 90 samples.
- Data Provenance: Clinical samples (retrospective, given they are PCR-confirmed and a specific count is provided). No country of origin is explicitly stated.
- CLIA Waiver Studies (Flex Studies): 5 replicates for each flex condition (Negative, Low Positive Flu A, Low Positive Flu B). Number of flex conditions is not explicitly totaled but over 10 types are listed. Data provenance is laboratory-generated/contrived.
- Reproducibility Studies (External Multi-Site):
- Acucy System: Panel of 7 samples (Flu A HN, LP, MP; Flu B HN, LP, MP; Negative), tested by two operators per site at 3 sites over 5 non-consecutive days. Assuming 3 replicates per operator per day (a typical design; the per-day count is not explicitly stated), this gives 2 operators × 5 days × 3 replicates = 30 replicates per sample type per site, and 90 replicates per sample type across the 3 sites. Overall N for Flu A or Flu B: 4 sample types × 90 replicates = 360 tests (see the sketch after this list).
- Acucy 2 System: Same design as above: 90 replicates per sample type (Flu A HN, LP, MP; Flu B HN, LP, MP; Influenza A & B Negative) across 3 sites. Overall N for Flu A or Flu B: 4 sample types × 90 replicates = 360 tests.
- Data Provenance: Contrived samples (negative, high negative, low positive, moderate positive) with coded, randomized, and masked conditions. Tested at 3 "point-of-care (POC) sites" for Acucy and 3 "laboratory sites" for Acucy 2. This suggests real-world testing environments, but with contrived samples.
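A worked check of the replicate arithmetic inferred above; the per-operator-per-day replicate count of 3 is an assumption, since only the per-site and overall totals appear in the summary.

```python
# Worked replicate arithmetic for the multi-site reproducibility design. The
# per-operator-per-day replicate count of 3 is an assumption; only the totals
# of 30 per site and 90 overall per sample type are stated in the summary.
operators_per_site = 2
days = 5
replicates_per_day = 3  # assumed
sites = 3
sample_types_per_analyte = 4  # HN, LP, MP, Negative

per_site = operators_per_site * days * replicates_per_day  # 30 per sample type
overall = per_site * sites                                 # 90 per sample type
total_per_analyte = overall * sample_types_per_analyte     # 360 tests
print(per_site, overall, total_per_analyte)  # -> 30 90 360
```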
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
For most analytical studies (precision, LoD, analytical cutoff, cross-contamination, flex studies), the ground truth is established by known concentrations of spiked viral material in a controlled laboratory setting. Therefore, dedicated "experts" for ground truth adjudication in these cases are not applicable in the same way as for clinical studies.
For the Method Comparison study (Acucy Reader vs. Acucy 2 Reader), the ground truth for the clinical samples was established by PCR confirmation. The document does not specify the number of experts or their qualifications for interpreting these PCR results, but PCR results are generally considered a high standard for viral detection.
For the Reproducibility Studies, the ground truth for the test panel was established by the known composition of the contrived samples (e.g., negative, high negative, low positive, moderate positive).
4. Adjudication Method for the Test Set
- Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Adjudication is inherently by known input concentration or known sample composition. There isn't an "adjudication method" in the sense of multiple human reviewers; rather, it's a comparison to the predefined true state of the contrived sample.
- Method Comparison Study: The ground truth for clinical samples was established by PCR confirmed results. The device's results were compared against these PCR results. There is no mention of human expert adjudication (e.g., 2+1 or 3+1 consensus) for the PCR results themselves or for resolving discrepancies between the device and PCR.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
There is no MRMC comparative effectiveness study described in this document.
- The device is an automated reader for a rapid chromatographic immunoassay. It does not appear to involve human interpretation of images or complex data that would typically benefit from AI assistance in the way an MRMC study evaluates.
- The study focuses on the performance of the device (Acucy 2 System) only compared to a predicate device (Acucy System) and against laboratory-defined ground truths. There's no "human-in-the-loop" aspect being evaluated in terms of improved human reader performance with AI.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, the studies presented are primarily standalone performance studies for the Acucy 2 System. The device (Acucy 2 Reader) automatically scans the test cassette and processes results using "method-specific algorithms" (Page 6). The output is "POS (+), NEG (-), or INVALID" displayed on the screen. The entire workflow described (from sample application to reader result) represents the standalone performance of the device and its embedded algorithms.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used varied depending on the study:
- Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Ground truth was based on known concentrations of spiked viral material in contrived samples. For negative controls, it was the absence of the target virus.
- Method Comparison Study: Ground truth for clinical samples was established by PCR confirmation.
8. The sample size for the training set
The document does not explicitly describe a training set or its sample size. The reported studies are primarily verification and validation studies to demonstrate performance and equivalence of the Acucy 2 System compared to the predicate Acucy System. For medical devices, especially immunoassay readers, algorithms are often developed and locked down before these validation studies are performed. If machine learning or AI was used in the algorithm development, the training data would precede these clearance studies and is typically not fully disclosed in a 510(k) summary unless directly relevant to a specific "software change" or unique characteristic being validated.
9. How the ground truth for the training set was established
As no training set is explicitly mentioned, the method for establishing its ground truth is also not specified in this document. If algorithmic development involved a training phase, it's highly probable that contrived samples with known viral concentrations and PCR-confirmed clinical samples with known outcomes would have been utilized for this purpose.
(165 days)
Dual Track
The cobas® liat SARS-CoV-2 v2 nucleic acid test is an automated real-time reverse transcription polymerase chain reaction (RT-PCR) test intended for the qualitative detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nucleic acids in anterior nasal (nasal) and nasopharyngeal swab specimens collected from individuals exhibiting signs and symptoms of respiratory tract infection (i.e., symptomatic). Additionally, this test is intended to be used with nasal and nasopharyngeal swab specimens collected from individuals without signs and symptoms of COVID-19 (i.e., asymptomatic).
The cobas® liat SARS-CoV-2 v2 nucleic acid test is intended for use as an aid in the diagnosis of COVID-19 if used in conjunction with other clinical and epidemiological information and laboratory findings. SARS-CoV-2 RNA is generally detectable in nasal swab and nasopharyngeal swab specimens during the acute phase of infection.
Positive results are indicative of the presence of SARS-CoV-2 RNA. Positive results do not rule out co-infection with other microorganisms. Negative results do not preclude SARS-CoV-2 infection. Negative results must be combined with clinical observations, patient history, and epidemiological information. The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
A negative result from an asymptomatic individual is presumptive. Additionally, a negative result obtained with a nasal or nasopharyngeal swab collected from an asymptomatic individual should be followed up by testing at least twice over three days with at least 48 hours between tests.
The cobas® liat SARS-CoV-2 v2 nucleic acid test is performed on the cobas® liat analyzer, which automates and integrates sample purification, nucleic acid amplification, and detection of the target sequence in biological samples using real-time PCR assays. The assay targets both the ORF1a/b non-structural region and the membrane protein gene, which are unique to SARS-CoV-2. An Internal Control (IC) is included to control for adequate processing of the target virus through all steps of the assay process and to monitor for the presence of inhibitors in the RT-PCR process.
The provided text is a 510(k) Clearance Letter for a medical device which does not include information about AI/ML models. Therefore, it's not possible to extract the information you requested about Acceptance Criteria and a study proving an AI/ML device meets those criteria.
The device described, the "cobas liat SARS-CoV-2 v2 nucleic acid test," is an in vitro diagnostic (IVD) device based on real-time RT-PCR technology. It directly detects viral targets and its performance is evaluated through analytical and clinical studies common for IVDs, not against AI/ML performance metrics like sensitivity, specificity, MRMC studies, or multi-reader reviews.
Here's a breakdown of why your requested information isn't present in this document:
- No AI/ML Component: The document describes a traditional RT-PCR assay. There is no mention of algorithms, machine learning, deep learning, or any AI component.
- Performance Metrics Differ: The performance metrics provided (Limit of Detection, Inclusivity, Cross-reactivity, Reproducibility, Positive Percent Agreement, Negative Percent Agreement) are standard for IVD assays. They are not analogous to metrics used for evaluating AI/ML models (e.g., AUC, F1-score, accuracy in image classification, or diagnostic improvement from AI-assistance).
- No Human Reader Interaction: Since it's an automated lab test, there's no "human reader" (like a radiologist) involved in interpreting the device's output in a way that an AI would assist. The output is qualitative (Detected/Not Detected).
- No Ground Truth Experts in the AI Sense: Ground truth for this IVD is established by a "comparator" (another FDA-cleared NAAT) and clinical/epidemiological information, not by multiple human experts reviewing AI outputs or images.
- No Training/Test Set Split for AI: The "test set" and "training set" concepts described in your request are fundamental to AI/ML model development and validation. For this IVD, there's a "clinical performance evaluation" using prospective and retrospective samples, which serves as the validation dataset, but it's not structured as a training/test split for an AI.
Therefore, I cannot provide the requested table and detailed points because the provided document does not pertain to an AI/ML device.
If you have a document describing an AI/ML medical device, please provide that, and I can attempt to extract the relevant information.
(177 days)
Dual Track
The Visby Medical Respiratory Health Test is a single-use (disposable), fully integrated, automated Reverse Transcription Polymerase Chain Reaction (RT-PCR) in vitro diagnostic test intended for the simultaneous qualitative detection and differentiation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), influenza A, and influenza B RNA in nasopharyngeal swab and anterior nasal swab specimens from individuals with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory tract infection due to SARS-CoV-2, influenza A, and influenza B can be similar.
The Visby Medical Respiratory Health Test is intended for use as an aid in the differential diagnosis of SARS-CoV-2, influenza A, and influenza B infection if used in conjunction with other clinical and epidemiological information, and laboratory findings. SARS-CoV-2, influenza A, and influenza B viral RNA are generally detectable in nasopharyngeal swab and anterior nasal swab specimens during the acute phase of infection. This test is not intended to detect influenza C virus infections.
Positive results are indicative of the identified virus, but do not rule out bacterial infection or co-infection with other organisms not detected by the test. The agent(s) detected by the Visby Medical Respiratory Health Test may not be the definitive cause of disease. Negative results do not preclude SARS-CoV-2, influenza A, or influenza B infection. The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
The Visby Medical Respiratory Health Test is a single-use (disposable), fully integrated, compact device containing a reverse transcription polymerase chain reaction (RT-PCR) based assay for qualitative detection of influenza A, influenza B, and/or SARS-CoV-2 viral RNA in upper respiratory tract specimens. The device automatically performs all steps required to complete lysis, reverse transcription (RT), PCR amplification, and detection.
Specimens collected using nasopharyngeal (NP) or anterior nasal (AN) swabs (without transport media) are placed in the Visby Medical Respiratory Health Buffer and then transferred into the sample port of the device using the provided fixed volume pipette. The sample enters a lysis module and rehydrates the RT enzyme and RT primers. The mixture then moves through a sample preparation module where viruses and human cells are simultaneously lysed, and RNA is reverse transcribed. The resulting fluid (containing cDNA) is then mixed with lyophilized PCR reagents containing the DNA polymerase enzyme and PCR primers. The PCR mixture (containing cDNA template and reagents) is then thermal cycled to amplify the targets, including human beta-2 microglobulin (B2M) RNA, which serves as a process control. After PCR, the biotinylated product is hybridized to covalently bound capture probes at specific locations along a flow channel. The flow channel is configured to facilitate an enzymatic reaction that uses streptavidin-bound horseradish peroxidase (HRP) and a colorimetric substrate that forms a purple precipitate. The operator observes a color change at the specific locations indicating the presence of an amplified target. Test results can be expected in approximately 30 minutes: illumination of a "DONE" status light on the front of the device and a purple color in the "RESULTS VALID" spot indicate a successful test. A purple spot adjacent to "Flu A", "Flu B", and/or "COVID-19" signifies the presence of influenza A, influenza B, and/or SARS-CoV-2 viral RNA.
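The visual read-out described above follows a validity-gated pattern: target spots are interpretable only when the "RESULTS VALID" spot (driven by the B2M process control) develops. A minimal sketch of that decision rule; the device is read by eye, so this code is illustrative only, and the function name is hypothetical.

```python
# Minimal sketch of the validity-gated readout described above. The real device
# is interpreted visually; this only encodes the stated decision rule, using
# the spot labels quoted in the summary.

def interpret_visby(results_valid, spots):
    """spots maps 'Flu A'/'Flu B'/'COVID-19' to whether a purple spot developed."""
    if not results_valid:
        # Without the RESULTS VALID spot (B2M process control), the run failed.
        return {name: "Invalid - repeat test" for name in spots}
    return {name: "Detected" if seen else "Not Detected"
            for name, seen in spots.items()}

print(interpret_visby(True, {"Flu A": False, "Flu B": False, "COVID-19": True}))
# -> {'Flu A': 'Not Detected', 'Flu B': 'Not Detected', 'COVID-19': 'Detected'}
```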
Here's an analysis of the acceptance criteria and study detailed in the provided text:
Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" in a separate table. However, based on the Summary of Performance Data, the inferred acceptance criteria are the achieved Positive Percentage Agreement (PPA) and Negative Percentage Agreement (NPA) values in the clinical evaluation. The study aims to demonstrate substantial equivalence to the predicate device, implying that the performance needs to be comparable or better.
Here's a table summarizing the reported device performance, which implicitly represents the met acceptance criteria:
Target | Specimen Type | Reported PPA (95% CI) | Reported NPA (95% CI) |
---|---|---|---|
Influenza A | NP | 97.1% (85.1-99.5%) | 99.5% (98.7-99.8%) |
Influenza A | AN | 96.8% (89.1-99.1%) | 99.2% (98.1-99.7%) |
Influenza A | NP+AN | 96.9% (91.3-98.9%) | 99.4% (98.8-99.7%) |
Influenza B | NP | 100% (79.6-100%) | 99.8% (99.1-99.9%) |
Influenza B | AN | 100% (79.6-100%) | 99.9% (99.1-100%) |
Influenza B | NP+AN | 100% (88.7-100%) | 99.8% (99.4-99.9%) |
SARS-CoV-2 | NP | 96.3% (91.6-98.4%) | 99.0% (97.9-99.5%) |
SARS-CoV-2 | AN | 98.3% (94.0-99.5%) | 99.1% (97.9-99.6%) |
SARS-CoV-2 | NP+AN | 97.2% (94.4-98.7%) | 99.0% (98.3-99.5%) |
Study Details:
- Sample size used for the test set and the data provenance:
  - Sample Size: A total of 1,501 subjects were included in the performance analysis after exclusions; 1,575 Visby tests were initially performed.
  - Data Provenance: Prospectively collected fresh specimens from subjects presenting with signs and symptoms of a viral respiratory infection at five CLIA Waived study sites in the US (urgent care and family care clinics). Specimens were collected and tested between May 2022 and February 2024.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - The document does not specify the number or qualifications of experts used to establish ground truth for the test set. Instead, it states that the comparator assays define the ground truth.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - The document describes comparing the Visby Medical Respiratory Health Test results against an "FDA-cleared influenza molecular test and an FDA-EUA authorized SARS-CoV-2 RT-PCR test as a comparator." It also mentions an "alternate molecular assay" for discordant results (footnotes a, b, e, f, g, h, i, and j in Table 2). This indicates that the ground truth was established by these comparator assays, with discrepancies potentially resolved by alternate assays, rather than by a multi-expert adjudication method on the test set itself.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic (IVD) test for molecular detection of viruses, not an AI-assisted diagnostic imaging or human-read interpretation system. The "operators" in the reproducibility study were "non-laboratorians representing healthcare professionals," but their performance was evaluated against expected results for spiked samples, not in comparison to their own performance with and without an AI assistant on clinical cases.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
  - Yes, the evaluation was primarily of the device's standalone performance. The clinical study evaluated the device's ability to detect viral RNA in specimens. While "typical CLIA Waived operators" performed the test, their role was to execute the device's protocol; the device's detection accuracy was then compared against the comparator assays. The device "automatically performs all steps required to complete lysis, reverse transcription (RT), PCR amplification, and detection," so it functions as a standalone diagnostic unit once the sample is loaded.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - The ground truth for the clinical evaluation was established by comparator molecular assays: an "FDA-cleared influenza molecular test" and an "FDA-EUA authorized SARS-CoV-2 RT-PCR test." Discordant results were sometimes further investigated with an "alternate molecular assay."
- The sample size for the training set:
  - The document does not specify a separate training set or its sample size. For IVD devices like this RT-PCR test, "training" typically refers to assay development and optimization in the laboratory, rather than a distinct training set in the way AI/ML models are trained. The clinical performance study evaluates the final, optimized device.
- How the ground truth for the training set was established:
  - As no distinct "training set" in the context of an AI/ML model is described, this is not applicable. Development and optimization of such a diagnostic test involves analytical studies (e.g., LoD, inclusivity, cross-reactivity) in which the "ground truth" for each experiment (e.g., known concentrations of viral targets) is established by careful spiking and molecular characterization in a laboratory setting.