Alinity m Resp-4-Plex is a multiplexed real-time in vitro reverse transcription polymerase chain reaction (RT-PCR) assay for use with the automated Alinity m System for the qualitative detection and differentiation of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), influenza A virus, influenza B virus, and Respiratory Syncytial Virus (RSV) in nasopharyngeal swab specimens collected from patients with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory tract infection due to SARS-CoV-2, influenza, and RSV can be similar.
The Alinity m Resp-4-Plex assay is intended for use in the differential detection of SARS-CoV-2, influenza A, influenza B, and/or RSV RNA and aids in the diagnosis of COVID-19, influenza, and/or RSV infection when used in conjunction with other clinical and epidemiological information and laboratory findings. SARS-CoV-2, influenza A, influenza B, and RSV viral RNA are generally detectable in nasopharyngeal swab specimens during the acute phase of infection. This test is not intended to detect influenza C virus infections.
Positive results are indicative of the presence of the identified virus but do not rule out bacterial infection or co-infection with other pathogens not detected by the test. The agent(s) detected by the Alinity m Resp-4-Plex assay may not be the definite cause of disease.
Negative results do not preclude SARS-CoV-2, influenza A, influenza B, and/or RSV infection and should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
The Alinity m Resp-4-Plex assay requires two separate assay-specific kits: Alinity m Resp-4-Plex AMP Kit and Alinity m Resp-4-Plex CTRL Kit. The assay utilizes real-time PCR to amplify and detect genomic RNA sequences of influenza A (flu A), influenza B (flu B), RSV, and/or SARS-CoV-2 from nasopharyngeal (NP) swab specimens. The assay targets 2 different genes within the SARS-CoV-2 genome. Fluorescently labeled probes allow for simultaneous detection and differentiation of amplified products of all 4 viruses and Internal Control (IC) in a single reaction vessel. All steps of the assay procedure are executed automatically by the Alinity m System, which is a continuous random-access analyzer. The system performs automated sample preparation using magnetic microparticle technology. The IC is introduced into each specimen at the beginning of sample preparation. Purified RNA is combined with activation and amplification/detection reagents and transferred to a reaction vessel for reverse transcription, PCR amplification, and real-time fluorescence detection. A positive and negative control are tested to ensure performance. Patient results are automatically reported. The assay also utilizes the Alinity m Resp-4-Plex Assay Application Specification File, Alinity m System and System Software, Alinity m Sample Prep Kit 2, Alinity m Tubes and Caps, and Alinity m System Solutions.
The provided text is a 510(k) Summary for the Abbott Molecular Inc. Alinity m Resp-4-Plex assay, a multiplexed real-time RT-PCR assay for the qualitative detection and differentiation of SARS-CoV-2, influenza A, influenza B, and RSV.
Here's an analysis of the acceptance criteria and the studies that demonstrate the device meets them, based on the provided document:
Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are demonstrated through various analytical and clinical studies, primarily focusing on analytical sensitivity (Limit of Detection), inclusivity, precision, reproducibility, analytical specificity (interfering substances and cross-reactants), competitive interference, carryover, and clinical performance (Positive Percent Agreement - PPA, and Negative Percent Agreement - NPA).
The document implicitly defines the acceptance criteria as the successful demonstration of performance metrics typically expected for such in vitro diagnostic devices, often in comparison to highly sensitive FDA-cleared or EUA assays. Specific numerical acceptance criteria (e.g., PPA > X%, NPA > Y%) are not explicitly stated; instead, the reported results serve as the demonstration of acceptable performance. The conclusion statement (Section 5.0) explicitly states that the "analytical and clinical study results demonstrate that the Alinity m Resp-4-Plex assay... performs comparably to the predicate device... the results support a substantial equivalence decision," implying that the demonstrated performance met FDA's criteria for substantial equivalence to the predicate device.
Here's a table summarizing the reported device performance, which implicitly met the acceptance criteria:
Table 1: Acceptance Criteria (as Demonstrated Performance) and Reported Device Performance
| Performance Characteristic | Acceptance Criteria (Implied by Demonstrated Performance) | Reported Device Performance |
|---|---|---|
| Analytical Sensitivity (LoD) | Lowest concentration at which ≥ 95% of replicates test positive. | Influenza A: 0.002 - 0.06 TCID50/mL (across 5 strains). Influenza B: 0.02 - 0.05 TCID50/mL (across 2 strains). RSV: 0.1 - 0.3 TCID50/mL (across 2 strains). SARS-CoV-2: 30 genome copies/mL (1 strain). |
| Inclusivity | 100% positive results for various strains at or below 3x LoD; ≥ 99.99% detection by in silico analysis for SARS-CoV-2. | Flu A: 16 strains tested, 100% positive results at the lowest concentrations tested (e.g., 0.006 TCID50/mL, 3.33E+00 CEID50/mL). Flu B: 9 strains (e.g., 0.006 TCID50/mL, 2.78E-02 CEID50/mL). RSV: 6 strains (e.g., 0.03 TCID50/mL, 0.9 TCID50/mL). SARS-CoV-2: 9 strains (90 GC/mL or GE/mL). In silico (SARS-CoV-2): ≥ 99.99% of sequences predicted to be detected (14.8M GISAID, 7.6M NCBI). |
| Precision | Consistent and reproducible results across multiple runs, days, and instruments. | Flu A (moderate/low positive): total %CV 1.8%/1.7%, 100% agreement. Flu B: 1.1%/1.0%, 100% agreement. RSV: 2.0%/2.2%, 100% agreement. SARS-CoV-2: 1.1%/1.4%, 100% agreement. All negative samples: 100% agreement. |
| Reproducibility | Consistent and reproducible results across external sites. | Flu A (moderate/low positive): total %CV 2.0%/2.0%, 100% agreement. Flu B: 1.0%/2.1%, 100% agreement. RSV: 2.8%/2.3%, 100% agreement. SARS-CoV-2: 1.1%/2.5%, 100% agreement for moderate and 97.5% for low positive; negative samples 99.7% agreement. |
| Analytical Specificity | No interference from common substances; no cross-reactivity with other microorganisms. | Interfering substances: no interference observed for 34 tested substances (e.g., blood, mucin, nasal sprays, medications). Cross-reactants: no cross-reactivity observed with 74 potential cross-reacting microorganisms (viruses, bacteria, fungi) at high concentrations, and no interference on positive samples. |
| Competitive Interference | Accurate detection of low-concentration analytes in the presence of a high concentration of another analyte. | All valid replicates of low-concentration analytes reported positive results; high concentrations did not interfere. |
| Carryover | Minimal to no carryover between samples (e.g., from high positive to negative). | Overall carryover rate of 0.0% (0/360) for SARS-CoV-2. |
| Clinical Performance (PPA/NPA), Prospective Study | High agreement with the composite comparator. | Flu A: PPA 100.0% (96.2, 100.0), NPA 99.6% (99.3, 99.8). Flu B: NPA 100.0% (99.8, 100.0); PPA not calculated because no CC-positive specimens were identified. RSV: PPA 98.0% (89.3, 99.6), NPA 99.7% (99.5, 99.9). SARS-CoV-2: PPA 95.3% (91.4, 97.5), NPA 96.0% (94.0, 97.4). |
| Clinical Performance (PPA/NPA), Retrospective Study | High agreement with the composite comparator (especially for flu B, where prospective data were limited). | Flu A: NPA 99.4% (98.3, 99.8); PPA not calculated because no CC-positive specimens were identified. Flu B: PPA 100.0% (92.9, 100.0), NPA 98.5% (96.9, 99.3). RSV: NPA 100.0% (99.2, 100.0); PPA not calculated because only 1 CC-positive specimen was identified. |
Study Details
Here's the breakdown of the study details as requested:
1. A table of acceptance criteria and the reported device performance: provided above as Table 1.
2. Sample sizes used for the test set and the data provenance:
- Analytical Studies (Test Set):
- LoD: For each virus (Flu A, Flu B, RSV, SARS-CoV-2), the preliminary LoD was determined by testing a minimum of 3 levels, each in a minimum of 3 replicates; the final LoD was confirmed by testing a minimum of 3 panel members with target concentrations bracketing the preliminary LoD, each panel member in a minimum of 20 replicates (exact totals per virus are not provided). Panels were prepared in pooled negative NP clinical specimens. The LoD decision rule is sketched just after the analytical-study bullets below.
- Inclusivity: Each individual virus isolate or strain was tested in replicates of 5 (16 Flu A strains, 9 Flu B, 6 RSV, and 9 SARS-CoV-2 strains; total sample numbers per virus are not provided). Specimens were prepared in pooled negative clinical NP swab matrix.
- Precision: 5 panel members (1 negative, 4 positive) tested with 4 replicates twice each day for 5 days, on 3 Alinity m Systems operated by 3 operators using 3 reagent lots. This leads to:
- Flu A: 120 positive replicates for each level, 360 negative replicates.
- Flu B: 120 positive replicates for each level, 357 negative replicates.
- RSV: 120 positive replicates for moderate, 117 for low, 360 negative.
- SARS-CoV-2: 120 positive replicates for each level, 357 negative.
- Reproducibility: 5 panel members tested at 3 external clinical testing sites. Each site tested 2 Alinity m Resp-4-Plex AMP Kit lots, on 5 non-consecutive days for each lot. Four replicates of each panel member were tested on each of 5 days. This leads to:
- Flu A: 120 positive replicates for each level, 360 negative replicates.
- Flu B: 120 positive replicates for each level, 359 negative replicates.
- RSV: 120 positive replicates for moderate, 119 for low, 360 negative.
- SARS-CoV-2: 120 positive replicates for moderate, 117 for low, 359 negative.
- Analytical Specificity (Interfering Substances): 34 substances evaluated in 2 different positive panel members (PM1 & PM2), each containing multiple analytes at 3xLoD. (Replicate number not specified).
- Analytical Specificity (Cross-Reactants): 74 microorganisms added to pooled negative clinical NP swab matrix (replicate number not specified) and also to positive samples (replicate number not specified).
- Competitive Interference: 4 panel members, each containing 3 viruses at low concentrations and one at high concentration. (Replicate number not specified).
- Carryover: Negative and high positive samples tested in alternating positions, across 3 Alinity m Systems. 360 negative samples total.
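As referenced in the LoD item above, the claimed LoD is the lowest concentration at which at least 95% of replicates test positive. Below is a minimal sketch of that decision rule, assuming the per-level replicate calls are already available; the panel concentrations and results are hypothetical placeholders, not study data.

```python
def claimed_lod(panel: dict[float, list[bool]], hit_rate: float = 0.95) -> float | None:
    """Return the lowest panel concentration whose positivity rate meets the hit-rate criterion."""
    qualifying = [
        conc
        for conc, replicates in panel.items()
        if replicates and sum(replicates) / len(replicates) >= hit_rate
    ]
    return min(qualifying) if qualifying else None

# Hypothetical bracketing panel: concentration (e.g., TCID50/mL) -> 20 replicate calls.
panel = {
    0.002: [True] * 20,                  # 20/20 positive -> meets the 95% criterion
    0.001: [True] * 19 + [False],        # 19/20 = 95%    -> meets the criterion
    0.0005: [True] * 17 + [False] * 3,   # 17/20 = 85%    -> fails
}
print(claimed_lod(panel))  # 0.001
```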
- Clinical Performance (Test Set):
- Prospective Clinical Study:
- Flu A/B, RSV: 2,753 valid results initially, 2,504 (Flu A), 2,710 (Flu B), 2,700 (RSV) used in analysis.
- Data Provenance: Multicenter study using prospectively collected nasopharyngeal swab specimens. 4 US clinical sites for testing. Specimens collected during 2021-2022 flu season at 7 geographically distributed locations in the US and during the 2020 flu season at 1 location in the Southern Hemisphere.
- SARS-CoV-2: 826 valid results initially, 698 used in analysis.
- Data Provenance: Specimens collected at 10 geographically distributed locations in the US over 2 time periods (Dec 2020 - Feb 2021 and May 2023).
- Retrospective Clinical Study:
- Flu A/B, RSV: 515 valid results initially, 506 (Flu A), 504 (Flu B), 505 (RSV) used in analysis.
- Data Provenance: Preselected archived flu B positive NP swab specimens in UVT or UTM collected during the 2017-2018 and 2019-2020 flu seasons. Randomly mixed with known negative specimens.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The ground truth for the clinical test sets (both prospective and retrospective) was established using a Composite Comparator (CC). This CC was based on results from "2 to 3 FDA cleared assays for flu A, flu B, and RSV" and "2 to 3 highly sensitive EUA SARS-CoV-2 molecular assays."
- The document does not specify the number or qualifications of human experts (e.g., radiologists, pathologists) involved in establishing this ground truth. The ground truth method described is entirely based on laboratory comparator assays, not expert human interpretation of results.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The ground truth was established by a Composite Comparator (CC) method:
  - A specimen was categorized as CC positive if a minimum of 2 comparator positive results were reported.
  - A specimen was categorized as CC negative if a minimum of 2 comparator negative results were reported.
  - A specimen was categorized as CC indeterminate if a CC could not be determined due to missing results from the comparator assays.
- This functions as a type of "majority rule" adjudication or consensus, but strictly among other molecular assays, not human experts (see the sketch after this list).
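A minimal sketch of that vote follows, assuming each of the 2 to 3 comparator assays reports "positive", "negative", or a missing result; the function name and result encoding are illustrative, not taken from the submission.

```python
def composite_comparator(results: list[str | None]) -> str:
    """Classify a specimen from 2 to 3 comparator assay results.

    Each element is "positive", "negative", or None for a missing result,
    following the composite comparator rule described above.
    """
    positives = sum(r == "positive" for r in results)
    negatives = sum(r == "negative" for r in results)
    if positives >= 2:
        return "CC positive"
    if negatives >= 2:
        return "CC negative"
    return "CC indeterminate"

print(composite_comparator(["positive", "positive", "negative"]))  # CC positive
print(composite_comparator(["negative", None, "positive"]))        # CC indeterminate
```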
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC comparative effectiveness study was not done.
- This device is an in vitro diagnostic (RT-PCR assay) that provides a qualitative (positive/negative) result directly. It does not involve human "readers" interpreting images or other complex data that would typically benefit from AI assistance or an MRMC study design. Therefore, there's no data on human reader improvement with or without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, the primary performance evaluation is a standalone, algorithm-only evaluation.
- The Alinity m Resp-4-Plex assay is an automated RT-PCR system. Its performance (PPA, NPA) in both analytical and clinical studies is the performance of the "algorithm only" in generating positive/negative results from the sample. Human intervention is limited to sample collection and system operation, not interpretation of the primary diagnostic output.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the clinical studies was established using a Composite Comparator (CC) method based on the results of other FDA-cleared molecular diagnostic assays (or EUA high-sensitivity molecular assays for SARS-CoV-2).
- It was not based on expert consensus, pathology, or outcomes data directly.
8. The sample size for the training set:
- The document describes a device for which substantial equivalence is being sought through the 510(k) pathway, not an AI/ML device that requires distinct "training" and "test" sets in the context of model development.
- The studies presented are primarily verification and validation studies to demonstrate the device's analytical and clinical performance after its development.
- Therefore, the concept of a separate "training set" as understood in machine learning (where data is used to train a model) is not applicable to this RT-PCR assay. The assay's "knowledge" is embedded in its reagents, primers, probes, and system parameters, which were likely optimized during development using various analytical samples, but these are not typically referred to as a "training set" in the context of a 510(k) for an RT-PCR assay.
9. How the ground truth for the training set was established:
- As explained in point 8, there isn't a "training set" in the AI/ML sense for this RT-PCR assay. The ground truth for any samples used during the development or optimization phases would similarly be established using well-characterized samples (e.g., cultured viruses, positive clinical samples confirmed by reference methods, synthetic nucleic acids, negative clinical samples).
§ 866.3981 Device to detect and identify nucleic acid targets in respiratory specimens from microbial agents that cause the SARS-CoV-2 respiratory infection and other microbial agents when in a multi-target test.
(a) Identification. A device to detect and identify nucleic acid targets in respiratory specimens from microbial agents that cause the SARS-CoV-2 respiratory infection and other microbial agents when in a multi-target test is an in vitro diagnostic device intended for the detection and identification of SARS-CoV-2 and other microbial agents when in a multi-target test in human clinical respiratory specimens from patients suspected of respiratory infection who are at risk for exposure or who may have been exposed to these agents. The device is intended to aid in the diagnosis of respiratory infection in conjunction with other clinical, epidemiologic, and laboratory data or other risk factors.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) The intended use in the labeling required under § 809.10 of this chapter must include a description of the following: Analytes and targets the device detects and identifies, the specimen types tested, the results provided to the user, the clinical indications for which the test is to be used, the specific intended population(s), the intended use locations including testing location(s) where the device is to be used (if applicable), and other conditions of use as appropriate.
(2) Any sample collection device used must be FDA-cleared, -approved, or -classified as 510(k) exempt (standalone or as part of a test system) for the collection of specimen types claimed by this device; alternatively, the sample collection device must be cleared in a premarket submission as a part of this device.
(3) The labeling required under § 809.10(b) of this chapter must include:
(i) A detailed device description, including reagents, instruments, ancillary materials, all control elements, and a detailed explanation of the methodology, including all pre-analytical methods for processing of specimens;
(ii) Detailed descriptions of the performance characteristics of the device for each specimen type claimed in the intended use based on analytical studies including the following, as applicable: Limit of Detection, inclusivity, cross-reactivity, interfering substances, competitive inhibition, carryover/cross contamination, specimen stability, precision, reproducibility, and clinical studies;
(iii) Detailed descriptions of the test procedure(s), the interpretation of test results for clinical specimens, and acceptance criteria for any quality control testing;
(iv) A warning statement that viral culture should not be attempted in cases of positive results for SARS-CoV-2 and/or any similar microbial agents unless a facility with an appropriate level of laboratory biosafety (e.g., BSL 3 and BSL 3+, etc.) is available to receive and culture specimens; and
(v) A prominent statement that device performance has not been established for specimens collected from individuals not identified in the intended use population (e.g., when applicable, that device performance has not been established in individuals without signs or symptoms of respiratory infection).
(vi) Limiting statements that indicate that:
(A) A negative test result does not preclude the possibility of infection;
(B) The test results should be interpreted in conjunction with other clinical and laboratory data available to the clinician;
(C) There is a risk of incorrect results due to the presence of nucleic acid sequence variants in the targeted pathogens;
(D) That positive and negative predictive values are highly dependent on prevalence;
(E) Accurate results are dependent on adequate specimen collection, transport, storage, and processing. Failure to observe proper procedures in any one of these steps can lead to incorrect results; and
(F) When applicable (e.g., recommended by the Centers for Disease Control and Prevention, by current well-accepted clinical guidelines, or by published peer-reviewed literature), that the clinical performance may be affected by testing a specific clinical subpopulation or for a specific claimed specimen type.
(4) Design verification and validation must include:
(i) Detailed documentation, including performance results, from a clinical study that includes prospective (sequential) samples for each claimed specimen type and, as appropriate, additional characterized clinical samples. The clinical study must be performed on a study population consistent with the intended use population and compare the device performance to results obtained using a comparator that FDA has determined is appropriate. Detailed documentation must include the clinical study protocol (including a predefined statistical analysis plan), study report, testing results, and results of all statistical analyses.
(ii) Risk analysis and documentation demonstrating how risk control measures are implemented to address device system hazards, such as Failure Modes Effects Analysis and/or Hazard Analysis. This documentation must include a detailed description of a protocol (including all procedures and methods) for the continuous monitoring, identification, and handling of genetic mutations and/or novel respiratory pathogen isolates or strains (e.g., regular review of published literature and periodic in silico analysis of target sequences to detect possible mismatches). All results of this protocol, including any findings, must be documented and must include any additional data analysis that is requested by FDA in response to any performance concerns identified under this section or identified by FDA during routine evaluation. Additionally, if requested by FDA, these evaluations must be submitted to FDA for FDA review within 48 hours of the request. Results that are reasonably interpreted to support the conclusion that novel respiratory pathogen strains or isolates impact the stated expected performance of the device must be sent to FDA immediately.
(iii) A detailed description of the identity, phylogenetic relationship, and other recognized characterization of the respiratory pathogen(s) that the device is designed to detect. In addition, detailed documentation describing how to interpret the device results and other measures that might be needed for a laboratory diagnosis of respiratory infection.
(iv) A detailed device description, including device components, ancillary reagents required but not provided, and a detailed explanation of the methodology, including molecular target(s) for each analyte, design of target detection reagents, rationale for target selection, limiting factors of the device (e.g., saturation level of hybridization and maximum amplification and detection cycle number, etc.), internal and external controls, and computational path from collected raw data to reported result (e.g., how collected raw signals are converted into a reported signal and result), as applicable.
(v) A detailed description of device software, including software applications and hardware-based devices that incorporate software. The detailed description must include documentation of verification, validation, and hazard analysis and risk assessment activities, including an assessment of the impact of threats and vulnerabilities on device functionality and end users/patients as part of cybersecurity review.
(vi) For devices intended for the detection and identification of microbial agents for which an FDA recommended reference panel is available, design verification and validation must include the performance results of an analytical study testing the FDA recommended reference panel of characterized samples. Detailed documentation must be kept of that study and its results, including the study protocol, study report for the proposed intended use, testing results, and results of all statistical analyses.
(vii) For devices with an intended use that includes detection of Influenza A and Influenza B viruses and/or detection and differentiation between the Influenza A virus subtypes in human clinical specimens, the design verification and validation must include a detailed description of the identity, phylogenetic relationship, or other recognized characterization of the Influenza A and B viruses that the device is designed to detect, a description of how the device results might be used in a diagnostic algorithm and other measures that might be needed for a laboratory identification of Influenza A or B virus and of specific Influenza A virus subtypes, and a description of the clinical and epidemiological parameters that are relevant to a patient case diagnosis of Influenza A or B and of specific Influenza A virus subtypes. An evaluation of the device compared to a currently appropriate and FDA accepted comparator method. Detailed documentation must be kept of that study and its results, including the study protocol, study report for the proposed intended use, testing results, and results of all statistical analyses.
(5) When applicable, performance results of the analytical study testing the FDA recommended reference panel described in paragraph (b)(4)(vi) of this section must be included in the device's labeling under § 809.10(b) of this chapter.
(6) For devices with an intended use that includes detection of Influenza A and Influenza B viruses and/or detection and differentiation between the Influenza A virus subtypes in human clinical specimens in addition to detection of SARS-CoV-2 and similar microbial agents, the required labeling under § 809.10(b) of this chapter must include the following:
(i) Where applicable, a limiting statement that performance characteristics for Influenza A were established when Influenza A/H3 and A/H1-2009 (or other pertinent Influenza A subtypes) were the predominant Influenza A viruses in circulation.
(ii) Where applicable, a warning statement that reads if infection with a novel Influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to State or local health departments for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
(iii) Where the device results interpretation involves combining the outputs of several targets to get the final results, such as a device that both detects Influenza A and differentiates all known Influenza A subtypes that are currently circulating, the device's labeling must include a clear interpretation instruction for all valid and invalid output combinations, and recommendations for any required followup actions or retesting in the case of an unusual or unexpected device result.
(iv) A limiting statement that if a specimen yields a positive result for Influenza A, but produces negative test results for all specific influenza A subtypes intended to be differentiated (i.e., H1-2009 and H3), this result requires notification of appropriate local, State, or Federal public health authorities to determine necessary measures for verification and to further determine whether the specimen represents a novel strain of Influenza A.
(7) If one of the actions listed at section 564(b)(1)(A) through (D) of the Federal Food, Drug, and Cosmetic Act occurs with respect to an influenza viral strain, or if the Secretary of Health and Human Services determines, under section 319(a) of the Public Health Service Act, that a disease or disorder presents a public health emergency, or that a public health emergency otherwise exists, with respect to an influenza viral strain:
(i) Within 30 days from the date that FDA notifies manufacturers that characterized viral samples are available for test evaluation, the manufacturer must have testing performed on the device with those influenza viral samples in accordance with a standardized protocol considered and determined by FDA to be acceptable and appropriate.
(ii) Within 60 days from the date that FDA notifies manufacturers that characterized influenza viral samples are available for test evaluation and continuing until 3 years from that date, the results of the influenza emergency analytical reactivity testing, including the detailed information for the virus tested as described in the certificate of authentication, must be included as part of the device's labeling in a tabular format, either by:
(A) Placing the results directly in the device's labeling required under § 809.10(b) of this chapter that accompanies the device in a separate section of the labeling where analytical reactivity testing data can be found, but separate from the annual analytical reactivity testing results; or
(B) In a section of the device's label or in other labeling that accompanies the device, prominently providing a hyperlink to the manufacturer's public website where the analytical reactivity testing data can be found. The manufacturer's website, as well as the primary part of the manufacturer's website that discusses the device, must provide a prominently placed hyperlink to the website containing this information and must allow unrestricted viewing access.