The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigens directly in anterior nasal swab samples from individuals with signs and symptoms of respiratory tract infection. Symptoms of respiratory infections due to SARS-CoV-2 and influenza can be similar. This test is for non-prescription home use by individuals aged 14 years or older testing themselves, or adults testing individuals aged 2 years or older.
All negative results are presumptive and should be confirmed with an FDA-cleared molecular assay when determined to be appropriate by a healthcare provider. Negative results do not rule out infection with influenza, SARS-CoV-2, or other pathogens.
Individuals who test negative and experience continued or worsening respiratory symptoms, such as fever, cough, and/or shortness of breath, should seek follow-up care from their healthcare provider.
Positive results do not rule out co-infection with other respiratory pathogens and therefore do not substitute for a visit to a healthcare provider for appropriate follow-up.
The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunoassay intended for the qualitative detection and differentiation of SARS-CoV-2 nucleocapsid antigen, Influenza A nucleoprotein antigen, and Influenza B nucleoprotein antigen from anterior nasal swab specimens.
The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test utilizes an adaptor-based lateral flow assay platform integrating a conjugate wick filter to facilitate sample processing. Each test cassette contains a nitrocellulose membrane with immobilized capture antibodies for SARS-CoV-2, Influenza A, Influenza B, and internal control. Following specimen application to the sample port, viral antigens, if present, bind to labeled detection antibodies embedded in the conjugate wick filter. The resulting immune complexes migrate along the test strip and are captured at the respective test lines (C19 for SARS-CoV-2, A for Influenza A, and B for Influenza B), forming visible colored lines. A visible control line (Cont) confirms proper sample migration and test validity. The absence of a control line invalidates the test result.
Each kit includes a single-use test cassette, assay buffer dropper vial, nasal swab, and Quick Reference Instructions (QRI). Test results are visually interpreted 10 minutes after swab removal.
The provided document describes the CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test, an over-the-counter lateral flow immunoassay for lay users. The study aimed to demonstrate its substantial equivalence to a predicate device and its performance characteristics for qualitative detection and differentiation of SARS-CoV-2, Influenza A, and Influenza B antigens in anterior nasal swab samples.
Here's an analysis of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
While specific acceptance criteria (i.e., pre-defined thresholds the device must meet for clearance) are not explicitly stated as numbered points in this 510(k) summary, they can be inferred from the reported performance data and common FDA expectations for such devices. The performance data presented serves as the evidence that the device met these implied criteria.
| Performance Characteristic | Implied Acceptance Criteria (e.g., typical FDA expectations) | Reported Device Performance |
|---|---|---|
| **Clinical Performance (vs. Molecular Assay)** | | |
| SARS-CoV-2 - Positive Percent Agreement (PPA) | High PPA (e.g., >80-90%) | 92.5% (95% CI: 86.4%-96.0%) |
| SARS-CoV-2 - Negative Percent Agreement (NPA) | Very high NPA (e.g., >98%) | 99.6% (95% CI: 99.1%-99.8%) |
| Influenza A - PPA | High PPA (e.g., >80-90%) | 85.6% (95% CI: 77.9%-90.9%) |
| Influenza A - NPA | Very high NPA (e.g., >98%) | 99.0% (95% CI: 98.4%-99.4%) |
| Influenza B - PPA | High PPA (e.g., >80-90%) | 86.0% (95% CI: 72.7%-93.4%) |
| Influenza B - NPA | Very high NPA (e.g., >98%) | 99.7% (95% CI: 99.3%-99.9%) |
| **Analytical Performance** | | |
| Precision (1x LoD) | ≥95% agreement | 99.2% for SARS-CoV-2, 99.2% for Flu A, 99.7% for Flu B (all at 1x LoD) |
| Precision (3x LoD) | 100% agreement expected at higher concentrations | 100% for all analytes at 3x LoD |
| Limit of Detection (LoD) | Lowest detectable concentration with ≥95% positive agreement | Confirmed LoDs provided for various strains (e.g., SARS-CoV-2 Omicron: 7.50 x 10^0 TCID₅₀/swab at 100% agreement) |
| Co-spike LoD | ≥95% result agreement in presence of multiple analytes | Met for Panels I and II (e.g., 98% for SARS-CoV-2, 97% for Flu A in Panel I) |
| Inclusivity (Analytical Reactivity) | Demonstrate reactivity with diverse strains | Lowest reactive concentrations established for a wide range of SARS-CoV-2, Flu A, and Flu B strains, with 5/5 replicates positive |
| Competitive Interference | No interference from high concentrations of other analytes | 100% agreement; no competitive interference observed |
| Hook Effect | No false negatives at high antigen concentrations | 100% positive result agreement; no hook effect observed |
| Analytical Sensitivity (WHO Std) | Demonstrate sensitivity using international standard | LoD of 8 IU/swab with 95% (19/20) agreement |
| Cross-Reactivity/Microbial Interference | No false positives (cross-reactivity) or reduced performance (interference) | No cross-reactivity or microbial interference observed (100% agreement for positive samples, 0% for negative) |
| Endogenous/Exogenous Substances Interference | No false positives or reduced performance | No cross-reactivity or interference observed (all target analytes accurately detected) |
| Biotin Interference | Clearly define impact of biotin; specify concentration at which interference may occur | False negatives for Influenza A at 3,750 ng/mL and 5,000 ng/mL (important finding for labeling) |
| Real-time Stability | Support claimed shelf-life | 100% expected results over 15 months, supporting 13-month shelf-life |
| Transportation Stability | Withstand simulated transport conditions | 100% expected results; no false positives/negatives under extreme conditions |
| Usability Study | High percentage of correct performance and interpretation by lay users | >98% correct completion of critical steps; 98.7% observer agreement with user interpretation; >94% found instructions easy/test simple |
| Readability Study | High percentage of correct interpretation from QRI by untrained lay users | 94.8% correct interpretation of mock devices from QRI without assistance |
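The point estimates and 95% confidence intervals above are binomial proportions, typically reported with the Wilson score interval. As a minimal sketch (the 510(k) summary does not give the raw numerator/denominator counts for the clinical PPA/NPA rows, so the example below uses the 19/20 agreement reported for the WHO-standard LoD study):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.959964) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 19 of 20 positive replicates, as in the WHO-standard LoD confirmation
lo, hi = wilson_ci(19, 20)
print(f"agreement = {19/20:.1%}, 95% CI: {lo:.1%}-{hi:.1%}")
```

Note how wide the interval is at n=20 (roughly 76%-99%): small replicate counts in analytical studies carry far more uncertainty than the large clinical test set behind the PPA/NPA rows.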
2. Sample Sizes Used for the Test Set and Data Provenance
- Clinical Performance Test Set (Human Samples): N=1644 total participants.
- Self-collecting: N=1447 (individuals aged 14 or older testing themselves)
- Lay-user/Tester Collection: N=197 (adults testing individuals aged 2-17 years)
- Data Provenance:
- Country of Origin: United States ("13 clinical sites across the U.S.").
- Retrospective/Prospective: The clinical study was prospective, as samples were collected "between November of 2023 and March of 2025" from "symptomatic subjects, suspected of respiratory infection."
- Analytical Performance Test Sets (Contrived/Spiked Samples): Sample sizes vary per study:
- Precision Study 1: 360 results per panel member (negative, 1x LoD positive, 3x LoD positive).
- Precision Study 2: 36 sample replicates/lot (for negative and 0.75x LoD positive samples).
- LoD Confirmation: 20 replicates per LoD concentration.
- Co-spike LoD: 20 replicates per panel (multiple panels tested).
- Inclusivity: 5 replicates per strain (for identifying lowest reactive concentration).
- Competitive Interference: 3 replicates for each of 19 sample configurations.
- Hook Effect: 5 replicates per concentration.
- WHO Standard LoD: 20 replicates for confirmation.
- Cross-Reactivity/Microbial Interference: 3 replicates per microorganism (in absence and presence of analytes).
- Endogenous/Exogenous Substances Interference: 3 replicates per substance (in absence and presence of analytes).
- Biotin Interference: 3 replicates per biotin concentration.
- Real-time Stability: 5 replicates per lot at each time point.
- Transportation Stability: 5 replicates per sample type per lot for each condition.
- Usability Study: 1,795 participants.
- Readability Study: 50 participants.
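The "20 replicates with ≥95% positive agreement" design used for LoD confirmation implies a pass threshold of at least 19 of 20 positive results. As a hedged illustration (this calculation is not part of the submission), the chance of passing such a run follows directly from the binomial distribution given an assumed true detection rate at the candidate concentration:

```python
from math import comb

def pass_probability(p_detect: float, n: int = 20, min_pos: int = 19) -> float:
    """P(at least min_pos positive replicates out of n), with X ~ Binomial(n, p_detect)."""
    return sum(comb(n, k) * p_detect**k * (1 - p_detect)**(n - k)
               for k in range(min_pos, n + 1))

# If the true detection rate at the candidate LoD is exactly 95%,
# a 20-replicate confirmation run passes only about 74% of the time.
print(f"{pass_probability(0.95):.4f}")
```

This is why LoD confirmation is usually run at a concentration where the true detection rate comfortably exceeds 95%: sitting exactly at the threshold leaves a substantial chance of failing the run by sampling variation alone.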
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Clinical Performance (Reference Method - Test Set Ground Truth): The ground truth for the clinical test set was established using FDA-cleared molecular RT-PCR comparator assays for SARS-CoV-2, Influenza A, and Influenza B.
- This implies that the "experts" were the established and validated molecular diagnostic platforms, rather than human expert readers/adjudicators for visual interpretation.
- Usability/Readability Studies:
- Usability Study: "Observer agreement with user-interpreted results was 98.7%." This suggests trained observers (likely not "experts" in the sense of clinical specialists, but rather study personnel trained in test interpretation as per IFU) established agreement with user results.
- Readability Study: The study focused on whether lay users themselves could interpret results after reading the QRI. Ground truth for the mock devices would be pre-determined by the device manufacturer based on their design.
4. Adjudication Method for the Test Set
- Clinical Performance: No human adjudication method (e.g., 2+1, 3+1) is mentioned for the clinical test set. The direct comparison was made against molecular RT-PCR as the gold standard, which serves as the definitive ground truth for the presence or absence of the viruses. This type of diagnostic test typically relies on a definitive laboratory method for ground truth, not human interpretation consensus.
- Usability/Readability Studies: The usability study mentioned "Observer agreement with user-interpreted results," implying direct comparison between user interpretation and a pre-defined correct interpretation or an observer's interpretation. The readability study involved participants interpreting mock devices based on the QRI, with performance measured against the pre-determined correct interpretation of those mock devices.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No AI Component: This device (CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test) is a lateral flow immunoassay for visual interpretation. It is not an AI-powered diagnostic device, nor does it have a human-in-the-loop AI assistance component. Therefore, an MRMC study related to AI assistance was not applicable and not performed.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Not Applicable: As this is a visually interpreted antigen test, there is no "algorithm only" or standalone algorithm performance to evaluate. The device's performance is intrinsically linked to its chemical reactions and subsequent visual interpretation by the user (or observer in studies).
7. The Type of Ground Truth Used
- Clinical Performance Test Set: FDA-cleared molecular RT-PCR comparator assays (molecular ground truth). This is generally considered a highly reliable and objective ground truth for viral detection.
- Analytical Performance Test Sets: Generally contrived samples with known concentrations of viral analytes or microorganisms against negative pooled swab matrix. This allows for precise control of the 'ground truth' concentration and presence/absence.
- Usability/Readability Studies: For readability, it was pre-defined correct interpretations of "mock test devices." For usability, it was observation of correct procedural steps and comparison of user interpretation to trained observer interpretation.
8. The Sample Size for the Training Set
- Not explicitly stated in terms of a "training set" for the device itself. As a lateral flow immunoassay, this device is developed through biochemical design, antigen-antibody interactions, and manufacturing processes, rather than through machine learning models that require distinct training datasets.
- The document describes the analytical studies (LoD, inclusivity, interference, etc.) which inform the device's technical specifications and ensure it's robust. The clinical study and usability/readability studies are typically considered validation/test sets for the final manufactured device.
- If this were an AI/ML device, a specific training set size would be crucial. For this type of IVD, the "training" analogous to an AI model would be the research, development, and optimization of the assay components (antibodies, membrane, buffer, etc.) using various known positive and negative samples in the lab.
9. How the Ground Truth for the Training Set Was Established
- Not applicable in the context of a machine learning training set.
- For the development and optimization of the assay (analogous to training), ground truth would have been established through:
- Using quantified viral stocks (e.g., TCID₅₀/mL, CEID₅₀/mL, FFU/mL, IU/mL) to precisely spike into negative matrix (PNSM) to create known positive and negative samples at various concentrations.
- Employing established laboratory reference methods (e.g., molecular assays) to confirm the presence/absence and concentration of analytes in developmental samples.
- Utilizing characterized clinical samples (if available) with confirmed statuses from gold-standard methods early in development.