Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K251604
    Manufacturer
    Date Cleared
    2025-08-22

    (87 days)

    Product Code
    Regulation Number
    866.3987
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigens directly in anterior nasal swab samples from individuals with signs and symptoms of respiratory tract infection. Symptoms of respiratory infections due to SARS-CoV-2 and influenza can be similar. This test is for non-prescription home use by individuals aged 14 years or older testing themselves, or adults testing individuals aged 2 years or older.

    All negative results are presumptive and should be confirmed with an FDA-cleared molecular assay when determined to be appropriate by a healthcare provider. Negative results do not rule out infection with influenza, SARS-CoV-2, or other pathogens.

    Individuals who test negative and experience continued or worsening respiratory symptoms, such as fever, cough, and/or shortness of breath, should seek follow up care from their healthcare provider.

    Positive results do not rule out co-infection with other respiratory pathogens and therefore do not substitute for a visit to a healthcare provider for appropriate follow-up.

    Device Description

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test is a lateral flow immunoassay intended for the qualitative detection and differentiation of SARS-CoV-2 nucleocapsid antigen, Influenza A nucleoprotein antigen, and Influenza B nucleoprotein antigen from anterior nasal swab specimens.

    The CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test utilizes an adaptor-based lateral flow assay platform integrating a conjugate wick filter to facilitate sample processing. Each test cassette contains a nitrocellulose membrane with immobilized capture antibodies for SARS-CoV-2, Influenza A, Influenza B, and internal control. Following specimen application to the sample port, viral antigens, if present, bind to labeled detection antibodies embedded in the conjugate wick filter. The resulting immune complexes migrate along the test strip and are captured at the respective test lines (C19 for SARS-CoV-2, A for Influenza A, and B for Influenza B), forming visible colored lines. A visible control line (Cont) confirms proper sample migration and test validity. The absence of a control line invalidates the test result.

    Each kit includes a single-use test cassette, assay buffer dropper vial, nasal swab, and Quick Reference Instructions (QRI). Test results are visually interpreted 10 minutes after swab removal.

    AI/ML Overview

    The provided document describes the CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test, an over-the-counter lateral flow immunoassay for lay users. The study aimed to demonstrate its substantial equivalence to a predicate device and its performance characteristics for qualitative detection and differentiation of SARS-CoV-2, Influenza A, and Influenza B antigens in anterior nasal swab samples.

    Here's an analysis of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    While specific acceptance criteria (i.e., pre-defined thresholds the device must meet for clearance) are not explicitly stated as numbered points in this 510(k) summary, they can be inferred from the reported performance data and common FDA expectations for such devices. The performance data presented serves as the evidence that the device met these implied criteria.

    | Performance Characteristic | Implied Acceptance Criteria (typical FDA expectations) | Reported Device Performance |
    | --- | --- | --- |
    | Clinical Performance (vs. Molecular Assay) | | |
    | SARS-CoV-2 - Positive Percent Agreement (PPA) | High PPA (e.g., >80-90%) | 92.5% (95% CI: 86.4%-96.0%) |
    | SARS-CoV-2 - Negative Percent Agreement (NPA) | Very high NPA (e.g., >98%) | 99.6% (95% CI: 99.1%-99.8%) |
    | Influenza A - PPA | High PPA (e.g., >80-90%) | 85.6% (95% CI: 77.9%-90.9%) |
    | Influenza A - NPA | Very high NPA (e.g., >98%) | 99.0% (95% CI: 98.4%-99.4%) |
    | Influenza B - PPA | High PPA (e.g., >80-90%) | 86.0% (95% CI: 72.7%-93.4%) |
    | Influenza B - NPA | Very high NPA (e.g., >98%) | 99.7% (95% CI: 99.3%-99.9%) |
    | Analytical Performance | | |
    | Precision (1x LoD) | ≥95% agreement | 99.2% for SARS-CoV-2, 99.2% for Flu A, 99.7% for Flu B |
    | Precision (3x LoD) | 100% agreement expected at higher concentrations | 100% for all analytes |
    | Limit of Detection (LoD) | Lowest detectable concentration with ≥95% positive agreement | Confirmed LoDs for various strains (e.g., SARS-CoV-2 Omicron: 7.50 x 10^0 TCID₅₀/swab at 100% agreement) |
    | Co-spike LoD | ≥95% result agreement in presence of multiple analytes | Met for Panels I and II (e.g., 98% for SARS-CoV-2, 97% for Flu A in Panel I) |
    | Inclusivity (Analytical Reactivity) | Demonstrated reactivity with diverse strains | Lowest reactive concentrations established for a wide range of SARS-CoV-2, Flu A, and Flu B strains, with 5/5 replicates positive |
    | Competitive Interference | No interference from high concentrations of other analytes | 100% agreement; no competitive interference observed |
    | Hook Effect | No false negatives at high antigen concentrations | 100% positive result agreement; no hook effect observed |
    | Analytical Sensitivity (WHO Standard) | Sensitivity demonstrated against international standard | LoD of 8 IU/swab with 95% (19/20) agreement |
    | Cross-Reactivity/Microbial Interference | No false positives (cross-reactivity) or reduced performance (interference) | No cross-reactivity or microbial interference observed (100% agreement for positive samples, 0% for negative) |
    | Endogenous/Exogenous Substances Interference | No false positives or reduced performance | No cross-reactivity or interference observed (all target analytes accurately detected) |
    | Biotin Interference | Impact of biotin characterized; interfering concentration specified | False negatives for Influenza A at 3,750 ng/mL and 5,000 ng/mL (important finding for labeling) |
    | Real-time Stability | Supports claimed shelf life | 100% expected results over 15 months, supporting 13-month shelf life |
    | Transportation Stability | Withstands simulated transport conditions | 100% expected results; no false positives/negatives under extreme conditions |
    | Usability Study | High rate of correct performance and interpretation by lay users | >98% correct completion of critical steps; 98.7% observer agreement with user interpretation; >94% found instructions easy/test simple |
    | Readability Study | High rate of correct interpretation from QRI by untrained lay users | 94.8% correct interpretation of mock devices from QRI without assistance |

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Clinical Performance Test Set (Human Samples): N=1644 total participants.
      • Self-collecting: N=1447 (individuals aged 14 or older testing themselves)
      • Lay-user/Tester Collection: N=197 (adults testing individuals aged 2-17 years)
    • Data Provenance:
      • Country of Origin: United States ("13 clinical sites across the U.S.").
      • Retrospective/Prospective: The clinical study was prospective, as samples were collected "between November of 2023 and March of 2025" from "symptomatic subjects, suspected of respiratory infection."
    • Analytical Performance Test Sets (Contrived/Spiked Samples): Sample sizes vary per study:
      • Precision Study 1: 360 results per panel member (negative, 1x LoD positive, 3x LoD positive).
      • Precision Study 2: 36 sample replicates/lot (for negative and 0.75x LoD positive samples).
      • LoD Confirmation: 20 replicates per LoD concentration.
      • Co-spike LoD: 20 replicates per panel (multiple panels tested).
      • Inclusivity: 5 replicates per strain (for identifying lowest reactive concentration).
      • Competitive Interference: 3 replicates for each of 19 sample configurations.
      • Hook Effect: 5 replicates per concentration.
      • WHO Standard LoD: 20 replicates for confirmation.
      • Cross-Reactivity/Microbial Interference: 3 replicates per microorganism (in absence and presence of analytes).
      • Endogenous/Exogenous Substances Interference: 3 replicates per substance (in absence and presence of analytes).
      • Biotin Interference: 3 replicates per biotin concentration.
      • Real-time Stability: 5 replicates per lot at each time point.
      • Transportation Stability: 5 replicates per sample type per lot for each condition.
    • Usability Study: 1,795 participants.
    • Readability Study: 50 participants.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Clinical Performance (Reference Method - Test Set Ground Truth): The ground truth for the clinical test set was established using FDA-cleared molecular RT-PCR comparator assays for SARS-CoV-2, Influenza A, and Influenza B.

      • This implies that the "experts" were the established and validated molecular diagnostic platforms, rather than human expert readers/adjudicators for visual interpretation.
    • Usability/Readability Studies:

      • Usability Study: "Observer agreement with user-interpreted results was 98.7%." This suggests trained observers (likely not "experts" in the sense of clinical specialists, but rather study personnel trained in test interpretation as per IFU) established agreement with user results.
      • Readability Study: The study focused on whether lay users themselves could interpret results after reading the QRI. Ground truth for the mock devices would be pre-determined by the device manufacturer based on their design.

    4. Adjudication Method for the Test Set

    • Clinical Performance: No human adjudication method (e.g., 2+1, 3+1) is mentioned for the clinical test set. The direct comparison was made against molecular RT-PCR as the gold standard, which serves as the definitive ground truth for the presence or absence of the viruses. This type of diagnostic test typically relies on a definitive laboratory method for ground truth, not human interpretation consensus.
    • Usability/Readability Studies: The usability study mentioned "Observer agreement with user-interpreted results," implying direct comparison between user interpretation and a pre-defined correct interpretation or an observer's interpretation. The readability study involved participants interpreting mock devices based on the QRI, with performance measured against the pre-determined correct interpretation of those mock devices.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No AI Component: This device (CareSuperb™ COVID-19/Flu A&B Antigen Combo Home Test) is a lateral flow immunoassay for visual interpretation. It is not an AI-powered diagnostic device, nor does it have a human-in-the-loop AI assistance component. Therefore, an MRMC study related to AI assistance was not applicable and not performed.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Not Applicable: As this is a visually interpreted antigen test, there is no "algorithm only" or standalone algorithm performance to evaluate. The device's performance is intrinsically linked to its chemical reactions and subsequent visual interpretation by the user (or observer in studies).

    7. The Type of Ground Truth Used

    • Clinical Performance Test Set: FDA-cleared molecular RT-PCR comparator assays (molecular ground truth). This is generally considered a highly reliable and objective ground truth for viral detection.
    • Analytical Performance Test Sets: Generally contrived samples with known concentrations of viral analytes or microorganisms against negative pooled swab matrix. This allows for precise control of the 'ground truth' concentration and presence/absence.
    • Usability/Readability Studies: For readability, it was pre-defined correct interpretations of "mock test devices." For usability, it was observation of correct procedural steps and comparison of user interpretation to trained observer interpretation.

    8. The Sample Size for the Training Set

    • Not explicitly stated in terms of a "training set" for the device itself. As a lateral flow immunoassay, this device is developed through biochemical design, antigen-antibody interactions, and manufacturing processes, rather than through machine learning models that require distinct training datasets.
    • The document describes the analytical studies (LoD, inclusivity, interference, etc.) which inform the device's technical specifications and ensure it's robust. The clinical study and usability/readability studies are typically considered validation/test sets for the final manufactured device.
    • If this were an AI/ML device, a specific training set size would be crucial. For this type of IVD, the "training" analogous to an AI model would be the research, development, and optimization of the assay components (antibodies, membrane, buffer, etc.) using various known positive and negative samples in the lab.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable in the context of a machine learning training set.
    • For the development and optimization of the assay (analogous to training), ground truth would have been established through:
      • Using quantified viral stocks (e.g., TCID₅₀/mL, CEID₅₀/mL, FFU/mL, IU/mL) to precisely spike into negative matrix (PNSM) to create known positive and negative samples at various concentrations.
      • Employing established laboratory reference methods (e.g., molecular assays) to confirm the presence/absence and concentration of analytes in developmental samples.
      • Utilizing characterized clinical samples (if available) with confirmed statuses from gold-standard methods early in development.

    K Number
    K241915
    Manufacturer
    Date Cleared
    2025-01-29

    (212 days)

    Product Code
    Regulation Number
    866.3984
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The CareSuperb™ COVID-19 Antigen Home Test is a visually read lateral flow immunoassay device intended for the rapid, qualitative detection of SARS-CoV-2 virus nucleocapsid protein antigen directly in anterior nasal swab specimens from individuals with signs and symptoms of COVID-19.

    This test is for non-prescription home use by individuals aged 14 years or older testing themselves, or adults testing individuals aged 2 years or older.

    All negative results are presumptive. Symptomatic individuals with an initial negative test result must be re-tested once between 48 and 72 hours after the first test using either an antigen test or a molecular test for SARS-CoV-2. Negative results do not rule out SARS-CoV-2 infections or other pathogens and should not be used as the sole basis for treatment.

    Positive results do not rule out co-infection with other respiratory pathogens.

    This test is not a substitute for visits to a healthcare provider or appropriate follow-up and should not be used to determine any treatments without provider supervision. Individuals who test negative and experience continued or worsening COVID-19 symptoms, such as fever, cough and/or shortness of breath, should seek follow up care from their healthcare provider.

    Performance characteristics for SARS-CoV-2 were established from October 2023 to April 2024 when SARS-CoV-2 Omicron variant was dominant. Test accuracy may change as new SARS-CoV-2 viruses emerge. Additional testing with a lab-based molecular test (e.g., PCR) should be considered in situations where a new virus or variant is suspected.

    Device Description

    The CareSuperb™ COVID-19 Antigen Home Test is a lateral flow immunoassay device intended for the qualitative detection of SARS-CoV-2 nucleocapsid protein in anterior nasal samples.

    To begin the test, a self-collected anterior nasal swab sample (for individuals between the ages of 2 and 14, a swab collected by a parent or guardian), or a healthcare-provider-collected anterior nasal swab sample, is inserted into the sample port, and the extraction reagent in the dropper vial is added, allowing extraction to occur and exposing the viral nucleocapsid antigens. SARS-CoV-2 antigens present in the sample bind with anti-SARS-CoV-2 antibodies dispensed in the conjugate wick filter. These antigen-antibody complexes migrate into the plastic cassette and travel across the membrane through capillary action. The complexes are captured at the test region, causing a colored line to appear on the membrane.

    If the sample contains SARS-CoV-2 antigen, a visible line at the test line ("T") and a procedural control line at the control line ("C") will appear in the result window, indicating a positive result. If SARS-CoV-2 viral nucleocapsid antigens are not present, or are present at very low levels, only the procedural control line will appear.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device (CareSuperb™ COVID-19 Antigen Home Test) meets them, based on the provided FDA 510(k) summary:

    Summary of Acceptance Criteria and Device Performance

    The acceptance criteria for this device are primarily centered around its clinical performance (accuracy) and usability for lay users, as well as robust analytical performance.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Category | Acceptance Criteria (Implied by FDA Review) | Reported Device Performance |
    | --- | --- | --- |
    | Clinical Performance | | |
    | PPA with RT-PCR (within 4 days of symptom onset) | High agreement (generally > ~80-90% for antigen tests, with higher expectations for home use) to correctly identify positive samples | 97.2% (140/144) (95% CI: 93.1%-98.9%) |
    | NPA with RT-PCR (within 4 days of symptom onset) | High agreement (generally > ~95-98% for home-use antigen tests) to correctly identify negative samples | 98.8% (496/502) (95% CI: 97.4%-99.5%) |
    | Analytical Performance | | |
    | Precision (Repeatability/Reproducibility) | Consistent results across operators, sample types, lots, and concentrations (especially around LoD) | True negative: 100% (1620/1620); high negative (0.75x LoD): 92.6% (500/540); low positive (1x LoD): 99.8% (539/540); low positive (1.5x, 2x, 4x LoD): 100%; no significant lot-to-lot variability |
    | Limit of Detection (LoD) | Ability to detect SARS-CoV-2 at low concentrations | WA1/2020: 2.63 x 10^2 TCID50/mL (1.32 x 10^1 TCID50/swab); Omicron B.1.1.529: 1.5 x 10^2 TCID50/mL (7.5 x 10^1 TCID50/swab); WHO standard (NIBSC 21/368): 32 IU/mL (1.6 IU/swab) |
    | Hook Effect | No false negatives at very high analyte concentrations | No hook effect observed up to 4.0 x 10^5 TCID50/mL (WA1/2020) and 7.5 x 10^5 TCID50/mL (Omicron B.1.1.529) |
    | Cross-Reactivity/Microbial Interference | No false positives or interference from common respiratory microorganisms or viruses | None of 18 non-SARS-CoV-2 viruses and 10 other microorganisms showed cross-reactivity or interference; pooled human nasal wash likewise showed none |
    | Interfering Substances Effect | No interference from common medications or endogenous substances | None of 42 tested substances (common medications, endogenous substances such as blood and mucin) interfered, except biotin at high concentrations (false negatives when biotin exceeded 2,500 ng/mL in positive samples), a known limitation of biotin-sensitive assays |
    | Inclusivity (Analytical Reactivity) | Ability to detect various SARS-CoV-2 variants | Reactivity demonstrated with 7 additional variants (Alpha, Delta, Omicron BA.2.12.1, BA.2.3, BA.2.75.5, BA.4.6, JN.1.4) at specific low concentrations |
    | Flex Studies | Robustness to minor variations in user technique and environmental conditions | Studies support robustness with insignificant risk of erroneous results under tested conditions (reading times, buffer volume, swab handling, environmental stress) |
    | Usability & Readability | | |
    | Usability Study | Lay users can competently perform critical tasks using the provided instructions | ≥80% overall success rate for all critical tasks among 50 users |
    | Readability Study | Lay users can correctly interpret test results (positive, negative, invalid) | 95.0% overall success rate for both tested panels (negative and positive interpretations) among 50 users |
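The 95% confidence intervals quoted alongside these counts are consistent with the Wilson score interval for a binomial proportion; a sketch (the method is an assumption, since the summary does not name it, but it reproduces the reported bounds):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Two-sided Wilson score interval for a binomial proportion,
    returned as percentages (z = 1.96 for 95% confidence)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return 100 * (center - half), 100 * (center + half)

# Reported PPA: 97.2% (140/144), 95% CI: 93.1%-98.9%
lo, hi = wilson_ci(140, 144)
print(f"{lo:.1f}%-{hi:.1f}%")  # → 93.1%-98.9%
```

Running the same function on the NPA counts (496/502) likewise yields 97.4%-99.5%, matching the table.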

    2. Sample Size and Data Provenance

    • Test Set (Clinical Performance):

      • Sample Size: A total of 646 symptomatic subjects were evaluated in the clinical study.
      • Data Provenance: Data was collected from 10 clinical sites in the U.S. between October 2023 and April 2024. This was a prospective study, as subjects self-sampled and self-tested in a simulated home setting.
    • Analytical Performance Studies: Sample sizes for these studies vary, but are explicitly stated for each (e.g., 540 replicates per lot for precision, 20 replicates for LoD confirmation, 3 replicates for cross-reactivity/interference studies). The document does not specify a country of origin for these lab-based studies, but for a US FDA submission, it implicitly means the studies adhere to US regulatory standards.

    3. Number of Experts and Qualifications for Ground Truth

    • For the clinical performance study, the ground truth was established by an FDA-cleared molecular assay (RT-PCR). This is a scientific, objective standard, not dependent on human expert interpretation in the same way imaging studies might be.
    • For the adjudication of discrepant results in the clinical study, it states: "All discrepant results were investigated by testing using an alternative FDA-cleared molecular assay at the central laboratory." This further reinforces the objective, lab-based ground truth.
    • For analytical studies (LoD, cross-reactivity, etc.), ground truth is established by the known concentrations of spiked analytes and the inherent characteristics of the reference materials. These are objective measures rather than expert consensus.

    4. Adjudication Method for the Test Set

    • For the clinical performance study, the primary ground truth was an FDA-cleared molecular assay.
    • Any discrepant results between the CareSuperb™ test and the primary molecular assay were adjudicated by testing with an alternative FDA-cleared molecular assay at a central laboratory. This acts as a robust, independent verification method for discrepancy resolution. There isn't a "2+1" or "3+1" human reader adjudication since the ground truth is objective molecular testing.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI vs. without AI assistance. This is because the device is a visually read lateral flow immunoassay for COVID-19 antigen detection. It is a standalone diagnostic test performed by lay users, not an AI-assisted interpretation tool for images or complex data that would involve human readers and their performance improvement.

    6. Standalone Performance (Algorithm Only)

    • Yes, the primary clinical performance data (PPA and NPA) represents the standalone performance of the device when used by lay users, as it is a visually read test. There is no "algorithm" in the sense of a software-based AI interpreting results; the interpretation is visual by the user. The performance metrics (PPA, NPA) directly reflect the device's accuracy in identifying positive and negative cases compared to the molecular reference standard.

    7. Type of Ground Truth Used

    • Clinical Performance Ground Truth: The primary ground truth for the clinical study was an FDA-cleared molecular assay (RT-PCR) result. Discrepant results were further confirmed by an alternative FDA-cleared molecular assay. This is considered a highly reliable and objective gold standard for SARS-CoV-2 detection.
    • Analytical Performance Ground Truth: For the analytical studies (LoD, precision, cross-reactivity, inclusivity), the ground truth was established by using known concentrations of purified or inactivated SARS-CoV-2 strains/variants/reference materials and other microorganisms/substances, diluted into a negative clinical matrix (nasal swab matrix).

    8. Sample Size for the Training Set

    • The document primarily describes a diagnostic test kit (lateral flow immunoassay), not an AI/ML-based algorithm that requires a "training set" in the computational sense.
    • The closest equivalent to a "training set" for physical test development would involve extensive R&D and optimization studies during the design phase to establish reagent concentrations, membrane properties, and other manufacturing parameters. This type of "training" isn't quantified by a sample size of patient data in the same way an AI model's training set would be. The clinical and analytical studies presented are validation studies to prove the device works as intended, not data used for "training" the device itself.

    9. How the Ground Truth for the Training Set was Established

    • As noted above, there isn't a "training set" and associated ground truth in the AI/ML context for this type of device. The development and optimization of the physical components (e.g., antibody selection, membrane type, buffer formulation) are based on robust analytical chemistry and immunology principles, often using characterized biological materials (like specific viral concentrations) as internal benchmarks during the R&D process. The performance of these optimized components is then validated in the extensive analytical and clinical studies detailed in the 510(k) submission.

    K Number
    K191514
    Manufacturer
    Date Cleared
    2020-02-18

    (256 days)

    Product Code
    Regulation Number
    866.3328
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The CareStart™ Flu A&B Plus is an in vitro rapid immunochromatographic assay for the qualitative detection of influenza virus type A and B nucleoprotein antigens directly from nasopharyngeal swab specimens of symptomatic patients.

    The test is intended for use as an aid in the rapid differential diagnosis of acute influenza type A and B viral infections. This test is intended to distinguish between influenza type A and/or B virus in a single test. This test is not intended to detect influenza type C viral antigens. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative results do not preclude influenza virus infections and should not be used as the basis for treatment or other patient management decisions.

    Performance characteristics for influenza A and B were established during the 2018-2019 influenza season when influenza A/H3N2, A/H1N1pdm09, and B/Victoria were the predominant influenza viruses in circulation. When other influenza viruses are emerging, performance characteristics may vary.

    If infection with a novel influenza virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to the state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

    Device Description

    The CareStart™ Flu A&B Plus test is an immunochromatographic assay for detection of extracted influenza type A and B virus nucleoprotein antigens in nasopharyngeal specimens.

    Nasopharyngeal swabs require a sample preparation step in which the sample is eluted and washed off into the extraction buffer solution. Extracted swab sample is added to the sample well of the test device to initiate the test. When the swab sample migrates in the test strip, influenza A or B viral antigens bind to anti-influenza antibodies conjugated to indicator particles in the test strip forming an immune complex. The immune complex is then captured by each test line and control line on the membrane as it migrates through the strip.

    Test results are interpreted at 10 minutes. The presence of two colored lines, a purple-colored line in the control region "C" and a red-colored line in the influenza A test region "A", indicates influenza A positive. The presence of two colored lines, a purple-colored line in the control region "C" and a blue-colored line in the influenza B test region "B", indicates influenza B positive. The presence of three colored lines, a purple-colored line in the control region "C", a red-colored line in the influenza A test region "A", and a blue-colored line in the influenza B test region "B", indicates an influenza A and B dual positive result. The absence of a line in both influenza A and B test regions with a purple-colored line in the control region "C" indicates a negative result. No appearance of a purple-colored line in the control region "C" indicates an invalid test.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Device Name: CareStart™ Flu A&B Plus (Influenza Virus Antigen Detection Test System)

    General Acceptance Criteria (Implied by FDA 510(k) Clearance):
    The primary acceptance criterion for a 510(k) submission is that the device is substantially equivalent to a legally marketed predicate device. This is demonstrated by showing that the new device has the same intended use and technological characteristics, and that its performance is at least as safe and effective as the predicate device. Specific performance criteria are established through analytical and clinical studies.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria values (e.g., "PPA must be >X%"). Instead, the performance estimates are presented, and the implication is that these results demonstrate substantial equivalence to the predicate device. For the purpose of this analysis, I will treat the reported performance as the "met criteria" that led to substantial equivalence.

    | Performance Metric | Acceptance Criteria (Implied/Demonstrated) | Reported Device Performance |
    |---|---|---|
    | Clinical Performance (Influenza A) | Sufficiently high PPA and NPA compared to molecular assay | PPA: 79.9% (95% CI: 75.7%–83.7%); NPA: 98.4% (95% CI: 97.0%–99.2%) |
    | Clinical Performance (Influenza B) | Sufficiently high PPA and NPA compared to molecular assay | PPA: 88.2% (95% CI: 65.7%–96.7%) prospective; 96.6% (95% CI: 91.5%–98.7%) retrospective. NPA: 100.0% (95% CI: 99.6%–100.0%) prospective; 97.8% (95% CI: 88.4%–99.6%) retrospective |
    | Analytical Sensitivity (LoD) | Detection of virus strains at specified concentrations | All tested strains detected at concentrations ranging from $2.0 \times 10^{5.2}$ to $1.6 \times 10^{6.4}$ for Influenza A and $2.0 \times 10^{5.5}$ to $1.6 \times 10^{6.4}$ for Influenza B, with 95–100% reactivity |
    | Reactivity (Inclusivity) | Detection of various influenza A and B strains | All 15 Influenza A and 10 Influenza B strains detected in 3/3 replicates at specified concentrations |
    | Analytical Specificity (Cross-Reactivity/Interference) | No false positives (cross-reactivity); no interference with positive samples | No false positives with 31 bacteria and 15 non-influenza viruses; no interference with influenza A/B positive samples. Caveat: biotin concentrations >500 ng/ml can cause false negative influenza A results |
    | Reproducibility | High agreement across sites, operators, and days | 100.0% overall agreement for all sample categories across 3 sites, 3 operators, and 5 days |
    | Lot-to-Lot Precision | Consistent results across different reagent lots | 100.0% agreement across 3 reagent lots for all sample categories |
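    The PPA and NPA figures above are simple 2×2 agreement proportions against the molecular comparator. A minimal sketch of how such estimates and intervals can be computed follows; the submission does not state which interval method was used, so the Wilson score interval shown here is an assumption (it is a common choice for diagnostic agreement CIs). The counts in the usage note are illustrative, not the study's actual 2×2 table.

```python
from math import sqrt

def ppa_npa(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Positive/negative percent agreement vs. the comparator method."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a proportion (z=1.96 for 95%)."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half
```

    For example, `ppa_npa(80, 20, 98, 2)` gives a PPA of 0.80 and an NPA of 0.98, and `wilson_ci(80, 100)` brackets the 0.80 point estimate with an asymmetric interval, as seen in the reported CIs.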

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Test Set:

      • Prospective Clinical Study: 944 evaluable nasopharyngeal swab specimens.
        • Provenance: Collected from symptomatic patients during the 2018-2019 influenza season at 10 Point-of-Care investigational sites throughout the U.S. (prospective, U.S. origin).
      • Retrospective Study (supplemental for Influenza B): 162 swab samples prepared from archived respiratory specimens.
        • Provenance: Archived respiratory specimens from patients with influenza-like symptoms, confirmed positive or negative by an FDA-cleared molecular assay. Samples were distributed among four investigational sites (retrospective, likely U.S. origin, as the study supplemented the U.S. prospective study).
    • Analytical Sensitivity (LoD): 20 replicates per virus strain (8 strains tested).

    • Analytical Sensitivity (Reactivity/Inclusivity): 3 replicates per virus strain (25 strains tested).

    • Analytical Specificity (Cross-Reactivity/Interference): 3 replicates per organism/virus, both with and without influenza viruses (31 bacteria, 15 non-influenza viruses, 30 interfering substances).

    • Reproducibility Study: 7 sample categories tested in a blinded manner by 3 operators at 3 sites on 5 non-consecutive days (7 samples × 3 operators × 3 sites × 5 days = 315 tests per virus type, if applicable). Agreement was reported as 135/135 for each category overall.

    • Lot-to-Lot Precision: 7 sample categories tested across 3 reagent lots (counted as 27/27 for each category overall).
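    The reproducibility and lot-to-lot agreement figures above are straightforward proportions of concordant results over total runs. A sketch, using the per-category counts reported in the studies:

```python
def percent_agreement(concordant: int, total: int) -> float:
    """Overall percent agreement, as reported per sample category."""
    return 100.0 * concordant / total

# Reproducibility design: 3 sites x 3 operators x 5 days = 45 site/operator/day
# combinations per category; the 135/135 count reported per category implies
# replicate runs on top of those combinations (assumption: 3 replicates each).
repro = percent_agreement(135, 135)   # reproducibility, per category
lots = percent_agreement(27, 27)      # lot-to-lot, per category
```

    Both evaluate to 100.0, matching the 100.0% agreement reported in the table.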


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not specify the number or qualifications of experts for establishing ground truth.

    • Clinical Study Ground Truth: The ground truth for the clinical studies (prospective and retrospective) was established using an FDA-cleared influenza A and B molecular assay (comparator method). For discrepant results in the prospective study, an alternative FDA-cleared molecular assay was used for investigation. These are laboratory-based, highly sensitive and specific molecular tests for influenza, which are considered a gold standard for pathogen detection.

    • Analytical Studies Ground Truth: For analytical studies (LoD, Reactivity, Cross-Reactivity, Interference, Reproducibility, Lot-to-Lot Precision), the ground truth was inherent to the carefully prepared samples (e.g., known virus concentrations, known interfering substances).


    4. Adjudication Method for the Test Set

    • Clinical Studies: For the prospective clinical study, discrepancies between the CareStart™ Flu A&B Plus results and the initial molecular comparator assay were investigated using an alternative FDA-cleared molecular influenza A/B assay. The results of this secondary testing were reported in footnotes but were not included in the primary performance calculations. This resembles a 2+1 adjudication scheme, used to characterize discrepancies rather than to adjust the final performance metrics.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs without AI assistance

    • Not applicable. This device is a rapid in vitro immunochromatographic assay (a rapid diagnostic test kit) and does not involve AI assistance or human readers interpreting AI outputs. Results are interpreted visually by the user (the predicate could also be read with a simple instrument, but the proposed device is visually interpreted). Therefore, an MRMC study of AI assistance for human readers was not performed.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, in essence. The device performance reported in the clinical and analytical studies represents the standalone performance of the assay. The CareStart™ Flu A&B Plus produces visual results, and the clinical studies compare the device's output (positive/negative for Flu A/B) directly against a molecular reference, effectively testing its standalone diagnostic capability. The detection format for the proposed device is visual determination of the presence or absence of colored line indicators, so it functions as a standalone diagnostic tool without an additional algorithm or complex human interpretation step beyond reading the test lines.

    7. The type of ground truth used

    • Molecular Assay: The primary ground truth for clinical performance evaluation (prospective and retrospective studies) was an FDA-cleared influenza A and B molecular assay. An alternative FDA-cleared molecular assay was used to investigate discrepant results.
    • Known Concentrations/Absence of Analytes: For analytical studies (LoD, Reactivity, Cross-Reactivity, Interference, Reproducibility, Lot-to-Lot Precision), the ground truth was based on samples with known concentrations of specific virus strains or known absence of target analytes/presence of interfering substances.

    8. The sample size for the training set

    The document does not explicitly describe a separate "training set" in the context of machine learning or AI. As a rapid diagnostic test, the device's development typically involves iterative design and testing using various lab-prepared and clinical samples, which implicitly serve as a development/training phase. However, there isn't a formally described training set as one might see for an AI algorithm. The performance evaluation presented focuses on the validation of the device.


    9. How the ground truth for the training set was established

    As there is no explicitly defined "training set" in the provided document, the method for establishing its ground truth is not described. The analytical and clinical studies described serve as validation of the final device. The ground truth for those validation studies was established through FDA-cleared molecular assays or by controlled preparation of samples with known viral content.
