510(k) Data Aggregation

    K Number: K241652
    Manufacturer (Applicant): Nuclein, LLC
    Date Cleared: 2024-12-20 (196 days)
    Product Code:
    Regulation Number: 866.3981
    Reference & Predicate Devices:

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The DASH® SARS-CoV-2 & Flu A/B Test is a rapid reverse transcription polymerase chain reaction (RT-PCR) assay performed on the DASH Rapid PCR Instrument and is intended for the simultaneous in vitro qualitative detection and differentiation of SARS-CoV-2, influenza A and influenza B virus ribonucleic acid (RNA) in anterior nasal swab specimens from patients with signs and symptoms of respiratory tract infection. The test is intended to aid in the differential diagnosis of SARS-CoV-2, influenza A and influenza B in humans in conjunction with other clinical, epidemiologic and laboratory findings.

    Positive results for a specific target are indicative of the presence of that viral RNA but may not be the definite cause of disease. Positive results do not rule out co-infection with other pathogens. Negative results do not preclude SARS-CoV-2, influenza A or influenza B infection and should not be used as the sole basis for patient management decisions.

    Device Description

    The DASH® SARS-CoV-2 & Flu A/B Test is a rapid, polymerase chain reaction (PCR) assay performed on the DASH Rapid PCR Instrument (DASH Instrument) with the DASH External Controls. The external control materials and DASH Instrument are sold and distributed separately from the DASH SARS-CoV-2 & Flu A/B Test. The DASH SARS-CoV-2 & Flu A/B Test (for use with the DASH Rapid PCR System components) uses reverse transcription polymerase chain reaction (RT-PCR) for rapid qualitative detection and differentiation of SARS-CoV-2, Flu A and Flu B from nasal swabs.

    The test combines the technologies of sequence-specific capture sample preparation and RT-PCR amplification. The DASH SARS-CoV-2 & Flu A/B Test cartridge contains all reagents necessary to perform the test. An anterior nares nasal swab with a 30-mm breakpoint is used to collect a specimen. The nasal swab specimen is added directly to the DASH SARS-CoV-2 & Flu A/B Test cartridge sample chamber. The cartridge is capped and inserted into the DASH Rapid PCR Instrument to initiate the test, and all subsequent test steps are performed automatically by the DASH Instrument.

    AI/ML Overview

    The provided text describes the performance of the DASH® SARS-CoV-2 & Flu A/B Test, a rapid RT-PCR assay. The information focuses on analytical and clinical performance studies to demonstrate its substantial equivalence to a predicate device for FDA clearance.

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for a diagnostic test like this are typically related to:

    • Analytical Sensitivity (Limit of Detection - LoD): The lowest concentration of the analyte that can be reliably detected.
    • Precision/Reproducibility: The consistency of results when the test is run multiple times under varying conditions (different operators, days, sites, lots).
    • Analytical Specificity (Cross-Reactivity & Microbial Interference): The ability of the test to exclusively detect the target analyte without reacting to other related or unrelated microorganisms and to perform accurately in the presence of other common substances.
    • Clinical Performance (Agreement with a Comparator Method): How well the device's results align with a well-established (FDA cleared) method when tested on clinical samples. This is typically measured by Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA).
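The LoD criterion named above (the lowest concentration reliably detected, typically the lowest level at which ≥95% of replicates are positive) can be sketched as a simple hit-rate check. The function and the dilution-series data below are illustrative, not taken from the submission; a real LoD determination would also confirm the claimed level with additional replicates.

```python
def lod_from_hit_rates(hit_rates, threshold=0.95):
    """Return the lowest tested concentration whose hit rate meets the threshold.

    hit_rates: dict mapping concentration -> (positive_replicates, total_replicates)
    Returns None if no tested concentration qualifies.
    """
    qualifying = [
        conc for conc, (pos, total) in hit_rates.items()
        if total > 0 and pos / total >= threshold
    ]
    return min(qualifying) if qualifying else None

# Hypothetical dilution series (copies/swab -> replicate calls); values are illustrative.
series = {300: (12, 20), 600: (18, 20), 1200: (20, 20), 2400: (20, 20)}
print(lod_from_hit_rates(series))  # -> 1200, the lowest level with >=95% positivity
```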

    Acceptance Criteria (Implied) and Reported Device Performance for DASH® SARS-CoV-2 & Flu A/B Test

    Analytical Sensitivity (LoD)
      Acceptance criteria (implied/typical for IVD): Lowest concentration at which ≥95% of replicates yield a positive result.
      Reported performance:
      • SARS-CoV-2: 1200 copies/swab (BEI Resources, Omicron variant)
      • Flu A (H1N1 Victoria/2570/19): 1.35 TCID50/swab
      • Flu A (H3N2 Darwin/9/21): 0.225 TCID50/swab
      • Flu B (Washington/02/19): 0.10 TCID50/swab
      • Flu B (Utah/9/14): 0.675 TCID50/swab

    Precision (Within-Laboratory)
      Acceptance criteria: 100% positivity for targets at 2X and 5X LoD; 0% positivity for negatives.
      Reported performance: All targets (SARS-CoV-2, Flu A, Flu B) at 2X LoD and 5X LoD: 96/96 (100%) positive. Negative samples: 96/96 (100%) negative. Percent positive agreement: 100% between operators and testing days for each target at 2X and 5X LoD.

    Reproducibility (Multi-Site)
      Acceptance criteria: High agreement with expected results across sites, operators, and days.
      Reported performance: Low positive (2X LoD): SARS-CoV-2 99.3%, Flu A 99.3%, Flu B 99.3%. Moderate positive (5X LoD): SARS-CoV-2 99.6%, Flu A 100.0%, Flu B 100.0%. Negative: 100.0% for all targets. Site 1 showed minor deviations (e.g., 88/90 = 97.8% for the low positive of each target). Overall: 97.8%-100% agreement for positive samples; 100% for negative samples.

    Analytical Specificity
      Acceptance criteria: No cross-reactivity with tested organisms; no interference from substances.
      Reported performance:
      • Cross-reactivity (wet testing of 50 viral, bacterial, and fungal agents): none of the evaluated organisms demonstrated cross-reactivity at tested concentrations (0% positive call rate across all 3 replicates).
      • Microbial interference: none of the evaluated microorganisms demonstrated interference with the assay at tested concentrations (100% positive call rate across all 3 replicates with target present).
      • Competitive interference: specific high concentrations of one virus can inhibit detection of another at 3X LoD; the highest co-infection levels at which all targets remain detectable at 3X LoD are reported (e.g., SARS-CoV-2 at 1.41E+06 copies/mL inhibits Flu A at 3X LoD).
      • Exogenous/endogenous interfering substances: interference with the SARS-CoV-2 assay at 4.58 µg/mL biotin but not at 2.29 µg/mL, and at 5% v/v Flonase but not at 2.5% v/v; most other substances showed no interference.

    Analytical Reactivity (Inclusivity)
      Acceptance criteria: Detects intended variants/strains at or near LoD.
      Reported performance:
      • Wet testing: 7 SARS-CoV-2 strains, 21 Flu A strains, and 10 Flu B strains detected at or near LoD (100% positive call rate across 3 replicates).
      • In silico analysis: Influenza A ≥99.97% (18112 of 18117), Influenza B ≥98.27% (8136 of 8279), and SARS-CoV-2 ≥99.99% (994778 of 994846) of sequences predicted to be detected.
      • H5N3 and H7N7 Flu A subtypes predicted to be detected in silico; H5N3 confirmed by wet testing. ~97.3% of human-host influenza A sequences from Nov 2023-2024 predicted to be detected.

    Clinical Performance (PPA/NPA)
      Acceptance criteria: High agreement (e.g., >90% PPA and NPA) with an FDA-cleared comparator.
      Reported performance:
      • SARS-CoV-2: PPA 95.2% (160/168), NPA 99.5% (624/627)
      • Flu A: PPA 94.3% (50/53), NPA 98.1% (725/739)
      • Flu B: PPA 97.3% (36/37), NPA 99.2% (749/755)
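The reported clinical agreement percentages follow directly from the stated counts. This sketch simply recomputes PPA (agreement among comparator positives) and NPA (agreement among comparator negatives) from those counts; the helper name is illustrative.

```python
def percent_agreement(agree, total):
    """Percent agreement rounded to one decimal place, as reported in 510(k) summaries."""
    return round(100 * agree / total, 1)

# Counts as reported: (positive agreements, comparator positives,
#                      negative agreements, comparator negatives)
results = {
    "SARS-CoV-2": (160, 168, 624, 627),
    "Flu A":      (50, 53, 725, 739),
    "Flu B":      (36, 37, 749, 755),
}
for target, (tp, pos, tn, neg) in results.items():
    print(target, "PPA:", percent_agreement(tp, pos),
          "NPA:", percent_agreement(tn, neg))
# SARS-CoV-2: PPA 95.2, NPA 99.5; Flu A: PPA 94.3, NPA 98.1; Flu B: PPA 97.3, NPA 99.2
```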

    2. Sample Size Used for the Test Set and Data Provenance

    • Analytical Test Sets:
      • LoD: 5 viral strains (SARS-CoV-2, Flu A x2, Flu B x2) evaluated, each with multiple replicates (the exact number per strain is not explicitly stated).
      • Within-Laboratory Precision: 96 replicates per concentration level (2X LoD, 5X LoD, and negative) using triple-positive samples (SARS-CoV-2, Flu A, Flu B), i.e., 192 positive and 96 negative replicates (288 total) evaluated per target.
      • Reproducibility (Multi-Site): Over 810 samples tested: 270 replicates per panel member, collected across 3 sites over 5 days of testing with 2 runs/day and 3 replicates per panel member per run. Panel members: true negative, low positive (2X LoD), and moderate positive (5X LoD).
      • Analytical Specificity (Cross-Reactivity & Microbial Interference): 50 different viruses, bacteria, and fungi (for wet testing of cross-reactivity and interference). Each tested with three (3) replicates.
      • Inclusivity (Wet Testing): 7 strains of SARS-CoV-2, 21 strains of Flu A, and 10 strains of Flu B. Three (3) replicates evaluated per strain.
    • Clinical Test Set:
      • Evaluated Subjects: 795 subjects evaluable for at least one analyte/target.
      • SARS-CoV-2: 795 evaluable subjects (168 positive by comparator, 627 negative).
      • Flu A & Flu B: 792 evaluable subjects (Flu A: 53 positive by comparator, 739 negative; Flu B: 37 positive by comparator, 755 negative).
    • Data Provenance:
      • Clinical Study: Prospective collection of specimens within the United States (7 geographical locations). Specimens collected from January to March 2024.
      • Analytical Studies: Lab-based studies using contrived samples/strains. The text does not specify the country of origin for these analytical labs, but given the FDA submission, it's likely US or accredited international labs.
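The replicate totals above can be cross-checked arithmetically. The breakdown below only restates figures given in the text (96 replicates per concentration level; 270 replicates per panel member across 3 sites; 3 panel members), and shows that the per-site share is consistent with the Site 1 denominators of 90.

```python
# Within-laboratory precision: 96 replicates per concentration level, per target.
levels = ["2X LoD", "5X LoD", "negative"]
per_level = 96
total_per_target = per_level * len(levels)
print(total_per_target)  # 288 replicate results per target across the three levels

# Multi-site reproducibility: 270 replicates per panel member, 3 panel members.
panel_members = ["true negative", "low positive (2X LoD)", "moderate positive (5X LoD)"]
per_panel_member = 270
print(per_panel_member * len(panel_members))  # 810 total replicates

# Per-site share across 3 sites, consistent with the Site 1 result of 88/90:
print(per_panel_member // 3)  # 90 replicates per panel member per site
```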

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • For analytical studies (LoD, precision, specificity, inclusivity), the "ground truth" is established by the known concentration of the spiked viral strains or the defined presence/absence of organisms/substances. This does not involve human expert consensus.
    • For the clinical study, the ground truth was established by comparison with an FDA cleared RT-PCR test.
      • SARS-CoV-2: Comparison with an FDA cleared RT-PCR test for SARS-CoV-2.
      • Flu A & Flu B: Comparison with a second FDA cleared test for Flu A and Flu B.
      • Discordant Results: Discordant results were investigated using a "third highly sensitive FDA cleared test."
    • The text does not specify the number of experts or their qualifications for establishing the ground truth (i.e., for interpreting the results of the FDA-cleared comparator tests). This is typical for such submissions; the "expert" is implied to be the validated and regulated comparator device.

    4. Adjudication Method (e.g. 2+1, 3+1, none) for the Test Set

    • For the clinical study, an adjudication method was used for discordant results. The method was: "All discordant results between the DASH® SARS-CoV-2 & Flu A/B Test and the comparator were investigated using a third highly sensitive FDA cleared test." This acts as a reference standard to resolve discrepancies. It loosely resembles a "2+1" or "tie-breaker" approach where the third test breaks the tie between the investigational device and the initial comparator.
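The tie-breaker pattern described above can be sketched as a small resolution rule. The function and result labels here are illustrative, not from the submission; note that in practice the third test informs discordant-result analysis rather than replacing the primary PPA/NPA comparison.

```python
def adjudicate(device_result, comparator_result, third_test=None):
    """Resolve a device-vs-comparator call; discordant pairs defer to a third cleared test.

    All results are 'positive' or 'negative'. third_test is consulted only on discordance.
    """
    if device_result == comparator_result:
        return device_result  # concordant: no adjudication needed
    if third_test is None:
        raise ValueError("discordant result requires a third-test result")
    return third_test  # tie-breaker: the third highly sensitive cleared test decides

print(adjudicate("positive", "positive"))              # concordant -> positive
print(adjudicate("positive", "negative", "positive"))  # discordant -> resolved positive
```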

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    • No, an MRMC comparative effectiveness study was not done. This type of study is relevant for imaging devices or AI algorithms where human readers interpret images. The DASH® SARS-CoV-2 & Flu A/B Test is a molecular diagnostic (RT-PCR) test; its output is qualitative (positive/negative) and determined by an automated instrument/software algorithm without human visual interpretation of the test result. Therefore, human reader improvement with AI assistance is not applicable to this device.

    6. Whether Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Evaluated

    • Yes, a standalone performance was done for the device. The device is an automated RT-PCR system.
      • "The cartridge is capped and inserted into the DASH Rapid PCR Instrument to initiate the test, and all subsequent test steps are performed automatically by the DASH Instrument."
      • "A software algorithm determines whether any of the targets are positive."
      • The analytical and clinical performance data (LoD, precision, specificity, inclusivity, PPA/NPA) all represent the "algorithm only without human-in-the-loop performance" in terms of result generation. The human role is in specimen collection, loading the cartridge, and interpreting the final positive/negative result displayed by the instrument. The operators were "untrained" in the clinical study, further indicating the device's standalone nature in generating results.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • Analytical Studies: The ground truth was established using known quantities and defined characteristics of viral strains, bacteria, fungi, and chemical substances spiked into matrices (pooled nasal matrix, simulated clinical nasal matrix). This is a form of defined experimental truth.
    • Clinical Study: The ground truth was established by comparison to results from one or more FDA cleared RT-PCR tests. This can be considered a reference standard ground truth based on established diagnostic methods. For discordant results, a "third highly sensitive FDA cleared test" served as the tie-breaker/confirmatory ground truth.

    8. The Sample Size for the Training Set

    • The document describes a 510(k) submission for a diagnostic test, not an AI/ML device where a distinct "training set" would be explicitly mentioned for model development. The text focuses on the validation of the device rather than its development. Therefore, a specific "training set sample size" for the algorithm itself is not provided in this document. The development of the assay (primers, probes, algorithm thresholds) would involve internal R&D, but the data presented here is for the regulatory submission's performance evaluation.

    9. How the Ground Truth for the Training Set was Established

    • As noted in point 8, a "training set" in the context of an AI/ML model for image or signal interpretation is not explicitly discussed. The "training" for this type of RT-PCR device involves the development and optimization of the assay's chemical reagents (primers, probes) and the instrument's detection algorithm. The ground truth for such development would involve:
      • Known concentrations of target nucleic acids: To establish reaction efficiency and sensitivity.
      • Characterized positive and negative clinical samples or controls: To optimize thresholds and ensure correct classification.
      • Various interfering substances/microorganisms: To ensure specificity.
    • This ground truth is established through standard molecular biology and analytical chemistry practices, using highly characterized reagents and samples, and often iterative testing during the R&D phase prior to formal validation studies. The document does not detail this prior developmental process.