K Number: K243561
Date Cleared: 2025-06-17 (211 days)
Product Code:
Regulation Number: 866.3987
Panel: MI (Microbiology)
Reference & Predicate Devices: N/A
Intended Use

The Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay intended for the qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigen directly in anterior nasal swab (ANS) samples from individuals with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory viral infection due to SARS-CoV-2 and influenza can be similar.

All negative results are presumptive and should be confirmed with a molecular assay, if necessary, for patient management. Negative results do not rule out infection with influenza or SARS-CoV-2 and should not be used as the sole basis for treatment or patient management decisions.

Positive results do not rule out bacterial infection or co-infection with other viruses.

Device Description

The Nano-Check™ Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay intended for rapid, simultaneous in vitro qualitative detection and differentiation of influenza A and influenza B nucleoprotein antigens and SARS-CoV-2 nucleocapsid antigen directly from anterior nasal swab specimens.

The assay kit consists of 25 test cassette devices, 25 reagent tubes, 25 ampules containing extraction buffer, 25 anterior nasal specimen collection swabs, one positive control swab, one negative control swab, one Instructions for Use, and one Quick Reference Instruction. The external positive control swab contains noninfectious influenza A, influenza B, and SARS-CoV-2 antigens dried onto the swab, and the external negative control swab contains noninfectious blank universal viral transport media dried onto the swab. The kit should be stored at 2°C–30°C.

AI/ML Overview

Device Acceptance Criteria and Performance Study: Nano-Check Influenza+COVID-19 Dual Test

The Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunochromatographic assay for the qualitative detection and differentiation of influenza A, influenza B, and SARS-CoV-2 antigens in anterior nasal swab samples. The device's acceptance criteria and performance were established through extensive analytical and clinical studies.

1. Table of Acceptance Criteria and Reported Device Performance

The following table summarizes the key acceptance criteria and the performance achieved by the Nano-Check Influenza+COVID-19 Dual Test based on the provided 510(k) summary. Given that this is a qualitative assay, the primary performance metrics are Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) in clinical studies, and various measures of agreement/detection rates in analytical studies.

| Performance Metric Category | Acceptance Criteria (Implicit) | Reported Device Performance |
| --- | --- | --- |
| **Clinical performance** | | |
| SARS-CoV-2 | PPA ≥ 80% (typical for antigen tests); NPA ≥ 95% | PPA 87.6% (95% CI 83.0%–91.0%); NPA 99.8% (95% CI 99.5%–99.9%) |
| Influenza A | PPA ≥ 80%; NPA ≥ 95% | PPA 86.9% (95% CI 83.6%–89.6%); NPA 99.6% (95% CI 99.1%–99.8%) |
| Influenza B | PPA ≥ 80%; NPA ≥ 95% | PPA 86.8% (95% CI 79.4%–91.9%); NPA 99.7% (95% CI 99.4%–99.9%) |
| **Analytical performance** | | |
| Precision (within-lab) | 100% agreement for TN, HN, LP, and MP levels across runs/operators | 100% agreement at all levels (SARS-CoV-2, Flu A, Flu B) per operator per run |
| Precision (between-lot) | Consistent results across lots, especially for moderate and high positives | C90 levels: 83.3%–100% agreement; 3× LoD levels: 100% agreement |
| Reproducibility (multi-site, multi-operator) | High agreement across sites and operators for all sample types (TN, HN, LP, MP) | TN 100%; HN: COVID 100%, Flu A 100%, Flu B 99.4%; LP: COVID 100%, Flu A 99.4%, Flu B 100%; MP: COVID 100%, Flu A 100%, Flu B 100% |
| Cross-reactivity / microbial interference | No cross-reactivity or interference at tested concentrations | None observed with 50 pathogens (bacteria, fungi, viruses) or negative matrix |
| Endogenous/exogenous interference | No interference from common substances at tested concentrations | No interference from nasal sprays, pain relievers, hand sanitizers, and other biological substances, except hand sanitizer lotion, which caused a false-negative influenza B result at 15% w/v |
| Limit of detection (LoD) | Strain-specific LoD values per virus | SARS-CoV-2: 1.95×10² to 1.27×10⁴ TCID₅₀/mL (strain dependent); Influenza A: 2.8×10³ TCID₅₀/mL to 1.4×10⁵ CEID₅₀/mL (strain dependent); Influenza B: 1.04×10² TCID₅₀/mL to 2.25×10⁵ CEID₅₀/mL (strain dependent); WHO SARS-CoV-2 standard: 667 IU/mL |
| Analytical reactivity (inclusivity) | 100% detection for tested strains at specified concentrations | 100% detection (3/3 replicates) for 14 SARS-CoV-2, 31 influenza A, and 16 influenza B strains at specified concentrations |
| High-dose hook effect | No false negatives at high concentrations | None observed for any tested virus at concentrations up to 3.89×10⁴ TCID₅₀/mL (SARS-CoV-2), 2.8×10⁸ CEID₅₀/mL (Flu A), and 1.8×10⁷ TCID₅₀/mL (Flu B) |
| Competitive interference | No interference between targets in co-infection scenarios | None observed between SARS-CoV-2, influenza A, and influenza B at high/low titer combinations |
| Specimen stability | Stable results under specified storage conditions and times | Nasal swab samples stable for up to 48 hours at −20°C, 2–8°C, 23.5°C, and 30°C |
| External controls | 100% agreement with expected results for positive/negative controls | 100% agreement for all three lots of positive and negative external controls |
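For reference, PPA, NPA, and the Wilson-score 95% confidence intervals of the kind reported above can be computed from a 2×2 concordance table against the RT-PCR comparator. The sketch below uses hypothetical counts (not taken from the 510(k) summary) purely to illustrate the arithmetic:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def ppa_npa(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Positive/negative percent agreement versus the comparator method."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 2x2 counts, for illustration only:
tp, fp, fn, tn = 88, 4, 12, 896
ppa, npa = ppa_npa(tp, fp, fn, tn)
lo, hi = wilson_ci(tp, tp + fn)
print(f"PPA {ppa:.1%} (95% CI {lo:.1%}-{hi:.1%}), NPA {npa:.1%}")
```

The Wilson interval is commonly preferred over the simple normal approximation for proportions near 0% or 100%, which is typical for NPA values like those reported here.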

2. Sample Size Used for the Test Set and Data Provenance

  • Clinical Study Test Set Sample Size: A total of 1,969 subjects were enrolled in the clinical study.
  • Data Provenance: Data were collected in a multi-center, prospective clinical study conducted in the U.S. between November 2022 and February 2025.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

The device being reviewed is an in vitro diagnostic (IVD) test for antigen detection. For such devices, the "ground truth" in clinical performance studies is typically established by a highly sensitive and specific molecular assay (RT-PCR), rather than by human experts interpreting images or signals from the test device itself.

  • In this case, the ground truth for the clinical test set was established using an FDA-cleared RT-PCR method as the comparator.
  • The document does not specify the number of experts directly involved in establishing the RT-PCR ground truth or their qualifications beyond stating it was performed at a "reference laboratory as per the cleared instruction for use." This implies that qualified laboratory personnel, adhering to standardized RT-PCR protocols, established the ground truth.

4. Adjudication Method for the Test Set

Adjudication methods (e.g., 2+1, 3+1) are typically used in studies involving human interpretation (e.g., radiology reads) where discrepancies between readers need to be resolved. Since the Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunoassay interpreted visually by an operator, and the ground truth was established by an RT-PCR molecular assay, no explicit adjudication method for the test set is described or implied in the provided text. The comparison was directly between the device's visual results and the RT-PCR results.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No, an MRMC comparative effectiveness study was not done. This type of study (MRMC) is generally conducted for imaging AI devices to evaluate the impact of AI assistance on human reader performance. The Nano-Check Influenza+COVID-19 Dual Test is an in vitro diagnostic device for antigen detection, not an imaging AI device where human readers interpret complex images. Therefore, the concept of "human readers improve with AI vs without AI assistance" is not applicable to this device.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

Yes, in effect. Although the test is visually interpreted by an operator, the reported performance metrics (PPA and NPA) reflect the direct output of the device compared against the RT-PCR reference. There is no algorithm separate from the test strip's immunochemical reaction and visual readout; the operator simply reads the result the device displays. The "human-in-the-loop" element here is the visual interpretation of a clear positive/negative line, not a complex decision-making process aided by AI.

7. The Type of Ground Truth Used

The type of ground truth used for the clinical performance study was an FDA-cleared molecular assay (RT-PCR method). This is a highly sensitive and specific laboratory-based test considered the gold standard for detecting viral nucleic acids, making it appropriate for establishing true positive and true negative cases of infection.

8. The Sample Size for the Training Set

The provided document describes the performance data for the test set (clinical study and analytical validation). It does not specify a separate training set sample size. This is expected because the Nano-Check Influenza+COVID-19 Dual Test is a lateral flow immunoassay, not a machine learning or AI model that requires a distinct training phase with a labeled dataset. The development and optimization of such assays rely on biochemical and immunological principles, followed by rigorous analytical and clinical validation.

9. How the Ground Truth for the Training Set Was Established

As noted above, there isn't a "training set" in the machine learning sense for this type of IVD device. The development of the assay (e.g., selecting antibodies, optimizing reagents) would involve internal R&D studies, using characterized viral samples and clinical specimens, but these are part of the development process rather than a formal "training set" with ground truth establishment for an AI algorithm. The performance data presented is from the validation against established reference methods.
