Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K250273
    Date Cleared
    2025-06-13

    (134 days)

    Product Code
    Regulation Number
    866.3982
    Reference & Predicate Devices
    Device Name:

    BinaxNOW COVID-19 Ag Card

    Intended Use

    The BinaxNOW COVID-19 Ag Card is a lateral flow immunochromatographic assay for the rapid, qualitative detection of the SARS-CoV-2 nucleocapsid protein antigen directly in anterior nasal swab specimens from individuals with signs and symptoms of upper respiratory tract infection (i.e., symptomatic). The test is intended for use as an aid in the diagnosis of SARS-CoV-2 infections (COVID-19) in symptomatic individuals when either: tested at least twice over three days with at least 48 hours between tests; or when tested once, and negative by the BinaxNOW COVID-19 Ag Card and followed up with a molecular test.

    A negative test is presumptive and does not preclude SARS-CoV-2 infection; it is recommended these results be confirmed by a molecular SARS-CoV-2 assay.

    Positive results do not rule out co-infection with other bacteria or viruses and should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
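    The two pathways in the intended use (serial antigen testing at least 48 hours apart, or a single negative antigen test followed by a molecular test) amount to a simple decision flow. The sketch below is only an illustration of that labeled workflow; the function name and returned messages are invented here and are not part of the submission, and real-world use is governed by the product labeling.

```python
# Minimal sketch of the serial-testing workflow described in the intended use.
# Illustrative only: names and messages are invented, not from the 510(k) summary.
def next_step(visual_reads: list[str]) -> str:
    """visual_reads: BinaxNOW results to date, each "positive" or "negative"."""
    if not visual_reads:
        return "collect an anterior nasal swab and run the first BinaxNOW test"
    if visual_reads[-1] == "positive":
        return "positive: use as an aid in diagnosis, not as the sole basis for decisions"
    if len(visual_reads) == 1:
        return ("negative result is presumptive: repeat the antigen test after at least "
                "48 hours (within 3 days), or follow up with a molecular SARS-CoV-2 test")
    return "repeat negative: consider confirmation with a molecular SARS-CoV-2 assay"


print(next_step(["negative"]))
```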

    Device Description

    The BinaxNOW COVID-19 Ag Card is an immunochromatographic membrane assay that uses antibodies to detect SARS-CoV-2 nucleocapsid protein from anterior nasal swab specimens. SARS-CoV-2 specific antibodies and a control antibody are immobilized onto a membrane support as two distinct lines and combined with other reagents/pads to construct a test strip. This test strip and a well to hold the swab specimen are mounted on opposite sides of a cardboard, book-shaped hinged test card.

    To perform the test, an anterior nasal swab specimen is collected from the patient and 6 drops of extraction reagent from a dropper bottle are added to the top hole of the swab well. The patient sample is inserted into the test card through the bottom hole of the swab well and firmly pushed upwards until the swab tip is visible through the top hole. The swab is rotated 3 times clockwise and the card is closed, bringing the extracted sample into contact with the test strip. Test results are interpreted visually at 15 minutes based on the presence or absence of visually detectable pink/purple colored lines. Results should not be read after 30 minutes.

    AI/ML Overview

    The provided document is a 510(k) summary for the BinaxNOW COVID-19 Ag Card. It does not describe a study demonstrating that a device meets acceptance criteria in the manner typically associated with AI/ML-driven medical devices, which would involve metrics such as sensitivity, specificity, or AUC against a ground truth, often with human readers involved (MRMC studies).

    Instead, this document describes the validation of an immunochromatographic assay (a rapid antigen test) for COVID-19. The "acceptance criteria" here are typically performance targets for analytical and clinical characteristics (e.g., Limit of Detection, cross-reactivity, Positive Percent Agreement, Negative Percent Agreement). The "study" refers to the analytical and clinical studies conducted to demonstrate these performance characteristics.

    Therefore, the following response will interpret "acceptance criteria" as the performance benchmarks for a diagnostic assay and describe the validation studies for the BinaxNOW COVID-19 Ag Card based on the provided text.

    Here's a breakdown of the information requested, interpreted in the context of a rapid antigen test (not an AI/ML device):


    Acceptance Criteria and Device Performance for BinaxNOW COVID-19 Ag Card

    The BinaxNOW COVID-19 Ag Card is a lateral flow immunochromatographic assay, not an AI/ML diagnostic device. Therefore, the "acceptance criteria" are based on the analytical and clinical performance characteristics typical for such an in-vitro diagnostic (IVD) device, rather than metrics like AUC, sensitivity/specificity of an AI algorithm, or human reader improvement with AI assistance.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric/Study | Performance Target (Implicit/Typical for IVDs) | Reported Device Performance |
    |---|---|---|---|
    | Analytical Performance | Limit of Detection (LoD) | Lowest virus concentration detected ≥ 95% of the time (e.g., 19/20 replicates positive) | USA-WA1/2020: 3.5 x 10³ TCID50/mL (70 TCID50/swab); B.1.1.529 (Omicron): 1.6 x 10³ TCID50/mL (32.06 TCID50/swab); WHO International Standard (NIBSC 21/368): 375 IU/mL (7.5 IU/swab), with 100% detection (20/20) at this concentration |
    | Analytical Performance | Analytical Reactivity (Inclusivity) | Detection of various SARS-CoV-2 strains at specified concentrations (all 5 replicates positive at a given concentration) | Detected 19 different SARS-CoV-2 variants (Alpha, Beta, Delta, Gamma, Iota, Italy-INMI1, Kappa, Zeta, and Omicron variants including BA.2.3, BA.2.12.1, BA.2.75.5, BA.4.6, BA.5, BA.5.5, BF.5, BF.7, BQ.1, BQ.1.1, XBB, JN.1) at concentrations ranging from 8.75 x 10² TCID50/mL to 5.60 x 10⁴ TCID50/mL (or IU/mL for JN.1) |
    | Analytical Performance | Analytical Specificity (Cross-Reactivity) & Microbial Interference | No cross-reactivity or interference with common respiratory pathogens/commensals | No cross-reactivity or interference observed with 28 tested microorganisms (9 bacteria, 17 viruses, 1 yeast, pooled human nasal wash, and 4 Coronavirus HKU1 clinical specimens); in silico analysis for P. jirovecii showed very low potential for cross-reactivity; possible susceptibility to SARS-CoV (due to homology) noted, but deemed of low clinical likelihood |
    | Analytical Performance | High Dose Hook Effect | No hook effect at high viral concentrations | No high dose hook effect observed up to 1.4 x 10⁶ TCID50/mL |
    | Analytical Performance | Interfering Substances | No interference from specified endogenous or exogenous substances (e.g., common nasal medications, blood, mucin) | No effect on test performance found at specified concentrations for 25 substances (e.g., throat lozenges, various nasal sprays, hand sanitizer, blood, mucin) |
    | Analytical Performance | Reproducibility/Near the Cutoff | High agreement across sites for true negative, low positive, moderate positive, and high negative samples | Moderate Positive: 100% (135/135) overall agreement (95% CI: 97.2%-100.0%); Low Positive: 94.1% (127/135) (95% CI: 88.7%-97.0%); High Negative: 99.2% (132/133) (95% CI: 95.9%-99.9%); True Negative: 99.3% (134/135) (95% CI: 95.9%-99.9%) |
    | Clinical Performance | Positive Percent Agreement (PPA) | High PPA against a molecular comparator (RT-PCR) in symptomatic individuals | Overall (combined studies): 86.9% (186/214), 95% CI: 81.7%-90.8% (within 5 days of symptom onset); Original Study: 81.6% (71/87), 95% CI: 72.2%-88.4%; Omicron Study: 90.6% (115/127), 95% CI: 84.2%-94.5% |
    | Clinical Performance | Negative Percent Agreement (NPA) | High NPA against a molecular comparator (RT-PCR) in symptomatic individuals | Overall (combined studies): 98.5% (384/390), 95% CI: 96.7%-99.3% (within 5 days of symptom onset); Original Study: 98.6% (205/208), 95% CI: 95.8%-99.5%; Omicron Study: 98.4% (179/182), 95% CI: 95.3%-99.4% |
    | Clinical Performance | Performance by Days Post Symptom Onset (DPSO) | Performance maintained within the specified window | PPA by DPSO: Day 0: 69.23% (Omicron); Day 1: 94.12% (Original), 88.24% (Omicron); Day 2: 73.33% (Original), 97.22% (Omicron); Day 3: 76.00% (Original), 100.00% (Omicron); Day 4: 88.89% (Original), 66.67% (Omicron); Day 5: 100.00% (Original), 100.00% (Omicron) |
    | Clinical Performance | Invalid Rate | Low invalid rate | 0.68% overall (5/730) |
    | User/Environmental Factors | Flex Studies (Robustness) | Device performs accurately under varied usage and environmental conditions | Demonstrated robustness to usage variation and environmental factors; direct exposure of the test strip to wet cleaning solutions or excessive glove powder may cause erroneous results, which led to specific instructions for use |
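    The summary does not state which confidence-interval method underlies the reported 95% CIs, but the clinical agreement rows above are numerically consistent with a Wilson score interval. A minimal sketch under that assumption (function names are illustrative) reproduces the combined PPA and NPA figures:

```python
# Sketch: recomputing the reported overall clinical agreement statistics.
# Assumption: the 95% CIs are Wilson score intervals; the summary does not
# name the method, but the reported bounds match this calculation.
from math import sqrt

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

# Overall combined results within 5 days of symptom onset
ppa_lo, ppa_hi = wilson_ci(186, 214)   # PPA 86.9%, reported CI 81.7%-90.8%
npa_lo, npa_hi = wilson_ci(384, 390)   # NPA 98.5%, reported CI 96.7%-99.3%

print(f"PPA = {186/214:.1%}  (95% CI {ppa_lo:.1%}-{ppa_hi:.1%})")
print(f"NPA = {384/390:.1%}  (95% CI {npa_lo:.1%}-{npa_hi:.1%})")
```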

    2. Sample Sizes and Data Provenance (Clinical Studies)

    • Clinical Test Set Sample Size:
      • Study 1 (Original): 295 evaluable subjects.
      • Study 2 (Omicron): 309 evaluable subjects.
      • Combined Clinical Data: 604 evaluable nasal swabs from symptomatic patients (within 5 days of symptom onset).
    • Data Provenance: Clinical studies were conducted within the United States.
      • Study 1: November 2020 through March 2021 (before the Delta and Omicron variants became dominant).
      • Study 2: February 2022 to July 2022 (when Omicron and its variants were prevalent).
    • Retrospective/Prospective: Both clinical studies were prospective.

    3. Number of Experts and Qualifications for Ground Truth for Test Set

    This type of diagnostic device (lateral flow immunoassay) does not typically utilize human experts in the same way an AI/ML device would for image interpretation or clinical diagnosis. For the BinaxNOW COVID-19 Ag Card, the ground truth for the clinical studies was established by a comparator molecular test (RT-PCR). The experts involved would be the laboratory personnel performing and interpreting the RT-PCR assays. Their specific qualifications are not detailed in this summary but are implicitly assumed to be standard for clinical laboratory professionals performing EUA-authorized RT-PCR tests.

    4. Adjudication Method for the Test Set

    Not applicable in the typical sense for an AI/ML study involving human interpretation. The comparator method (RT-PCR) serves as the reference standard. The document mentions, for the serial testing study's composite comparator method, that in cases of discordant RT-PCR results a third RT-PCR test was performed, and the final result was based on majority rule.
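    As a minimal sketch of that 2+1 majority-rule logic (the function name and result strings are illustrative, not taken from the submission):

```python
# Sketch of the composite-comparator adjudication described above, assuming
# each RT-PCR result is reported simply as "positive" or "negative".
from collections import Counter

def composite_comparator(pcr1: str, pcr2: str, pcr3: str | None = None) -> str:
    """Return the reference result from two RT-PCRs, with a third as tie-breaker."""
    if pcr1 == pcr2:
        return pcr1                      # concordant: no adjudication needed
    if pcr3 is None:
        raise ValueError("discordant results require a third RT-PCR")
    counts = Counter([pcr1, pcr2, pcr3])
    return counts.most_common(1)[0][0]   # majority rule across the three tests

print(composite_comparator("positive", "positive"))              # -> positive
print(composite_comparator("positive", "negative", "negative"))  # -> negative
```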

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. This is a rapid antigen test, not an AI/ML system where human readers would interpret results "with vs. without AI assistance." The test is visually read by the user, and its performance is assessed against a molecular gold standard.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    This question is not applicable in the context of this device. The BinaxNOW COVID-19 Ag Card is a manually read, qualitative visual assay. There is no AI algorithm to evaluate for standalone performance. The "performance" tables provided in the document (PPA and NPA) essentially represent the "standalone" performance of the rapid antigen test itself when interpreted visually.

    7. Type of Ground Truth Used

    • For Clinical Studies: The primary ground truth for clinical performance (PPA, NPA) was an FDA Emergency Use Authorized real-time Polymerase Chain Reaction (RT-PCR) assay for the detection of SARS-CoV-2.
    • For Serial Testing Study: A composite comparator method was used, involving at least two highly sensitive EUA RT-PCRs. If discordant, a third RT-PCR was performed, and the final result was based on majority rule.
    • For Analytical Studies: Ground truth was established by known concentrations of heat-inactivated SARS-CoV-2 virus or WHO International Standard for SARS-CoV-2 Antigen (NIBSC 21/368) for LoD and inclusivity studies, and known presence/absence of specific microorganisms for cross-reactivity.

    8. Sample Size for the Training Set

    This information is not applicable for this type of IVD device. The BinaxNOW COVID-19 Ag Card is a conventional lateral flow immunoassay, not an AI/ML model that is 'trained' on data. Its 'training' is the fundamental assay development and optimization process, not a computational training set.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable for this device type. The manufacturing process and quality control of the reagents and test strip govern its 'performance' characteristics, which are then analytically and clinically validated.


    K Number
    K243518
    Date Cleared
    2025-02-11

    (90 days)

    Product Code
    Regulation Number
    866.3984
    Reference & Predicate Devices
    Device Name:

    BinaxNOW™ COVID-19 Antigen Self Test; BinaxNOW™ COVID-19 Ag Card

    Intended Use

    The BinaxNOW COVID-19 Antigen Self Test is a visually read lateral flow immunoassay intended for the rapid, qualitative detection of SARS-CoV-2 nucleocapsid protein antigens directly in anterior nasal (nares) swab specimens from individuals with signs and symptoms of COVID-19. This test is for non-prescription home use by individuals aged 15 years or older testing themselves, or adults testing individuals aged 2 years or older.

    All negative results are presumptive. Symptomatic individuals with an initial negative test result must be re-tested once between 48 and 72 hours after the first test using either an antigen test or a molecular test for SARS-CoV-2. Negative results do not preclude SARS-CoV-2 infection or infection with other pathogens and should not be used as the sole basis for treatment decisions.

    Positive results do not rule out co-infection with other respiratory pathogens.

    This test is not a substitute for visits to a healthcare provider or appropriate follow-up and should not be used to determine any treatments without provider supervision. Individuals who test negative and experience continued or worsening COVID-19 like symptoms, such as fever, cough and/or shortness of breath, should seek follow up care from their healthcare provider.

    The performance characteristics for SARS-CoV-2 were established from November 2020 to July 2022, when SARS-CoV-2 Delta and Omicron were dominant. Test accuracy may change as new SARS-CoV-2 viruses emerge. Additional testing with a lab-based molecular test (e.g., PCR) should be considered in situations where a new virus or variant is suspected.

    Device Description

    The BinaxNOW COVID-19 Antigen Self Test is an immunochromatographic membrane assay that uses highly sensitive antibodies to detect SARS-CoV-2 nucleocapsid protein from nasal swab specimens. SARS-CoV-2 specific antibodies and a control antibody are immobilized onto a membrane support as two distinct lines and combined with other reagents/pads to construct a test strip. This test strip and a well to hold the swab specimen are mounted on opposite sides of a cardboard, book-shaped hinged test card.

    To perform the test, a nasal swab specimen is collected from the patient and 6 drops of extraction reagent from a dropper bottle are added to the top hole of the swab well. The patient sample is inserted into the test card through the bottom hole of the swab well and firmly pushed upwards until the swab tip is visible through the top hole. The swab is rotated 3 times clockwise and the card is closed, bringing the extracted sample into contact with the test strip. Test results are interpreted visually at 15 minutes based on the presence or absence of visually detectable pink/purple colored lines. Results should not be read after 30 minutes.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance for BinaxNOW COVID-19 Antigen Self Test

    The document conveys acceptance criteria implicitly, through the reported performance metrics of the various analytical and clinical studies. Explicit pass/fail thresholds are not stated in the way a test specification would present them; instead, the reported performance serves to demonstrate the device's capabilities.

    1. Table of Acceptance Criteria (Implied) and Reported Device Performance

    | Acceptance Criteria Category (Implied) | Metric/Study | Reported Device Performance |
    |---|---|---|
    | Analytical Performance | Precision | Lot 1: 5X LoD (100%), 1X LoD (100%), High Negative (100%), Negative (100%); Lot 2: 5X LoD (100%), 1X LoD (97.8%), High Negative (100%), Negative (100%); Lot 3: 5X LoD (100%), 1X LoD (95.8%), High Negative (100%), Negative (100%) |
    | Analytical Performance | Limit of Detection (LoD) | USA-WA1/2020: $3.5 \times 10^3$ TCID50/mL (70 TCID50/swab); B.1.1.529 (Omicron): $1.6 \times 10^3$ TCID50/mL (32.06 TCID50/swab); International Standard for SARS-CoV-2 Ag (NIBSC 21/368): 375 IU/mL (7.5 IU/swab) |
    | Analytical Performance | Analytical Reactivity (Inclusivity) | Detected all tested SARS-CoV-2 strains (Alpha, Beta, Delta, Gamma, Iota, Italy-INMI1, Kappa, Zeta, and Omicron variants including BA.2.3, BA.2.12.1, BA.2.75.5, BA.4.6, BA.5, BA.5.5, BF.5, BF.7, BQ.1, BQ.1.1, XBB, JN.1) at specified concentrations (e.g., Alpha at $2.80 \times 10^5$ TCID50/mL, Omicron (BA.5.5) at $8.80 \times 10^2$ TCID50/mL) |
    | Analytical Performance | Analytical Specificity (Cross-Reactivity/Interference) | No cross-reactivity or interference observed with 29 common commensal and pathogenic microorganisms (9 bacteria, 17 viruses, 1 yeast, pooled human nasal wash) at specified concentrations (1 x $10^6$ CFU/mL for bacteria/yeast, 1 x $10^5$ TCID50/mL for viruses), both in the absence and presence of SARS-CoV-2 at 3x LoD |
    | Analytical Performance | High Dose Hook Effect | No high dose hook effect observed up to 1.4 x $10^6$ TCID50/mL of inactivated SARS-CoV-2 virus |
    | Analytical Performance | Interfering Substances | No effect on test performance by 25 specified substances (e.g., throat lozenges, various nasal sprays, hand sanitizer/soap, blood, mucin, common medications) at specified concentrations |
    | Usability Performance | Usability Study (Procedural Execution) | 98% correct execution of procedural steps by lay users |
    | Usability Performance | Usability Study (Impact of Errors) | 100% of participants produced a valid result and interpreted their test result correctly |
    | Usability Performance | Lay User Readability | Overall (n=30): Positive Control (100%), Positive 2x LoD (83%), Positive 1.5x LoD (67%), Positive ≤1x LoD (60%), Negative Control (97%), Invalid 1 (97%), Invalid 2 (97%), Invalid 3 (100%), Invalid 4 (97%); performance decreases with faint sample lines and is influenced by age and visual capabilities |
    | Clinical Performance | Overall/Combined (within 5 days of symptom onset) | Positive Agreement (Sensitivity): 86.9% (95% CI: 81.7, 90.8) (186/214); Negative Agreement (Specificity): 98.5% (95% CI: 96.7, 99.3) (384/390); Invalid Rate: 0.68% (5/730) |
    | Clinical Performance | Original Study (Nov 2020 - Mar 2021) | Positive Agreement: 81.6% (95% CI: 72.2, 88.4) (71/87); Negative Agreement: 98.6% (95% CI: 95.8, 99.5) (205/208); Overall Agreement: 93.6% (95% CI: 90.2, 95.8); Invalid Rate: 0.76% (3/397) |
    | Clinical Performance | Omicron Study (Feb 2022 - Jul 2022) | Positive Agreement: 90.6% (95% CI: 84.2, 94.5) (115/127); Negative Agreement: 98.4% (95% CI: 95.3, 99.4) (179/182); Invalid Rate: 0.61% (2/327) |
    | Clinical Performance | PPA Stratified by Days Post Symptom Onset (DPSO) | Varied by day and study; e.g., Original Study Day 1: 94.12%, Omicron Study Day 3: 100.0% |
    | Clinical Performance | Serial Testing PPA | Symptomatic on first day of testing (2 tests): Day 0: 59.6% (34/57), Day 2: 93.5% (58/62), Day 4: 94.8% (55/58); Symptomatic on first day of testing (3 tests): Day 0: 92.2% (47/51), Day 2: 98.3% (59/60), Day 4: 98.1% (53/54) |
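    For context on the LoD rows above: a confirmed LoD for this kind of assay is conventionally read off a dilution series as the lowest concentration at which at least 95% of replicates are positive (e.g., 19/20 or 20/20). A minimal sketch of that selection rule, using made-up concentrations and hit counts rather than the actual study data:

```python
# Sketch: identify the confirmed LoD as the lowest concentration in a dilution
# series with >= 95% of replicates positive. Values below are placeholders.
def confirmed_lod(dilution_series: list[tuple[float, int, int]]) -> float | None:
    """dilution_series: (concentration, positives, replicates) tuples."""
    lod = None
    for conc, positives, replicates in sorted(dilution_series, reverse=True):
        if positives / replicates >= 0.95:
            lod = conc                   # keep walking down while the criterion holds
        else:
            break                        # first failing concentration ends the search
    return lod

series = [
    (7.0e3, 20, 20),    # hypothetical hit rates per concentration (TCID50/mL)
    (3.5e3, 19, 20),
    (1.75e3, 12, 20),
]
print(confirmed_lod(series))  # -> 3500.0 under these made-up counts
```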

    2. Sample Sizes and Data Provenance

    Test Set (Clinical Studies):

    • Total for Clinical Performance: 604 nasal swabs from symptomatic patients within 5 days of symptom onset.
    • Study 1 (Original Study): 295 subjects (resulting in 295 evaluable samples).
    • Study 2 (Omicron Study): 309 subjects (resulting in 309 evaluable samples).
    • Serial Testing Study: 5,600 eligible participants for analysis, out of 7,361 enrolled. 154 tested positive for SARS-CoV-2 infection by RT-PCR.
    • Provenance: All subjects were from the United States.
      • Study 1: Prospective, conducted from November 2020 through March 2021 across five investigational sites (before the Delta and Omicron variants became dominant).
      • Study 2 (Omicron Study): Prospective, "all comers, real world," conducted from March 2022 to July 2022 at a high-volume COVID community testing site (when Omicron and its variants were prevalent). Led by Johns Hopkins Medicine in collaboration with the University of Maryland Medical Center and Maryland Department of Health.
      • Serial Testing Study: Prospective, decentralized clinical study conducted between January 2021 and May 2022 as part of the Rapid Acceleration of Diagnostics (RADx) initiative from NIH, with broad geographical representation in the U.S.

    Test Set (Usability Studies):

    • Lay User Readability Study: 30 users across various age ranges and with and without vision impairments.

    3. Number of Experts and Qualifications for Ground Truth

    • The document does not explicitly state the number of experts used to establish the ground truth for the test set in the clinical studies.
    • The ground truth in the clinical studies was established using FDA Emergency Use Authorized real-time Polymerase Chain Reaction (RT-PCR) assays for the detection of SARS-CoV-2. These are laboratory-based molecular tests, implying the involvement of qualified laboratory personnel (e.g., medical technologists, molecular diagnosticians) experienced in performing and interpreting these assays, but specific qualifications are not detailed. In the serial testing study, the composite comparator method involved "at least two highly sensitive EUA RT-PCRs" and a third if discordant, performed by presumably qualified laboratory personnel.

    4. Adjudication Method for the Test Set

    • Clinical Studies (Primary Performance): No explicit adjudication method is described for discrepancies between the BinaxNOW test and the comparator RT-PCR. The RT-PCR is considered the gold standard (comparator method).
    • Serial Testing Study (Composite Comparator): A form of adjudication was used for the molecular comparator itself: "If results of the first two molecular tests were discordant a third highly sensitive EUA RT-PCR test was performed, and the final test result was based upon the majority rule." This is a 2+1 adjudication method for establishing the RT-PCR ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was conducted to evaluate human readers with and without AI assistance. The BinaxNOW COVID-19 Antigen Self Test is a visually read lateral flow immunoassay, interpreted directly by the user, and does not involve AI assistance for result interpretation.

    6. Standalone (Algorithm Only) Performance Study

    • No standalone (algorithm only) performance study was conducted. As a visually read immunoassay, the device relies on human interpretation. The "lay user readability study" specifically evaluates human interpretation, not an automated algorithm.

    7. Type of Ground Truth Used

    • Clinical Studies: The primary ground truth for clinical performance was established using FDA Emergency Use Authorized real-time Polymerase Chain Reaction (RT-PCR) assays for SARS-CoV-2.
    • Analytical Studies (LoD, Reactivity, Specificity): The ground truth was based on defined concentrations of inactivated SARS-CoV-2 virus strains, international standards, or specific microbial/substance concentrations, as prepared by qualified laboratory personnel.

    8. Sample Size for the Training Set

    • The document describes performance evaluation studies (test sets) for the BinaxNOW device. It does not provide information about a "training set" in the context of machine learning, as this is a traditional in-vitro diagnostic device that relies on chemical reactions and visual interpretation, not an AI/ML-based device.

    9. How the Ground Truth for the Training Set was Established

    • As there is no mention of an AI/ML training set, this information is not applicable and not provided in the document.
