K Number
K241188
Date Cleared
2025-04-18 (354 days)

Product Code
Regulation Number
866.3328
Panel
MI
Reference & Predicate Devices
Intended Use

The Acucy Influenza A&B Test is a rapid chromatographic immunoassay for the qualitative detection and differentiation of influenza A and B viral nucleoprotein antigens directly from anterior nasal and nasopharyngeal swabs from patients with signs and symptoms of respiratory infection. The test is intended for use with the Acucy or Acucy 2 Reader as an aid in the diagnosis of influenza A and B viral infections. The test is not intended for the detection of influenza C viruses. Negative test results are presumptive and should be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions.

Performance characteristics for influenza A were established during the 2017-2018 influenza season when influenza A/H3N2 and A/H1N1pdm09 were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.

If an infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to a state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.

Device Description

The Acucy Influenza A&B Test allows for the differential detection of influenza A and influenza B antigens when used with the Acucy 2 Reader. The patient sample is placed in the Extraction Buffer vial, where the virus particles in the sample are disrupted, exposing internal viral nucleoproteins. After disruption, the sample is dispensed into the Test Cassette sample well and migrates along the membrane surface. If influenza A or B viral antigens are present, they form a complex with mouse monoclonal antibodies to influenza A and/or B nucleoproteins conjugated to colloidal gold. The complex is then bound by a rat anti-influenza A and/or mouse anti-influenza B antibody coated on the nitrocellulose membrane.

Depending on the operator's choice, the Test Cassette is either placed inside the Acucy 2 Reader for automatically timed development (WALK AWAY Mode) or left on the counter or bench top for manually timed development and then placed into the Acucy 2 Reader to be scanned (READ NOW Mode).

The Acucy 2 Reader scans the Test Cassette, measures the absorbance intensity, and processes the results using method-specific algorithms. It displays the test result as POS (+), NEG (-), or INVALID on the screen. The results can also be automatically printed on the optional Printer if that option is selected.
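
The result call is essentially a threshold comparison on the measured line intensities. The sketch below is a minimal illustration of that idea, not the actual Acucy 2 algorithm: the 6.4 mABS (Flu A line) and 5.4 mABS (Flu B line) cut-offs come from the analytical cutoff study summarized later, while the function name, control-line check, and INVALID handling are assumptions.

```python
# Minimal sketch of threshold-based result calling for a lateral-flow reader.
# NOT the actual Acucy 2 algorithm; the cut-offs (6.4/5.4 mABS) come from the
# analytical cutoff study described below, and the control-line logic is assumed.

FLU_A_CUTOFF_MABS = 6.4
FLU_B_CUTOFF_MABS = 5.4

def call_result(flu_a_mabs: float, flu_b_mabs: float, control_line_ok: bool) -> dict:
    """Return POS/NEG per analyte, or INVALID if the control line failed."""
    if not control_line_ok:
        return {"Flu A": "INVALID", "Flu B": "INVALID"}
    return {
        "Flu A": "POS (+)" if flu_a_mabs >= FLU_A_CUTOFF_MABS else "NEG (-)",
        "Flu B": "POS (+)" if flu_b_mabs >= FLU_B_CUTOFF_MABS else "NEG (-)",
    }

print(call_result(12.3, 1.8, control_line_ok=True))  # {'Flu A': 'POS (+)', 'Flu B': 'NEG (-)'}
```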

AI/ML Overview

Here's a breakdown of the acceptance criteria and the studies performed for the Acucy Influenza A&B Test with the Acucy 2 System, based on the provided FDA 510(k) clearance letter:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state acceptance criteria for each study; it presents results together with the implication that they demonstrate the equivalence and performance of the device. For the purpose of this table, the implicit acceptance criteria are inferred from the expected outcomes and from the conclusion that the device is substantially equivalent.

| Performance Metric | Implicit Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|---|
| Within-Laboratory Repeatability (Acucy) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu B HN: 100% (80/80); Negative: 100% (80/80) |
| Within-Laboratory Repeatability (Acucy 2) | All positive samples (MP, LP) detect as positive (100% agreement); all negative samples (HN, N) detect as negative (100% agreement). | Flu A MP: 100% (80/80); Flu A LP: 100% (80/80); Flu A HN: 100% (80/80); Flu B MP: 100% (80/80); Flu B LP: 100% (80/80); Flu B HN: 100% (80/80); Negative: 100% (80/80) |
| Instrument-to-Instrument Precision | All positive samples detect as positive (100% agreement); all negative samples detect as negative (100% agreement). | Flu A M (2x LoD): 75/75 (Pass); Flu A L (0.95x LoD): 75/75 (Pass); Flu A HN (0.05x LoD): 0/75 (Pass, expected negative); Flu B M (2x LoD): 75/75 (Pass); Flu B L (0.95x LoD): 75/75 (Pass); Flu B HN (0.05x LoD): 0/75 (Pass, expected negative); Negative: 0/75 (Pass, expected negative) |
| Test Mode Equivalency | All positive samples detect as positive; all negative samples detect as negative; results equivalent between READ NOW and WALK AWAY modes. | Flu A+/B-: 20/20 POS Flu A, 20/20 NEG Flu B in both READ NOW and WALK AWAY modes; Flu A-/B+: 20/20 NEG Flu A, 20/20 POS Flu B in both modes; Flu A-/B- negative: 20/20 NEG Flu A, 20/20 NEG Flu B in both modes |
| Limit of Detection (LoD) | Acucy 2 LoD equivalent to the Acucy LoD (e.g., ≥95% detection rate at the lowest concentration). | Influenza A/Michigan: LoD 2.82E+02 TCID50/mL (Acucy 20/20; Acucy 2 Devices A and B 20/20); Influenza A/Singapore: LoD 3.16E+03 TCID50/mL (Acucy 20/20; Acucy 2 Devices A and B 20/20); Influenza B/Phuket: LoD 2.09E+02 TCID50/mL (Device A) and 4.17E+02 TCID50/mL (Device B), Acucy and Acucy 2 20/20 each; Influenza B/Colorado: LoD 2.82E+02 TCID50/mL (Device A) and 7.05E+02 TCID50/mL (Device B), Acucy and Acucy 2 20/20 each |
| Analytical Cutoff (LoB) | All blank samples negative (0 mABS); cutoff values consistent with the predicate device. | All blank samples showed 0 mABS; Acucy 2 analytical cut-offs were set to match the previously established 6.4 mABS for the Flu A line and 5.4 mABS for the Flu B line (from the predicate Acucy system) |
| Cross Contamination | No cross-contamination (high-titer positives detect as positive, negatives detect as negative). | Flu A high positive: 30/30 (Pass); Flu B high positive: 30/30 (Pass); Negative: 60/60 (Pass) |
| Method Comparison (Acucy vs. Acucy 2) | High positive percent agreement (PPA) and negative percent agreement (NPA) relative to the Acucy Reader (close to 100%). | Influenza A: PPA 100% (30/30), NPA 98.3% (59/60); Influenza B: PPA 100% (30/30), NPA 100% (60/60) |
| Flex Studies | All hazards and sources of potential error are controlled. | All tests showed expected results, indicating correct performance under the "flex" conditions evaluated (temperature, humidity, vibration, lighting, air draft, altitude, non-level position, cassette read-window contamination, movement in WALK AWAY mode, test cassette movement/vertical incubation, reader drawer positioning); all hazards controlled through design and labeling mitigations |
| External Multi-Site Reproducibility (Acucy) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 98.9% (89/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Negative: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 98.9% (89/90); Negative: 100% (90/90) |
| External Multi-Site Reproducibility (Acucy 2) | High agreement (close to 100%) for all panel members across sites and operators. | Influenza A HN: 100% (90/90); Influenza A LP: 100% (90/90); Influenza A MP: 100% (90/90); Influenza B HN: 100% (90/90); Influenza B LP: 100% (90/90); Influenza B MP: 100% (90/90); Influenza A & B Negative: 100% (90/90) |
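
For reference, the method-comparison agreement figures in the table can be reproduced with a short script, and the same arithmetic underlies the clinical performance criteria in 21 CFR 866.3328(b)(1)(i) quoted at the end of this page (PPA point estimate ≥80% with a 95% CI lower bound ≥70%; NPA ≥95% with a lower bound ≥90%). This is a hedged sketch: the Wilson score interval is a common choice for such bounds, but the submission does not state which interval method was used, and the method-comparison study measures reader-to-reader agreement rather than clinical performance against a comparator.

```python
# Sketch: percent agreement with a 95% Wilson score interval, checked against the
# numeric decision rule in 21 CFR 866.3328(b)(1)(i). The Wilson interval is an
# assumption; the 510(k) summary does not state which CI method was used.
from math import sqrt

def agreement_with_ci(concordant: int, total: int, z: float = 1.96):
    """Point estimate and 95% Wilson score interval for an agreement proportion."""
    p = concordant / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, center - half, center + half

def meets_criterion(point: float, lower: float, point_min: float, lower_min: float) -> bool:
    """True if the point estimate and CI lower bound both meet their minimums."""
    return point >= point_min and lower >= lower_min

# Influenza A, Acucy 2 vs. Acucy Reader (figures from the table above)
ppa, ppa_lo, _ = agreement_with_ci(30, 30)   # PPA 100% (30/30)
npa, npa_lo, _ = agreement_with_ci(59, 60)   # NPA 98.3% (59/60)
print(f"PPA {ppa:.1%} (95% CI lower bound {ppa_lo:.1%}), "
      f"meets 80/70 rule: {meets_criterion(ppa, ppa_lo, 0.80, 0.70)}")
print(f"NPA {npa:.1%} (95% CI lower bound {npa_lo:.1%}), "
      f"meets 95/90 rule: {meets_criterion(npa, npa_lo, 0.95, 0.90)}")
```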

2. Sample Size Used for the Test Set and Data Provenance

  • Precision Study (Repeatability & Instrument-to-Instrument): These studies primarily used contrived samples (prepared in the laboratory by spiking virus into clinical matrix) rather than naturally occurring patient samples.
    • Repeatability: 80 replicates per panel member (Flu A MP, LP, HN; Flu B MP, LP, HN; Negative) for both Acucy and Acucy 2. Total of 7 x 80 = 560 tests per device (Acucy and Acucy 2).
    • Instrument-to-Instrument Precision: 75 replicates per panel member (7 panel members). Total of 7 x 75 = 525 tests per device.
    • Data Provenance: Laboratory-generated, in vitro data. The clinical matrix used to prepare the contrived samples is described for the LoD studies as "nasal swab samples... collected from healthy donors and confirmed Flu negative by PCR"; the precision studies likely used similar material. No country of origin is stated, though data supporting FDA submissions are typically generated in the US or under comparable quality systems. These are laboratory-prepared samples rather than prospectively collected clinical specimens.
  • Test Mode Equivalency: 20 replicates each of contrived positive Flu A, 20 replicates of contrived positive Flu B, and 20 Flu A and Flu B negative samples. Total of 60 tests (3 x 20 replicates). Data provenance is laboratory-generated/contrived.
  • Limit of Detection (LoD):
    • Range Finding: 5 replicates per concentration for multiple strains and dilutions (as shown in Table 5).
    • Confirmation Testing: 20 replicates per concentration at the established LoD (the hit-rate criterion is sketched after this list).
    • Data Provenance: Contrived samples using pooled negative clinical matrix from healthy donors (confirmed Flu negative by PCR). Laboratory-generated, in vitro data.
  • Analytical Cutoff Study: 60 replicates of a blank sample per lot. Total of 2 lots, so 120 tests. Data provenance is laboratory-generated/contrived.
  • Cross-Contamination Study: 30 high titer Flu A positive, 30 high titer Flu B positive, and 60 negative samples. Total of 120 tests. Data provenance is laboratory-generated/contrived.
  • Method Comparison (Acucy Reader vs. Acucy 2 Reader):
    • Test Set: 30 PCR-confirmed Flu A positive clinical samples, 30 PCR-confirmed Flu B positive clinical samples, and 30 Flu A and Flu B negative clinical samples.
    • Total N for Flu A analysis: 30 Flu A positive + (30 Flu B positive + 30 double negative) = 90 samples.
    • Total N for Flu B analysis: 30 Flu B positive + (30 Flu A positive + 30 double negative) = 90 samples.
    • Data Provenance: Clinical samples (retrospective, given they are PCR-confirmed and a specific count is provided). No country of origin is explicitly stated.
  • CLIA Waiver Studies (Flex Studies): 5 replicates for each flex condition (Negative, Low Positive Flu A, Low Positive Flu B). Number of flex conditions is not explicitly totaled but over 10 types are listed. Data provenance is laboratory-generated/contrived.
  • Reproducibility Studies (External Multi-Site):
    • Acucy System: Panel of 7 samples (Flu A HN, LP, MP; Flu B HN, LP, MP; Negative), tested by two operators per site at 3 sites over 5 non-consecutive days. The reported denominators work out to 30 replicates per sample type per site (consistent with 2 operators × 5 days × 3 replicates, although the per-run replicate count is not stated explicitly), i.e., 90 replicates per sample type across all sites. Overall N for Flu A or Flu B: 4 sample types × 90 replicates = 360 tests.
    • Acucy 2 System: Same design as above. 90 replicates per sample type (Flu A HN, LP, MP; Flu B HN, LP, MP; Influenza A & B Negative), across 3 sites. Overall N for Flu A or Flu B: 4 sample types * 90 replicates = 360 tests.
    • Data Provenance: Contrived samples (negative, high negative, low positive, moderate positive) with coded, randomized, and masked conditions. Tested at 3 "point-of-care (POC) sites" for Acucy and 3 "laboratory sites" for Acucy 2. This suggests real-world testing environments, but with contrived samples.
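
The LoD confirmation criterion referenced in the LoD item above (≥95% detection at the claimed concentration, evaluated here with 20 replicates per strain) reduces to a simple hit-rate check. The sketch below is illustrative only; the function name and structure are assumptions rather than the sponsor's actual protocol.

```python
# Sketch of the LoD confirmation check: the claimed LoD concentration must be
# detected in at least 95% of replicates (20 replicates per concentration here).
# Illustrative only; not the sponsor's protocol.

def lod_confirmed(positive_replicates: int, total_replicates: int,
                  required_rate: float = 0.95) -> bool:
    """True if the observed detection rate meets the >=95% criterion."""
    return positive_replicates / total_replicates >= required_rate

print(lod_confirmed(20, 20))  # True, e.g., A/Michigan at 2.82E+02 TCID50/mL (20/20)
print(lod_confirmed(18, 20))  # False; a 90% detection rate would not confirm the LoD
```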

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

For most analytical studies (precision, LoD, analytical cutoff, cross-contamination, flex studies), the ground truth is established by known concentrations of spiked viral material in a controlled laboratory setting. Therefore, dedicated "experts" for ground truth adjudication in these cases are not applicable in the same way as for clinical studies.

For the Method Comparison study (Acucy Reader vs. Acucy 2 Reader), the ground truth for the clinical samples was established by PCR confirmation. The document does not specify the number of experts or their qualifications for interpreting these PCR results, but PCR results are generally considered a high standard for viral detection.

For the Reproducibility Studies, the ground truth for the test panel was established by the known composition of the contrived samples (e.g., negative, high negative, low positive, moderate positive).

4. Adjudication Method for the Test Set

  • Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Adjudication is inherently by known input concentration or known sample composition. There isn't an "adjudication method" in the sense of multiple human reviewers; rather, it's a comparison to the predefined true state of the contrived sample.
  • Method Comparison Study: The ground truth for clinical samples was established by PCR confirmed results. The device's results were compared against these PCR results. There is no mention of human expert adjudication (e.g., 2+1 or 3+1 consensus) for the PCR results themselves or for resolving discrepancies between the device and PCR.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

There is no MRMC comparative effectiveness study described in this document.

  • The device is an automated reader for a rapid chromatographic immunoassay. It does not appear to involve human interpretation of images or complex data that would typically benefit from AI assistance in the way an MRMC study evaluates.
  • The study focuses on the performance of the device (Acucy 2 System) only compared to a predicate device (Acucy System) and against laboratory-defined ground truths. There's no "human-in-the-loop" aspect being evaluated in terms of improved human reader performance with AI.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

Yes, the studies presented are primarily standalone performance studies for the Acucy 2 System. The device (Acucy 2 Reader) automatically scans the test cassette and processes results using "method-specific algorithms" (Page 6). The output is "POS (+), NEG (-), or INVALID" displayed on the screen. The entire workflow described (from sample application to reader result) represents the standalone performance of the device and its embedded algorithms.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The ground truth used varied depending on the study:

  • Analytical Studies (Precision, LoD, Analytical Cutoff, Cross-Contamination, Flex Studies, Reproducibility): Ground truth was based on known concentrations of spiked viral material in contrived samples. For negative controls, it was the absence of the target virus.
  • Method Comparison Study: Ground truth for clinical samples was established by PCR confirmation.

8. The sample size for the training set

The document does not explicitly describe a training set or its sample size. The reported studies are primarily verification and validation studies to demonstrate performance and equivalence of the Acucy 2 System compared to the predicate Acucy System. For medical devices, especially immunoassay readers, algorithms are often developed and locked down before these validation studies are performed. If machine learning or AI was used in the algorithm development, the training data would precede these clearance studies and is typically not fully disclosed in a 510(k) summary unless directly relevant to a specific "software change" or unique characteristic being validated.

9. How the ground truth for the training set was established

As no training set is explicitly mentioned, the method for establishing its ground truth is also not specified in this document. If algorithmic development involved a training phase, it's highly probable that contrived samples with known viral concentrations and PCR-confirmed clinical samples with known outcomes would have been utilized for this purpose.

§ 866.3328 Influenza virus antigen detection test system.

(a) Identification. An influenza virus antigen detection test system is a device intended for the qualitative detection of influenza viral antigens directly from clinical specimens in patients with signs and symptoms of respiratory infection. The test aids in the diagnosis of influenza infection and provides epidemiological information on influenza. Due to the propensity of the virus to mutate, new strains emerge over time which may potentially affect the performance of these devices. Because influenza is highly contagious and may lead to an acute respiratory tract infection causing severe illness and even death, the accuracy of these devices has serious public health implications.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) The device's sensitivity and specificity performance characteristics or positive percent agreement and negative percent agreement, for each specimen type claimed in the intended use of the device, must meet one of the following two minimum clinical performance criteria:
(i) For devices evaluated as compared to an FDA-cleared nucleic acid based-test or other currently appropriate and FDA accepted comparator method other than correctly performed viral culture method:
(A) The positive percent agreement estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 80 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 70 percent.
(B) The negative percent agreement estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 95 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 90 percent.
(ii) For devices evaluated as compared to correctly performed viral culture method as the comparator method:
(A) The sensitivity estimate for the device when testing for influenza A must be at the point estimate of at least 90 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 80 percent. The sensitivity estimate for the device when testing for influenza B must be at the point estimate of at least 80 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 70 percent.
(B) The specificity estimate for the device when testing for influenza A and influenza B must be at the point estimate of at least 95 percent with a lower bound of the 95 percent confidence interval that is greater than or equal to 90 percent.
(2) When performing testing to demonstrate the device meets the requirements in paragraph (b)(1) of this section, a currently appropriate and FDA accepted comparator method must be used to establish assay performance in clinical studies.
(3) Annual analytical reactivity testing of the device must be performed with contemporary influenza strains. This annual analytical reactivity testing must meet the following criteria:
(i) The appropriate strains to be tested will be identified by FDA in consultation with the Centers for Disease Control and Prevention (CDC) and sourced from CDC or an FDA-designated source. If the annual strains are not available from CDC, FDA will identify an alternative source for obtaining the requisite strains.
(ii) The testing must be conducted according to a standardized protocol considered and determined by FDA to be acceptable and appropriate.
(iii) By July 31 of each calendar year, the results of the last 3 years of annual analytical reactivity testing must be included as part of the device's labeling. If a device has not been on the market long enough for 3 years of annual analytical reactivity testing to have been conducted since the device received marketing authorization from FDA, then the results of every annual analytical reactivity testing since the device received marketing authorization from FDA must be included. The results must be presented as part of the device's labeling in a tabular format, which includes the detailed information for each virus tested as described in the certificate of authentication, either by:
(A) Placing the results directly in the device's § 809.10(b) of this chapter compliant labeling that physically accompanies the device in a separate section of the labeling where the analytical reactivity testing data can be found; or
(B) In the device's label or in other labeling that physically accompanies the device, prominently providing a hyperlink to the manufacturer's public Web site where the analytical reactivity testing data can be found. The manufacturer's home page, as well as the primary part of the manufacturer's Web site that discusses the device, must provide a prominently placed hyperlink to the Web page containing this information and must allow unrestricted viewing access.
(4) If one of the actions listed at section 564(b)(1)(A)-(D) of the Federal Food, Drug, and Cosmetic Act occurs with respect to an influenza viral strain, or if the Secretary of Health and Human Services (HHS) determines, under section 319(a) of the Public Health Service Act, that a disease or disorder presents a public health emergency, or that a public health emergency otherwise exists, with respect to an influenza viral strain:
(i) Within 30 days from the date that FDA notifies manufacturers that characterized viral samples are available for test evaluation, the manufacturer must have testing performed on the device with those viral samples in accordance with a standardized protocol considered and determined by FDA to be acceptable and appropriate. The procedure and location of testing may depend on the nature of the emerging virus.
(ii) Within 60 days from the date that FDA notifies manufacturers that characterized viral samples are available for test evaluation and continuing until 3 years from that date, the results of the influenza emergency analytical reactivity testing, including the detailed information for the virus tested as described in the certificate of authentication, must be included as part of the device's labeling in a tabular format, either by:
(A) Placing the results directly in the device's § 809.10(b) of this chapter compliant labeling that physically accompanies the device in a separate section of the labeling where analytical reactivity testing data can be found, but separate from the annual analytical reactivity testing results; or
(B) In a section of the device's label or in other labeling that physically accompanies the device, prominently providing a hyperlink to the manufacturer's public Web site where the analytical reactivity testing data can be found. The manufacturer's home page, as well as the primary part of the manufacturer's Web site that discusses the device, must provide a prominently placed hyperlink to the Web page containing this information and must allow unrestricted viewing access.