K Number
K973954
Date Cleared
1997-12-22

(74 days)

Product Code
Regulation Number
866.3330
Panel
MI
Reference & Predicate Devices
N/A
Intended Use

The IMAGEN™ Respiratory Screen is a qualitative indirect immunofluorescence screening test for the presumptive detection of respiratory viruses: Respiratory Syncytial Virus (RSV), Influenza A and B, Parainfluenza types 1, 2 and 3, and Adenovirus in respiratory specimens (nasopharyngeal aspirates) and in cell cultures.

Device Description

Not Found

AI/ML Overview

Here's an analysis of the provided text regarding the IMAGEN™ Respiratory Screen, structured according to your requested information:

IMAGEN™ Respiratory Screen Acceptance Criteria and Study Details

The provided documents (K973954) consist of a summary of safety and effectiveness, the FDA's clearance letter, and the Indications for Use statement for the IMAGEN™ Respiratory Screen. It is a supplement to K962037.

1. A table of acceptance criteria and the reported device performance

The provided text does not explicitly state numerical acceptance criteria in a table format, nor does it provide detailed performance metrics (like sensitivity, specificity, accuracy) for the IMAGEN™ Respiratory Screen. Instead, it states that "Performance characteristics for the additional intended uses have been established by external clinical evaluation against the Bartels Viral Respiratory Screening and Identification Kit and standard viral isolation reference methods."

To fill this table accurately, we would need to refer to "Exhibit E," which is mentioned as containing the detailed performance data. Without "Exhibit E," specific numerical acceptance criteria and reported device performance cannot be provided.

Hypothetical Table (Illustrative, as actual data is missing from the provided text):

| Performance Metric | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) |
|---|---|---|
| Sensitivity | ≥ 90% for all target viruses | Not reported (refer to Exhibit E) |
| Specificity | ≥ 95% for all target viruses | Not reported (refer to Exhibit E) |
| Overall Agreement | ≥ 92% with reference methods | Not reported (refer to Exhibit E) |
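For context on what the metrics in the table above mean, the following sketch shows how sensitivity, specificity, and overall percent agreement would be computed from a 2×2 comparison of a candidate assay against a reference method (e.g., viral culture). The counts used here are entirely hypothetical; the actual K973954 data resides in the unavailable "Exhibit E."

```python
def agreement_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return standard 2x2 diagnostic performance metrics as fractions.

    tp/fp/fn/tn are counts from comparing the candidate assay's calls
    against the reference method's calls.
    """
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),           # positive percent agreement
        "specificity": tn / (tn + fp),           # negative percent agreement
        "overall_agreement": (tp + tn) / total,  # overall percent agreement
    }

# Hypothetical counts for illustration only:
# 90 true positives, 5 false positives, 10 false negatives, 95 true negatives.
metrics = agreement_metrics(tp=90, fp=5, fn=10, tn=95)
print({k: round(v, 3) for k, v in metrics.items()})
# → {'sensitivity': 0.9, 'specificity': 0.95, 'overall_agreement': 0.925}
```

Note that when the comparator is another assay rather than a true gold standard (as with the Bartels kit here), these quantities are conventionally reported as positive/negative percent agreement rather than sensitivity/specificity.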

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

  • Sample Size for Test Set: Not explicitly stated in the provided documents. The text mentions "clinical evaluation," but the number of specimens tested is not given.
  • Data Provenance: The study was an "external clinical evaluation." The specific country of origin for the data is not mentioned. Given that the regulatory contact is in the UK, some or all of the clinical evaluation may have been conducted there or in other European countries, but this is not confirmed. Whether the evaluation was retrospective or prospective is likewise not stated.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • The ground truth for the test set was established using "standard viral isolation reference methods" and the "Bartels Viral Respiratory Screening and Identification Kit." These methods are considered the gold standard for viral detection.
  • The text does not specify the number of individual experts or their qualifications involved in interpreting these reference methods for establishing ground truth. The implication is that the reference methods themselves (e.g., viral culture followed by identification) are the "experts" in this context.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • The provided text does not describe any specific adjudication method among human readers for the test set. Since the evaluation was against "standard viral isolation reference methods" and a predicate device (Bartels Kit), the ground truth was inherently established by these objective methods rather than through expert consensus requiring adjudication.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?

  • No, an MRMC study was not done. This device is an in vitro diagnostic (IVD) immunofluorescence screening test, not an AI-powered diagnostic tool. The performance description focuses on the agreement of the device's output with reference methods, not on human-reader performance with or without AI assistance.
  • Therefore, an effect size for human readers with/without AI assistance is not applicable and not reported.

6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done

  • Yes, in the sense applicable to an IVD assay. The IMAGEN™ Respiratory Screen produces its result (presumptive detection of respiratory viruses) from the immunofluorescence reaction itself, and the "clinical evaluation against the Bartels Viral Respiratory Screening and Identification Kit and standard viral isolation reference methods" directly assesses that assay-level performance. The text notes the test is intended for laboratories where "qualified technicians are familiar with routine indirect immunofluorescence testing," so while human technicians perform and read the test, the performance being evaluated is that of the assay itself against the reference methods.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • The ground truth used was "standard viral isolation reference methods" (e.g., cell culture with subsequent viral identification) and comparison against a legally marketed predicate device, the "Bartels Viral Respiratory Screening and Identification Kit." Viral isolation is considered a gold standard for detecting viable viruses. These are objective laboratory methods, not subjective expert consensus or pathology.

8. The sample size for the training set

  • The provided documents do not mention a "training set" or its sample size. This is consistent with the nature of an immunofluorescence assay development, which typically undergoes analytical validation and then clinical validation against known specimens rather than learning from a "training set" like an AI model would.

9. How the ground truth for the training set was established

  • As there is no mention of a "training set" in the context of an immunofluorescence assay, this question is not applicable. The development process for such a device would involve extensive internal validation using characterized specimens, but it's not typically referred to as a "training set" with ground truth established in the same way as machine learning models.

§ 866.3330 Influenza virus serological reagents.

(a) Identification. Influenza virus serological reagents are devices that consist of antigens and antisera used in serological tests to identify antibodies to influenza in serum. The identification aids in the diagnosis of influenza (flu) and provides epidemiological information on influenza. Influenza is an acute respiratory tract disease, which is often epidemic.

(b) Classification. Class I (general controls). The device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 866.9.