510(k) Data Aggregation
(74 days)
IMAGEN RESPIRATORY SCREEN
The IMAGEN™ Respiratory Screen is a qualitative indirect immunofluorescence screening test for the presumptive detection of respiratory viruses: Respiratory Syncytial Virus (RSV), Influenza A and B, Parainfluenza types 1, 2 and 3, and Adenovirus in respiratory specimens (nasopharyngeal aspirates) and in cell cultures.
The following is an analysis of the provided text regarding the IMAGEN™ Respiratory Screen:
IMAGEN™ Respiratory Screen Acceptance Criteria and Study Details
The provided documents (K973954) consist of a summary of safety and effectiveness, the FDA's clearance letter, and the Indications for Use statement for the IMAGEN™ Respiratory Screen. It is a supplement to K962037.
1. A table of acceptance criteria and the reported device performance
The provided text does not explicitly state numerical acceptance criteria in a table format, nor does it provide detailed performance metrics (like sensitivity, specificity, accuracy) for the IMAGEN™ Respiratory Screen. Instead, it states that "Performance characteristics for the additional intended uses have been established by external clinical evaluation against the Bartels Viral Respiratory Screening and Identification Kit and standard viral isolation reference methods."
To fill this table accurately, we would need to refer to "Exhibit E," which is mentioned as containing the detailed performance data. Without "Exhibit E," specific numerical acceptance criteria and reported device performance cannot be provided.
Hypothetical Table (Illustrative, as actual data is missing from the provided text):
| Performance Metric | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) |
|---|---|---|
| Sensitivity | ≥ 90% for all target viruses | Not Reported (refer to Exhibit E) |
| Specificity | ≥ 95% for all target viruses | Not Reported (refer to Exhibit E) |
| Overall Agreement | ≥ 92% with reference methods | Not Reported (refer to Exhibit E) |
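For context, each of these hypothetical metrics would be computed from a 2×2 comparison of the screen's result against the reference method. A minimal sketch in Python, using invented counts purely for illustration (no such numbers appear in the provided documents):

```python
# Hypothetical 2x2 comparison of screening results vs. a reference method
# (e.g., viral isolation). All counts are invented for illustration only.
tp, fp = 88, 4     # screen positive: reference-positive / reference-negative
fn, tn = 7, 101    # screen negative: reference-positive / reference-negative

sensitivity = tp / (tp + fn)                        # true-positive rate
specificity = tn / (tn + fp)                        # true-negative rate
overall_agreement = (tp + tn) / (tp + fp + fn + tn)

print(f"Sensitivity:       {sensitivity:.1%}")        # 92.6%
print(f"Specificity:       {specificity:.1%}")        # 96.2%
print(f"Overall agreement: {overall_agreement:.1%}")  # 94.5%
```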
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not explicitly stated in the provided documents. The text mentions "clinical evaluation," but the number of specimens tested is not given.
- Data Provenance: The study was an "external clinical evaluation." The specific country of origin for the data is not mentioned. Given that the regulatory contact is from the UK, it is possible that some or all of the clinical evaluation was conducted there or in other European countries, but this is not confirmed. The documents also do not state whether the evaluation was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- The ground truth for the test set was established using "standard viral isolation reference methods" and the "Bartels Viral Respiratory Screening and Identification Kit." These methods are considered the gold standard for viral detection.
- The text does not specify the number of individual experts or their qualifications involved in interpreting these reference methods for establishing ground truth. The implication is that the reference methods themselves (e.g., viral culture followed by identification) are the "experts" in this context.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- The provided text does not describe any specific adjudication method among human readers for the test set. Since the evaluation was against "standard viral isolation reference methods" and a predicate device (Bartels Kit), the ground truth was inherently established by these objective methods rather than through expert consensus requiring adjudication.
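For readers unfamiliar with the notation, a "2+1" design means two primary readers with a third resolving disagreements. A minimal sketch of that rule, assuming hypothetical positive/negative calls (no such process is described in these documents):

```python
# Sketch of a "2+1" adjudication rule: two primary readers, with a third
# reader breaking ties. The reads are hypothetical boolean calls; nothing
# of the sort is described in the K973954 documents.
def adjudicate_2_plus_1(reader1: bool, reader2: bool, reader3: bool) -> bool:
    """Return the adjudicated call for a single case."""
    if reader1 == reader2:
        return reader1   # primary readers agree; no adjudication needed
    return reader3       # disagreement resolved by the third reader

# Example: the primary readers disagree, so the third reader's call stands.
print(adjudicate_2_plus_1(True, False, False))  # False
```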
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- No, an MRMC study was not done. This device is an in vitro diagnostic (IVD) immunofluorescence screening test, not an AI-powered diagnostic tool. The performance description focuses on the agreement of the device's output with reference methods, not on human-reader performance with or without AI assistance.
- Therefore, an effect size for human readers with/without AI assistance is not applicable and not reported.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, a standalone performance evaluation was done. The IMAGEN™ Respiratory Screen is "algorithm only" in the sense that it produces a result (presumptive detection of respiratory viruses) based on the immunofluorescence reaction. The "clinical evaluation against the Bartels Viral Respiratory Screening and Identification Kit and standard viral isolation reference methods" directly assesses the standalone performance of the IMAGEN™ system. The text indicates the test is for use in laboratories where "qualified technicians are familiar with routine indirect immunofluorescence testing," suggesting that while human technicians perform the test, the performance being evaluated is that of the assay itself against the gold standard.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth used was "standard viral isolation reference methods" (e.g., cell culture with subsequent viral identification) and comparison against a legally marketed predicate device, the "Bartels Viral Respiratory Screening and Identification Kit." Viral isolation is considered a gold standard for detecting viable viruses. These are objective laboratory methods, not subjective expert consensus or pathology.
8. The sample size for the training set
- The provided documents do not mention a "training set" or its sample size. This is consistent with the development of an immunofluorescence assay, which typically undergoes analytical validation and then clinical validation against known specimens rather than learning from a "training set" as an AI model would.
9. How the ground truth for the training set was established
- As there is no mention of a "training set" in the context of an immunofluorescence assay, this question is not applicable. The development process for such a device would involve extensive internal validation using characterized specimens, but it's not typically referred to as a "training set" with ground truth established in the same way as machine learning models.
(216 days)
IMAGEN RESPIRATORY SCREEN
The IMAGEN™ Respiratory Screen is a qualitative indirect immunofluorescence test for the detection of Respiratory Syncytial Virus, Influenza A virus, Parainfluenza virus type 3 and Adenovirus directly in respiratory specimens, and Respiratory Syncytial Virus, Influenza A and B virus, Parainfluenza virus types 1, 2 and 3 and Adenovirus in cell culture monolayers.
The test consists of the following reagents: a Screening reagent, a Negative control reagent, a Fluorescein Isothiocyanate (FITC) Conjugate reagent, Mounting fluid, and Positive and Negative Control slides. It is a two-step indirect immunofluorescence staining method.
This document describes the IMAGEN™ Respiratory Screen, a qualitative indirect immunofluorescence test for detecting various respiratory viruses. The information provided focuses on its performance characteristics and safety.
Acceptance Criteria and Device Performance
The document does not explicitly state pre-defined acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity thresholds). Instead, it describes performance characteristics established through external clinical evaluations against a predicate device (Bartels Viral Respiratory Screening and Identification Kit) and standard viral isolation reference methods.
The reported device performance is presented as a summary of claims that have been established (or for which trials are ongoing).
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Directly in respiratory specimens: | |
| Detection of Respiratory Syncytial Virus (RSV) | Claims established |
| Detection of Influenza A virus | Claims established |
| Detection of Parainfluenza virus type 3 | Claims established |
| Detection of Adenovirus | Claims established |
| Detection of Influenza B virus | Trials ongoing to collect adequate data |
| Detection of Parainfluenza virus type 1 | Trials ongoing to collect adequate data |
| Detection of Parainfluenza virus type 2 | Trials ongoing to collect adequate data |
| In cell culture isolates: | |
| Detection of Respiratory Syncytial Virus (RSV) | Data submitted to support claims |
| Detection of Influenza A virus | Data submitted to support claims |
| Detection of Influenza B virus | Data submitted to support claims |
| Detection of Parainfluenza virus type 1 | Data submitted to support claims |
| Detection of Parainfluenza virus type 2 | Data submitted to support claims |
| Detection of Parainfluenza virus type 3 | Data submitted to support claims |
| Detection of Adenovirus | Data submitted to support claims |
Missing critical information includes:
- Specific sensitivity and specificity values (or other relevant metrics) that were considered acceptable.
- The statistical methods used to determine whether the performance met any implicit criteria (a typical example of such a statistic is sketched below).
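To illustrate the kind of statistic such an evaluation would conventionally report, a point estimate of agreement is normally paired with a 95% confidence interval. A minimal sketch using the Wilson score interval, with invented counts (not from the submission):

```python
import math

# 95% Wilson score confidence interval for a proportion -- the usual way an
# agreement estimate (e.g., 180 of 190 concordant specimens) is reported.
# The counts below are invented for illustration only.
def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_ci(180, 190)
print(f"Agreement: {180/190:.1%} (95% CI {low:.1%}-{high:.1%})")
# Agreement: 94.7% (95% CI 90.6%-97.1%)
```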
Study Information
The document provides limited details on the specific studies conducted:
- Sample size used for the test set and the data provenance: The document does not specify the sample size used for the external clinical evaluation (test set). It also does not mention the country of origin of the data or whether the studies were retrospective or prospective. It only states "external clinical evaluation."
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not provided.
- Adjudication method for the test set: The document does not describe any adjudication method.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: An MRMC study is not mentioned. The evaluation was against a predicate device and standard viral isolation reference methods, not described as a comparative effectiveness study with human readers.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: The IMAGENTM Respiratory Screen is a diagnostic kit (reagents) used by qualified technicians. Its performance is inherently linked to human interpretation, so a purely standalone (algorithm-only) performance is not applicable in the traditional sense for this type of test. Its "standalone" performance would be the performance of the assay itself when processed and interpreted by a technician, which is what the clinical evaluation would assess.
- The type of ground truth used: The ground truth for the test set was established using "the Bartels Viral Respiratory Screening and Identification Kit and standard viral isolation reference methods." This implies that viral isolation (a microbiological gold standard) was the primary ground truth, with the Bartels kit potentially used as a comparator or secondary reference (a conventional way of quantifying agreement against such a comparator is sketched after this list).
- The sample size for the training set: This information is not provided. As an immunofluorescence assay kit, it doesn't typically have a "training set" in the machine learning sense. The "training" of the product would involve its development, antibody selection, and optimization, not a data-driven training set of the kind an AI algorithm would use.
- How the ground truth for the training set was established: Not applicable in the context of an immunofluorescence test kit, as there's no "training set" in the AI sense. The development of the kit would rely on known positive and negative viral samples for reagent specificity and sensitivity optimization.
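As a point of reference (not something stated in the submission), when a new IVD is compared against an imperfect comparator such as a predicate kit rather than a true gold standard, agreement is conventionally reported as positive and negative percent agreement (PPA/NPA) instead of sensitivity and specificity. A minimal sketch with invented counts:

```python
# Positive/negative percent agreement (PPA/NPA) of a new test against a
# comparator method. The 2x2 cell counts are invented for illustration only.
def percent_agreement(a: int, b: int, c: int, d: int) -> tuple[float, float]:
    """Cells of the 2x2 table:
    a: both positive                      b: new positive, comparator negative
    c: new negative, comparator positive  d: both negative
    Returns (PPA, NPA) as fractions."""
    ppa = a / (a + c)   # agreement on comparator-positive specimens
    npa = d / (b + d)   # agreement on comparator-negative specimens
    return ppa, npa

ppa, npa = percent_agreement(a=45, b=3, c=2, d=150)
print(f"PPA: {ppa:.1%}, NPA: {npa:.1%}")  # PPA: 95.7%, NPA: 98.0%
```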