510(k) Data Aggregation
(81 days)
DIAGNOSTIC HYBRIDS, INC.
The Lyra™ Direct HSV 1 + 2/VZV Assay is an in vitro multiplex Real-Time PCR test for qualitative detection and differentiation of herpes simplex virus type 1, herpes simplex virus type 2, and varicella-zoster virus DNA isolated and purified from cutaneous or mucocutaneous lesion samples obtained from symptomatic patients suspected of active herpes simplex virus 1, herpes simplex virus 2, and/or varicella-zoster infection. The Lyra™ Direct HSV 1 + 2/VZV Assay is intended to aid in the diagnosis of herpes simplex virus 1, herpes simplex virus 2 and varicella-zoster virus active cutaneous or mucocutaneous infections. Negative results do not preclude herpes simplex virus 1, herpes simplex virus 2 and varicella-zoster virus infections and should not be used as the sole basis for diagnosis, treatment or other management decisions. The Lyra™ Direct HSV 1 + 2/VZV Assay is not intended for use with cerebrospinal fluid or to aid in the diagnosis of HSV or VZV infections of the central nervous system (CNS). The Lyra™ Direct HSV 1 + 2/VZV Assay is not intended for use in prenatal screening. The device is not intended for point-of-care use.
The Lyra™ Direct HSV 1 + 2/VZV Assay detects viral nucleic acids from a patient sample. A multiplex Real-Time PCR reaction is carried out under optimized conditions in a single tube or well generating amplicons for HSV-1, HSV-2, VZV, and the Process Control (PRC). Identification of amplicons for HSV-1, HSV-2, VZV, and the PRC occurs by the use of target-specific primers and fluorescent-labeled probes that hybridize to conserved regions in the genomes of HSV-1, HSV-2, and VZV and to the PRC, respectively.
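The call logic implied by this description (target probes plus a process control) can be sketched as follows. This is an illustration only, not the vendor's algorithm: the channel names mirror the text, but the Ct cutoff of 40 cycles and the run-validity rule are assumptions.

```python
# Hedged sketch: turning per-channel real-time PCR threshold crossings into
# qualitative calls for a multiplex HSV-1/HSV-2/VZV assay with a process
# control (PRC). The Ct cutoff and validity rule are illustrative assumptions.

def interpret_multiplex(ct, ct_cutoff=40.0):
    """ct maps a channel ('HSV-1', 'HSV-2', 'VZV', 'PRC') to the cycle at
    which its probe crossed threshold, or None if it never crossed."""
    targets = ("HSV-1", "HSV-2", "VZV")
    detected = {t: ct.get(t) is not None and ct[t] <= ct_cutoff for t in targets}
    prc_ok = ct.get("PRC") is not None and ct["PRC"] <= ct_cutoff
    # Assume a run is valid if the process control amplified, or if any
    # target amplified (strong targets can out-compete the control).
    valid = prc_ok or any(detected.values())
    if not valid:
        return {t: "Invalid" for t in targets}
    return {t: ("Detected" if detected[t] else "Not Detected") for t in targets}
```

For example, a specimen with a Ct of 28 on the HSV-1 channel and an amplified PRC would be reported as HSV-1 Detected and Not Detected for the other two targets.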
Here's an analysis of the acceptance criteria and study details for the Lyra™ Direct HSV 1 + 2/VZV Assay, formatted as requested:
Acceptance Criteria and Device Performance for Lyra™ Direct HSV 1 + 2/VZV Assay
The Lyra™ Direct HSV 1 + 2/VZV Assay is an in vitro multiplex Real-Time PCR test for qualitative detection and differentiation of HSV-1, HSV-2, and VZV DNA from cutaneous or mucocutaneous lesion samples. The regulatory classification is Class II with special controls (21 CFR 866.3309).
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria with numerical thresholds in a single table. Instead, performance is presented through various analytical and clinical studies, so the de facto acceptance criteria are inferred from the demonstrated performance that supported the device's classification. For clinical performance, effectiveness is evaluated against established comparator methods (ELVIS® HSV ID and D3 Typing Test for HSV; DSFA and culture with DFA for VZV). Successful performance in these comparisons implies the acceptance criteria were met.
Inferred Acceptance Criteria Table and Reported Device Performance:
Performance Metric | Target Analyte | Acceptance Criteria (Inferred from successful study) | Reported Device Performance (Life Technologies QuantStudio™ Dx) | Reported Device Performance (Applied Biosystems® 7500 Fast Dx) | Reported Device Performance (Cepheid SmartCycler® II) |
---|---|---|---|---|---|
CLINICAL PERFORMANCE (Cutaneous Lesions) | |||||
Sensitivity | HSV-1 | High, approaching 100% | 100% (24/24) | 100% (24/24) | 100% (24/24) |
Specificity | HSV-1 | High, >95% | 98.4% (250/254) | 98.8% (252/254) | 98.8% (251/254) |
Sensitivity | HSV-2 | High, approaching 100% | 100% (35/35) | 97.1% (34/35) | 100% (35/35) |
Specificity | HSV-2 | High, >95% | 96.3% (234/243) | 96.7% (236/244) | 96.7% (235/243) |
Sensitivity | VZV | High, approaching 100% | 100% (27/27) | 100% (26/27) | 100% (27/27) |
Specificity | VZV | High, >90% | 95.9% (187/195) | 95.9% (189/196) | 94.9% (185/195) |
CLINICAL PERFORMANCE (Mucocutaneous Lesions) | |||||
Sensitivity | HSV-1 | High, >95% | 97.1% (100/103) | 95.1% (98/103) | 98.1% (101/103) |
Specificity | HSV-1 | High, >95% | 97.1% (527/543) | 98.2% (531/541) | 97.2% (525/540) |
Sensitivity | HSV-2 | High, approaching 100% | 100% (95/95) | 97.9% (93/95) | 98.9% (94/95) |
Specificity | HSV-2 | High, >95% | 96.2% (530/551) | 97.1% (533/549) | 97.1% (532/548) |
Sensitivity | VZV | High, approaching 100% | 100% (4/4) | 100% (4/4) | 100% (4/4) |
Specificity | VZV | High, >97% | 98.8% (423/428) | 99.3% (423/426) | 98.8% (420/425) |
ANALYTICAL PERFORMANCE | |||||
Reproducibility (Detection Rate) | Low Positive | 100% for all analytes | 90/90 (HSV-1), 90/90 (HSV-2), 89/90 (VZV) | 90/90 (HSV-1), 90/90 (HSV-2), 88/90 (VZV) | 90/90 (HSV-1), 89/90 (HSV-2), 88/90 (VZV) |
Reproducibility (Negative Samples) | Negative | 0% Detection | 0/90 (all analytes) | 0/90 (all analytes) | 0/90 (all analytes) |
Limit of Detection (LoD) | All Analytes | LoD within 3 doubling dilutions across platforms | Achieved for various strains (see section 1.e for details) | Achieved for various strains (see section 1.e for details) | Achieved for various strains (see section 1.e for details) |
Analytical Reactivity | Multiple strains | Detection at near LoD concentrations (100% positivity for all tested strains at specified concentrations) | All 16 tested strains were detected | All 16 tested strains were detected | All 16 tested strains were detected |
Analytical Specificity (Cross-Reactivity/Inhibition) | Diverse Microorganisms, Endogenous Substances | No cross-reactivity with negative samples, no inhibition of positive samples | No cross-reactivity observed, no inhibition observed | No cross-reactivity observed, no inhibition observed | No cross-reactivity observed, no inhibition observed |
Competitive Interference | Multiple analytes in same sample | No interference when multiple analytes present at varying concentrations | No competitive interference observed | No competitive interference observed | No competitive interference observed |
Carry-over Contamination | PCR products | No cross-contamination from high positives to negative samples | No cross-contamination (0/48 tested negatives positive) | No cross-contamination (0/48 tested negatives positive) | No cross-contamination (0/48 tested negatives positive) |
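The sensitivity and specificity figures in the table above are simple binomial proportions. A minimal sketch of how they (and a confidence interval) can be reproduced from the reported counts follows; the submission does not state which interval method was used, so the Wilson score interval here is an assumption for illustration.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n.
    (The interval method used in the submission is not stated; this is one
    common choice, shown for illustration.)"""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Point estimates of sensitivity and specificity from 2x2 counts."""
    return tp / (tp + fn), tn / (tn + fp)

# HSV-1, cutaneous lesions, QuantStudio Dx: 24/24 sensitivity, 250/254 specificity.
sens, spec = sens_spec(tp=24, fn=0, tn=250, fp=4)
```

Running this reproduces the table's 100% sensitivity and 98.4% specificity for that cell.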
Study Proving Acceptance Criteria Met:
The device's performance characteristics, including both analytical and clinical studies, were conducted to demonstrate it meets the requirements for a Class II designation with special controls.
2. Sample Sizes Used for the Test Set and Data Provenance
Test Set Sample Sizes (Clinical Study):
The clinical study was a multi-center study conducted between April 2013 and October 2013.
- Life Technologies QuantStudio™ Dx:
- Cutaneous Lesions: 279 specimens initially collected (278 for HSV-1/HSV-2, 222 for VZV after exclusions).
- Mucocutaneous Lesions: 650 specimens initially collected (646 for HSV-1/HSV-2, 432 for VZV after exclusions).
- Applied Biosystems® 7500 Fast Dx:
- Cutaneous Lesions: 279 specimens (279 for HSV-1/HSV-2, 223 for VZV after exclusions).
- Mucocutaneous Lesions: 650 specimens initially collected (644 for HSV-1/HSV-2, 430 for VZV after exclusions).
- Cepheid SmartCycler® II:
- Cutaneous Lesions: 279 specimens initially collected (278 for HSV-1/HSV-2, 222 for VZV after exclusions).
- Mucocutaneous Lesions: 650 specimens initially collected (643 for HSV-1/HSV-2, 429 for VZV after exclusions).
Data Provenance:
- Country of Origin: Not explicitly stated, but the submission is to the FDA, suggesting the United States given the typical regulatory context for De Novo submissions. The multi-center nature implies data was collected from different clinical sites within the same regulatory jurisdiction.
- Retrospective or Prospective: The clinical study description states: "A multi-center study was performed between April, 2013 and October, 2013 to evaluate the Lyra™ Direct HSV 1 + 2/VZV Assay using lesion swab specimens obtained from cutaneous or mucocutaneous lesions and submitted for HSV and/or VZV culture." This clearly indicates a prospective collection of specimens for the purpose of the study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The ground truth for the clinical test set was established using FDA-cleared comparator methods, not directly by human experts in the sense of independent adjudication.
- For HSV-1 and HSV-2, the comparator method was the ELVIS® HSV ID and D3 Typing Test, an FDA-cleared cell culture system.
- For VZV, the comparator method involved staining cells present in the samples with an FDA-cleared VZV detection reagent (DSFA) and culturing the specimen using a mixed cell culture (H&V mixed cells) followed by staining with the same FDA-cleared reagent used for DSFA.
Therefore, there were no direct human experts establishing ground truth in the typical sense of radiologists or pathologists. The ground truth was based on the results of established, FDA-cleared laboratory methods, operated by trained laboratory personnel.
4. Adjudication Method for the Test Set
The primary adjudication method for discrepancies between the Lyra™ Direct Assay and the comparator methods was the use of an additional RT-PCR assay.
- For discrepancies in HSV-1 (e.g., Lyra™ positive, comparator negative), these cases were "positive by an additional RT-PCR assay."
- Similarly for HSV-2 and VZV discrepancies, "positives were positive by an additional RT-PCR assay."
- In some cases where Lyra™ was negative but the comparator was positive, these were also re-evaluated by an additional RT-PCR assay, with the statement "negatives were positive by an additional RT-PCR assay."
This acts as a tie-breaker or confirmatory test for discordant results, enhancing the robustness of the ground truth derived from the comparator methods.
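The tie-breaker role of the additional RT-PCR assay can be sketched as a small resolver; the function name and the handling of an unavailable tie-breaker result are my own illustration, not from the submission.

```python
# Hedged sketch of discordant-result adjudication: when the device and the
# comparator agree, that agreement stands; when they disagree, an additional
# RT-PCR assay (if available) decides the final reference classification.

def adjudicate(device_positive, comparator_positive, extra_pcr_positive=None):
    """Return the final reference classification for one specimen."""
    if device_positive == comparator_positive:
        return "positive" if device_positive else "negative"
    if extra_pcr_positive is None:
        return "unresolved"
    return "positive" if extra_pcr_positive else "negative"
```

For example, a specimen positive by Lyra™ but negative by culture that is then positive by the additional RT-PCR assay resolves to "positive".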
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance
No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic (IVD) PCR assay, not an AI/CADe medical imaging device that assists human readers. Its performance is evaluated compared to established laboratory methods, not human interpretation of images. Therefore, the concept of "human readers improve with AI vs without AI assistance" is not applicable here.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the primary clinical study represents standalone performance. The Lyra™ Direct HSV 1 + 2/VZV Assay is an automated real-time PCR test. Its results (positive/negative for each virus) are generated directly by the instrument and its software, without human interpretation of raw signals to determine the final viral status. The clinical sensitivity and specificity reported are the performance of the algorithm/device alone compared to the ground truth.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
The ground truth used for the clinical study was based on FDA-cleared conventional laboratory methods:
- Cell Culture with Immunofluorescence Staining (or similar detection):
- HSV-1 and HSV-2: ELVIS® HSV ID and D3 Typing Test.
- VZV: Direct Smear Fluorescent Antibody (DSFA) and mixed cell culture with DFA staining.
- Confirmatory RT-PCR/molecular assay: Used as an adjudication method for discordant results between the Lyra™ Direct Assay and the primary comparator methods.
8. The Sample Size for the Training Set
The document does not explicitly describe a separate "training set" for the clinical evaluation in the way machine learning algorithms typically use them. For IVD devices like this, the "development" or "training" process involves optimizing the assay components (primers, probes, reaction conditions) and setting analytical cut-offs based on analytical studies (e.g., Limit of Detection, reactivity, specificity studies) using characterized samples.
The reproducibility and precision studies used simulated samples (medium positive, low positive, high negative, negative, 5x LoD, 2x LoD,
(299 days)
DIAGNOSTIC HYBRIDS, INC.
The Thyretain ™ TSI Reporter BioAssay is intended for the qualitative detection in serum of thyroid stimulating autoantibodies to the thyroid stimulating hormone (TSH) receptors (TSHRs) on the thyroid. The detection of these stimulating autoantibodies, in conjunction with other clinical and laboratory findings, may be useful as an aid in the differential diagnosis of patients with Graves' disease (GD).
The number of wells tested per Positive, Reference and Negative control has been reduced from three to two for each. The number of wells tested per patient specimen has been reduced from three to two.
The provided submission (K092229) is a special 510(k) for a device modification; therefore, it primarily focuses on demonstrating that the device, post-modification, remains substantially equivalent to its predicate device. This type of submission usually doesn't include new, large-scale clinical studies with specific acceptance criteria as would be found in De Novo or PMA submissions. Instead, it relies on bridging data to show the modification doesn't negatively impact performance.
Here's an analysis based on the provided text:
Device: Thyretain™ TSI Reporter BioAssay (Modified)
Predicate Device: Thyretain™ TSI Reporter BioAssay (K083391)
Modification: Reduction of the number of wells tested per Positive, Reference, and Negative control from three to two. Reduction of wells tested per patient specimen from three to two.
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state formal acceptance criteria in a quantitative manner (e.g., "sensitivity must be >90%"). Instead, the performance assessment aims to demonstrate that the modified device remains equivalent to the predicate, particularly in its non-clinical performance.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
No significant negative impact on device performance due to modification. | "Based on the analysis of the study site data (see Attachment 2) the modification poses little risk." |
Maintain the intended use and diagnostic accuracy. | Intent to maintain the same intended use as the predicate device. The FMEA indicated little risk from the modification. |
Note: "Attachment 2" which would contain the detailed study site data is not provided in the input, so specific quantitative performance metrics like sensitivity, specificity, or agreement with the predicate are not available in this summary. The assessment focuses on risk analysis.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document refers to "study site data" in "Attachment 2". However, the details of this data, including sample size, provenance (country of origin), and whether it was retrospective or prospective, are not provided in the given text extract.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the given text extract. Given this is a modification to an in vitro diagnostic (IVD) device, the ground truth would likely refer to established diagnostic methods or clinical outcomes, rather than expert interpretation of images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the given text extract. It is unlikely to be relevant for this type of IVD modification, which assesses analytical performance rather than interpretation by human readers.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No. A MRMC comparative effectiveness study was not done. This is an in vitro diagnostic device (a bioassay) for detecting autoantibodies, not an imaging device that requires human interpretation or AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This question is not applicable in the context of this device. The Thyretain™ TSI Reporter BioAssay is a laboratory assay for detecting antibodies, not an algorithm. The "performance" refers to the analytical performance of the assay itself. The submission is not about an algorithm, but a modification to the assay's procedure (number of wells).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The submission is for an immunoassay. The ground truth for such an assay would typically be established by:
- Reference methods: Comparison to established, clinically validated assays for the same analyte.
- Clinical diagnosis: Correlation with the clinical diagnosis of Graves' disease (GD), which would involve a combination of clinical symptoms, other laboratory findings, and potentially outcomes data over time.
- Pathology: Less likely to be the primary ground truth for an antibody assay, but could be part of the broader clinical picture for Graves' disease.
The provided text does not explicitly state the type of ground truth used for the "study site data" referenced.
8. The sample size for the training set
This information is not provided in the given text extract. For an IVD assay modification, there isn't typically a "training set" in the machine learning sense. Any data used to validate the modified procedure would be considered a "test set" or verification/validation data.
9. How the ground truth for the training set was established
As there is no "training set" in the context of this IVD device, this question is not applicable. Any ground truth for validation samples would be established by methods similar to those described in point 7.
(51 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. device, D3 FastPoint L-DFA Parainfluenza Virus/Adenovirus Identification Kit is intended for the qualitative identification of adenovirus and to screen for the presence of parainfluenza virus types 1, 2, and 3 in nasal and nasopharyngeal swabs and aspirates/washes specimens from patients with signs and symptoms of respiratory infection by direct detection of immunofluorescence using monoclonal antibodies (MAbs).
It is recommended that specimens found to be negative for parainfluenza virus and adenovirus after examination of the direct specimen result be confirmed by cell culture. Negative results do not preclude parainfluenza virus and adenovirus infection and should not be used as the sole basis for diagnosis, treatment or other management decisions.
The D3 FastPoint L-DFA Parainfluenza Virus/Adenovirus Identification Kit (D3 FastPoint PIV/ADV Kit) uses a blend (called an "L-DFA Reagent") of viral antigen-specific murine monoclonal antibodies that are directly labeled with either R-phycoerythrin (PE) (parainfluenza virus types 1, 2 and 3) or fluorescein isothiocyanate (FITC) (adenovirus) for the qualitative identification of adenovirus and to screen for the presence of parainfluenza virus types 1, 2, and 3.
The cells to be tested are derived from respiratory specimens from patients with signs and symptoms of respiratory infection. The cells are permeabilized and stained concurrently in a liquid suspension format with the L-DFA Reagent. After incubating at 35°C to 37°C for 5 minutes, the stained cell suspensions are rinsed with 1X PBS. The rinsed cells are pelleted by centrifugation, re-suspended in the Resuspension Buffer, and loaded onto a specimen slide well. The cells are examined using a fluorescence microscope. Cells infected with parainfluenza virus types 1, 2 and 3 will exhibit golden-yellow fluorescence due to the PE. Cells infected with adenovirus will exhibit apple-green fluorescence due to the FITC. Non-infected cells will exhibit red fluorescence due to the Evans Blue counter-stain. Nuclei of intact cells will exhibit orange-red fluorescence due to the propidium iodide.
Here's an analysis of the provided document regarding the acceptance criteria and study for the D3 FastPoint L-DFA Parainfluenza Virus/Adenovirus Identification Kit:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" in a numerical target format (e.g., "sensitivity must be >90%"). Instead, it presents the achieved performance metrics in comparison to a composite comparator method (FDA-cleared device + viral culture). For the purpose of this table, I'll extract the reported performance from the clinical study, which implicitly serves as the demonstration of meeting acceptable clinical performance for substantial equivalence.
Acceptance Criteria (Implied by Study Results & Predicate Equivalence) and Reported Device Performance
Performance Metric | Adenovirus (NP Wash/Aspirate) | Parainfluenza Virus (NP Wash/Aspirate) | Adenovirus (NP Swab) | Parainfluenza Virus (NP Swab) |
---|---|---|---|---|
Sensitivity | 92.3% (95% CI: 64.0-99.8%) | 92.0% (95% CI: 74.0-99.0%) | 100% (95% CI: N/A - due to low prevalence) | 92.9% (95% CI: 66.1-99.8%) |
Specificity | 100% (95% CI: 99.4-100%) | 99.3% (95% CI: 98.3-99.8%) | 100% (95% CI: 99.5-100%) | 100% (95% CI: 99.4-100%) |
Reproducibility (Total Agreement) | 100% (All sites, for both Adenovirus and hPIV-1 in various combinations) | 100% (All sites, for both Adenovirus and hPIV-1 in various combinations) | 100% (All sites, for both Adenovirus and hPIV-1 in various combinations) | 100% (All sites, for both Adenovirus and hPIV-1 in various combinations) |
Limit of Detection (LOD) | 100 infected cells/mL | 100 infected cells/mL (hPIV-1), 25 infected cells/mL (hPIV-2), 50 infected cells/mL (hPIV-3) | Not applicable (analytical study) | Not applicable (analytical study) |
Analytical Reactivity (Inclusivity) | Positive detection for 10 adenovirus strains | Positive detection for 3 hPIV strains | Not applicable (analytical study) | Not applicable (analytical study) |
2. Sample Size and Data Provenance for the Test Set:
-
Test Set (Clinical Performance Study):
- Total Specimen Sample Size: 1519 specimens (across all age groups and specimen types).
- NP Wash/Aspirate (Combined Sites 1, 2, 3):
- Parainfluenza Virus: 628 specimens
- Adenovirus: 632 specimens
- NP Swab (Combined Sites 3, 4):
- Parainfluenza Virus: 682 specimens
- Adenovirus: 681 specimens
- Data Provenance: Prospective, collected from 4 geographically diverse U.S. clinical laboratories during the 2009 respiratory virus season (January-March 2009). The specimens were "excess, remnants of respiratory specimens that were prospectively collected from symptomatic individuals suspected of respiratory infection" and were de-identified.
-
Test Set (Reproducibility Study):
- Sample Size: A reproducibility panel consisting of 5 randomized panel members, tested daily in two separate runs for 5 days by four different laboratories (40 total runs). Each panel member contained defined levels of infected or non-infected cells.
- Data Provenance: Not specified beyond being conducted by "four different laboratories." This would be part of a controlled laboratory study, not clinical specimens.
3. Number of Experts and Qualifications for Ground Truth (Clinical Test Set):
- The document does not explicitly state the number of experts (e.g., medical doctors, virologists) used to establish the ground truth for the clinical test set.
- Qualifications of Experts (Implied): The ground truth was established using a "composite comparator method" which included:
- Direct Specimen Fluorescent Antibody (DSFA) test with an FDA-cleared device. This implies that the interpretation of the comparator DSFA test would have been performed by qualified laboratory personnel trained in using that specific FDA-cleared device.
- Viral culture confirmation. This would have been performed by trained microbiologists or virologists capable of performing and interpreting viral cultures.
4. Adjudication Method for the Test Set (Clinical Study):
- Adjudication Method: The ground truth was established using a composite comparator method.
- "True" positive was defined as any sample that either tested positive by the comparator DSFA test or viral culture.
- "True" negative was defined as any sample that tested negative by both the comparator DSFA test and viral culture.
- This is a form of adjudicated reference standard, where agreement between multiple methods (or sequential application of methods, as implied by "negatives followed by culture") establishes the "truth." This method is commonly used when a single perfect gold standard is not available or practical.
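The composite comparator rule quoted above reduces to a simple disjunction, sketched here for clarity (the function name is my own):

```python
# Sketch of the composite comparator rule: a specimen is reference-positive
# if either the comparator DSFA test or viral culture is positive, and
# reference-negative only if both are negative.

def composite_truth(dsfa_positive, culture_positive):
    """Final reference ('ground truth') status for one specimen."""
    return dsfa_positive or culture_positive
```

So a specimen negative by DSFA but recovered in culture still counts as a reference positive, which is why culture confirmation of DSFA negatives matters.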
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study was not explicitly described. The study focused on the standalone performance of the D3 FastPoint kit against a composite comparator. There is no mention of human readers using the D3 FastPoint kit with and without AI assistance (as would be the case for a typical MRMC study involving AI) nor an effect size for human reader improvement with AI.
6. Standalone (Algorithm Only) Performance Study:
- Yes, a standalone study was performed. The clinical performance section directly assesses the D3 FastPoint L-DFA Parainfluenza Virus/Adenovirus Identification Kit's performance (device only, without human-in-the-loop assistance beyond the interpretation of the kit's results) against a composite comparator method. The sensitivity and specificity values reported in Tables 5.6, 5.7, 5.8, and 5.9 represent this standalone performance.
7. Type of Ground Truth Used (Clinical Study):
- Composite Comparator Method: The ground truth for the clinical study was established using a composite comparator method. This method combined:
- Performance of an existing FDA-cleared DSFA device.
- Viral culture confirmation for all specimens negative by the initial comparator DSFA.
- This combines elements of an established diagnostic method (FDA-cleared device) with a traditional gold standard for viral identification (viral culture).
8. Sample Size for the Training Set:
- The document does not report on a training set sample size. This is because the device described is a diagnostic kit (reagents and controls for immunofluorescence), not an AI/machine learning algorithm that typically requires a distinct training phase. Performance is evaluated through analytical and clinical studies, not by training a model.
9. How Ground Truth for the Training Set Was Established:
- Not applicable, as there is no training set described for an AI/machine learning model. The kit itself is the "algorithm" in a sense, and its performance is validated through analytical studies (reproducibility, LOD, inclusivity) and clinical studies against established comparator methods.
(51 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. device, D3 FastPoint L-DFA RSV/MPV Identification Kit is intended for the qualitative identification of respiratory syncytial virus and human metapneumovirus in nasal and nasopharyngeal swabs and aspirates/washes specimens from patients with signs and symptoms of respiratory infection by direct detection of immunofluorescence using monoclonal antibodies (MAbs).
It is recommended that specimens found to be negative for respiratory syncytial virus after examination of the direct specimen result be confirmed by cell culture. Specimens found to be negative for human metapneumovirus after examination of the direct specimen results should be confirmed by an FDA-cleared human metapneumovirus molecular assay. Negative results do not preclude respiratory syncytial virus and human metapneumovirus infection and should not be used as the sole basis for diagnosis, treatment or other management decisions.
The D3 FastPoint L-DFA RSV/MPV Identification Kit uses a blend (called an "L-DFA Reagent") of viral antigen-specific murine monoclonal antibodies that are directly labeled with either R-phycoerythrin (PE) (respiratory syncytial virus) or fluorescein isothiocyanate (FITC) (human metapneumovirus) for the rapid identification of respiratory syncytial virus and human metapneumovirus in nasal and nasopharyngeal swabs and aspirates from patients with signs and symptoms of respiratory infection.
The cells to be tested are derived from respiratory specimens from patients with signs and symptoms of respiratory infection. The cells are permeabilized and stained concurrently in a liquid suspension format with the L-DFA Reagent. After incubating at 35°C to 37°C for 5 minutes, the stained cell suspensions are rinsed with 1X PBS. The rinsed cells are pelleted by centrifugation, re-suspended in the Resuspension Buffer, and loaded onto a specimen slide well. The cells are examined using a fluorescence microscope. Cells infected with RSV will exhibit golden-yellow fluorescence due to the PE. Cells infected with hMPV will exhibit apple-green fluorescence due to the FITC. Non-infected cells will exhibit red fluorescence due to the Evans Blue counter-stain. Nuclei of intact cells will exhibit orange-red fluorescence due to the propidium iodide.
Acceptance Criteria and Study for D3 FastPoint L-DFA RSV/MPV Identification Kit
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't explicitly state numerical acceptance criteria for all performance metrics. However, for a 510(k) submission, implied acceptance is often "substantially equivalent" to predicate devices, and for clinical performance, high sensitivity and specificity are expected. The reproducibility study explicitly aims for 100% agreement.
Note: The "acceptance criteria" are inferred based on standard expectations for diagnostic device performance and the detailed reporting of study outcomes, particularly the 100% agreement for reproducibility and the very high sensitivity/specificity for RSV. The lower sensitivity for hMPV in NP swab samples might be within acceptable limits given the context of medical device approval for challenging targets.
Performance Metric | Acceptance Criteria (Implied/Explicit) | Reported Device Performance (D3 FastPoint L-DFA RSV/MPV Identification Kit) |
---|---|---|
Reproducibility (Overall Agreement with Expected Result) | 100% (Implied by study design expecting full agreement) | 100% (280/280) across all sites and panel members |
Limit of Detection (LoD) - RSV | The lowest dilution at which at least 9/10 replicates are detected. | 100 infected cells/mL |
Limit of Detection (LoD) - hMPV | The lowest dilution at which at least 9/10 replicates are detected. | 100 infected cells/mL |
Analytical Reactivity (Inclusivity) - RSV | Detection of various RSV strains at 10x LoD. | All 3 tested RSV strains detected at 10x LoD. |
Analytical Reactivity (Inclusivity) - hMPV | Detection of various hMPV strains at 10x LoD. | All 4 tested hMPV strains detected at 10x LoD. |
Clinical Sensitivity (RSV - NP wash/aspirate) | High sensitivity for diagnosis. | 98.6% (204/207) [95% CI: 95.8-99.7%] |
Clinical Specificity (RSV - NP wash/aspirate) | High specificity for diagnosis. | 99.8% (462/463) [95% CI: 98.8-100%] |
Clinical Sensitivity (hMPV - NP wash/aspirate) | High sensitivity for diagnosis. | 68.8% (55/80) [95% CI: 57.4-78.7%] |
Clinical Specificity (hMPV - NP wash/aspirate) | High specificity for diagnosis. | 100.0% (614/614) [95% CI: 99.4-100%] |
Clinical Sensitivity (RSV - NP swab) | High sensitivity for diagnosis. | 97.5% (39/40) [95% CI: 86.8-99.9%] |
Clinical Specificity (RSV - NP swab) | High specificity for diagnosis. | 100.0% (647/647) [95% CI: 99.4-100%] |
Clinical Sensitivity (hMPV - NP swab) | High sensitivity for diagnosis. | 54.5% (24/44) [95% CI: 38.8-69.9%] |
Clinical Specificity (hMPV - NP swab) | High specificity for diagnosis. | 100.0% (632/632) [95% CI: 99.4-100%] |
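The LoD rule stated in the table ("the lowest dilution at which at least 9/10 replicates are detected") can be sketched as a small read-out function. The dilution series values in the example are made up for illustration; only the 9-of-10 rule comes from the document.

```python
# Sketch of the LoD read-out rule: the LoD is the lowest concentration at
# which at least 9 of 10 replicates are detected. Input maps concentration
# (infected cells/mL) to the number of detected replicates out of 10.

def limit_of_detection(hits_by_conc, min_hits=9):
    detected = [c for c, hits in hits_by_conc.items() if hits >= min_hits]
    return min(detected) if detected else None

# Illustrative (made-up) dilution series:
lod = limit_of_detection({1000: 10, 100: 10, 50: 8, 25: 4})  # -> 100
```

With this made-up series, 100 infected cells/mL is the lowest level meeting the 9/10 criterion, matching the form of the RSV and hMPV results reported above.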
2. Sample Size Used for the Test Set and Data Provenance
- Reproducibility Test Set:
- Sample Size: A reproducibility panel consisting of 5 members (low RSV, low hMPV, mixed RSV/hMPV, mixed hMPV/RSV, negative). Each panel member was tested daily in two separate runs for 5 days by 4 different laboratories, resulting in 40 total runs. This yielded 280 total tests (across all panel members and runs) with individual results reported for expected positive and negative wells.
- Data Provenance: The study was conducted at four different laboratories. The document does not specify the country of origin but implies U.S. clinical laboratories (referencing "U.S. clinical laboratories" for clinical performance). The testing was performed prospectively for the purpose of assessing reproducibility.
- Limit of Detection (LoD) Test Set:
- Sample Size: Dilution series of infected model cells were used. For each virus (RSV and hMPV A1), 10 replicate microscope slides were prepared for each dilution level. The exact number of dilution levels is not stated as a single number; the series spanned from 1000 infected cells/mL down to 0.8 or 1.5 infected cells/mL, with 10 replicates at each dilution.
- Data Provenance: Laboratory study, likely internal to the manufacturer or a contracted lab.
- Analytical Reactivity (Inclusivity) Test Set:
- Sample Size: 3 RSV virus strains and 4 hMPV virus strains were evaluated. For each strain, "low concentration infected cell suspensions (approximately 4% cells infected, 25 to 50 infected cells)" were prepared.
- Data Provenance: Laboratory study.
- Clinical Performance Test Set:
- Sample Size: 1519 total respiratory specimens (nasal and nasopharyngeal swabs and aspirates/washes).
- Data Provenance: Prospective studies at 4 geographically diverse U.S. clinical laboratories during the 2009 respiratory virus seasons (January 2009 - March 2009). The specimens were "excess, remnants of respiratory specimens that were prospectively collected from symptomatic individuals suspected of respiratory infection, and were submitted for routine care or analysis by each site, and that otherwise would have been discarded." Individual specimens were delinked from patient identifiers.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the number or qualifications of experts for establishing ground truth as a separate role. Instead, the ground truth for clinical performance was established using a composite comparator method:
- RSV: Direct Specimen Fluorescent Antibody (DSFA) test with an FDA-cleared predicate device, followed by viral culture confirmation of all negatives from the comparator DSFA test.
- hMPV: DSFA with an FDA-cleared predicate device, followed by confirmation of all negative specimens from the comparator DSFA test using a validated hMPV real-time RT-PCR, which was then followed by bi-directional sequencing analysis.
This implies that the "ground truth" was determined by the results of these established and confirmed laboratory methods, rather than by human expert consensus or adjudication of the test device's raw output.
4. Adjudication Method for the Test Set
For the clinical studies, an explicit "adjudication method" involving human experts reviewing conflicting results is not detailed. Instead, a composite comparator algorithm was used to define "true positive" and "true negative":
- "True positive" for RSV was defined as any sample that tested positive by the comparator DSFA test or viral culture.
- "True positive" for hMPV was defined as any sample that tested positive by the comparator DSFA test OR had bi-directional sequencing data meeting pre-defined quality acceptance criteria that matched hMPV sequences in GenBank.
- "True negative" was defined as any sample that tested negative by both the comparator DSFA test and either viral culture (for RSV) or the hMPV real-time RT-PCR comparator assay (for hMPV).
This approach essentially pre-defines how discordant results between screening and confirmatory tests contribute to the final ground truth, replacing a separate human adjudication step.
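The composite definitions above amount to simple boolean rules. A minimal sketch, with function names of my own choosing (they do not appear in the submission):

```python
def rsv_truth(dsfa_positive: bool, culture_positive: bool) -> str:
    """Composite comparator for RSV: "true" positive if the comparator DSFA
    OR viral culture is positive; "true" negative only if both are negative."""
    return "positive" if (dsfa_positive or culture_positive) else "negative"

def hmpv_truth(dsfa_positive: bool, sequencing_confirmed: bool) -> str:
    """Composite comparator for hMPV: "true" positive if the comparator DSFA
    is positive OR bi-directional sequencing met the pre-defined quality
    criteria and matched hMPV sequences in GenBank."""
    return "positive" if (dsfa_positive or sequencing_confirmed) else "negative"

# A specimen negative by DSFA but recovered in culture still counts as a
# true positive, so it lowers the device's sensitivity if the device missed it.
print(rsv_truth(dsfa_positive=False, culture_positive=True))  # positive
```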
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. The device under evaluation is an in vitro diagnostic (IVD), specifically a direct fluorescent antibody (DFA) test read by trained laboratory personnel; the study assesses the device's performance against comparator methods, not reader performance with and without AI assistance (as it is not an AI device).
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done
Yes, the studies presented are essentially standalone performance evaluations of the device. The D3 FastPoint L-DFA RSV/MPV Identification Kit is an immunofluorescent assay in which a trained technician observes fluorescent staining patterns under a microscope. The reported sensitivity and specificity therefore reflect the kit's ability to detect the viral antigens in specimens when used according to its instructions; no study design compared human readers using this device against readers using another method or AI assistance.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc)
The ground truth used for the clinical performance studies was based on a composite comparator method combining:
- FDA-cleared predicate DFA devices
- Viral culture (for RSV)
- Validated real-time RT-PCR with bi-directional sequencing analysis (for hMPV)
This is a form of reference standard composite, aiming for a highly accurate and confirmed diagnosis of the presence or absence of the target viruses.
8. The Sample Size for the Training Set
The document does not explicitly describe a separate "training set" or "validation set" in the context of machine learning. This is a traditional in vitro diagnostic device, not an AI/ML-based device.
All the described studies (reproducibility, LoD, analytical reactivity, clinical performance) contribute to the overall evidence for the device. The 1519 clinical specimens (fresh nasal/nasopharyngeal wash/aspirate and swab specimens) can be considered the test set for evaluating clinical performance.
9. How the Ground Truth for the Training Set Was Established
As noted above, there is no explicit "training set" mentioned in the context of machine learning. The studies assess the performance of the pre-developed D3 FastPoint L-DFA RSV/MPV Identification Kit using the specified ground truth methods.
(33 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. device, D3 FastPoint L-DFA Influenza A/Influenza B Virus Identification Kit is intended for the qualitative identification of influenza A virus and influenza B virus in nasal and nasopharyngeal swabs and aspirates/washes specimens from patients with signs and symptoms of respiratory infection by direct detection of immunofluorescence using monoclonal antibodies (MAbs).
It is recommended that specimens found to be negative for influenza A or influenza B virus after examination of the direct specimen result be confirmed by cell culture. Negative results do not preclude influenza A or influenza B virus infection and should not be used as the sole basis for diagnosis, treatment or other management decisions.
Performance characteristics for influenza A virus detection and identification were established when influenza A (H3N2) and influenza A (H1N1) were the predominant influenza A strains circulating in the United States. Since influenza strains display antigenic drift and shift from year to year, performance characteristics may vary. If infection with a novel influenza A virus is suspected, based on clinical and epidemiological screening criteria communicated by public health authorities, collect specimens following appropriate infection control precautions and submit to state or local health departments for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
The D3 FastPoint L-DFA Influenza A/Influenza B Virus Identification Kit (D3 FastPoint A/B Kit) uses a blend (called a "L-DFA Reagent") of viral antigen-specific murine monoclonal antibodies that are directly labeled with either R-PE (influenza A virus) or fluorescein (influenza B virus) for the rapid identification of influenza A virus and influenza B virus in nasal and nasopharyngeal swabs and aspirates/washes specimens from patients with signs and symptoms of respiratory infection.
The cells to be tested are derived from respiratory specimens from patients with signs and symptoms of respiratory infection. The cells are permeabilized and stained concurrently in a liquid suspension format with the L-DFA reagent. After incubating at 35℃ to 37℃ for 5 minutes, the stained cell suspensions are rinsed with 1X PBS. The rinsed cells are pelleted by centrifugation and then re-suspended with the re-suspension buffer and loaded onto a specimen slide well. The cells are examined using a fluorescence microscope. Cells infected with influenza A virus will exhibit golden-yellow fluorescence due to the PE. Cells infected with influenza B virus will exhibit apple-green fluorescence due to the FITC. Non-infected cells will exhibit red fluorescence due to the Evans Blue counter-stain. Nuclei of intact cells will exhibit orange-red fluorescence due to the propidium iodide.
Here's a breakdown of the acceptance criteria and the study details for the D³ FastPoint L-DFA Influenza A/Influenza B Virus Identification Kit:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" in a quantitative format for clinical performance beyond the presented sensitivity and specificity. However, based on the clinical trial results, these would be the implied performance metrics.
Performance Metric | Target/Acceptance Criteria (Implied) | Reported Device Performance (Influenza A - Wash/Aspirate) | Reported Device Performance (Influenza A - Swab) | Reported Device Performance (Influenza B - Wash/Aspirate) | Reported Device Performance (Influenza B - Swab) |
---|---|---|---|---|---|
Clinical Sensitivity | High (e.g., >80%) | 84.8% (95% CI: 73.9-92.5%) | 87.7% (95% CI: 77.2-94.5%) | 81.8% (95% CI: 48.2-97.7%) | 87.9% (95% CI: 83.7-92.1%) |
Clinical Specificity | High (e.g., >95%) | 99.5% (95% CI: 98.5-99.9%) | 99.8% (95% CI: 99.1-100%) | 100.0% (95% CI: 99.4-100%) | 99.8% (95% CI: 98.8-100%) |
Analytical Performance (Reproducibility):
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Flu A detection | 100% agreement expected | 100% (120/120) |
Flu B detection | 100% agreement expected | 100% (120/120) |
Negative (no infected cells) | High agreement (e.g. >95%) | 95% (38/40) |
Total % Agreement | High (e.g. >95%) | 99.3% (278/280) |
Analytical Performance (Limit of Detection - LoD):
Virus Strain | Acceptance Criteria (Defined as lowest dilution with >= 9/10 replicates detected) | Reported Device Performance (LOD) |
---|---|---|
Influenza A (Victoria) | Detection in >= 9/10 replicates | 50 infected cells/mL |
Influenza B (Taiwan) | Detection in >= 9/10 replicates | 50 infected cells/mL |
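The LoD rule stated in the table (the lowest dilution at which at least 9 of 10 replicates are detected) can be expressed directly. The dilution series below is hypothetical, for illustration only, and is not the actual study data:

```python
def limit_of_detection(dilution_hits, required=9):
    """Return the lowest concentration at which at least `required`
    replicates were detected, or None if no dilution qualifies.
    `dilution_hits` maps concentration (infected cells/mL) to the
    number of positive replicates out of 10."""
    qualifying = [conc for conc, hits in dilution_hits.items() if hits >= required]
    return min(qualifying) if qualifying else None

# Hypothetical dilution series: detection falls off below 50 cells/mL
series = {1000: 10, 500: 10, 100: 10, 50: 10, 25: 8, 12.5: 4}
print(limit_of_detection(series))  # 50
```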
Analytical Reactivity (Inclusivity):
The document states "MAbs are reactive with all listed strains" for both Influenza A and B. The acceptance criterion is implicitly 100% reactivity with the tested strains. The reported performance confirms this by showing detection of a specified number of fluorescent cells for all tested strains at 20x LoD.
2. Sample Size Used for the Test Set and Data Provenance:
- Clinical Test Set Sample Size:
- Total Specimens Evaluated: 1519
- Nasal/Nasopharyngeal Wash/Aspirate (Influenza A): 637 specimens
- Nasal/Nasopharyngeal Wash/Aspirate (Influenza B): 628 specimens
- Nasal/Nasopharyngeal Swab (Influenza A): 690 specimens
- Nasal/Nasopharyngeal Swab (Influenza B): 711 specimens (Note: Summing these individual specimen counts for A and B across different sample types is not appropriate to get a single "total," as the same specimen might be tested for both A and B, and different subsets might be used for different sample types or sites.)
- Data Provenance: Prospective collection from symptomatic individuals suspected of respiratory infection in 4 geographically diverse U.S. clinical laboratories during the 2009 respiratory virus season (January 2009 - March 2009). The specimens were "excess, remnants" that would have otherwise been discarded, implying real-world clinical samples.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
The document describes the ground truth as a "predetermined algorithm that used composite comparator methods" involving an FDA-cleared Direct Specimen Fluorescent Antibody (DSFA) test and viral culture confirmation. Beyond the trained laboratory personnel required to perform and interpret the comparator tests, no human experts are described as establishing the ground truth, so the number and qualifications of such experts are not provided in this document.
4. Adjudication Method for the Test Set:
The ground truth was established using a "composite comparator method":
- "True" positive: Any sample that tested positive by either the comparator DSFA test OR viral culture.
- "True" negative: Any sample that tested negative by BOTH the comparator DSFA test AND viral culture.
This is a form of adjudication by algorithm/composite reference standard rather than human consensus.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done:
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study of human readers with vs. without AI assistance was not done. The study compared the device (D3 FastPoint A/B Kit) directly against a composite reference standard, not against human readers. This device is a diagnostic kit, not an AI or imaging device that would typically involve human reader studies for interpretation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:
Yes, the performance presented for the D3 FastPoint L-DFA Influenza A/Influenza B Virus Identification Kit is a standalone performance of the device (kit) itself, as read and interpreted by laboratory personnel according to its instructions. It is not an "algorithm" in the modern AI sense, but a lab kit where the reported accuracy reflects the kit's ability to correctly identify the viruses based on fluorescence.
7. The Type of Ground Truth Used:
The ground truth used was a composite reference standard consisting of:
- An FDA-cleared Direct Specimen Fluorescent Antibody (DSFA) test.
- Viral culture confirmation for all negatives from the comparator DSFA test.
8. The Sample Size for the Training Set:
The document does not explicitly mention a separate "training set" for the D3 FastPoint L-DFA Influenza A/Influenza B Virus Identification Kit in the context of machine learning or algorithm development. This device is a traditional in-vitro diagnostic kit. Its development would involve analytical studies and optimization (e.g., antibody selection, reagent formulation) rather than a separate "training set" in the AI sense.
However, the "Analytical Reactivity (inclusivity)" study, which tested the reactivity against 13 influenza A strains and 7 influenza B strains, could be considered part of the analytical development and validation process to ensure the kit's broad detection capabilities for known strains.
9. How the Ground Truth for the Training Set Was Established:
As there is no explicitly defined "training set" in the AI sense for this traditional diagnostic kit, the concept of establishing ground truth for a training set is not applicable here. The analytical studies (like inclusivity) involved preparing known infected cell suspensions with specified viral strains, where the "ground truth" was inherent in the preparation of these controlled samples.
(142 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. device, D3 FastPoint L-DFA Respiratory Virus Identification Kit is intended for the qualitative identification of influenza A virus, influenza B virus, respiratory syncytial virus, human metapneumovirus, adenovirus and to screen for the presence of parainfluenza virus types 1, 2, and 3 in nasal and nasopharyngeal swabs and aspirates/washes specimens from patients with signs and symptoms of respiratory infection by direct detection of immunofluorescence using monoclonal antibodies (MAbs).
It is recommended that specimens found to be negative for influenza A virus, influenza B virus, respiratory syncytial virus, adenovirus or parainfluenza viruses after examination of the direct specimen result be confirmed by cell culture. Specimens found to be negative for human metapneumovirus after examination of the direct specimen results should be confirmed by an FDA cleared human metapneumovirus molecular assay. Negative results do not preclude respiratory virus infection and should not be used as the sole basis for diagnosis, treatment or other management decisions.
Performance characteristics for influenza A virus detection and identification were established when influenza A (H3N2) and influenza A (H1N1) were the predominant influenza A strains circulating in the United States. Since influenza strains display antigenic drift and shift from year to year, performance characteristics may vary. If infection with a novel influenza A virus is suspected, based on clinical and epidemiological screening criteria communicated by public health authorities, collect specimens following appropriate infection control precautions and submit to state or local health departments, for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
The D3 FastPoint L-DFA Respiratory Virus Identification Kit uses three blends (each called a "L-DFA Reagent") of viral antigen-specific murine monoclonal antibodies that are directly labeled with either R-PE (influenza A virus, respiratory syncytial virus, and parainfluenza virus) or fluorescein (influenza B virus, metapneumovirus, and adenovirus) for the rapid identification of respiratory viruses in nasal and nasopharyngeal swabs and aspirates from patients with signs and symptoms of respiratory infection.
Kit Components:
- D3 FastPoint L-DFA Influenza A/Influenza B Reagent, 4.0-mL. One dropper bottle containing a mixture of PE-labeled murine monoclonal antibodies directed against influenza A virus antigens and FITC-labeled murine monoclonal antibodies directed against influenza B virus antigens. The buffered, stabilized, aqueous solution contains Evans Blue and propidium iodide as counter-stains and 0.1% sodium azide as preservative.
- D3 FastPoint L-DFA RSV/MPV Reagent, 4.0-mL. One dropper bottle containing a mixture of PE-labeled murine monoclonal antibodies directed against respiratory syncytial virus antigens and FITC-labeled murine monoclonal antibodies directed against metapneumovirus antigens. The buffered, stabilized, aqueous solution contains Evans Blue and propidium iodide as counter-stains and 0.1% sodium azide as preservative.
- D3 FastPoint L-DFA PIV/Adenovirus Reagent, 4.0-mL. One dropper bottle containing a mixture of PE-labeled murine monoclonal antibodies directed against parainfluenza virus types 1, 2, or 3 antigens and FITC-labeled murine monoclonal antibodies directed against adenovirus antigens. The buffered, stabilized, aqueous solution contains Evans Blue and propidium iodide as counter-stains and 0.1% sodium azide as preservative.
- 40X PBS Concentrate, 25-mL. One bottle of 40X PBS concentrate containing 4% sodium azide (0.1% sodium azide after dilution to 1X using de-mineralized water).
- Re-suspension Buffer, 6.0-mL. One bottle of a buffered glycerol solution and 0.1% sodium azide.
- D3 FastPoint L-DFA Respiratory Virus Antigen Control Slides, 5-slides. Five individually packaged control slides containing 6 wells with cell culture-derived positive and negative control cells. Each positive well is identified as to the virus infected cells present, i.e., influenza A virus, influenza B virus, respiratory syncytial virus, metapneumovirus, parainfluenza virus, and adenovirus. The negative wells contain noninfected cells. Each slide is intended to be stained only one time.
The cells to be tested are derived from respiratory specimens from patients with signs and symptoms of respiratory infection. The cells are permeabilized and stained concurrently in a liquid suspension format in 3 separate vials, each containing one of the 3 above reagents. After incubating at 35℃ to 37℃ for 5 minutes, the stained cell suspensions are rinsed with 1X PBS. The rinsed cells are pelleted by centrifugation and then re-suspended with the re-suspension buffer and loaded onto a specimen slide well. The cells are examined using a fluorescence microscope. Cells infected with influenza A virus, respiratory syncytial virus, or parainfluenza virus types 1, 2 and 3 will exhibit golden-yellow fluorescence due to the PE. Cells infected with influenza B virus, metapneumovirus or adenovirus will exhibit apple-green fluorescence due to the FITC. Non-infected cells will exhibit red fluorescence due to the Evans Blue counter-stain. Nuclei of intact cells will exhibit orange-red fluorescence due to the propidium iodide.
Here's a summary of the acceptance criteria and study details for the D3 FastPoint L-DFA Respiratory Virus Identification Kit, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria (e.g., "Sensitivity must be >X%"). Instead, it presents the results of reproducibility and clinical performance studies, implying that these results met the necessary standards for clearance. The "Reported Device Performance" column reflects the results from the clinical sensitivity and specificity studies presented in Tables 5.10-5.15 (for NP wash/aspirate) and 5.16-5.21 (for NP swab). For analytical performance, the reproducibility and LOD results are provided.
Criterion Type | Acceptance Criteria (Implicit) | Reported Device Performance (Summary) |
---|---|---|
Reproducibility | Consistent detection of viral antigens across different sites and runs. | Influenza A/B Reagent: total agreement 99.3% (278/280); RSV/hMPV Reagent: 100% (280/280); HPIV/Adenovirus Reagent: 100% (280/280) |
Limit of Detection (LoD) | Detection of specific viral strains at low concentrations. | Flu A: 50; Flu B: 50; RSV: 100; hMPV A1: 100; Adenovirus: 100; HPIV-1: 100; HPIV-2: 25; HPIV-3: 50 (all in infected cells/mL) |
Analytical Reactivity (Inclusivity) | Detection of various strains of targeted viruses. | Detected all tested strains of Influenza A (13), Influenza B (7), RSV (3), hMPV (4), HPIV (3), and Adenovirus (10). |
Clinical Performance (NP Wash/Aspirate) | Acceptable sensitivity and specificity compared to comparator methods. | Flu A: sensitivity 84.8%, specificity 99.5%; Flu B: 81.8%, 100.0%; RSV: 98.6%, 99.8%; Adenovirus: 92.3%, 100.0%; HPIV: 92.0%, 99.3%; hMPV: 68.8%, 100.0% |
Clinical Performance (NP Swab) | Acceptable sensitivity and specificity compared to comparator methods. | Flu A: sensitivity 87.7%, specificity 99.8%; Flu B: 87.9%, 99.8%; RSV: 97.5%, 100.0%; Adenovirus: 100.0%, 100.0% (low prevalence, caution advised); HPIV: 92.9%, 100.0%; hMPV: 54.5%, 100.0% |
2. Sample Size Used for the Test Set and Data Provenance
-
Clinical Test Set:
- Total Specimens Evaluated: 1519
- Provenance: Prospectively collected excess remnants of respiratory specimens from symptomatic individuals suspected of respiratory infection.
- Country of Origin: 4 geographically diverse U.S. clinical laboratories.
- Retrospective or Prospective: Prospective studies (January 2009 - March 2009).
- Specific Breakdown for Clinical Performance Tables:
- NP Wash/Aspirate (Sites 1, 2, and 3 combined): Number of specimens varies per virus (e.g., 637 for Flu A, 694 for hMPV).
- NP Swab (Sites 3 and 4 combined): Number of specimens varies per virus (e.g., 689 for Flu A, 675 for hMPV).
-
Analytical Test Set (Reproducibility):
- Sample Size: 5 randomized panel members for each of the 3 panels (Influenza A/B, RSV/hMPV, HPIV/Adenovirus). Each panel was tested daily in two separate runs for 5 days by four different laboratories (40 total runs per virus group). This means 40 replicates for each panel member for a given virus.
- Provenance: Proficiency-level antigen control slides with infected cells.
-
Analytical Test Set (Limit of Detection):
- Sample Size: 10 replicate microscope slides for each dilution level of 8 characterized respiratory virus isolates.
- Provenance: Dilution series of infected model cells.
-
Analytical Test Set (Analytical Reactivity/Inclusivity):
- Sample Size: Low concentration infected cell suspensions (approximately 4% cells infected, 25-50 infected cells) for various viral strains for each of the 3 reagents.
- Provenance: Culture isolates of various influenza A (13 strains), influenza B (7 strains), RSV (3 strains), hMPV (4 strains), HPIV (3 strains), and Adenovirus (10 strains).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number or qualifications of experts used to establish the ground truth for the clinical test set. Instead, it defines "True" positive and "True" negative based on a composite comparator method:
- For Influenza A, Influenza B, RSV, Parainfluenza, and Adenovirus: Direct Specimen Fluorescent Antibody (DSFA) test with an FDA cleared device, and viral culture confirmation of all negatives (as determined by the comparator DSFA test).
- For Human Metapneumovirus (hMPV): DSFA with an FDA cleared device, and confirmation of all negative specimens (as determined by the comparator DSFA test) using a validated hMPV real-time RT-PCR followed by bi-directional sequencing analysis.
4. Adjudication Method for the Test Set
The adjudication method used for establishing the ground truth for the clinical test set was a composite comparator method. This means results from multiple established methods (DSFA, viral culture, and for hMPV, RT-PCR with sequencing) were combined to determine the "true" status of a specimen. There is no mention of a specific expert panel adjudication method like "2+1" or "3+1" for interpreting these comparator results in case of discrepancies; rather, the definition of "true" positive/negative indicates the hierarchy of methods (e.g., positive by DSFA or viral culture is "true" positive).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study was done. This device is a diagnostic kit read by a human using a fluorescence microscope, but the study focuses on the kit's performance against comparator methods, not on human reader improvement with or without AI assistance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
This is not applicable as the D3 FastPoint L-DFA Respiratory Virus Identification Kit is a diagnostic kit that relies on a human reading the results under a fluorescence microscope. It is not an AI algorithm. The performance presented is of the kit as used by a human.
7. The Type of Ground Truth Used
The ground truth used for the clinical test set was a composite comparator method, which included:
- FDA cleared Direct Specimen Fluorescent Antibody (DSFA) tests
- Viral Culture
- Validated hMPV real-time RT-PCR followed by bi-directional sequencing analysis (for hMPV only)
- NCBI GenBank database matching with acceptable E-values for bi-directional sequencing data.
8. The Sample Size for the Training Set
The document does not explicitly mention a "training set" in the context of machine learning or AI development. This device is a diagnostic reagent kit, not an AI algorithm. The studies conducted are for analytical and clinical validation of the kit itself.
9. How the Ground Truth for the Training Set was Established
As there is no "training set" in the context of this diagnostic kit, this question is not applicable. The ground truth for the comparator methods was established using established laboratory techniques and FDA-cleared devices, as described in point 7.
(30 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. D3 Ultra DFA (direct fluorescent antibody) Respiratory Virus Screening & ID Kit is intended for the qualitative detection and identification of Influenza A, Influenza B, Respiratory Syncytial Virus (RSV), Adenovirus, Parainfluenza 1, Parainfluenza 2 and Parainfluenza 3 viruses in respiratory specimens, by either direct detection or cell culture method, by immunofluorescence using fluoresceinated monoclonal antibodies (MAbs). It is recommended that specimens found to be negative after examination of the direct specimen result be confirmed by cell culture. Negative results do not preclude respiratory virus infection and should not be used as the sole basis for diagnosis, treatment or other management decisions.
- Performance characteristics for influenza A were established when influenza A/H3 and A/H1 were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.
- If infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local health departments for testing. Viral culture should not be attempted in these cases unless a BSL3+ facility is available to receive and culture specimens.
The Diagnostic Hybrids, Inc. D3 Ultra DFA RESPIRATORY VIRUS SCREENING & ID KIT uses viral antigen-specific murine monoclonal antibodies that are directly labeled with fluorescein for the rapid detection and identification of respiratory viruses. The kit includes a DFA Screening Reagent that contains a blend of murine monoclonal antibodies (MAbs) directed against seven respiratory viruses (Influenza A, Influenza B, Respiratory Syncytial Virus, Adenovirus, Parainfluenza 1, Parainfluenza 2, and Parainfluenza 3) plus seven separate DFA Reagents, each consisting of MAb blends directed against a single respiratory virus. The kit can be used for direct specimen or cell culture screening and final virus identification. The cells to be tested, either derived from a clinical specimen or cell culture, are fixed in acetone. The DFA Screening Reagent is added to the cells to determine the presence of viral antigens. After incubating at 35℃ to 37℃, the stained cells are rinsed with the diluted Wash Solution. A drop of the supplied Mounting Fluid is added and a coverslip is placed on the prepared cells. The cells are examined using a fluorescence microscope. Virus infected cells will be stained with viral specific apple-green fluorescence when stained with the DFA Screening Reagent while uninfected cells will contain no fluorescence but will be stained red by the Evan's Blue counter-stain. If the specimen contains fluorescent cells, the particular virus is identified using the separate DFA Reagents on new, separate cell preparations.
Here's an analysis of the provided text regarding the acceptance criteria and supporting study for the D3 Ultra DFA Respiratory Virus Screening & ID Kit:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are implied by comparison to a legally marketed "Predicate" device (D3 Ultra DFA Respiratory Virus Screening & ID Kit, K061101). The study focuses on demonstrating high agreement between the modified device ("Subject" test) and the predicate. The provided text states no explicit numerical pass/fail thresholds against a gold standard; performance is instead reported as high concordance rates with the predicate.
Performance Metric | Acceptance Criteria (Implied by Predicate Comparison) | Reported Device Performance (Subject Test)
---|---|---
Direct Specimen (DS) Method, Fresh Specimens (PPA) | High Positive Percent Agreement (PPA) with Predicate | 95.5% PPA (95% CI: 89.0-98.2%)
Direct Specimen (DS) Method, Fresh Specimens (NPA) | High Negative Percent Agreement (NPA) with Predicate | 98.3% NPA (95% CI: 95.7-99.3%)
Individual Virus PPA (DS Fresh) | High agreement with Predicate | Adenovirus: 100%; Influenza A: 100%; Influenza B: 100%; Parainfluenza 1: 100%; Parainfluenza 2: --- (0 cases); Parainfluenza 3: 100%; RSV: 100%
Individual Virus NPA (DS Fresh) | High agreement with Predicate | Adenovirus: 100%; Influenza A: 100%; Influenza B: 98.7%; Parainfluenza 1: 100%; Parainfluenza 2: 100%; Parainfluenza 3: 96.7%; RSV: 100%
Direct Specimen (DS) Method, Frozen Specimens (PPA) | 100% PPA with Predicate (archived specimens) | 100% PPA (95% CI: 97.8-100%)
Direct Specimen (DS) Method, Frozen Specimens (NPA) | 100% NPA with Predicate (archived specimens) | 100% NPA (95% CI: 98.8-100%)
Cell Culture (CC) Method, Frozen Specimens (PPA) | 100% PPA with Predicate (for R-Mix™ Too FreshCells™ in shell vials) | 100% PPA (95% CI: 98.0-100%)
Cell Culture (CC) Method, Frozen Specimens (NPA) | 100% NPA with Predicate | 100% NPA (95% CI: 98.5-100%)
Cross-reactivity | No cross-reactivity with specified viruses/bacteria/cells | No cross-reactivity observed for 64 virus strains, 18 host cell types, and 17/18 bacterial cultures (S. aureus showed non-specific staining as small points of fluorescence)
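The PPA/NPA values above are simple proportions of calls concordant with the predicate, each reported with a two-sided 95% confidence interval. The submission does not state which interval method was used; as one common choice, a Wilson score interval can be sketched as follows (the counts are hypothetical, chosen only to illustrate an agreement near 95.5%):

```python
import math

def percent_agreement(concordant, total):
    """Point estimate of PPA or NPA: concordant calls / total comparable calls."""
    return concordant / total

def wilson_ci(successes, n, z=1.96):
    """Two-sided 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical example: 107 of 112 predicate-positive specimens
# also called positive by the subject device (~95.5% PPA).
ppa = percent_agreement(107, 112)
lo, hi = wilson_ci(107, 112)
```

For proportions near 1, the Wilson interval stays inside [0, 1], which is why reported intervals such as 89.0-98.2% are asymmetric around the point estimate.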
2. Sample Size Used for the Test Set and Data Provenance
- Study 1-DS (Direct Specimen Method - Fresh):
- Sample Size: 326 evaluated specimens (329 collected, 3 had insufficient cells).
- Data Provenance: Prospectively collected from February through May 2006, from a reference laboratory in the northeast United States.
- Study 2-DS (Direct Specimen Method - Frozen):
- Sample Size: 192 specimens.
- Data Provenance: Residual specimen material from December 2005 through February 2006, stored at -70°C, from a hospital laboratory in the northeast United States. Processed between Feb 13-17, 2006.
- Study 2-CC (Cell Culture Method - Frozen):
- Sample Size: 192 specimens.
- Data Provenance: Same as Study 2-DS.
- Study 3-DS (Direct Specimen Method - Frozen):
- Sample Size: 282 evaluated specimens (298 collected, 16 inadequate).
- Data Provenance: Residual specimen material from January through March 2006, stored at -70°C, from a hospital laboratory in the eastern US. Processed between May 30 - June 1, 2006, at DHI.
- Study 3-CC (Cell Culture Method - Frozen):
- Sample Size: 298 specimens.
- Data Provenance: Same as Study 3-DS.
- Study 3a-DS (Direct Specimen Method - Frozen, Non-prospective archival):
- Sample Size: 26 evaluated specimens (30 collected, 4 had insufficient cells).
- Data Provenance: Non-prospective archival specimens, previously determined to contain Parainfluenza (types 1, 2, or 3) from October 2005 through April 2006, stored at -70°C, from a hospital laboratory in Italy. Tested at an internal reference laboratory (DHI) between June 7-8, 2006.
- Study 3a-CC (Cell Culture Method - Frozen, Non-prospective archival):
- Sample Size: 29 specimens (30 collected, 1 unsuitable).
- Data Provenance: Same as Study 3a-DS.
- Study 3b-CC (Cell Culture Method - Frozen, Non-prospective archival clinical isolates):
- Sample Size: 81 clinical isolates.
- Data Provenance: Banked clinical isolates from a frozen archival repository known to contain respiratory viruses from the 2005/2006 respiratory season.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used or their specific qualifications (e.g., "radiologist with 10 years of experience"). However, the studies compare the "Subject" device's performance to a "Predicate" device. The results of the Predicate device were considered the reference for comparison, implying that the ground truth for the test set was established by the predicate device's results. It's likely that the predicate device's results themselves were established through expert interpretation of a DFA assay.
4. Adjudication Method for the Test Set
The document does not describe an explicit adjudication method (e.g., 2+1, 3+1). The performance is assessed by comparing the Subject device's results directly against the Predicate device's results. Any discrepancies were noted, for example: "With the exception of 4 specimens, the DS test results were concordant... the Predicate device identified 4 specimens as being negative while the Subject device identified one as Flu B and three as Para 3 infections. All but one of the Para 3 specimens were confirmed by culture." This suggests a follow-up investigation for discordant results, but not a formal adjudication panel.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human-AI Improvement
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. This study is focused on the performance of the device itself (DFA kit), not on how human readers (e.g., clinicians interpreting the results) improve with or without AI assistance. The D3 Ultra DFA assay is a laboratory diagnostic test.
6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
This refers to a traditional in vitro diagnostic device, not an AI-powered one. The performance reported is that of the assay itself, which is inherently "standalone" in its ability to detect viral antigens based on the immunofluorescence reaction. The "human-in-the-loop" aspect would be a trained technologist interpreting the fluorescent microscopy, which is part of the standard procedure for such assays. The study design implicitly describes the standalone performance of the assay kit as interpreted by laboratory personnel, not an AI algorithm.
7. Type of Ground Truth Used
The primary "ground truth" used for comparison in the performance studies was the results obtained from the legally marketed Predicate device (D3 Ultra DFA Respiratory Virus Screening & ID Kit, K061101). In cases of discrepancies or for certain non-prospective archival specimens, viral cell culture was used for confirmation (e.g., "All but one of the Para 3 specimens were confirmed by culture"). Cross-reactivity testing used known virus strains, bacterial cultures, and cell types.
8. Sample Size for the Training Set
The document does not explicitly mention a "training set" in the context of an AI/algorithm-driven device. This is a traditional IVD. The performance data presented are from various "test sets" as described above. The development of the monoclonal antibodies (MAbs) in the kit would have involved internal validation and optimization, but not in the sense of an algorithm training on a dataset.
9. How the Ground Truth for the Training Set was Established
Since there is no "training set" in the AI/algorithm sense, this question is not applicable. The device relies on specific fluoresceinated monoclonal antibodies. The ground truth for developing these antibodies would be established by isolating and characterizing known viral strains, then producing antibodies that specifically bind to antigens from those strains. The "cross-reactivity testing" (Table 15) provides insight into the specificity of these antibodies against a wide range of other organisms, which is crucial for the reliability of the "ground truth" the antibodies are built upon.
(73 days)
DIAGNOSTIC HYBRIDS, INC.
The ELVIS®HSV ID and D3 Typing Test System provides Cells, Replacement Medium and Test Reagents for the culture, qualitative identification and typing of Herpes simplex virus (HSV) from cutaneous or mucocutaneous specimens collected from patients with clinical suspicion of HSV infection. The performance characteristics of this assay have not been established for antiviral therapy, prenatal monitoring or CSF specimens.
The ELVIS®HSV ID and D3 Typing Test System provides Cells, Replacement Medium and Test Reagents for the culture, qualitative identification and typing of herpes simplex virus (HSV) from cutaneous or mucocutaneous specimens as an aid in the diagnosis of HSV type 1 (HSV-1) and HSV type 2 (HSV-2) infections. The performance characteristics of this assay have not been established for antiviral therapy, prenatal monitoring or use with cerebral spinal fluid specimens.
ELVIS HSV Cells are genetically engineered Baby Hamster Kidney (BHK) cells, which, when infected with either HSV-1 or HSV-2, are induced to generate and accumulate an endogenous, intracellular bacterial enzyme, β-galactosidase. Other related viruses (e.g., Varicella zoster) are not capable of inducing the formation of this enzyme. HSV infection of the ELVIS®HSV Cells also results in the formation of HSV-type-specific proteins. The presence of these proteins can be detected microscopically when fluorescent-labeled HSV-type-specific antibodies are used. The two Type 1 monoclonal antibodies used in ELVIS® are directed against specific epitopes on the HSV-1 protein UL42. The three Type 2 monoclonal antibodies are directed against the HSV-2 glycoproteins C and G and a recombinant glycoprotein G that occur in the cytoplasm of infected cells.
The ELVIS®HSV ID and D3 Typing Test System consists of:
- ELVIS®HSV Cells: The ELVIS®HSV Cells have a routine use period of 7 days from customer receipt, while all other components have a shelf life of months (see the expiration date on the label of each component). ELVIS® HSV Cells are provided as 75% to 95% confluent monolayers in shell-vials with or without coverslips, or in multi-well plates with or without coverslips, with up to 24 monolayers per plate. Each monolayer is covered by at least 0.75-mL of Eagle's Minimum Essential Medium (EMEM) with fetal bovine serum (FBS), penicillin, and streptomycin. Cells are characterized by isoenzyme analysis and have been tested and found free of Mycoplasma spp. and other adventitious organisms.
- ELVIS HSV Replacement Medium: Sterile EMEM containing FBS, Penicillin, Streptomycin and Amphotericin B. ELVIS®HSV Replacement Medium is for use with ELVIS®HSV Shell-Vials and Multi-well Plates.
- ELVIS® HSV Solution 1 (Cell Fixative): an aqueous acetone solution.
- ELVIS® HSV Solution 2T (Staining Buffer): A diluted solution of X-Gal (5-Bromo-4-Chloro-3-Indolyl-β-D-Galactopyranoside), N,N-Dimethylformamide, iron, sodium and magnesium salts, fluorescein-labeled HSV-2-specific murine MAbs (directed against HSV-2 glycoproteins C, G, and a recombinant glycoprotein G) and nonlabeled HSV-1-specific murine MAbs (specific for epitopes on the HSV-1 protein UL42), penicillin, streptomycin, bovine serum albumin and Evans Blue in an aqueous, buffered solution.
- ELVIS®HSV Solution 3: An aqueous, stabilized, buffered solution containing fluorescein-labeled, affinity purified goat-anti-mouse IgG antibody and Evans Blue with sodium azide as preservative.
- ELVIS®HSV Mounting Fluid (Buffered Glycerol): Aqueous, stabilized, buffered glycerol (pH 7.3 +/- 0.5), containing sodium azide as preservative.
- 40X PBS Concentrate, 25-mL: One bottle of a 40X PBS concentrate consisting of 4% sodium azide (0.1% sodium azide after dilution to 1X using de-mineralized water).
Here's a breakdown of the acceptance criteria and the study details for the ELVIS®HSV ID & D3 Typing Test System, based on the provided 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria/Metric | Reported Device Performance (Subject Device)
---|---|---
Analytical Sensitivity (Limit of Detection) | Lowest inoculum level at which positive wells (blue-staining cells) are observed | HSV-1: averages between 0.65 TCID50 and 8.5 TCID50, depending on the strain; HSV-2: averages between 0.1 TCID50 and 8.0 TCID50, depending on the strain
Cross-Reactivity (Specificity) | No reactivity (negative result) when tested against a panel of common viruses and bacteria | All tested Adenovirus, Influenza A, Influenza B, RSV, Parainfluenza, CMV, Varicella-zoster, Echovirus 7, Coxsackievirus A9, Coxsackievirus B2, Enterovirus 71, and most bacterial/yeast strains showed negative reactivity. Note: Staphylococcus aureus showed light background fluorescent staining due to protein A binding, but this was distinguishable from viral antigen binding
Reproducibility (Presence of HSV) | 100% detection of HSV in infected wells across multiple sites and runs | 100% (120/120) of wells with infected cells reported presence of HSV
Reproducibility (Typing Accuracy, HSV-1) | 100% correct typing of HSV-1 in infected wells across multiple sites and runs | 100% (60/60) reported expected type for HSV-1
Reproducibility (Typing Accuracy, HSV-2) | 100% correct typing of HSV-2 in infected wells across multiple sites and runs | 100% (60/60) reported expected type for HSV-2
Reproducibility (Negative Control) | 100% absence of HSV reported in negative control vials across multiple sites and runs | 100% (30/30) reported absence of HSV in vials with no virus
Clinical Performance (PPA) for HSV Isolation | High PPA compared to predicate device | 99.6% (250/251), 95% CI: 97.8-100%
Clinical Performance (NPA) for HSV Isolation | High NPA compared to predicate device | 98.9% (463/468), 95% CI: 97.5-99.7%
Clinical Performance (PPA) for HSV-2 Typing | High PPA compared to predicate device | 99.3% (145/146), 95% CI: 96.2-100%
Clinical Performance (NPA) for HSV-2 Typing | High NPA compared to predicate device | 94.2% (98/104), 95% CI: 87.9-97.9%
Clinical Performance (PPA) for HSV-1 Typing | High PPA compared to predicate device | 100% (32/32), 95% CI: 96.0-100%
Clinical Performance (NPA) for HSV-1 Typing | High NPA compared to predicate device | 87.5% (7/8), 95% CI: 47.3-99.7%
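Several rows above report 100% agreement with a lower 95% confidence bound below 100%. When all n of n paired results agree, the exact (Clopper-Pearson) two-sided interval has a closed-form lower limit. A minimal sketch, assuming that exact method (the summary does not state which interval was actually computed):

```python
def perfect_agreement_lower_bound(n, conf=0.95):
    """Lower limit of the exact (Clopper-Pearson) two-sided CI when all
    n paired results agree; solves p**n = alpha/2 for p."""
    alpha = 1.0 - conf
    return (alpha / 2.0) ** (1.0 / n)

# The bound tightens toward 1 as the panel grows:
for n in (32, 100, 250):
    print(n, round(perfect_agreement_lower_bound(n), 3))
```

Under this method, roughly 165 concordant pairs are needed before the exact lower limit of a 100% agreement reaches about 97.8%.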
Study Details
This device is not an AI/ML device, so certain categories below (like multi-reader multi-case studies, human-in-the-loop performance, and training set details) are not applicable.
- Sample size used for the test set and the data provenance:
- Clinical Test Set: 735 specimens initially, with 719 specimens included in the final analysis after exclusions (16 were excluded due to toxicity to cell culture or contamination).
- Data Provenance: The origin of the data (country) is not explicitly stated. The study was conducted at "three locations" and specimens were "submitted, April through May, 2009, for HSV culture." This indicates a prospective collection of clinical samples during that period.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the clinical performance study, the ground truth was established by comparing the subject device's results against the legally marketed predicate device (ELVIS®HSV ID/Typing Test System), which itself is an in vitro diagnostic (IVD) device. Therefore, it relies on the established performance of that predicate device as the reference standard, rather than a panel of human experts establishing ground truth for each case. No specific number or qualifications of human experts establishing ground truth are mentioned for the clinical study.
- Adjudication method for the test set:
- Not applicable as the comparison was made against a predicate device, not through expert adjudication of images.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This device is an in vitro diagnostic (IVD) test system for virus identification and typing, not an AI or imaging device involving human readers.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop performance) assessment was done:
- This is an IVD test system, which inherently operates without a human-in-the-loop for result generation, but requires human interpretation of microscopic staining. The performance described (e.g., PPA, NPA) represents the standalone performance of the device itself (cells + reagents + staining procedure + microscopic observation) compared to the predicate device. It's not an "algorithm" in the AI sense.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the clinical study, the ground truth was the results obtained from the legally marketed predicate device (ELVIS®HSV ID/Typing Test System).
- For the analytical and reproducibility studies, the ground truth was based on known virus strains and concentrations (TCID50) and confirmed uninfected samples.
- The sample size for the training set:
- Not applicable. This is not an AI/ML device that requires a training set.
- How the ground truth for the training set was established:
- Not applicable. This is not an AI/ML device that requires a training set.
(185 days)
DIAGNOSTIC HYBRIDS, INC.
The Thyretain ™ TSI Reporter BioAssay is intended for the qualitative detection in serum of thyroid stimulating autoantibodies to the thyroid stimulating hormone (TSH) receptors (TSHRs) on the thyroid. The detection of these stimulating autoantibodies, in conjunction with other clinical and laboratory findings, may be useful as an aid in the differential diagnosis of patients with Graves' disease (GD).
The Thyretain™ TSI Reporter BioAssay (TSI Reporter) utilizes a patented bioassay technology to detect thyroid stimulating immunoglobulin (TSI) in human serum. Genetically engineered Chinese hamster ovary (CHO) cells, expressing a chimeric form of the human thyroid stimulating hormone receptor (TSHR) and a cyclic adenosine monophosphate (cAMP)-induced luciferase reporter gene, are cryogenically preserved and provided in measured aliquots. The CHO Mc4 cell line has a nucleic acid sequence encoding a chimeric human TSH receptor, designed for reduced response to thyroid blocking immunoglobulin (TBI) activity. Thus, the hTSH receptor, comprising 730 amino acids, has amino acid residues 262 to 335 replaced by the equivalently located 73 amino acid residues of the rat Luteinizing Hormone receptor to form the chimeric TSHR. This chimeric receptor is linked to a firefly luciferase reporter gene in operable combination with a glycoprotein hormone α-subunit promoter. The cells are seeded and grown for 15 to 18 hours to confluent monolayers in a 96-well plate. Patient sera and the reference, positive, and normal controls are diluted with a proprietary reaction buffer (RB), added to the cell monolayers and allowed to react with the cells for 3 hours. During this induction period TSI, if present in the patient serum, bind to the chimeric human TSHR on the cell surface. This binding event induces a signaling cascade resulting in increased production of intracellular cAMP. This increased production of cAMP is evidenced by increased production of luciferase. At the conclusion of the 3-hour induction period the cells are lysed. Luciferase levels are then measured using a luminometer. A significant increase in luminescence over the Reference Control indicates the presence of TSI antibodies in the sample.
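The readout described above reduces to a ratio of luminometer readings: the specimen's signal relative to the Reference Control. A minimal sketch of that logic, with a purely illustrative cutoff (this summary does not give the assay's actual decision threshold, and the RLU values below are hypothetical):

```python
def srr_percent(specimen_rlu, reference_rlu):
    """Specimen-to-Reference Ratio (SRR%): the specimen's luciferase
    signal as a percentage of the Reference Control signal."""
    return 100.0 * specimen_rlu / reference_rlu

def classify_tsi(srr, cutoff=140.0):
    """Qualitative call: a significant increase over the Reference
    Control indicates TSI. The 140% cutoff is illustrative only."""
    return "TSI positive" if srr >= cutoff else "TSI negative"

# Hypothetical relative light unit (RLU) readings from a luminometer:
assert classify_tsi(srr_percent(31_500, 15_000)) == "TSI positive"  # 210% SRR
assert classify_tsi(srr_percent(13_200, 15_000)) == "TSI negative"  #  88% SRR
```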
Here's a summary of the acceptance criteria and the studies that prove the device meets them, based on the provided text:
Thyretain™ TSI Reporter BioAssay: Acceptance Criteria and Study Summary
The provided document describes the Thyretain™ TSI Reporter BioAssay as an in-vitro diagnostic device intended for the qualitative detection of thyroid stimulating autoantibodies (TSI) in human serum, to aid in the differential diagnosis of Graves' disease (GD). The performance was assessed through non-clinical and clinical studies, primarily comparing it to a legally marketed predicate device, the KRONUS TSH Receptor Antibody (TRAb) Coated Tube (CT) Assay Kit.
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" for clinical performance in a numerical format that would typically precede a study. Instead, it presents the results of comparative studies against a predicate device and clinical sensitivity/specificity. The implied acceptance is that the device demonstrates comparable performance to the predicate and acceptable clinical utility (sensitivity and specificity).
Non-Clinical Performance

Performance Measure | Acceptance Criteria (Implied by Comparison) | Reported Device Performance (Thyretain™ TSI Reporter BioAssay)
---|---|---
Limit of Detection (LoD) | Not explicitly stated as a numerical criterion; demonstrated according to CLSI EP17-A | 89.14% SRR
Interference | No observed interference (i.e., less than a defined threshold) from specified levels of bilirubin, hemoglobin, and lipids | No interference observed with bilirubin up to 36.6 mg/dL, hemoglobin up to 250 mg/dL, and lipids up to 1,168 mg/dL
Cross-reactivity (Glycoprotein Hormones) | No significant cross-reactivity (i.e., less than a defined threshold) with specified levels of other glycoprotein hormones | No cross-reactivity observed with luteinizing hormone up to 625 mIU/mL, human chorionic gonadotropin up to 40,625 mIU/mL, follicle stimulating hormone up to 2,000 mIU/mL, and thyroid stimulating hormone up to 0.35 mIU/mL
Cross-reactivity (Other Autoantibodies) | Most samples with other autoimmune diseases should test negative for TSI | 1 of 36 samples with autoimmune diseases (16 Hashimoto's, 10 RA, 10 SLE) tested positive (a Hashimoto's sample with TSH levels near the interference level); all other samples tested negative
Intra-Assay Precision (CV%) | Implied: low variability (typically ≤ 10-15% for immunoassays, depending on analyte level) | Average intra-plate (n=16) variation (CV%) was 4.7%
Inter-Assay Precision (CV%, Intra-Day) | Implied: low variability (typically ≤ 10-15% for immunoassays, depending on analyte level) | Average inter-assay CV% values (day one): High TSI 3.6%, Medium TSI 2.6%, Low TSI 4.2%, Reference Control 2.4%, TSI Positive Control 5.0%, Normal Control 5.0%; overall inter-assay variation within this day was 3.8%
Inter-Assay Precision (CV%, Inter-Day) | Implied: acceptable long-term variability | Overall average inter-assay variation across 20 days was 12% (individual sample types ranged from 7% to 16%)
Reproducibility (CV%) | Not explicitly stated | Samples A-D: overall CV% of 23.7%, 23.7%, 24.6%, and 17.9%, respectively; Samples E-G: overall CV% of 15.0%, 20.3%, and 20.5%, respectively
Reproducibility (Accuracy/Ratio) | "Expected Accuracy" of 100% for Samples A, B, C, E, F, and G, and 50% for Sample D | Samples A & B: 100% positive ratio; Sample C: 100% negative ratio; Sample D: 139/180 positive vs. the 50% target; Sample E: 100% positive ratio (60/60); Samples F & G: 100% negative ratio (60/60)

Clinical Performance

Performance Measure | Acceptance Criteria (Implied by Comparison) | Reported Device Performance (Thyretain™ TSI Reporter BioAssay)
---|---|---
Comparative Study vs. Predicate | "Comparable" to the predicate device (KRONUS TRAb); no specific numerical thresholds provided for PPA/NPA | Combined Sites 1-COH and 2-MN (n=299 valid specimens): PPA 93.8% (95% CI: 88.2% to 96.8%), NPA 89.5% (95% CI: 84.0% to 93.2%). Site 3-NC (n=231 valid specimens): PPA 74.6% (95% CI: 63.5% to 83.3%), NPA 97.5% (95% CI: 93.8% to 99.0%). Post-hoc analysis removing hypothyroid patients at Site 3-NC increased PPA to 81.5% (95% CI: 70.4% to 89.1%)
Clinical Sensitivity/Specificity | Not explicitly stated as a numerical criterion; demonstrated to provide useful diagnostic information | Study with 249 characterized specimens: clinical sensitivity 92.0% (46/50 Graves' disease specimens positive); clinical specificity 99.5% (198/199 specimens from other autoimmune diseases and healthy controls negative)
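The headline sensitivity and specificity figures reduce to simple fractions of the characterized specimen panel; a quick arithmetic check using the stated counts:

```python
def pct(numerator, denominator):
    """Proportion expressed as a percentage, rounded to one decimal place."""
    return round(100.0 * numerator / denominator, 1)

# Counts as reported: 46/50 Graves' disease specimens called positive,
# 198/199 non-GD specimens called negative.
sensitivity = pct(46, 50)    # 92.0
specificity = pct(198, 199)  # 99.5
```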
2. Sample Sizes and Data Provenance
- Non-Clinical Studies:
- Assay Cutoff:
- "Training set": 30 subjects with diagnosed Graves' disease and 44 normal subjects.
- "Testing-set" (pre-clinical verification): 50 GD positive sera, 140 normal sera.
- Precision (Intra-assay, Inter-assay, Inter-Day): Varies per test; for Inter-Day, n=120 for high, medium, low TSI serum, n=40 for normal serum, n=80 for controls.
- Reproducibility:
- Panel 1: 4 specimens tested at multiple sites (3 sites, with one having 2 technicians). Each site/technician performed testing twice a day over 8 days, leading to 180 total tests for each sample (3 sites x 2 tests/day x 8 days x 4 samples = 192, or 3 sites x 2 techs x 2 tests/day x 8 days = 96 for specific site/tech data, actual calculation seems to be 180 total for "positive ratio").
- Panel 2: 3 samples tested at 2 sites twice a day for 5 days, totaling 60 tests for each sample (2 sites x 2 tests/day x 5 days x 3 samples = 60).
- Clinical Performance Studies:
- Comparative Study:
- Combined Sites 1-COH and 2-MN: 312 specimens initially, 299 analyzed (1 excluded for insufficient quantity, 12 excluded due to indeterminate results on comparator).
- Site 3-NC: 247 specimens initially, 231 analyzed (16 excluded due to indeterminate results on comparator).
- Clinical Sensitivity and Specificity: 249 characterized specimens.
- Data Provenance: The document implies the data is retrospective/archived samples, as it refers to "patients with diagnosed Graves' disease," "normal subjects with no known or clinically diagnosed thyroid disease," and "sera obtained from physicians with diagnostic information." The multi-site nature (COH, MN, NC) suggests geographical diversity within the US. There is no explicit mention of the country of origin for all samples.
3. Number of Experts and Qualifications for Ground Truth
- Non-Clinical Studies (Assay Cutoff): Ground truth was established based on "diagnosed Graves' disease" and "normal subjects with no known or clinically diagnosed thyroid disease," implied to be clinical diagnosis by physicians. No specific number or qualifications of experts are provided.
- Clinical Performance Studies (Comparative Study, Clinical Sensitivity/Specificity):
- The "ground truth" for the comparative study was the results from the KRONUS TSH Receptor Antibody (TRAb) Coated Tube (CT) Assay Kit (the predicate device). This implies that the predicate device's results were accepted as the reference for comparison against the subject device.
- For the Clinical Sensitivity and Specificity study, specimens were "249 characterized specimens" categorized as "Graves Disease" and "Other autoimmune diseases and healthy controls." This categorization itself acts as the ground truth. The method of characterization (e.g., expert clinical diagnosis, pathology) is not explicitly detailed, but it is implied to be based on clinical diagnosis ("Diagnosis"). No specific number of experts or their qualifications are provided for establishing these fundamental diagnoses.
4. Adjudication Method
- The document does not describe an adjudication method for establishing ground truth using multiple experts.
- For the comparative study, discordant results between the subject device and the predicate device were analyzed, especially at Site 3-NC. The analysis focused on patient TSH results and ATA guidelines for hypothyroidism to explain the discrepancies, rather than an expert adjudication of the initial diagnosis. No multi-reader, observer, or expert consensus adjudication is described for the ground truth of the patient samples.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance is evaluated. The device is an in-vitro diagnostic assay read by a luminometer, not an AI interpreting images for human readers.
6. Standalone Performance
- Yes, the clinical performance studies (both the comparative study and the clinical sensitivity/specificity study) describe the standalone performance of the Thyretain™ TSI Reporter BioAssay. The device produced its own results (positive/negative) which were then compared to either the predicate device's results or defined clinical diagnoses.
7. Type of Ground Truth Used
- Non-Clinical (Assay Cutoff): Clinical diagnosis of Graves' disease or no known thyroid disease.
- Clinical Performance (Comparative Study): The results of the predicate device (KRONUS TSH Receptor Antibody (TRAb) Coated Tube (CT) Assay Kit) were used as the reference for comparison.
- Clinical Performance (Clinical Sensitivity and Specificity): Clinical diagnosis ("Graves Disease", "Other autoimmune diseases and healthy controls"). This would typically be established based on a combination of clinical signs/symptoms, other laboratory tests, and possibly imaging, by a physician.
8. Sample Size for the Training Set
- For establishing the preliminary assay cutoff:
- 30 subjects with diagnosed Graves' disease.
- 44 normal subjects with no known or clinically diagnosed thyroid disease.
- (An additional "testing-set" of 50 GD positive sera and 140 normal sera was used for verification).
9. How the Ground Truth for the Training Set Was Established
- The ground truth for the training set (for assay cutoff determination) was established through clinical diagnosis: "subjects with diagnosed Graves' disease" and "normal subjects with no known or clinically diagnosed thyroid disease." This implies traditional clinical assessment by physicians.
(53 days)
DIAGNOSTIC HYBRIDS, INC.
The Diagnostic Hybrids, Inc. device, D3 DFA Metapneumovirus Identification Kit, is intended for the qualitative detection and identification of human metapneumovirus (hMPV) in nasal and nasopharyngeal swabs and aspirates/washes or cell culture. The assay detects hMPV antigens by immunofluorescence, using a blend of three monoclonal antibodies (MAbs), in specimens from patients with signs and symptoms of acute respiratory infection. This assay detects but is not intended to differentiate the four recognized genetic sub-lineages of hMPV.
Negative results do not preclude hMPV infection and should not be used as the sole basis for diagnosis, treatment or other management decisions. It is recommended that specimens found to be negative after examination of the direct specimen results be confirmed by an FDA cleared hMPV molecular assay.
The D3 DFA Metapneumovirus Identification Kit uses a blend of three hMPV antigen-specific murine MAbs that are directly labeled with fluorescein for detection of hMPV. The reagent detects but does not differentiate between the four recognized subtypes of hMPV (subtypes A1, A2, B1, and B2).
Kit Components:
- Metapneumovirus DFA Reagent, 5-mL. One dropper bottle containing a blend (see below for MAb discussion) of fluorescein-labeled murine monoclonal antibodies directed against hMPV. The buffered, stabilized, aqueous solution contains Evans Blue as a counter-stain and 0.1% sodium azide as a preservative.
- hMPV Antigen Control Slides, 5 slides. Five individually packaged control slides, each with a well containing cell culture-derived hMPV-positive cells and a well containing cell culture-derived negative cells. Each slide is intended to be stained only one time. Control material has been treated to be non-infectious; however, normal laboratory precautions are required when the material is handled.
- 40X PBS Concentrate, 25-mL. One bottle containing a 40X concentrate consisting of 4% sodium azide (0.1% sodium azide after dilution to 1X using de-mineralized water) in PBS.
- Mounting Fluid, 7-mL. One dropper bottle containing an aqueous, buffer-stabilized solution of glycerol and 0.1% sodium azide.
The cells to be tested, derived from a clinical specimen or cell culture, are placed onto a glass slide, allowed to air dry, and fixed in acetone. The Metapneumovirus DFA Reagent is added to the cells, which are then incubated for 15 to 30 minutes at 35°C to 37°C in a humidified chamber or humidified incubator. The stained cells are then washed with the diluted phosphate buffered saline (PBS), a drop of the supplied Mounting Fluid is added, and a coverslip is placed on the prepared cells. The cells are examined using a fluorescence microscope. hMPV-infected cells will fluoresce apple-green. Uninfected cells will show no fluorescence but will be stained red by the Evans Blue counter-stain.
It is recommended that specimens found to contain no fluorescent cells after examination of the direct specimen be confirmed by an FDA cleared hMPV molecular assay.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Device Name: D3 DFA Metapneumovirus Identification Kit
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the performance results required for clearance. Because this is not an AI/ML device, the relevant performance metrics are clinical sensitivity and specificity, or Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA). Acceptance generally requires that these metrics meet adequate thresholds, with particular attention to the lower bound of the 95% Confidence Interval (CI).
For Direct Specimen Testing (Clinical Studies Sites 1-3):
| Metric | Acceptance Criteria (Implied) | Reported Device Performance (Site 1: Nasal Wash/Aspirate) | Reported Device Performance (Site 2: Nasal/Nasopharyngeal Swab) | Reported Device Performance (Site 3: Nasal Wash/Aspirate)* | Reported Device Performance (Site 3: Nasal/Nasopharyngeal Swab)* |
|---|---|---|---|---|---|
| Sensitivity | Sufficiently high (e.g., >80% or >90%, with acceptable CI) | 53.0% (95% CI: 46.6%-59.5%) | 70.7% (95% CI: 57.3%-81.9%) | PPA: 100.0% (95% CI: 66.4%-100%) | PPA: 75.0% (95% CI: 19.4%-99.4%) |
| Specificity | Sufficiently high (e.g., >90% or >95%, with acceptable CI) | 99.8% (95% CI: 99.3%-99.9%) | 99.7% (95% CI: 98.2%-100%) | NPA: 100.0% (95% CI: 85.2%-100%) | NPA: 100.0% (95% CI: 94.2%-100%) |
*Note: For Study Site 3, Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) were reported instead of sensitivity and specificity due to the comparator method used.
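For context, intervals like those reported above can be reproduced from the underlying two-by-two counts. The sketch below uses the Wilson score interval; the submission does not state which interval method was used, and the counts here are illustrative values chosen only to approximate the Site 1 sensitivity, not the actual study counts.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z=1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Illustrative counts (NOT from the submission): 70 detected positives
# among 132 reference-positive specimens gives a sensitivity near 53%.
sens = 70 / 132
lo, hi = wilson_ci(70, 132)
print(f"sensitivity {sens:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

With real counts the interval bounds would differ slightly from the table if the original analysis used a different method (e.g., the exact Clopper-Pearson interval).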
For Cultured Cells Testing (Clinical Study Site 4):
| Metric | Acceptance Criteria (Implied) | Reported Device Performance (Site 4: Freeze-thawed Nasopharyngeal Swab Amplified in Cell Culture)* |
|---|---|---|
| Sensitivity | Sufficiently high | PPA: 83.3% (95% CI: 35.9%-99.6%) |
| Specificity | Sufficiently high | NPA: 100.0% (95% CI: 99.7%-100%) |
For Reproducibility (Analytical Performance):
| Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Agreement with Expected Result (Positive) | 100% agreement | 100% (120/120) |
| Agreement with Expected Result (Negative) | 100% agreement | 100% (90/90) |
| Total Percent Agreement | 100% agreement | 100% (210/210) |
2. Sample Size Used for the Test Set and Data Provenance
The clinical performance studies used the following sample sizes for the test set:
- Study Site 1: 1482 fresh nasal wash/nasopharyngeal aspirate specimens.
- Study Site 2: 368 fresh nasal/nasopharyngeal swab specimens.
- Study Site 3: 32 fresh nasal wash/nasopharyngeal aspirate specimens and 66 fresh nasal/nasopharyngeal swab specimens.
- Study Site 4 (Cultured Cells): 74 freeze-thawed nasopharyngeal swab specimens that were cultured.
Data Provenance: The data was collected during prospective studies at 3 geographically diverse U.S. clinical laboratories (Study Sites 1-3) during the 2005-2006 and 2006-2007 respiratory virus seasons (December 2005 - April 2006 and December 2006 - March 2007). Study Site 4, for cultured cells, was performed at DHI during the 2007-2008 respiratory virus season (January - April 2008). Specimens were "excess, remnants of respiratory specimens that were prospectively collected from symptomatic individuals suspected of respiratory infection, and were submitted for routine care or analysis by each site, and that otherwise would have been discarded." Individual specimens were de-linked from all patient identifiers. All clinical sites were granted waivers of informed consent by their IRBs.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number of experts or their qualifications for establishing the ground truth. It describes the ground truth as composite comparator methods or a single comparator assay in different sites:
- Sites 1 and 2: Ground truth was a composite comparator method consisting of viral culture and a validated real-time RT-PCR comparator assay with bi-directional sequencing analysis. "True" hMPV positive was defined as positive by viral culture OR positive by RT-PCR with sequencing matching hMPV. "True" hMPV negative was defined as negative by both viral culture and RT-PCR.
- Site 3 and 4: Ground truth was based solely on the validated hMPV real-time RT-PCR followed by bi-directional sequencing analysis comparator assay. Positive was defined by sequencing data matching hMPV, and negative by negative RT-PCR.
While these methods are considered reference standards in microbiology, the document does not specify human expert involvement in interpreting these results or adjudicating discrepancies, beyond the inherent expertise in running and interpreting these laboratory assays.
4. Adjudication Method for the Test Set
The document does not explicitly detail an "adjudication method" in the sense of multiple human readers resolving disagreements, as would be typical for image-based AI studies. Instead, the ground truth itself is a carefully defined reference standard.
- For Sites 1 and 2, the ground truth was a composite definition:
- Positive: Viral culture positive OR Real-time RT-PCR positive with bi-directional sequencing matching hMPV.
- Negative: Viral culture negative AND Real-time RT-PCR negative.
- For Sites 3 and 4, the ground truth was based on the hMPV real-time RT-PCR followed by bi-directional sequencing analysis comparator assay. Positive was defined by acceptable sequencing data matching hMPV, and negative by negative RT-PCR.
This structure inherently handles potential discrepancies between methods by prioritizing certain outcomes (e.g., a positive by either method for the composite ground truth).
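The composite definition used at Sites 1 and 2 can be sketched as a small decision function. The function name and the "unresolved" branch for discordant results (e.g., RT-PCR positive but sequencing not matching hMPV) are illustrative assumptions, not terminology from the submission.

```python
def composite_truth(culture_pos: bool, pcr_pos: bool, seq_matches_hmpv: bool) -> str:
    """Composite comparator as described for Sites 1 and 2.

    "True" positive: viral culture positive OR RT-PCR positive with
    bi-directional sequencing matching hMPV.
    "True" negative: negative by BOTH viral culture and RT-PCR.
    Any other combination falls outside both definitions; labeling it
    "unresolved" here is an assumption for illustration.
    """
    if culture_pos or (pcr_pos and seq_matches_hmpv):
        return "positive"
    if not culture_pos and not pcr_pos:
        return "negative"
    return "unresolved"

print(composite_truth(True, False, False))   # culture alone suffices
print(composite_truth(False, True, True))    # RT-PCR confirmed by sequencing
print(composite_truth(False, False, False))  # negative by both methods
```

Note how the OR in the positive definition means a specimen positive by culture counts as a true positive even if RT-PCR misses it, which is what makes this a composite rather than a single reference standard.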
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, an MRMC comparative effectiveness study involving human readers and AI assistance was not done. This device is a diagnostic kit (an immunofluorescence assay), not an AI-powered system designed to assist human readers in interpreting complex data. The clinical studies evaluated the standalone performance of the kit itself against a reference standard.
6. If a Standalone (i.e. algorithm only, without human-in-the-loop performance) Was Done
Yes, in effect. The clinical studies evaluated the performance of the D3 DFA Metapneumovirus Identification Kit on its own: the kit's results (fluorescence read by microscopy and interpreted per standard laboratory practice) were compared directly against the established ground truth. There is no AI system involved; the "algorithm" here is the biochemical and optical detection mechanism of the DFA test combined with routine human reading of fluorescence under a microscope.
7. The Type of Ground Truth Used
The type of ground truth used varied slightly across study sites but primarily involved molecular and classical microbiological methods:
- Clinical Study Sites 1 and 2: Composite Ground Truth combining:
- Viral Culture: A classical microbiological method for isolating and identifying viruses.
- Validated Real-time RT-PCR followed by bi-directional sequencing analysis: A molecular method to detect hMPV nucleic acid, with sequencing confirming the identity.
- Clinical Study Sites 3 and 4: Molecular Ground Truth based solely on a validated hMPV real-time RT-PCR followed by bi-directional sequencing analysis. This method is considered a highly specific and sensitive reference standard.
8. The Sample Size for the Training Set
No information about a "training set" for an algorithm is provided. This device is a diagnostic kit (DFA assay), not an AI/ML model that undergoes a training phase.
9. How the Ground Truth for the Training Set Was Established
As there is no "training set" for an AI/ML algorithm in this context, this question is not applicable. The device relies on direct antigen detection via immunofluorescence, not on learned patterns from a training dataset.