510(k) Data Aggregation
This enzyme-linked immunosorbent assay (ELISA) is intended to determine an individual's serologic status with respect to IgG antibodies to CMV. When the assay is used in the qualitative mode, a reactive result may indicate current or past infection with CMV. When used in the semi-quantitative mode, this test can detect significant antibody rises associated with seroconversion, reinfection, or reactivation of latent disease. This product is not FDA-cleared for use in screening blood or plasma donors. The performance of this assay has not been established for neonates and pregnant women. Results from immunocompromised patients should be interpreted with caution.
An enzyme-linked immunosorbent assay (ELISA) designed for the qualitative or semi-quantitative detection of circulating IgG antibodies to cytomegalovirus in human serum.
The ELISA methodology is commonly used for serum antibody evaluations. Purified antigens from cytomegalovirus have been attached to the inner surfaces of the microwell plate. During the initial incubation step, antibodies in patient serum bind specifically to the immobilized antigen and remain in place after a wash step.
A second antibody, which is conjugated to the enzyme horseradish peroxidase, is used to recognize the gamma-chain region of the bound anti-cytomegalovirus antibodies. In the wells where the second antibody remains bound, the enzyme catalyzes a color change in the substrate, tetramethylbenzidine (TMB). After the reaction is stopped, the color is read in an EIA plate reader.
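Once the plate reader returns an optical density (OD), the binding-and-detection sequence above reduces to a simple decision rule: compare the OD to the assay cutoff. The sketch below is illustrative only; the function name, the example cutoff, and the ±10% equivocal zone are assumptions, not values from the 510(k) summary or the kit insert.

```python
# Hypothetical sketch of deriving a qualitative ELISA result from an
# optical density (OD) reading. All names and thresholds are assumed
# for illustration; the actual kit insert defines the real zones.

def interpret_qualitative(od: float, cutoff: float, equivocal_band: float = 0.10) -> str:
    """Classify a specimen OD relative to the assay cutoff.

    An equivocal zone of +/-10% around the cutoff is a common (assumed)
    convention, not a value taken from this submission.
    """
    if od >= cutoff * (1 + equivocal_band):
        return "reactive"
    if od <= cutoff * (1 - equivocal_band):
        return "nonreactive"
    return "equivocal"

print(interpret_qualitative(1.25, cutoff=0.50))  # OD well above cutoff
```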
Here's an analysis of the provided 510(k) summary regarding the Hemagen CMV IgG Kit, addressing your specific questions:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as distinct pass/fail thresholds in the summary. However, we can infer the performance targets based on the "Comparative Device" and the overall goal of demonstrating "substantial equivalence." The "relative sensitivity" and "relative specificity" reported clearly relate to the comparison with the predicate device. For the "CDC CMV/HSV serum panel," 100% concordance with CDC's characterization would implicitly be the acceptance criteria.
| Performance Metric | Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|---|
| **Comparative Testing (vs. Predicate)** | | |
| Relative Sensitivity | High agreement with predicate for positive samples (close to 100%) | 100% (189/188) (98.0 to 100% CI) |
| Relative Specificity | High agreement with predicate for negative samples (close to 100%) | 98.4% (63/64) |
| **Alternate Site Evaluations** | | |
| Agreement with Predicate & Between Sites | Consistent results with predicate and across sites (100% agreement for blind panel) | 100% agreement for all 9 samples (positive/negative) with both devices across 3 sites |
| **Interfering Substances** | | |
| Lack of Significant Effect | No significant effect on assay results (≤ 500 mg/dL hemoglobin, ≤ 3,000 mg/dL lipid) | No significant effect with hemoglobin ≤ 500 mg/dL and lipid ≤ 3,000 mg/dL |
| **Prozone and "Hook Effect"** | | |
| Absence of "Hook Effect" | Appropriately high positive results with high-titered sera | Kit gives appropriately high positive results with high-titered sera |
| **Cross-Reactivity** | | |
| No Effect on Specificity | IgG antibodies to rubella, VZV, and HSV should not affect specificity | IgG to rubella, VZV, and HSV did not affect specificity |
| **CDC CMV/HSV Serum Panel** | | |
| Correct Characterization | Correct characterization of all samples (100% agreement with CDC) | 66 of 66 CMV positives and 34 of 34 negatives correctly characterized (100% concordance) |
| **Semi-Quantitation Evaluations (Precision)** | | |
| Intra-assay %CV | Acceptable variability | |
| AU Ratio (Significant Rise) | | An AU ratio > 3.2 is indicative of a significant rise |
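The "relative sensitivity" and "relative specificity" figures above are simple agreement proportions against the predicate's calls. A minimal sketch follows, assuming the summary's "189/188" is a transcription error for 188/188 (the reported 98.0 to 100% CI is consistent with 188/188); a Wilson score interval is used here only because it needs no external libraries, whereas the original submission likely used an exact method.

```python
# Sketch: "relative sensitivity/specificity" vs. a predicate device are
# agreement proportions, here with a Wilson score confidence interval.
# The 188/188 numerator/denominator is an assumption (the summary's
# "189/188" appears garbled); 63/64 is as reported.
import math

def proportion_with_wilson_ci(successes: int, total: int, z: float = 1.96):
    """Return (point estimate, CI lower, CI upper) for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, center - half, center + half

# "True" status here is defined by the predicate device's result.
sens, lo, hi = proportion_with_wilson_ci(188, 188)   # assumed: all predicate-positives agreed
spec, slo, shi = proportion_with_wilson_ci(63, 64)   # 63 of 64 predicate-negatives agreed
print(f"relative sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
print(f"relative specificity {spec:.1%} (95% CI {slo:.1%}-{shi:.1%})")
```

Note how the Wilson lower bound for 188/188 lands at about 98.0%, matching the interval quoted in the summary.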
2. Sample Size Used for the Test Set and Data Provenance
- Comparative Testing (vs. Predicate):
- Sample Size: 252 specimens.
- Data Provenance: "serum specimens from normal blood donors." This implies human origin; the country is not specified, though data in FDA submissions are typically US-sourced unless stated otherwise. The data are retrospective, as the samples pre-existed the testing.
- Alternate Site Evaluations:
- Sample Size: 9 "blind" serum samples.
- Data Provenance: Not explicitly stated, but these were a prepared "panel" (prospective use of pre-selected samples).
- Interfering Substances:
- Sample Size: Not specified (samples with varying hemoglobin and lipid concentrations).
- Data Provenance: Not specified, but likely laboratory-prepared spiked samples (controlled study).
- Prozone and "Hook Effect":
- Sample Size: "a high titered serum sample." (One sample, potentially replicated).
- Data Provenance: Not specified, likely a laboratory-prepared sample.
- Cross-Reactivity:
- Sample Size: 10 samples (5 CMV positive, 5 CMV negative, all with other viral antibodies).
- Data Provenance: Samples positive/negative for CMV by "another commercially available assay," and also containing other viral antibodies (retrospective or commercially sourced clinical samples).
- CDC CMV/HSV Serum Panel:
- Sample Size: 100 samples (50 pairs).
- Data Provenance: "blind coded serum samples from clinically evaluated patients." (This is retrospective clinical data). The CDC provided and evaluated (scored) the results, suggesting a US origin for the panel.
- Semi-Quantitation Evaluations (Precision):
- Sample Size: 4 positive samples for intra-assay, 5 positive samples for inter-assay.
- Data Provenance: "CMV reactive serum samples" (likely laboratory-selected clinical samples).
- AU Ratio Establishment:
- Sample Size: 43 high titered serum samples.
- Data Provenance: "high titered serum samples" (likely laboratory-selected clinical samples).
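The precision and AU-ratio evaluations listed above reduce to two small computations: the coefficient of variation across replicates, and a ratio of paired-sera antibody levels (with, per the summary's table, a ratio above 3.2 indicating a significant rise). The replicate values below are invented for illustration; only the general definitions, not the kit's actual data or acceptance limits, come from the summary.

```python
# Sketch of intra-assay precision (%CV) and the AU ratio used for
# semi-quantitation. The replicate ODs and paired AU values below are
# hypothetical; the summary does not list raw data.
import statistics

def percent_cv(replicates):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

def au_ratio(convalescent_au: float, acute_au: float) -> float:
    """Ratio of paired-sera antibody levels; a large rise suggests
    seroconversion, reinfection, or reactivation."""
    return convalescent_au / acute_au

replicates = [1.10, 1.05, 1.12, 1.08]           # hypothetical ODs for one sample
print(f"intra-assay CV: {percent_cv(replicates):.1f}%")
print(f"AU ratio: {au_ratio(40.0, 10.0):.1f}")  # hypothetical paired sera
```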
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
There's no explicit mention of specific experts establishing ground truth in the traditional sense of consensus reading from medical images.
- Comparative Testing (vs. Predicate): The ground truth is established by the predicate device (Gull Laboratories CMV IgG ELISA Test). The accuracy of the predicate is assumed.
- Alternate Site Evaluations: The "blind" panel consists of "Four CMV IgG negatives" and "Five CMV IgG positives." The inherent positivity/negativity of these samples serves as the ground truth, presumably established beforehand by a reference method or consensus, though no specific expert panel is detailed.
- CDC CMV/HSV Serum Panel: The "ground truth" for this panel was established by the CDC's characterization of the 100 samples. The qualifications of the individuals at the CDC who established this characterization are not provided, but it would be assumed to be highly qualified scientists/clinicians in serology.
- Other studies (Interfering Substances, Prozone, Cross-Reactivity, Precision, AU Ratio): The ground truth for these is typically inherent to the experimental design (e.g., known concentrations of interferents, known high-titer samples, samples pre-characterized for other antibodies, or the intrinsic value derived from replicate measurements and dilutions).
4. Adjudication Method for the Test Set
- Comparative Testing (vs. Predicate): No explicit adjudication method is described. Differences between the proposed and predicate device would lead to classification as false positives/negatives based on the predicate's result.
- Alternate Site Evaluations: No adjudication is mentioned other than the "blind" nature of the samples. Agreement among sites and with the pre-defined panel truth was assessed.
- CDC CMV/HSV Serum Panel: The CDC "scored" the results. This implies the CDC had its own reference standard or agreed-upon ground truth for each of the 100 samples, against which the device's results were compared. This acts as an external adjudication of the device's performance against the CDC's accepted truth.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement With vs. Without AI Assistance
No, an MRMC comparative effectiveness study was not done. This submission is for an in vitro diagnostic (IVD) assay (an ELISA kit), not an AI-powered diagnostic imaging device or software that assists human readers. Therefore, the concept of "human readers improve with AI vs. without AI assistance" does not apply here.
6. Whether Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Was Evaluated
Yes, in essence, standalone performance was evaluated. For an IVD assay like this ELISA kit, the device itself generates a quantitative (optical density) or qualitative (positive/negative) result based on chemical reactions. The performance metrics (sensitivity, specificity, precision, etc.) are a direct measurement of the device's output compared to ground truth or a predicate. There isn't a "human-in-the-loop" component in the interpretation of the primary result; the assay is designed to provide a direct output.
7. The Type of Ground Truth Used
- Predicate Device: For the primary comparison studies, the results from the Gull Laboratories CMV IgG ELISA Test served as the de facto ground truth.
- Pre-defined Panel Status: For the alternate site evaluations, pre-characterized CMV IgG negative/positive panel samples served as ground truth.
- CDC Characterization: For the critical CDC panel study, the CDC's independent characterization of the 100 samples served as the ground truth. This likely involved a combination of reference methods and clinical evaluation.
- Experimental Design/Known Status: For interfering substances, prozone, cross-reactivity, and precision studies, the ground truth was based on the known characteristics of the samples used (e.g., spiked concentrations, known high titers, samples positive for other viruses).
8. The Sample Size for the Training Set
The concept of a "training set" in the context of machine learning (AI development) is not directly applicable here. This is an ELISA kit, which relies on biochemical reactions, not an algorithm that learns from data.
Therefore, there is no explicit "training set" mentioned in the traditional AI/machine learning sense. The development of ELISA kits involves extensive research and development, optimization of reagents, and establishment of cutoff values, but this is a different process than algorithmic training. The "standardization to a characterized material" and the use of calibrators mentioned in section 6.(A) could be seen as analogous to internal calibration or optimization, but not a separate "training set" for an algorithm.
9. How the Ground Truth for the Training Set Was Established
As there is no "training set" in the AI/machine learning context, this question is not directly applicable.
However, if we consider how the assay itself was optimized or calibrated, section 6.(A) states:
- "The cutoff for the proposed device is based upon the comparative performance with the predicate device. The optimal cutoff value was selected utilizing receiver operating characteristic (ROC) methods."
- "The proposed device is supplied with three calibrators (low, medium, and high) that have been standardized to a characterized material."
This indicates that:
- The cutoff value was established using mathematical methods (ROC) on data compared against the predicate device.
- The calibrators were standardized against a "characterized material," meaning a reference standard with known activity levels.
These steps are part of developing and optimizing the assay's performance rather than training an AI model.
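The ROC-based cutoff selection quoted from section 6.(A) can be illustrated with a toy example: each candidate cutoff is scored against the predicate's classifications, and the best-scoring cutoff is chosen. Youden's J (sensitivity + specificity − 1) is used below as the optimality criterion, but this is an assumption; the summary does not state which ROC criterion was applied, and all OD values here are invented.

```python
# Toy sketch of ROC-based cutoff selection using Youden's J.
# "Positive"/"negative" labels stand in for the predicate device's
# results; all OD values are hypothetical.
def best_cutoff(positives, negatives):
    """Scan candidate cutoffs; return the one maximizing
    sensitivity + specificity - 1 (Youden's J), with its J value."""
    candidates = sorted(set(positives) | set(negatives))
    best, best_j = None, -1.0
    for c in candidates:
        sens = sum(od >= c for od in positives) / len(positives)
        spec = sum(od < c for od in negatives) / len(negatives)
        j = sens + spec - 1
        if j > best_j:
            best, best_j = c, j
    return best, best_j

pos = [0.9, 1.1, 1.4, 2.0, 0.8]   # hypothetical ODs, predicate-positive
neg = [0.2, 0.3, 0.5, 0.4, 0.7]   # hypothetical ODs, predicate-negative
cutoff, j = best_cutoff(pos, neg)
print(cutoff, j)
```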