510(k) Data Aggregation (133 days)
This test system is an enzyme immunoassay intended for in vitro diagnostic use in the detection of antibodies to nuclear antigen Jo-1 in human serum.
Here's a breakdown of the acceptance criteria and study details based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" with numerical thresholds. Instead, it justifies substantial equivalence based on the comparison to a predicate device. I've inferred the performance metrics used for this justification.
| Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance (with Borderline as Positive) |
|---|---|---|
| Relative Sensitivity | High (demonstrate equivalence to predicate) | 100.0% |
| Relative Specificity | High (demonstrate equivalence to predicate) | 99.3% |
| Overall Agreement | High (demonstrate equivalence to predicate) | 90.4% |
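For clarity, these metrics are computed relative to the predicate device rather than to an independent clinical reference. The definitions below are the standard ones for a predicate-comparison study (the document does not spell them out), with borderline results counted as positive:

$$\text{Relative Sensitivity} = \frac{\text{samples positive on both devices}}{\text{samples positive on the predicate}}, \qquad \text{Relative Specificity} = \frac{\text{samples negative on both devices}}{\text{samples negative on the predicate}}$$

$$\text{Overall Agreement} = \frac{\text{concordant results}}{\text{total samples}}$$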
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size:
  - Positive Cases (from predicate): 21 + 2 + 0 = 23
  - Borderline Cases (from predicate): 1 + 4 + 0 = 5
  - Negative Cases (from predicate): 0 + 1 + 140 = 141
  - Total Sample Size Analyzed: 23 (positive by predicate) + 5 (borderline by predicate) + 141 (negative by predicate) = 169 samples, implied from the sum of the comparison-table cells (reproduced in the sketch below).
- Data Provenance: Not specified in the provided text (e.g., country of origin, retrospective or prospective).
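As a cross-check, the relative sensitivity and specificity figures in the table above can be reproduced from the cell counts inferred in this section. The sketch below is a minimal illustration, not code from the submission; the assignment of individual cells to candidate-device result categories is assumed from the sums listed above, and borderline results are treated as positive per the table's convention.

```python
# Minimal sketch: reproduce the relative sensitivity/specificity figures from
# the inferred device-vs-predicate comparison counts. Cell assignment is
# assumed; keys are (candidate device result, predicate result).
counts = {
    ("positive", "positive"): 21,  ("borderline", "positive"): 2,   ("negative", "positive"): 0,
    ("positive", "borderline"): 1, ("borderline", "borderline"): 4, ("negative", "borderline"): 0,
    ("positive", "negative"): 0,   ("borderline", "negative"): 1,   ("negative", "negative"): 140,
}

def collapse(result):
    # Per the table's "Borderline as Positive" column, borderline counts as positive.
    return "positive" if result in ("positive", "borderline") else "negative"

predicate_pos = sum(n for (dev, pred), n in counts.items() if collapse(pred) == "positive")
predicate_neg = sum(n for (dev, pred), n in counts.items() if collapse(pred) == "negative")
agree_pos = sum(n for (dev, pred), n in counts.items()
                if collapse(dev) == "positive" and collapse(pred) == "positive")
agree_neg = sum(n for (dev, pred), n in counts.items()
                if collapse(dev) == "negative" and collapse(pred) == "negative")

print(f"Total samples:        {sum(counts.values())}")                   # 169
print(f"Relative sensitivity: {100 * agree_pos / predicate_pos:.1f}%")   # 100.0%
print(f"Relative specificity: {100 * agree_neg / predicate_neg:.1f}%")   # 99.3%
```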
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. The ground truth was established by a predicate device (Immuno Concepts RELISA® Screening Assay K935129), not by human experts for this specific comparative study.
4. Adjudication Method for the Test Set
Not applicable. The ground truth was established by a predicate device, not by human adjudication of individual results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
Not applicable. This is a comparison between two immunoassays; no AI algorithm or human-reader interpretation study is involved.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
Yes, in a sense. The "device" being evaluated is an automated immunoassay test system (the RELISA® Jo-1 Antibody Test System), and its performance is measured directly against the predicate immunoassay system (the RELISA® ENA Antibody Screening Test System) without human interpretation as part of the core measurement. The output of both devices is reported qualitatively (positive, borderline, negative).
7. The type of ground truth used
The ground truth used for this study was the results from a legally marketed predicate device: the Immuno Concepts RELISA® Screening Assay (K935129).
8. The sample size for the training set
Not applicable. This is not a machine learning model, but a traditional immunoassay. Therefore, there is no "training set" in the context of AI.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for a machine learning model.