510(k) Data Aggregation
(209 days)
Maverick Diagnostic System TC1000; Maverick Test Panel A0.B0
The Maverick Test Panel A0.B0 is an immunoassay for the quantitative determination of human thyroid stimulating hormone (thyrotropin, TSH) in human serum and K2EDTA plasma on the Maverick Diagnostic System TC1000. Measurements of thyroid stimulating hormone produced by the anterior pituitary are used in the diagnosis of thyroid or pituitary disorders.
The Maverick Diagnostic System TC1000 is an automated immunoassay analyzer intended for in vitro diagnostic use to determine analytes in a clinical laboratory. The system's assay applications utilize silicon photonics technology.
The provided text is an FDA 510(k) clearance letter for an in vitro diagnostic (IVD) device: the Maverick Diagnostic System TC1000 and the Maverick Test Panel A0.B0, an immunoassay for the quantitative determination of human thyroid stimulating hormone (TSH).
Therefore, the concepts of "AI models," "human readers," "radiologists," "MRMC studies," "effect size," and establishing "ground truth for test/training sets by expert consensus/pathology/outcomes data" are not applicable to this type of device and its clearance process.
The FDA clearance for this IVD device is based on demonstrating substantial equivalence to a legally marketed predicate device. This typically involves performance studies (e.g., analytical performance, clinical performance) to show the new device performs as intended and is as safe and effective as the predicate. The "acceptance criteria" and "study that proves the device meets the acceptance criteria" for an IVD device like this would revolve around its analytical and clinical performance characteristics, not AI model metrics or reader studies.
Since the prompt's requested information format is tailored for an AI/CADe (Computer-Assisted Detection/Diagnosis) device, and the provided document describes an IVD device, directly answering the prompt's specific points (1-9) is not possible based on the text. The text does not contain information about AI model performance, human reader studies, or how a "ground truth" for an image-based AI would be established.
To address the spirit of the prompt, the sections below illustrate how a response might look if the document described an AI/CADe device and contained the relevant information.
Disclaimer: The provided document is an FDA 510(k) clearance for an in vitro diagnostic (IVD) device (the Maverick Diagnostic System TC1000 TSH immunoassay), not an AI/CADe medical device. The specific details requested in the prompt, such as AI model performance, expert interpretation of images, MRMC studies, and training/test set ground truth establishment, are therefore not applicable to this document, and none of the information below is derived from the provided text.
(Hypothetical/Illustrative Answer - NOT based on the provided document)
(1) A table of acceptance criteria and the reported device performance
| Acceptance Criterion (e.g., Performance Metric Threshold) | Reported Device Performance (e.g., AI Model X) |
|---|---|
| Sensitivity ≥ 90% for detecting Condition A | Sensitivity: 92.5% |
| Specificity ≥ 85% for Condition A | Specificity: 88.0% |
| AUC (Area Under the ROC Curve) ≥ 0.90 | AUC: 0.915 |
| False positive rate ≤ 5 per image | False positive rate: 4.2 per image |
| Mean processing time ≤ 5 seconds per image | Mean processing time: 3.8 seconds |
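Purely as an illustration (not drawn from the 510(k) letter), metrics such as sensitivity, specificity, and AUC would typically be computed from per-case reference labels and algorithm scores. The minimal sketch below assumes NumPy and scikit-learn; the arrays `y_true` and `y_score` and the 0.5 operating threshold are invented for illustration.

```python
# Illustrative sketch only: sensitivity, specificity, and AUC from per-case
# labels and scores. y_true, y_score, and the 0.5 threshold are invented.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # 1 = Condition A present (reference label)
y_score = np.array([0.91, 0.12, 0.45, 0.66, 0.62, 0.05, 0.88, 0.30])  # model scores
y_pred = (y_score >= 0.5).astype(int)          # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Sensitivity: {tp / (tp + fn):.3f}")
print(f"Specificity: {tn / (tn + fp):.3f}")
print(f"AUC:         {roc_auc_score(y_true, y_score):.3f}")
```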
(2) Sample size used for the test set and the data provenance
- Test Set Sample Size: 500 cases (e.g., 250 positive for Condition A, 250 negative).
- Data Provenance: Retrospectively collected data from multiple institutions across the United States, Germany, and Japan.
(3) Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: 3 independent expert readers.
- Qualifications of Experts: Each expert was a board-certified Radiologist with at least 10 years of experience specializing in the relevant imaging modality (e.g., thoracic imaging for lung nodules, breast imaging for mammography).
(4) Adjudication method for the test set
- Adjudication Method: 2+1 adjudication. If at least 2 of the 3 initial expert readers agreed, that majority reading was taken as the ground truth. If the initial readers did not reach a majority (e.g., each assigned a different finding), a fourth, highly experienced senior expert (or an expert panel) performed the final review and adjudication, as illustrated in the sketch below.
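Purely for illustration (none of this comes from the provided document), a 2+1 rule of this kind can be written as a small helper that returns the majority reading when one exists and otherwise defers to a senior adjudicator; the function name and labels below are hypothetical.

```python
# Hypothetical sketch of a 2+1 adjudication rule (not from the 510(k) letter).
from collections import Counter
from typing import Callable, Sequence

def adjudicate(initial_reads: Sequence[str],
               senior_review: Callable[[], str]) -> str:
    """Return the ground-truth label: the majority call of the initial readers
    if at least 2 of 3 agree, otherwise the senior expert's adjudication."""
    label, count = Counter(initial_reads).most_common(1)[0]
    if count >= 2:
        return label
    return senior_review()

# Example: two of three readers call the case positive, so no senior review is needed.
print(adjudicate(["positive", "positive", "negative"],
                 senior_review=lambda: "positive"))
```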
(5) If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study Status: Yes, an MRMC comparative effectiveness study was conducted.
- Effect Size of Improvement: The study demonstrated a statistically significant improvement in reader performance. Human readers assisted by the AI model showed a mean increase of 0.05 in AUC (from 0.85 without AI to 0.90 with AI assistance) when interpreting cases for Condition A. This corresponded to a 15% reduction in diagnostic error rate; see the sketch below for how the headline number is derived.
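Again as an illustration only: the headline MRMC effect size is usually reported as the difference in reader-averaged AUC with versus without AI assistance. A full MRMC analysis (e.g., Obuchowski-Rockette) additionally models reader and case variability; the per-reader AUC values below are invented simply to show the arithmetic.

```python
# Hypothetical sketch: reader-averaged AUC difference (the "effect size" headline
# number). A real MRMC analysis would also model reader and case variance; the
# per-reader AUC values here are invented for illustration.
import numpy as np

auc_unaided = np.array([0.83, 0.86, 0.84, 0.87, 0.85])   # per-reader AUC without AI
auc_aided   = np.array([0.89, 0.90, 0.88, 0.92, 0.91])   # per-reader AUC with AI

effect_size = auc_aided.mean() - auc_unaided.mean()
print(f"Mean AUC without AI: {auc_unaided.mean():.3f}")
print(f"Mean AUC with AI:    {auc_aided.mean():.3f}")
print(f"Effect size (delta AUC): {effect_size:.3f}")
```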
(6) If standalone (i.e., algorithm-only, without human-in-the-loop) performance was evaluated
- Standalone Performance: Yes, standalone performance was evaluated. The algorithm's standalone AUC for Condition A was 0.915.
(7) The type of ground truth used
- Type of Ground Truth: Expert consensus with confirmation by pathology for positive cases of Condition A. Negative cases were confirmed through follow-up imaging and clinical outcomes over a specified period.
(8) The sample size for the training set
- Training Set Sample Size: 10,000 cases.
(9) How the ground truth for the training set was established
- Training Ground Truth Establishment: The ground truth for the training set was primarily established by a single expert radiologist's initial review, followed by confirmation from a second expert. Cases with disagreement were reviewed by a third, senior expert to reach consensus. A subset of cases (e.g., 20%) had pathology confirmation available. Automated labeling techniques, where feasible and validated, were also used to augment the manually reviewed data.
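As a final hypothetical sketch (not based on the document), the sequential labeling workflow described above, in which a second reader confirms the first and a senior reader resolves disagreements, could be expressed as follows; all names and labels are illustrative.

```python
# Hypothetical sketch of the sequential training-label workflow described above.
from typing import Callable

def training_label(first_read: str,
                   second_read: str,
                   senior_review: Callable[[], str]) -> str:
    """Return the training label: the shared call when the first two readers
    agree, otherwise the senior reader's decision."""
    if first_read == second_read:
        return first_read
    return senior_review()

# Example: the readers disagree, so the senior reader's call is used.
print(training_label("positive", "negative", senior_review=lambda: "negative"))
```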