510(k) Data Aggregation (123 days)
RT Elements (4.5); (Elements) Multiple Brain Mets SRS; (Elements) Cranial SRS; (Elements) Spine SRS
The device is intended for radiation treatment planning for use in stereotactic, conformal, computer-planned, linac-based radiation treatment, and is indicated for cranial, head and neck, and extracranial lesions.
RT Elements are computer-based software applications for radiation therapy treatment planning and dose optimization for linac-based conformal radiation treatments, i.e., stereotactic radiosurgery (SRS), fractionated stereotactic radiotherapy (SRT), and stereotactic ablative radiotherapy (SABR), also known as stereotactic body radiation therapy (SBRT), for use in stereotactic, conformal, computer-planned, linac-based radiation treatment of cranial, head and neck, and extracranial lesions.
The device consists of the following software modules: Multiple Brain Mets SRS 4.5, Cranial SRS 4.5, Spine SRS 4.5, Cranial SRS w/ Cones 4.5, RT Contouring 4.5, RT QA 4.5, Dose Review 4.5, Brain Mets Retreatment Review 4.5, and Physics Administration 7.5.
Here's the breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for RT Elements 4.5, specifically focusing on the AI Tumor Segmentation feature:
Acceptance Criteria and Reported Device Performance
| Tumor Type | Metric | Acceptance Criterion (Lower Bound of 95% CI) | Reported Performance (Lower Bound of 95% CI) |
|---|---|---|---|
| All tumor types | Dice | ≥ 0.7 | 0.74 |
| All tumor types | Recall | ≥ 0.8 | 0.83 |
| All tumor types | Precision | ≥ 0.8 | 0.85 |
| Metastases to the CNS | Dice | ≥ 0.7 | 0.73 |
| Metastases to the CNS | Recall | ≥ 0.8 | 0.82 |
| Metastases to the CNS | Precision | ≥ 0.8 | 0.83 |
| Meningiomas | Dice | ≥ 0.7 | 0.73 |
| Meningiomas | Recall | ≥ 0.8 | 0.85 |
| Meningiomas | Precision | ≥ 0.8 | 0.84 |
| Cranial and paraspinal nerve tumors | Dice | ≥ 0.7 | 0.88 |
| Cranial and paraspinal nerve tumors | Recall | ≥ 0.8 | 0.93 |
| Cranial and paraspinal nerve tumors | Precision | ≥ 0.8 | 0.93 |
| Gliomas and glio-/neuronal tumors | Dice | ≥ 0.7 | 0.76 |
| Gliomas and glio-/neuronal tumors | Recall | ≥ 0.8 | 0.74 |
| Gliomas and glio-/neuronal tumors | Precision | ≥ 0.8 | 0.88 |
Note: For "Gliomas and glio-/neuronal tumors," the reported lower bound 95% CI for Recall (0.74) is slightly below the stated acceptance criteria of 0.8. Additional clarification from the submission would be needed to understand how this was reconciled for clearance. However, for all other categories and overall, the reported performance meets or exceeds the acceptance criteria.
Study Details for AI Tumor Segmentation
2. Sample size used for the test set and the data provenance:
- Sample Size: 412 patients (595 scans, 1878 annotations)
- Data Provenance: De-identified 3D CE-T1 MR images from multiple clinical sites in the US and Europe. Data was acquired from adult patients with one or multiple contrast-enhancing tumors. ¼ of the test pool corresponded to data from three independent sites in the USA.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated as a number, but referred to as an "external/independent annotator team."
- Qualifications of Experts: US radiologists and non-US radiologists. No further details on years of experience or specialization are provided in this document.
4. Adjudication method for the test set:
- The document mentions "a well-defined data curation process" followed by the annotator team, but it does not explicitly describe a specific adjudication method (e.g., 2+1, 3+1) for resolving disagreements among annotators.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not reported for the AI tumor segmentation. The study focused on standalone algorithm performance against ground truth.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The validation was conducted quantitatively by comparing the algorithm's automatically created segmentations with the manual ground-truth segmentations (a sketch of how such per-case scores could be aggregated into the confidence-interval bounds reported above follows this list).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus Segmentations: The ground truth was established through "manual ground-truth segmentations, the so-called annotations," performed by the external/independent annotator team of radiologists.
8. The sample size for the training set:
- The sample size for the training set is not explicitly stated in this document. The document mentions that "The algorithm was trained on MRI image data with contrast-enhancing tumors from multiple clinical sites, including a wide variety of scanner models and patient characteristics."
9. How the ground truth for the training set was established:
- How the ground truth for the training set was established is not explicitly stated in this document. It can be inferred that it followed a similar process to the test set, involving expert annotations, but the details are not provided.
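The summary states the acceptance criteria in terms of the lower bound of a 95% confidence interval, but it does not describe how that interval was computed. One common approach is a percentile bootstrap over per-annotation scores; the sketch below illustrates that idea under stated assumptions and is not the method used in the submission.

```python
import numpy as np

def ci95_lower_bound(per_case_scores, n_boot: int = 10_000, seed: int = 0) -> float:
    """Percentile-bootstrap lower bound of the 95% CI of the mean per-case score."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_case_scores, dtype=float)
    boot_means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return float(np.percentile(boot_means, 2.5))

# Hypothetical usage: one Dice score per annotation for a given tumor type.
# dice_scores = [segmentation_metrics(p, t)["dice"] for p, t in test_pairs]
# meets_criterion = ci95_lower_bound(dice_scores) >= 0.7
```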