510(k) Data Aggregation (197 days)
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR pre-defined structures using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions.
AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.
CT or MR image series serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours, in DICOM RT Structure Set (RTSTRUCT) format, are reviewed in a confirmation window, allowing the clinical user to confirm or reject them before they are sent to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the Treatment Planning System (TPS), the standard location for radiation therapy planning).
AI-Rad Companion Organs RT must be used in conjunction with appropriate software, such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before the generated contours are accepted as input to treatment planning steps. The output is intended to be used by qualified medical professionals, who can perform complementary manual editing of the contours or add new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and to relevant literature and cleared devices. The primary metrics are the Dice coefficient and the Absolute Symmetric Surface Distance (ASSD).
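The document reports Dice and ASSD values but does not define the metrics. As a reminder, here is a minimal pure-Python sketch of both, with segmentations represented as sets of voxel coordinates (production implementations operate on mask arrays and extracted surfaces, typically with spatial acceleration structures; this brute-force version is for illustration only):

```python
from math import dist  # Euclidean distance between coordinate tuples (Python 3.8+)

def dice(mask_a, mask_b):
    """Dice coefficient between two segmentations given as sets of voxel coords."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

def assd(surface_a, surface_b):
    """Symmetric surface distance: mean nearest-neighbour distance from each
    surface to the other, averaged over both directions (brute force)."""
    def mean_nn(src, dst):
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (mean_nn(surface_a, surface_b) + mean_nn(surface_b, surface_a))

# Toy 2D example: two 2x2 voxel squares overlapping in one column
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 1), (0, 2), (1, 2)}
print(round(dice(a, b), 3))  # 0.5 (2 shared voxels, 4 + 4 total)
```

A Dice of 1.0 means perfect overlap and an ASSD of 0 mm means identical surfaces; the per-organ tables below report both statistics over the test cases.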
Table 3: Acceptance Criteria of AIRC Organs RT VA50
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Summary) |
|---|---|---|
| Organs in Predicate Device | All organs segmented in the predicate device are also segmented in the subject device. | Confirmed. The device continued to segment all organs previously handled by the predicate. |
| | The average (AVG) Dice score difference between the subject and predicate device is < 3%. | Confirmed. "For existing organs, the average (AVG) Dice score difference between the subject device and predicate device is smaller than 3%." |
| New Organs for Subject Device | The subject device in the selected reference metric has a higher value than the defined baseline value. | Confirmed. "The performance results of the subject device for the new CT organs are comparable to the reference literature & cleared devices. Here equivalence for the new organs is defined such that the selected reference metric has a higher value than the defined baseline." |
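The two pass/fail rules above can be sketched as simple predicates. This is a hypothetical illustration of the stated criteria, not code from the submission; the per-organ interpretation of the "< 3%" rule and all numeric values below are assumptions:

```python
def predicate_organs_covered(subject_organs, predicate_organs):
    """First criterion: every organ segmented by the predicate device is also
    segmented by the subject device."""
    return set(predicate_organs) <= set(subject_organs)

def existing_organs_pass(subject_avg_dice, predicate_avg_dice, tol=3.0):
    """Existing organs pass if the average Dice difference vs. the predicate
    stays below `tol` percentage points (per-organ interpretation assumed)."""
    return all(abs(subject_avg_dice[o] - predicate_avg_dice[o]) < tol
               for o in predicate_avg_dice)

def new_organs_pass(subject_metric, baseline_metric):
    """New organs pass if the selected reference metric exceeds the defined
    baseline value for every organ."""
    return all(subject_metric[o] > baseline_metric[o] for o in subject_metric)

# Hypothetical illustrative values (not taken from the document)
subject = {"Brainstem": 88.4, "Esophagus": 85.6}
predicate = {"Brainstem": 87.0, "Esophagus": 84.9}
print(existing_organs_pass(subject, predicate))  # True: differences 1.4 and 0.7
```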
Performance Summary of the Subject Device CT Contouring (Overall Average Dice Coefficients)
| Anatomic Region | Avg Dice (%) | Std Dice (%) | 95% CI |
|---|---|---|---|
| Head & Neck | 76.1 | 14.3 | [75.1, 77.2] |
| Head & Neck lymph nodes | 69.3 | 13.9 | [68.7, 70.0] |
| Thorax | 76.9 | 15.8 | [76.2, 77.6] |
| Abdomen | 87.3 | 10.1 | [86.3, 88.2] |
| Pelvis | 85.7 | 9.6 | [85.0, 86.5] |
| Cardiac | 75.6 | 15.1 | [74.1, 77.1] |
Table 4: Detailed Performance Evaluation of the New Organs in the Subject Device (Selected Examples)
| Organ Name | No. | AVG Dice (%) | STD Dice (%) | MED Dice (%) | 95%CI Dice | AVG ASSD (mm) | STD ASSD (mm) | MED ASSD (mm) | 95%CI ASSD |
|---|---|---|---|---|---|---|---|---|---|
| Left Breast | 30 | 90.4 | 3.8 | 91 | [89, 91.8] | 2.4 | 2.2 | 1.8 | [1.5, 3.2] |
| Right Breast | 30 | 90.2 | 3.7 | 90.8 | [88.8, 91.5] | 1.9 | 0.7 | 1.8 | [1.7, 2.2] |
| Bowel Bag | 33 | 95 | 3.6 | 96.5 | [93.7, 96.3] | 1.9 | 1.5 | 1.4 | [1.4, 2.5] |
| Pituitary | 30 | 75.8 | 7.4 | 77 | [73.1, 78.6] | 0.7 | 0.3 | 0.6 | [0.5, 0.8] |
| Brainstem | 30 | 88.4 | 2.5 | 88.8 | [87.5, 89.3] | 1 | 0.3 | 0.9 | [0.9, 1.1] |
| Esophagus | 30 | 85.6 | 4.2 | 86 | [84, 87.2] | 0.6 | 0.3 | 0.6 | [0.5, 0.7] |
| Mediastinal LN 9L | 31 | 38.3 | 21.1 | 42.9 | [30.6, 46.1] | 5.3 | 4.4 | 3.7 | [3.7, 6.9] |
(Note: The full Table 4 from the document provides detailed performance for all 37 new organs. This table includes a selection for illustrative purposes.)
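The document does not state how the 95% confidence intervals in Table 4 were computed. A plain normal-approximation interval, mean ± 1.96·std/√n, reproduces the reported interval for the Left Breast row, for example (other rows are equally consistent with a t-based interval once rounding of the displayed mean and std is accounted for), so the intervals appear to be standard parametric CIs on the per-case mean:

```python
from math import sqrt

def normal_ci(mean, std, n, z=1.96):
    """Two-sided 95% CI on the mean under a normal approximation,
    rounded to one decimal place to match the table's precision."""
    half = z * std / sqrt(n)
    return (round(mean - half, 1), round(mean + half, 1))

# Left Breast: n=30, AVG Dice 90.4%, STD 3.8% -> reported 95% CI [89, 91.8]
print(normal_ci(90.4, 3.8, 30))  # (89.0, 91.8)
```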
2. Sample Sizes and Data Provenance
- Test Set Sample Size:
- CT Contouring Algorithm: N = 579 cases
- MR Contouring Algorithm: The MR algorithm is unchanged from the predicate device, so no new performance validation was required; the predicate was validated using 66 cases.
- Data Provenance (CT Contouring Algorithm Test Set):
- Geographic Origin (Overall N=579): Data from multiple clinical sites across North America, South America, Asia, Australia, and Europe.
- Example Cohorts (Table 5: Validation Testing Data Information based on Cohort):
- Cohort A.1 (N=73): Germany (14), Brazil (59)
- Cohort A.2 (N=40): Canada (40)
- Cohort A.3 (N=301): South/North America (184), EU (44), Asia (33), Australia (28), Unknown (12)
- Cohort B (N=165): South/North America (100), EU (51), Asia (6), Australia (3), Unknown (5)
- Retrospective/Prospective: The study is described as a "retrospective performance study on CT data previously acquired for RT treatment planning."
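As a quick consistency check, the cohort sizes listed above sum to the overall CT test-set size:

```python
# Cohort sizes from Table 5 (validation testing data by cohort)
cohorts = {"A.1": 73, "A.2": 40, "A.3": 301, "B": 165}
print(sum(cohorts.values()))  # 579, matching the stated overall N
```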
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts for Ground Truth: Initial manual annotation was performed by "a team of experienced annotators mentored by radiologists or radiation oncologists." A "board-certified radiation oncologist" then performed a quality assessment, including review and correction of each annotation. The document does not specify the exact number of individuals involved, but it describes their roles and qualifications.
- Qualifications of Experts:
- "experienced annotators"
- "radiologists or radiation oncologists" (mentors for annotators)
- "board-certified radiation oncologist" (for quality assessment/review)
4. Adjudication Method for the Test Set
The document describes the ground truth establishment process as: "manual annotation" by experienced annotators mentored by radiologists/radiation oncologists, followed by a "quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This indicates a hierarchical review/correction process rather than a multi-reader consensus adjudication between equally-weighted readers (e.g., 2+1 or 3+1). The final accepted contour after the board-certified radiation oncologist's review served as the ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was described. The study focused on the standalone performance of the AI algorithm against established ground truth, with comparison to a predicate device and to the literature. The document does not report an effect size for how much human readers improve with AI assistance versus without it. The intended use specifies that AI-generated contours must be reviewed, edited, and accepted by trained medical professionals, implying a human-in-the-loop workflow; however, the validation study presented focuses on the algorithm's autonomous segmentation accuracy.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance study was done. The performance metrics (Dice coefficient, ASSD) and the comparison to an expert-established ground truth demonstrate the algorithm's autonomous segmentation capability. The study validated the "autocontouring algorithms" and their performance.
7. Type of Ground Truth Used
The ground truth for the test set was expert manual annotation based on clinical guidelines. Specifically: "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation." These annotations were then reviewed and corrected by a board-certified radiation oncologist.
8. Sample Size for the Training Set
The document provides the sample sizes for the training set for new organs introduced:
- Table 6: Training Dataset Characteristics (Examples):
- Lacrimal Glands Left/Right: 247
- Pituitary Gland: 247
- Humeral Head Left/Right: 207
- Bowel Bag: 544
- Pelvic Bone Left/Right: 160
- Sacrum: 160
- Mediastinal LN (various): 136
- Femoral Head Left/Right: 160
- Brainstem: 247
- Esophagus: 247
- Breast Left/Right: 172
- Supraglottic Larynx: 247
- Glottis: 247
The document does not give a total training-set size summed across all organs, but these per-organ counts indicate the scale of the training data used for the new organs.
9. How the Ground Truth for the Training Set Was Established
"In both the annotation process for the training and validation testing data, the annotation protocols for the OAR were defined following the applicable guidelines. The ground truth annotations were drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool. Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
This indicates the same rigorous process of expert manual annotation and review was applied to establish ground truth for the training set as for the test set. The validation testing and training data were explicitly stated to be independent.