510(k) Data Aggregation (251 days)
The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, and breast, and with metastases to the brain. The RPA plans external beam irradiation with photon beams using CT images. The RPA creates contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose.
Some functions of the RPA use Eclipse 15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation.
The Radiation Planning Assistant (RPA) is a web-based contouring and radiotherapy treatment planning software tool that incorporates basic radiation planning functions: automated contouring, automated planning with dose optimization, and quality-control checks. The system is intended for use in patients with cancer of the head and neck, cervix, or breast, and with metastases to the brain. The RPA system is integrated with the Eclipse Treatment Planning System v15.6 software cleared under K181145. The RPA radiation treatment planning models were trained on large datasets (hundreds to thousands of CT scans, depending on the model) of normal and diseased tissues from patients receiving radiation for head and neck, cervical, breast, and whole-brain treatments at MD Anderson Cancer Center.
Here's a breakdown of the acceptance criteria and study information for the Radiation Planning Assistant (RPA) device:
1. Table of Acceptance Criteria and Reported Device Performance
| Criteria Number | Criteria | Reported Device Performance (Overall, across all sites/anatomical locations where available) |
|---|---|---|
| 1. | Assess the safety of using the RPA plan for normal structures by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures (VMAT and 4-field box).<br>Chest Wall: ≤ 7% difference for all assessed structures.<br>Head & Neck: < 5% difference for the majority of assessed structures.<br>Whole Brain: < 6% difference for all assessed structures (right and left lens). |
| 2. | Assess the effectiveness of the RPA plan for normal structures by comparing the dose to RPA normal structures for RPA plans and to clinical normal structures for clinical plans. The difference in the number of RPA plans and clinical plans that pass accepted dosimetric metrics should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures (VMAT and 4-field box).<br>Chest Wall: ≤ 5% difference for all assessed structures.<br>Head & Neck: < 5% difference for the majority of assessed structures.<br>Whole Brain: < 9% difference for all assessed structures (right and left lens). |
| 3. | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the PTV receiving 95% of the prescribed dose) with the number of clinical plans that pass. The difference should be 5% or less. When multiple metrics are used to assess a single structure, at least one coverage criterion and one maximum-dose criterion should pass. | Cervix: < 5% difference for all assessed structures.<br>Head & Neck: < 5% difference for the majority of assessed criteria.<br>Whole Brain: < 5% difference for all assessed structures. |
| 4. | Assess the geometric effectiveness of the RPA targets using recall. A low value for this metric represents under-contouring. The 25th percentile of the recall must be 0.7 or greater. | Cervix: 25th percentile of recall > 0.7<br>Head & Neck: 25th percentile of recall > 0.7 |
| 5. | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2 mm tolerance) should be greater than 0.8 for 95% of the CT scans. | Cervix: Surface DSC > 0.8 for 95% of CT scans<br>Chest Wall: Surface DSC > 0.8 for 95% of CT scans<br>Head & Neck: Surface DSC > 0.8 for > 95% of CT scans<br>Whole Brain: Surface DSC > 0.8 for all assessments. |
| 6. | Assess the ability of the RPA to accurately identify the marked isocenter by comparing automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters (primary and verification approaches) should agree with manually generated marked isocenters within 3 mm in all orthogonal directions (AP, lateral, cranial-caudal). | Cervix: < 3 mm difference between automatic and manual isocenters in all orthogonal directions.<br>Head & Neck: < 3 mm difference in all orthogonal directions.<br>Whole Brain: < 3 mm difference in all orthogonal directions. |
Note on Differences in Reported Performance: The document provides specific results for each anatomical location (Cervix, Chest Wall, Head & Neck, Whole Brain). The table above aggregates these where applicable, noting that specific percentages might vary slightly by location for a given criterion.
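The geometric criteria above (recall, Surface DSC, isocenter agreement) are standard contour-comparison metrics. The following is a minimal pure-Python sketch of how such metrics could be computed; the function names, voxel/point representations, and example data are illustrative assumptions, not the submission's actual implementation, and only the thresholds quoted in the table come from the document.

```python
import math

def recall(auto_voxels, reference_voxels):
    """Criterion 4: fraction of reference (clinical) voxels captured by
    the automated contour; low values indicate under-contouring."""
    if not reference_voxels:
        return 0.0
    return len(auto_voxels & reference_voxels) / len(reference_voxels)

def percentile_25(values):
    """25th percentile with linear interpolation between ranks."""
    s = sorted(values)
    idx = 0.25 * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

def surface_dsc(points_a, points_b, tol_mm=2.0):
    """Criterion 5: surface Dice at a 2 mm tolerance -- the fraction of
    each surface's points lying within tol_mm of the other surface
    (brute-force nearest-neighbor check)."""
    def near(p, pts):
        return any(math.dist(p, q) <= tol_mm for q in pts)
    hits_a = sum(near(p, points_b) for p in points_a)
    hits_b = sum(near(q, points_a) for q in points_b)
    return (hits_a + hits_b) / (len(points_a) + len(points_b))

def isocenter_agrees(auto_iso, manual_iso, tol_mm=3.0):
    """Criterion 6: agreement within 3 mm in each orthogonal direction
    (AP, lateral, cranial-caudal)."""
    return all(abs(a - m) <= tol_mm for a, m in zip(auto_iso, manual_iso))

# Cohort-level check for Criterion 4: 25th percentile of recall >= 0.7
cohort_recalls = [0.92, 0.88, 0.75, 0.71, 0.95]  # illustrative values
criterion_4_passes = percentile_25(cohort_recalls) >= 0.7
```

Note that the brute-force surface DSC above is O(n²) in the number of surface points; production implementations typically use distance transforms over the image grid instead.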
Study Details Proving Acceptance Criteria
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Description: The test datasets were composed of CT images of patients previously treated with radiotherapy. The sampling method selected data forward in time from January 1, 2022 until sufficient data was collected; if needed, collection was extended back to January 1, 2021. Critically, the testing datasets were unique, with no overlap with data used for model creation or previous validation studies.
- Sample Sizes per Anatomical Location/Study:
- Cervix: 50 unique patients for VMAT plans, 47 unique patients for 3D soft tissue plans, and 45 unique patients for 3D bony landmark plans.
- Chest Wall: 46 unique patients.
- Head and Neck: 86 unique patients.
- Whole Brain: 46 unique patients.
- Data Provenance (Source): The document does not explicitly state the country of origin for the multicenter clinical data. The training data largely originated from MD Anderson Cancer Center (Houston, TX, USA), and one of the GYN normal-tissue test sets included "30 cervical cancer patients from 3 centers in S. Africa." The phrase "multicenter clinical data" implies multiple clinical sites, possibly both US and international, but the primary training and internal validation data appear to come from MD Anderson. The studies are retrospective, using previously acquired patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document does not explicitly state the number of experts used to establish ground truth for the test set. However, the following can be inferred from the description of the training data and its ground-truth methodology:
- Ground Truth Treatment Plans (Training): The ground truth treatment plans were generated via the "primary 4-field box automation technique for cervical cancer by Kisling et al." and were "rated by physicians."
- Contouring Ground Truth: The various anatomical location specific training sets mention that the "original clinical contours of anatomic structures and treatment targets" were part of the datasets.
- Qualifications: The general qualification stated is "physicians" for plan rating and "trained medical professionals" for the predicate device, implying clinical experts such as radiation oncologists for plan assessment and potentially dosimetrists or other clinicians for contouring. No specific number of experts or years of experience is detailed for the test-set ground truth.
4. Adjudication Method for the Test Set:
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1) used for establishing the ground truth for the test set. The ground truth for dose evaluation was based on "accepted dosimetric metrics" compared to "clinical contour" or "clinical plans," suggesting a reference standard already deemed clinically acceptable. For body contouring, manual contours were used as the reference.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
A multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described in this submission. The studies presented focus on the standalone performance of the RPA device in generating contours and treatment plans against established clinical ground truths or conventional clinical plans. The device is intended to be used with human review and editing in the workflow ("All automatically generated contours and plans must be imported into the user's own Treatment Planning System (TPS) for review, edit, and final dose calculation").
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance:
Yes, a standalone performance assessment was conducted. The acceptance criteria and the results presented directly evaluate the performance of the RPA device's generated contours and plans against clinical standards. The metrics (e.g., dosimetric differences, recall, Surface DSC, isocenter agreement) quantify the algorithm's output independently before human intervention. The device's integration into the user's workflow still requires human review, but the reported performance metrics are of the automated output.
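The standalone dosimetric evaluation in Criteria 1–3 amounts to comparing pass rates between RPA plans and clinical plans. The sketch below illustrates that comparison under stated assumptions: the metric name (V95), threshold, and data are illustrative, not values from the submission; only the 5% pass-rate difference comes from the acceptance criteria.

```python
def pass_rate(metric_values, threshold, higher_is_better=True):
    """Fraction of plans whose dosimetric metric meets its acceptance
    threshold, e.g. V95 (% of PTV receiving 95% of prescription) >= 95."""
    if higher_is_better:
        passed = sum(v >= threshold for v in metric_values)
    else:
        passed = sum(v <= threshold for v in metric_values)
    return passed / len(metric_values)

def meets_criterion(rpa_values, clinical_values, threshold,
                    higher_is_better=True, max_diff=0.05):
    """Criteria 1-3: the difference in pass rates between RPA plans and
    clinical plans must be 5 percentage points or less."""
    diff = abs(pass_rate(rpa_values, threshold, higher_is_better)
               - pass_rate(clinical_values, threshold, higher_is_better))
    return diff <= max_diff

# Illustrative V95 coverage values (% of PTV receiving 95% of the dose)
rpa_v95 = [96.1, 95.4, 95.8, 97.0]
clinical_v95 = [96.5, 95.0, 95.2, 96.8]
criterion_met = meets_criterion(rpa_v95, clinical_v95, threshold=95.0)
```

For maximum-dose metrics (e.g., cord D0.03cc), `higher_is_better=False` would apply, since lower values pass.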
7. Type of Ground Truth Used:
- Treatment Plans: "Ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019) with beam apertures based on a patient's bony anatomy. Only the clinically acceptable plans were used (rated by physicians)." This indicates a clinician-adjudicated "clinically acceptable plan" or expert-validated reference plan.
- Contours: "Original clinical contours of anatomic structures and treatment targets" were used for comparison, suggesting expert-drawn clinical contours as ground truth.
- Isocenter: "Manually generated marked isocenters" were used as ground truth.
- Body Contours: "Manual body contours" were used as ground truth.
- Overall, the ground truth is primarily based on expert consensus/clinical practice and reference standards derived from previously treated cases and expert-rated plans/contours.
8. Sample Size for the Training Set:
The training set sizes vary greatly by anatomical location and tissue type:
- Head and Neck:
- Normal Tissue (primary): 3,288 patients (3,495 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 160 patients from MD Anderson (2018-2020).
- Lymph Node CTVs: 61 patients from MD Anderson (2010-2019).
- Whole Brain:
- Spinal canal CNN: 1,966; vertebral body (VB) labeling: 803; VB segmentation: 107 — drawn from 930 MDACC patients and 355 external patients.
- GYN:
- Normal Tissue (primary): 1,999 patients (2,254 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 192 patients (316 CT scans) from MD Anderson (2006-2020).
- CTVs (UteroCervix, Nodal CTV, PAN, Vagina, Parametria): 406 to 490 CT scans from 131-388 patients from MD Anderson (2006-2020).
- Liver: 119 patients (169 CT scans) from MD Anderson.
- Chest Wall:
- Whole Body (secondary): 250 patients from MD Anderson (Aug 2016 - June 2021).
9. How the Ground Truth for the Training Set Was Established:
The ground truth for the training set was established through:
- Clinically Accepted Plans: For treatment plans, "only the clinically acceptable plans were used for training (rated by physicians)." This implies a form of expert review and selection of existing clinical plans.
- Existing Clinical Data: The training data largely consisted of "CT Scans of normal and diseased tissues from patients receiving radiation" at MD Anderson Cancer Center. These datasets included "original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment." This suggests that the ground truth was derived from the routine clinical data, which is implicitly considered the standard of care as performed by clinicians at MD Anderson.
- External Patient Data: Some training data also included external patient data, such as for the Vertebral Bodies model (355 external patients) and publicly available data (MICCAI challenge data).
- Published Methodology: For cervical cancer, the "ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019)," referring to a published methodology that would have defined how these plans (and implicitly their underlying contours and dose calculations) were considered ground truth.