The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, breast, and metastases to the brain. The RPA is used to plan external beam irradiation with photon beams using CT images. The RPA is used to create contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose.
Some functions of the RPA use Eclipse 15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation.
The Radiation Planning Assistant (RPA) is a web-based contouring and radiotherapy treatment planning software tool that incorporates the basic radiation planning functions from automated contouring, automated planning with dose optimization, and quality control checks. The system is intended for use for patients with cancer of the head and neck, cervix, breast, and metastases to the brain. The RPA system is integrated with the Eclipse Treatment Planning System v15.6 software cleared under K181145. The RPA radiation treatment planning software tool was trained against hundreds / thousands of CT Scans of normal and diseased tissues from patients receiving radiation for head and neck, cervical, breast, and whole brain at MD Anderson Cancer Center.
Here's a breakdown of the acceptance criteria and study information for the Radiation Planning Assistant (RPA) device:
1. Table of Acceptance Criteria and Reported Device Performance
| Criteria Number | Criteria | Reported Device Performance (Overall, across all sites/anatomical locations where available) |
|---|---|---|
| 1. | Assess the safety of using the RPA plan for normal structures for treatment planning by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures with VMAT and 4-field box. Chest Wall: ≤ 7% difference for all assessed structures. Head & Neck: < 5% difference for the majority of assessed structures. Whole Brain: < 6% difference for all assessed structures (right and left lens). |
| 2. | Assess the effectiveness of the RPA plan for normal structures by comparing the dose to RPA normal structures for RPA plans and clinical normal structures for clinical plans. The difference in the number of RPA plans that pass accepted dosimetric metrics and the number of clinical plans that pass accepted dosimetric metrics should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures with VMAT and 4-field box. Chest Wall: ≤ 5% difference for all assessed structures. Head & Neck: < 5% difference for the majority of assessed structures. Whole Brain: < 9% difference for all assessed structures (right and left lens). |
| 3. | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the PTV receiving 95% of the prescribed dose) with clinical plans. The difference should be 5% or less. When there are multiple metrics used to assess a single structure, at least one coverage and one maximum criterion should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all assessed structures. Head & Neck: < 5% difference for the majority of assessed criteria. Whole Brain: < 5% difference for all assessed structures. |
| 4. | Assess the geometric effectiveness of the RPA targets using recall. A low value for this metric represents under-contouring. The 25th percentile of the recall must be 0.7 or greater. | Cervix: 25th percentile of recall > 0.7. Head & Neck: 25th percentile of recall > 0.7. |
| 5. | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2 mm) should be greater than 0.8 for 95% of the CT scans. | Cervix: Surface DSC > 0.8 for 95% of CT scans. Chest Wall: Surface DSC > 0.8 for 95% of CT scans. Head & Neck: Surface DSC > 0.8 for > 95% of CT scans. Whole Brain: Surface DSC > 0.8 for all assessments. |
| 6. | Assess the ability of the RPA to accurately identify the marked isocenter. This is achieved by comparing the automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters (primary and verification approaches) should agree with manually generated marked isocenters within 3 mm in all orthogonal directions (AP, lateral, cranial-caudal). | Cervix: < 3 mm difference between automatic and manual isocenters in all orthogonal directions. Head & Neck: < 3 mm difference in all orthogonal directions. Whole Brain: < 3 mm difference in all orthogonal directions. |
Note on Differences in Reported Performance: The document provides specific results for each anatomical location (Cervix, Chest Wall, Head & Neck, Whole Brain). The table above aggregates these where applicable, noting that specific percentages might vary slightly by location for a given criterion.
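The pass-rate criteria above (criteria 1-3) reduce to a simple comparison of proportions, and the coverage metric cited in criterion 3 is a standard DVH statistic. The sketch below is illustrative only; the function names and the example numbers are assumptions for demonstration, not the submission's actual test code:

```python
def pass_rate_difference_ok(rpa_passes, clinical_passes, n_plans, tolerance=0.05):
    """Criteria 1-3: the share of RPA plans passing a dosimetric metric may
    differ from the share of clinical plans passing it by at most `tolerance`."""
    return abs(rpa_passes - clinical_passes) / n_plans <= tolerance

def v95(ptv_doses, prescription):
    """Criterion 3 example metric: fraction of PTV voxels receiving at least
    95% of the prescribed dose (a V95 coverage statistic)."""
    threshold = 0.95 * prescription
    covered = sum(1 for d in ptv_doses if d >= threshold)
    return covered / len(ptv_doses)

# Hypothetical example: 47 of 50 RPA plans pass vs. 49 of 50 clinical plans,
# a 4% gap, which is within the 5% criterion.
assert pass_rate_difference_ok(47, 49, 50)
# Four toy PTV voxel doses in Gy against a 50 Gy prescription (threshold 47.5 Gy).
assert v95([49.0, 48.0, 47.6, 45.0], prescription=50.0) == 0.75
```

In the actual studies these checks were evaluated per structure and per anatomical site, with the "at least one metric per structure" rule applied on top.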
Study Details Proving Acceptance Criteria
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Description: The test datasets were composed of CT images of patients previously treated with radiotherapy. The sampling method involved selecting data prospectively from January 1, 2022, onwards until sufficient data was collected. If needed, data collection was extended back to January 1, 2021. Critically, testing datasets were unique, with no overlap with data used for model creation or previous validation studies.
- Sample Sizes per Anatomical Location/Study:
- Cervix: 50 unique patients for VMAT plans, 47 unique patients for 3D soft tissue plans, and 45 unique patients for 3D bony landmark plans.
- Chest Wall: 46 unique patients.
- Head and Neck: 86 unique patients.
- Whole Brain: 46 unique patients.
- Data Provenance (Source): The document doesn't explicitly state the country of origin for the multicenter clinical data, but the training data largely originated from MD Anderson Cancer Center (Houston, TX, USA), and one of the GYN normal tissue test sets included "30 cervical cancer patients from 3 centers in S. Africa." Given "multicenter clinical data," it implies multiple clinical sites that could be within the US or international, but the primary training and internal validation appear to be from MD Anderson. The studies are retrospective, using previously acquired patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document does not explicitly state the number of experts used to establish ground truth for the test set specifically. However, for the training data (which implies the source of the ground truth methodology):
- Ground Truth Treatment Plans (Training): The ground truth treatment plans were generated via the "primary 4-field box automation technique for cervical cancer by Kisling et al." and were "rated by physicians."
- Contouring Ground Truth: The various anatomical location specific training sets mention that the "original clinical contours of anatomic structures and treatment targets" were part of the datasets.
- Qualification: The general qualification stated is "physicians" for plan rating and "trained medical professionals" for the predicate device, implying clinical experts like radiation oncologists for plan assessment and potentially dosimetrists or other clinicians for contouring. No specific number or years of experience are detailed for the ground truth experts for the test set.
4. Adjudication Method for the Test Set:
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1) used for establishing the ground truth for the test set. The ground truth for dose evaluation was based on "accepted dosimetric metrics" compared to "clinical contour" or "clinical plans," suggesting a reference standard already deemed clinically acceptable. For body contouring, manual contours were used as the reference.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
A multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described in this submission. The studies presented focus on the standalone performance of the RPA device in generating contours and treatment plans against established clinical ground truths or conventional clinical plans. The device is intended to be used with human review and editing in the workflow ("All automatically generated contours and plans must be imported into the user's own Treatment Planning System (TPS) for review, edit, and final dose calculation").
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance:
Yes, a standalone performance assessment was conducted. The acceptance criteria and the results presented directly evaluate the performance of the RPA device's generated contours and plans against clinical standards. The metrics (e.g., dosimetric differences, recall, Surface DSC, isocenter agreement) quantify the algorithm's output independently before human intervention. The device's integration into the user's workflow still requires human review, but the reported performance metrics are of the automated output.
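The standalone geometric metrics cited above (recall, Surface DSC at a 2 mm tolerance, and 3 mm isocenter agreement) can be sketched as follows. This is a simplified voxel-set illustration under an assumed 1 mm isotropic spacing, not the submission's evaluation code:

```python
def recall(pred, truth):
    """Criterion 4: |pred ∩ truth| / |truth| for sets of voxel coordinates.
    A low value indicates under-contouring of the target."""
    return len(pred & truth) / len(truth)

def surface(mask):
    """Voxels of `mask` with at least one 6-connected neighbor outside it."""
    return {
        (x, y, z) for (x, y, z) in mask
        if any((x + dx, y + dy, z + dz) not in mask
               for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                  (0, -1, 0), (0, 0, 1), (0, 0, -1)])
    }

def surface_dsc(a, b, tol_mm=2.0, spacing_mm=1.0):
    """Criterion 5: fraction of the two surfaces lying within `tol_mm` of
    each other (brute-force distances; fine for toy masks)."""
    sa, sb = surface(a), surface(b)
    def near(p, surf):
        return any(sum((spacing_mm * (u - v)) ** 2
                       for u, v in zip(p, q)) <= tol_mm ** 2 for q in surf)
    hits = sum(near(p, sb) for p in sa) + sum(near(q, sa) for q in sb)
    return hits / (len(sa) + len(sb))

def isocenter_ok(auto, manual, tol_mm=3.0):
    """Criterion 6: agreement within 3 mm in every orthogonal direction."""
    return all(abs(u - v) <= tol_mm for u, v in zip(auto, manual))
```

For example, a 5x5x5 voxel "ground truth" cube against a prediction missing one face gives a recall of 0.8, while its surface DSC at 2 mm remains 1.0 because every surface point of one mask lies within 2 mm of the other's surface.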
7. Type of Ground Truth Used:
- Treatment Plans: "Ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019) with beam apertures based on a patient's bony anatomy. Only the clinically acceptable plans were used (rated by physicians)." This indicates a clinician-adjudicated "clinically acceptable plan" or expert-validated reference plan.
- Contours: "Original clinical contours of anatomic structures and treatment targets" were used for comparison, suggesting expert-drawn clinical contours as ground truth.
- Isocenter: "Manually generated marked isocenters" were used as ground truth.
- Body Contours: "Manual body contours" were used as ground truth.
- Overall, the ground truth is primarily based on expert consensus/clinical practice and reference standards derived from previously treated cases and expert-rated plans/contours.
8. Sample Size for the Training Set:
The training set sizes vary greatly by anatomical location and tissue type:
- Head and Neck:
- Normal Tissue (primary): 3,288 patients (3,495 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 160 patients from MD Anderson (2018-2020).
- Lymph Node CTVs: 61 patients from MD Anderson (2010-2019).
- Whole Brain (Spinal Canal CNN, VB labeling, VB segmentation): 1,966 (CNN), 803 (VB labeling), 107 (VB segmentation) from 930 MDACC patients and 355 external patients.
- GYN:
- Normal Tissue (primary): 1,999 patients (2,254 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 192 patients (316 CT scans) from MD Anderson (2006-2020).
- CTVs (UteroCervix, Nodal CTV, PAN, Vagina, Parametria): 406 to 490 CT scans from 131-388 patients from MD Anderson (2006-2020).
- Liver: 119 patients (169 CT scans) from MD Anderson.
- Chest Wall:
- Whole Body (secondary): 250 patients from MD Anderson (Aug 2016 - June 2021).
9. How the Ground Truth for the Training Set Was Established:
The ground truth for the training set was established through:
- Clinically Accepted Plans: For treatment plans, "only the clinically acceptable plans were used for training (rated by physicians)." This implies a form of expert review and selection of existing clinical plans.
- Existing Clinical Data: The training data largely consisted of "CT Scans of normal and diseased tissues from patients receiving radiation" at MD Anderson Cancer Center. These datasets included "original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment." This suggests that the ground truth was derived from the routine clinical data, which is implicitly considered the standard of care as performed by clinicians at MD Anderson.
- External Patient Data: Some training data also included external patient data, such as for the Vertebral Bodies model (355 external patients) and publicly available data (MICCAI challenge data).
- Published Methodology: For cervical cancer, the "ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019)," referring to a published methodology that would have defined how these plans (and implicitly their underlying contours and dose calculations) were considered ground truth.
May 17, 2023
[Letterhead: U.S. Department of Health & Human Services and U.S. Food & Drug Administration logos]
University of Texas, MD Anderson Cancer Center
c/o Ms. Stella Tsai
Sr. Project Manager
1515 Holcombe Blvd.
Houston, TX 77030
Re: K222728
Trade/Device Name: Radiation Planning Assistant (RPA)
Regulation Number: 21 CFR 892.5050
Regulation Name: Medical charged-particle radiation therapy system
Regulatory Class: Class II
Product Code: MUJ
Dated: April 17, 2023
Received: April 17, 2023
Dear Ms. Stella Tsai:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting of medical device-related adverse events (21 CFR 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act; 21 CFR 1000-1050).
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.
For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,
[Digital signature: Lora D. Weidner -S, signed 2023.05.17 07:14:45 -04'00']
Lora D. Weidner, Ph.D.
Assistant Director
Radiation Therapy Team
DHT8C: Division of Radiological Imaging and Radiation Therapy Devices
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health
Enclosure
Indications for Use
510(k) Number (if known) K222728
Device Name Radiation Planning Assistant (RPA)
Indications for Use (Describe)
The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, breast, and metastases to the brain. The RPA is used to plan external beam irradiation with photon beams using CT images. The RPA is used to create contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose.
Some functions of the RPA use Eclipse 15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation.
Type of Use (Select one or both, as applicable)
X Prescription Use (Part 21 CFR 801 Subpart D)
Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
K222728
DATE PREPARED: 16 May 2023
1. SUBMITTER
| Manufacturer Name: | The University of Texas MD Anderson Cancer Center, Department of Radiation Physics, Division of Radiation Oncology, 1515 Holcombe Blvd., Houston, TX 77030 |
|---|---|
| Official Contact: | Stella Tsai, MHA, CCRA, Sr. Project Director, IND Office, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Unit 1634, Houston, TX 77030; Telephone (713) 563-5464; swtsai@mdanderson.org |
2. DEVICE
| Name of Device: | Radiation Planning Assistant (RPA) |
|---|---|
| Common or Usual Name: | System, Planning, Radiation Therapy Treatment |
| Classification Name: | 21 CFR 892.5050 - Medical charged-particle radiation therapy system |
| Regulatory Class: | II |
| Product Code: | MUJ |
3. PREDICATE DEVICE
Eclipse Treatment Planning System v15.6 (K181145)
4. DEVICE DESCRIPTION
Design Characteristics
The Radiation Planning Assistant (RPA) is a web-based contouring and radiotherapy treatment planning software tool that incorporates the basic radiation planning functions from automated contouring, automated planning with dose optimization, and quality control checks. The system is intended for use for patients with cancer of the head and neck, cervix, breast, and metastases to the brain. The RPA system is integrated with the Eclipse Treatment Planning System v15.6 software cleared under K181145. The RPA radiation treatment planning software tool was trained against hundreds / thousands of CT Scans of normal and diseased tissues from patients receiving radiation for head and neck, cervical, breast, and whole brain at MD Anderson Cancer Center.
5. INDICATIONS FOR USE
The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, breast, and metastases to the brain.
The RPA is used to plan external beam irradiation with photon beams using computerized tomography (CT) images. The RPA is used to create contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose.
Some functions of the RPA use Eclipse 15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation.
6. COMPARISON OF TECHNOLOGICAL CHARACTERISTICS WITH THE PREDICATE DEVICE
The Radiation Planning Assistant (RPA) is substantially equivalent to the Eclipse Treatment Planning System v15.6 (K181145) predicate device in the following respects:
| Characteristic | Subject Device: Radiation Planning Assistant (RPA) | Predicate Device: Eclipse Treatment Planning System Version 15.6 (K181145) |
|---|---|---|
| CFR Citation | 892.5050 | 892.5050 |
| Product Code | MUJ | MUJ |
| Indications for Use | The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, breast, and metastases to the brain. The RPA is used to plan external beam irradiation with photon beams using computerized tomography (CT) images. The RPA is used to create contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose. Some functions of the RPA use Eclipse v15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation. | The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments. |
| Device Description | The Radiation Planning Assistant (RPA) is a web-based contouring and radiotherapy treatment planning software tool that incorporates the basic radiation planning functions from automated contouring, automated planning with dose optimization, and quality control checks. The system is intended for use for patients with cancer of the head and neck, cervix, breast, and metastases to the brain. The RPA system is integrated with the Eclipse Treatment Planning System v15.6 software cleared under K181145. | The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments. |
Table 1: Comparison of the Technological Characteristics of the RPA with Predicate Device
Comparison of the RPA System's Software Functions with the Eclipse Treatment Planning System Version 15.6 (K181145):

| Software Function | Description of Functions Available in Eclipse | Differences | Similarities | Rationale for Substantial Equivalence |
|---|---|---|---|---|
| Software Contouring Functions | 1. Organ-specific autocontouring algorithms. Eclipse v15.6 includes organ-specific autocontouring algorithms for the following organs: spine, lung, brain, eye, bone. 2. Expert Segmentation. Eclipse v15.6 includes an atlas-based contouring approach ("expert segmentation") for autocontouring of many structures, including: head/neck region (body, bones, brainstem, cochlea, esophagus, eyes, mandible, oral cavity, various lymph nodes); breast region (body, heart, trachea, various lymph nodes); pelvis (body, bladder, femoral heads, pelvic bones, rectum, spinal canal). The system calculates the anatomical points, image features, and similarity scores of the patient image and compares them with pre-stored expert cases. Rigid registration is used both for initializing the deformable registration algorithm and for displaying the expert case and patient image in aligned preview. The autocontouring approach depends on the selected structures: either the structures are heuristically segmented from the patient image, or they are generated via deformable registration and structure propagation from the expert cases. If multiple expert cases are used, the propagated structures from the different atlases are fused by means of the simultaneous truth and performance level estimation (STAPLE) algorithm. | The RPA uses deep learning algorithms, which Eclipse does not. Use and function: In Eclipse, the user edits the contours prior to planning. This is the same for complex planning in the RPA (VMAT planning for head/neck and cervix). It is different for simple plans, where the plan is generated before the user reviews the contours; if the user edits the contours in the RPA, they will have to delete the plan as well. | Use and function: The RPA provides autocontouring for a range of structures, including most of those listed here for Eclipse. Performance data: The algorithm for Expert Segmentation in Eclipse is very similar to the Multi-Atlas Contouring System (MACS) that is used to contour structures for chest wall planning in the RPA. Safety and effectiveness: Both Eclipse and the RPA are designed to provide contours that the users review and edit. | Devices are Substantially Equivalent. Both devices provide autocontouring functions for the same anatomical regions. Both devices require user edits of contours with 'complex plans' prior to planning. |
| Eclipse, Other Plan Preparation | Automatic marker detection. Eclipse v15.6 includes a function ('Calypso Beacon Detection') to automatically detect a specific type of marker (Calypso transponders) on CT images. | Use and function: The Eclipse function is for a specific type of marker that is different from the generic markers that the RPA is designed for. | Use and function: Both Eclipse and the RPA can automatically detect markers. | Devices are Substantially Equivalent. Both devices can automatically detect markers. |
| Eclipse Automated Planning, VMAT | 1. Photon Optimizer (PO) algorithm. This algorithm is used to optimize IMRT or VMAT plans based on DVH constraints/objectives. 2. Automated Optimization Workflow. Enabling this can automate the optimization workflow for IMRT planning so that, after optimization, the leaf motion calculation and final dose calculation are automatically initiated, and the results are then automatically saved. A similar feature exists for VMAT plans. 3. DVH Estimation Models for RapidPlan. DVH estimation models are created from information extracted from a set of previous treatment plans. The estimation models predict the DVH that is achievable for the current treatment plan (based on the geometry in the current plan), and also create a set of optimization objectives that can be based on the DVH estimates or fixed (i.e., not based on the DVH estimates). | Use and function: The main difference for VMAT planning is that Eclipse generally creates a plan that the user reviews, makes edits to the optimization constraints, and repeats the process to improve the plan quality. The RPA uses the same optimization tools (i.e., the tools in Eclipse), but the optimization objectives and constraints have been pre-set to give optimal plans for the majority of patients. The user is not able to easily edit the RPA VMAT plans so, if they do not approve the plan for clinical use, they must delete it and create one using their own routine processes (i.e., in their own treatment planning system). | Use and function: The RPA uses some Eclipse features, including DVH Estimates for RapidPlan and the Photon Optimizer for optimizing VMAT plans. The plans look very similar. Safety and effectiveness: Both Eclipse and the RPA are designed to create plans that the users then edit and review for clinical acceptability prior to use. | Devices are Substantially Equivalent. Both devices provide autoplanning features and create plans that the users then edit and review for clinical acceptability prior to use. Both devices provide the Photon Optimizer, automated optimization workflow, and DVH estimation models. |
| Autoplanning, Other | 1. Beam Angle Optimization (BAO). This tool optimizes the number and angle of treatment beams. It optimizes the objective function, which is determined by DVH goals/constraints and a normal tissue objective (which falls off with distance from the PTV). BAO can be used for IMRT plans or as a starting point for conformal treatment plans. 2. Collimator Angle Optimization (CAO). This function optimizes the collimator angle for each arc of a HyperArc plan such that, whenever possible, a given pair of MLC leaves delineates only one target in the beam's-eye-view. 3. Optimize collimator jaws. Adjusts the collimator jaws to best fit the MLC leaves to the structure. 4. Use recommended jaw positions. Adjusts the collimator jaw positions with an additional margin. 5. Optimize collimator rotation. Optimizes the collimator rotation around a structure. | Use and function: Autoplanning in Eclipse is mostly automation of individual tasks that are controlled by the user. The user does not control these tasks with the RPA. Use and function: Review and editing of 3D plans (cervix 4-field box, post-mastectomy breast plans, whole brain plans) for the RPA happens in the users' own treatment planning system. | Use and function: Many of the treatment plan details in Eclipse and the RPA use functions with similar algorithms, such as optimizing the jaw positions. Safety and effectiveness: Both Eclipse and the RPA are designed to create plans that the users then edit and review for clinical acceptability prior to use. | Devices are Substantially Equivalent. Both devices create plans that the users then edit and review for clinical acceptability prior to use through the use of AI software. Both devices provide Beam Angle Optimization, Collimator Angle Optimization, and collimator jaw optimization. |
Table 2: Comparison of Functions of the Subject Device with Functions of the Predicate Device
7. PERFORMANCE DATA
7.1 Non-Clinical Data
7.1.1 Software Verification and Validation Testing
Software verification and validation was conducted, and documentation was provided as recommended by the FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The software for this device was considered a "major" level of concern. Test results demonstrate conformance to applicable requirements and specifications.
No animal studies or clinical tests have been included in this pre-market submission.
The ground truth treatment plans were generated with the 4-field box automation technique for cervical cancer described by Kisling et al. (Kisling 2019), with beam apertures based on a patient's bony anatomy. Only the clinically acceptable plans (rated by physicians) were used for training; their DRRs and corresponding beam apertures were the inputs for training (and the DRRs alone for testing/prediction). No additional criteria were applied. The test set was generated in the same manner as the ground truth, but on previously unseen patients.
Initial software training for each anatomical location was successfully accomplished and is described in brief in Table 3 below.
Table 3 presents the initial testing performed for software training and testing. Multicenter performance testing is presented in Section 7.2.
| Anatomical Location | Tissue Type(s) | Training Data Set | Test Data Independence |
|---|---|---|---|
| Head and Neck | Normal Tissue (primary) | 3,288 patients (3,495 CT scans) who received radiation therapy at MD Anderson Cancer Center between September 2004 and June 2018. Any patient who received a simulation CT scan of the head/neck region in a head-first supine position was eligible. | 174 CT scans were randomly selected from this group (and excluded from training), plus qualitative evaluation of 24 CT scans from an external dataset. |
| Head and Neck | Normal Tissue (secondary) | 160 patients who received radiation therapy at MD Anderson Cancer Center from 2018 to 2020. Any patient who received a simulation CT scan of the head/neck region in a head-first supine position was eligible. | Test patients were randomly selected and excluded from the training set. |
| Head and Neck | Lymph Node CTVs | 61 patients who received radiation therapy at MD Anderson Cancer Center between 2010 and 2019. Any patient who received a simulation CT scan of the head/neck region in a head-first supine position was eligible. | These 71 cases were randomly placed in 3 groups: training (51 pts.), cross-validation (10 pts.), and final test (10 pts.). |
| Whole Brain | Whole Brain | The whole brain primary segmentation models used the same models as used for head and neck segmentation, described above, as well as an additional vertebral body localization and segmentation model (Vertebral Bodies model: spinal canal CNN: 1,966, VB labeling: 803, VB segmentation: 107, from 930 MDACC patients and 355 external patients). Patients who received spinal radiotherapy for spinal metastases (3DCRT and VMAT) at MD Anderson, or for whom data was publicly available (MICCAI challenge data). | Test patients were randomly selected from this group (and excluded from training). |
Table 3: Software Training for Anatomical Locations
| Anatomical Location | Tissue Type(s) | Training Data Set | Test Data Independence |
|---|---|---|---|
| GYN | Normal Tissue (primary) | 1,999 patients (2,254 CT scans) who received radiation therapy at MD Anderson between September 2004 and June 2018. Any patient who received a simulation CT scan of the pelvic region in a head-first supine position was eligible. | 140 CT scans were randomly selected from this group (and excluded from training), plus qualitative evaluation with 30 cervical cancer patients from 3 centers in S. Africa. |
| GYN | Normal Tissue (secondary) | 192 patients (316 CT scans) who were treated for locally advanced cervical cancer between 2006 and 2020. | Test patients were randomly selected from this group (and excluded from training). |
| GYN | CTVs (primary) | 406 CT scans from 308 patients (UteroCervix), 250 CT scans from 201 patients (Nodal CTV), 146 CT scans from 131 patients (PAN), 490 CT scans from 388 patients (Vagina), 487 CT scans from 388 patients (Parametria) who received radiation therapy at MD Anderson Cancer Center between 2006 and 2020. | Test patients were randomly selected from this group (and excluded from training). |
| GYN | Liver (normal) | Training data for GYN Liver (normal) comprised 119 patients (169 CT scans) who had received contrast-enhanced and non-contrast CT imaging of the liver at MD Anderson Cancer Center. | Test patients were randomly selected from this group (and excluded from training). |
| Chest Wall | Whole Body (secondary for chest wall) | Training data for whole body (secondary for chest wall) comprised 250 patients who were treated at MD Anderson between August 2016 and June 2021, with CT imaging in the thoracic region. | Test patients were randomly selected from this group (and excluded from training). |
7.1.2 Standards Conformance
The subject device conforms in whole or in part with the following standards:
- IEC 62304, Medical device software: Software life cycle processes
- IEC 62083, Requirements for the safety of radiotherapy treatment planning systems
7.2 Clinical Data
A summary of the multicenter clinical data is presented in the tables below.
| Characteristic | All Cervix VMAT | All Cervix 3D | All Chest Wall | All Head and Neck | All Whole Brain |
|---|---|---|---|---|---|
| No. of Unique Patients with RPA Plan(s) | 50 | 47a; 45b | 46 | 86 | 46 |
| CT Scan Equipment | | | | | |
| Philips | x | x | x | x | x |
| Siemens | x | x | x | x | x |
| GE | x | x | x | x | x |
| No. of Clinical Sites | 5 | 5 | 5 | 5 | 5 |
| No. of Participating Physicians / Study Site | | | | | |
| Site 1 | 5 | 5 | 8 | 12 | 12 |
| Site 2 | 1 | 1 | 2 | 3 | 2 |
| Site 3 | 3 | 3 | 1 | 1 | 3 |
| Site 4 | 3 | 3 | 8 | 1 | 6 |
| Site 5 | 2 | 2 | 4 | 5 | 2 |
| Clinical Subgroups and Confounding Factors | | | | | |
| By Study Site | None | None | None | None | None |
| By Equipment | None | None | None | None | None |
| Age | | | | | |
| Mean | 51 | 50 | 51 | 62 | 60 |
| Min, Max | 26, 94 | 26, 84 | 31, 80 | 27, 87 | 14, 88 |
| Sex | | | | | |
| Male | 0.0% | 0.0% | 2.2% | 79.3% | 39.1% |
| Female | 100.0% | 100.0% | 97.8% | 29.7% | 34.8% |
| Not Reported | 0.0% | 0.0% | 0.0% | 0.0% | 26.1% |
| Race | | | | | |
| Asian | 9.8% | 2.1% | 26.1% | 5.4% | 6.5% |
| Black/African American | 13.7% | 14.9% | 13.0% | 12.0% | 8.7% |
| White | 39.2% | 78.7% | 32.6% | 73.9% | 54.3% |
| Native Hawaiian or Pacific Islander | 0.0% | 0.0% | 0.0% | 1.1% | 0.0% |
| British | 7.8% | 0.0% | 0.0% | 0.0% | 0.0% |
| American Indian or Alaskan Native | 2.0% | 0.0% | 4.3% | 1.1% | 0.0% |
| Other / not available | 27.5% | 4.3% | 23.9% | 6.5% | 28.3% |
| Ethnicity | | | | | |
| Hispanic or Latino | 25.5% | 10.6% | 8.7% | 7.6% | 6.5% |
| Not Hispanic or Latino | 41.2% | 46.8% | 43.5% | 50.0% | 58.7% |
| Other / not available | 33.3% | 42.6% | 47.8% | 42.4% | 34.8% |
Table 4: Demographics, Number of Patients, Number of Samples, and Clinical Sites

a: 4-field box soft tissue plan
b: 4-field box bony landmark plan
| Criteria Number | Criteria | Results |
|---|---|---|
| 1 | Assess the safety of using the RPA plan for normal structures for treatment planning by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When there are multiple metrics for a single structure at least one should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for all bony structures and critical soft tissue structures with VMAT and 4-field box.* |
| 2 | Assess the effectiveness of the RPA plan for normal structures by comparing the dose to RPA normal structures for RPA plans and clinical normal structures for clinical plans. The difference in the number of RPA plans that pass accepted dosimetric metrics and the number of clinical plans that pass accepted dosimetric metrics should be 5% or less. When there are multiple metrics for a single structure at least one should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for all bony structures and critical soft tissue structures with VMAT and 4-field box.** |
| 3 | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the PTV receiving 95% of the prescribed dose) when compared with clinical plans. The difference should be 5% or less. When there are multiple metrics used to assess a single structure, at least one coverage and one maximum criterion should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for all assessed structures |
| 4 | Assess the geometric effectiveness of the RPA targets using recall. A low value for this metric represents under-contouring. The 25th percentile of the recall must be 0.7 or greater. | 25th percentile for recall > 0.7 |
| 5 | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2mm) should be greater than 0.8 for 95% of the CT scans. | Surface DSC > 0.8 for 95% of CT scans |
| 6 | Assess the ability of the RPA to accurately identify the marked isocenter. This is achieved by comparing the automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters (primary and verification approaches) should agree with manually generated marked isocenters within 3mm in all orthogonal directions (AP, lateral, cranial-caudal). | < 3mm difference between RPA Plan and Clinical Plan for all orthogonal directions |
Table 5: Summary of Statistical Results—Cervix
* With the exception of bowel bag in the 4-field box plans, the RPA contour gives a more conservative result.
** RPA plan and clinical plan had a 6% - 13% difference in passing rates using VMAT on right kidney, bladder, and bowel. The passing rate of the RPA plan for rectum exceeded that of the clinical plans by more than 5%. However, when the RPA plan (which was created using the RPA normal contours) was assessed using the clinical normal contours, the passing rates for the clinical plan and RPA plan were within 5% for all normal structures. This is a result of the conservative nature of the RPA contours.
| Criteria Number | Inclusion Criteria | Exclusion Criteria | Sampling Method |
|---|---|---|---|
| 1 | CT scan of the female pelvic anatomy. | Poor Image Quality | Test datasets were chosen going forward in time until sufficient data were collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection was restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. |
| 2 | Clear CT image of the pelvic region without distortions. | - | |
| 3 | Test datasets consisted of CT images of patients previously treated for cervical cancer using radiotherapy following one of the following treatment schemes: 4-field box (based on bony landmarks or soft tissue), or VMAT. | - | |
| 4 | Scan was obtained with patient head-first, supine. | - | |
| 5 | The datasets included CT images, original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment. | - | |
| 6 | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection could be restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. | - | |
| 7 | Testing datasets were unique, with no overlap with data used for model creation or in previous validation studies. | - | |
| 8 | CT scans included the manufacturer and model of the scanner used to obtain the CT image. | - | |
Table 6: Summary of Cervix Protocol
| Criteria Number | Criteria | Results |
|---|---|---|
| 1 | Assess the safety of use of the RPA by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When multiple metrics were used to assess a single structure at least one had to pass the criteria (similar to the manner in which doses are assessed in clinical practice). | ≤ 7% difference between RPA Plan and Clinical Plan for all assessed structures. |
| 2 | Assess the effectiveness of use of the RPA by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., mean dose to the organ-at-risk) when compared with clinical plans. The difference should be 5% or less. When multiple metrics were used to assess a single structure, at least one should pass this criterion (similar to the manner in which doses are assessed in clinical practice). This was considered on a structure-by-structure basis. | ≤ 5% difference between RPA Plan and Clinical Plan for all assessed structures. |
| 3 | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2mm) should be greater than 0.8 for 95% of the CT scans. | Surface DSC > 0.8 for 95% of CT scans |
Table 7: Chest Wall Summary of Statistical Results
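Surface DSC at a 2 mm tolerance (criterion 3 above, and the body-contour criteria for the other anatomical sites) measures the fraction of each contour's surface lying within the tolerance of the other contour's surface. A minimal brute-force sketch, assuming surfaces are given as lists of (x, y, z) points in mm; production implementations typically compute nearest-surface distances with distance transforms on the voxel grid instead:

```python
import math


def surface_dsc(surface_a, surface_b, tolerance_mm=2.0):
    """Surface Dice similarity coefficient at a distance tolerance.

    Counts the points of each surface whose nearest neighbor on the other
    surface lies within `tolerance_mm`, then normalizes by the total
    number of surface points. Brute force: O(len(a) * len(b)).
    """
    def points_within(points, other):
        return sum(
            1 for p in points
            if min(math.dist(p, q) for q in other) <= tolerance_mm
        )

    agree = points_within(surface_a, surface_b) + points_within(surface_b, surface_a)
    return agree / (len(surface_a) + len(surface_b))


def body_contour_criterion_met(dsc_per_scan, threshold=0.8, fraction=0.95):
    """Surface DSC (2mm) should be greater than 0.8 for 95% of CT scans."""
    passing = sum(1 for d in dsc_per_scan if d > threshold)
    return passing / len(dsc_per_scan) >= fraction
```

The two-sided count is what distinguishes Surface DSC from a one-directional distance: both under- and over-segmentation of the body outline reduce the score.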
| Criteria Number | Inclusion Criteria | Exclusion Criteria | Sampling Method |
|---|---|---|---|
| 1 | CT scan of the breast (thorax) region. | Poor Image Quality | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection was restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. |
| 2 | Clear CT image of the breast (thorax) region without distortions. | - | |
| 3 | Test datasets must consist of CT images of patients previously treated for postmastectomy breast radiotherapy following one of the following treatment schemes: tangent fields with supraclavicular fields. Similar approaches, including those that also treat the intramammary lymph nodes, are also acceptable. | - | |
| 4 | Scan was obtained with patient head-first, supine. | - | |
| 5 | The datasets must include CT images, original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment. | - | |
| 6 | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection can be restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. | - | |
| 7 | Testing datasets must be unique, with no overlap with data used for model creation or in previous validation studies. | - | |
| 8 | CT scan must include the manufacturer and model of the scanner used to obtain the CT image, or it must be recorded separately. | - | |
Table 8: Chest Wall Protocol Summary
| Criteria Number | Criteria | Results |
|---|---|---|
| 1 | Assess the safety of use of RPA normal structures for treatment planning by comparing the number of patient plans that passed accepted dosimetric metrics (e.g., mean dose to the parotid) when assessed on the RPA contour with the number that passed when assessed on the clinical contour. The difference should be 5% or less. When multiple metrics were used to assess a single structure at least one should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for the majority of assessed structures. |
| 2 | Assess the effectiveness of use of RPA normal structures for treatment planning by comparing the number of RPA plans that passed accepted dosimetric metrics (e.g., mean dose to the parotid) when compared with clinical plans. The difference should be 5% or less. When multiple metrics were used to assess a single structure, at least one should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for the majority of assessed structures*. |
| 3 | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the PTV receiving 95% of the prescribed dose) when compared with clinical plans. The difference should be 5% or less. When there are multiple metrics used to assess a single structure, at least one coverage and one maximum criterion should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for the majority of assessed criteria. |
| 4 | Assess the geometric effectiveness of the RPA targets using recall. A low value for this metric represents under-contouring and a potential risk of creating a plan that misses the target (although the user is expected to review and edit contours). The 25th percentile of the recall must be 0.7 or greater. | 25th percentile for recall > 0.7 |
| 5 | Assess the quality of body contouring generated by the RPA by comparing body contours generated by the RPA with manual body contours. Surface DSC (2mm) should be greater than 0.8 for 95% of the CT scans. | Surface DSC > 0.8 for > 95% of CT scans |
| 6 | Assess the ability of the RPA to accurately identify the marked isocenter. This is achieved by comparing the automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters should agree with manually generated marked isocenters within 3mm in all orthogonal directions (AP, lateral, cranial-caudal). | < 3mm difference between RPA Plan and Clinical Plan for all orthogonal directions. |
Table 9: Head and Neck Summary of Statistical Results
* RPA plan and clinical plan had a 6% - 13% difference in passing rates for the cochlea.
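The recall metric in criterion 4 and its 25th-percentile threshold can be made concrete with a small sketch. This is illustrative only and assumes target contours are represented as sets of voxel indices:

```python
def contour_recall(auto_voxels, clinical_voxels):
    """Recall of an automated target contour against the clinical one:
    the fraction of clinical (reference) voxels covered by the automated
    contour. A low value indicates under-contouring of the target."""
    if not clinical_voxels:
        return 0.0
    return len(auto_voxels & clinical_voxels) / len(clinical_voxels)


def percentile_25(values):
    """25th percentile with linear interpolation between sorted values."""
    ordered = sorted(values)
    position = 0.25 * (len(ordered) - 1)
    lower = int(position)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (position - lower) * (ordered[upper] - ordered[lower])


def recall_criterion_met(recalls, threshold=0.7):
    """Criterion 4: the 25th percentile of recall must be 0.7 or greater."""
    return percentile_25(recalls) >= threshold
```

Using the lower quartile rather than the mean makes the criterion sensitive to the worst-contoured quarter of the test set, which is where an under-contoured target would appear.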
| Criteria Number | Inclusion Criteria | Exclusion Criteria | Sampling Method |
|---|---|---|---|
| 1 | CT scan of head and neck anatomy. | Poor Image Quality | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection was restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. |
| 2 | Clear CT image of head and/or neck without distortions. | - | |
| 3 | Test datasets consisted of CT images of patients previously treated for head and neck cancer using radiotherapy following this treatment scheme: VMAT or IMRT treatments, with 1-3 dose levels in the prescription. | - | |
| 4 | Scan was obtained with patient head-first, supine. | - | |
| 5 | The datasets included CT images, original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment. | - | |
| 6 | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection could be restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. | - | |
| 7 | Testing datasets were unique, with no overlap with data used for model creation or in previous validation studies. | - | |
| 8 | CT scans included the manufacturer and model of the scanner used to obtain the CT image, or it was recorded separately. | - | |
Table 10: Head and Neck Protocol Summary
| Criteria Number | Criteria | Results |
|---|---|---|
| 1 | Assess the safety of using the RPA plan for normal structures by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When there are multiple metrics for a single structure at least one should pass this criterion. | < 6% difference between RPA Plan and Clinical Plan for all assessed structures (Right and Left Lens)*. |
| 2 | Assess the effectiveness of the RPA plan for normal structures by comparing the number of RPA plans that pass accepted dosimetric metrics when compared with clinical plans. The difference should be 5% or less. When there are multiple metrics for a single structure at least one should pass this criterion. | < 9% difference between RPA Plan and Clinical Plan for all assessed structures (Right and Left Lens)**. |
| 3 | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the brain receiving 95% of the prescribed dose) when compared with clinical plans. The difference should be 5% or less. When there are multiple metrics used to assess a single structure, at least one coverage and one maximum criterion should pass this criterion. | < 5% difference between RPA Plan and Clinical Plan for all assessed structures. |
| 4 | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2mm) should be greater than 0.8 for 95% of the CT scans. | Surface DSC > 0.8 for all assessments. |
| 5 | Assess the ability of the RPA to accurately identify the marked isocenter. This is achieved by comparing the automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters (primary and verification approaches) should agree with manually generated marked isocenters within 3mm in all orthogonal directions. | < 3mm difference between RPA Plan and Clinical Plan for all orthogonal directions. |
Table 11: Whole Brain Summary of Statistical Results
* The RPA contours gave more conservative results, with lower passing rates than using clinical contours.
** Passing rates were higher for RPA plans than for clinical plans.
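The marked-isocenter criterion (criterion 5 above, and its analogues for the other sites) is a componentwise tolerance check on coordinates. A hedged sketch, assuming isocenters are given as (AP, lateral, cranial-caudal) coordinates in mm:

```python
def isocenters_agree(auto_iso, manual_iso, tolerance_mm=3.0):
    """True if the automatically generated isocenter agrees with the
    manually generated one within tolerance in every orthogonal
    direction (AP, lateral, cranial-caudal)."""
    return all(abs(a - m) <= tolerance_mm for a, m in zip(auto_iso, manual_iso))


def isocenter_criterion_met(pairs, required_fraction=0.95):
    """At least 95% of automatically generated marked isocenters should
    agree with the manually generated ones.

    `pairs` is a list of (auto_iso, manual_iso) coordinate tuples.
    """
    agreeing = sum(1 for auto, manual in pairs if isocenters_agree(auto, manual))
    return agreeing / len(pairs) >= required_fraction
```

Note the check is per axis rather than on the 3D Euclidean distance, matching the "within 3mm in all orthogonal directions" wording of the criterion.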
| Criteria Number | Inclusion Criteria | Exclusion Criteria | Sampling Method |
|---|---|---|---|
| 1 | CT scan of the head/neck region. | Poor Image Quality | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection was restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. |
| 2 | Clear CT image of the head/neck region without distortions. | - | |
| 3 | Test datasets consisted of CT images of patients previously treated for whole brain radiotherapy following one of the following treatment schemes: opposed laterals or slight obliques. | - | |
| 4 | Scan was obtained with patient head-first, supine. | - | |
| 5 | The datasets included CT images, original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment. | - | |
| 6 | Test datasets were chosen going forward in time until sufficient data was collected, starting with CT scans collected on January 1, 2022. If insufficient patient scans were found, data collection could be restarted with January 1, 2021 (for patients treated in 2021) and so forth, until sufficient data was collected. | - | |
| 7 | Testing datasets were unique, with no overlap with data used for model creation or in previous validation studies. | - | |
| 8 | CT scans included the manufacturer and model of the scanner used to obtain the CT image. | - | |
Table 12: Whole Brain Protocol Summary
8. CONCLUSIONS
The Radiation Planning Assistant (RPA) is substantially equivalent to the Eclipse Treatment Planning System v15.6 (K181145) predicate device. The intended use and indications for use are substantially equivalent. The major technological characteristics are substantially equivalent to those of the predicate device, and the differences do not raise new questions of safety and effectiveness. The results of verification and validation, as well as conformance to relevant safety standards, demonstrate that the Radiation Planning Assistant (RPA) and the Eclipse Treatment Planning System v15.6 (K181145) meet safety and performance criteria and are substantially equivalent devices. The retrospective clinical data demonstrate that the device is safe and effective for its intended use as compared to the predicate device.
9. REFERENCES
Kisling K, Zhang L, Simonds H, Fakie N, Yang J, McCarroll R, Balter P, Burger H, Bogler O, Howell R, Schmeler K, Mejia M, Beadle BM, Jhingran A, Court L. Fully Automatic Treatment Planning for External-Beam Radiation Therapy of Locally Advanced Cervical Cancer: A Tool for Low-Resource Clinics. J Glob Oncol. 2019 Jan;5:1-9. doi: 10.1200/JGO.18.00107. PMID: 30629457; PMCID: PMC6426517.
§ 892.5050 Medical charged-particle radiation therapy system.
(a) Identification. A medical charged-particle radiation therapy system is a device that produces by acceleration high energy charged particles (e.g., electrons and protons) intended for use in radiation therapy. This generic type of device may include signal analysis and display equipment, patient and equipment supports, treatment planning computer programs, component parts, and accessories.

(b) Classification. Class II. When intended for use as a quality control system, the film dosimetry system (film scanning system) included as an accessory to the device described in paragraph (a) of this section, is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.