Search Results
Found 3 results
510(k) Data Aggregation
(251 days)
The Radiation Planning Assistant (RPA) is used to plan radiotherapy treatments for patients with cancers of the head and neck, cervix, and breast, and metastases to the brain. The RPA is used to plan external beam irradiation with photon beams using CT images. The RPA is used to create contours and treatment plans that the user imports into their own Treatment Planning System (TPS) for review, editing, and re-calculation of the dose.
Some functions of the RPA use Eclipse 15.6. The RPA is not intended to be used as a primary treatment planning system. All automatically generated contours and plans must be imported into the user's own treatment planning system for review, edit, and final dose calculation.
The Radiation Planning Assistant (RPA) is a web-based contouring and radiotherapy treatment planning software tool that incorporates the basic radiation planning functions: automated contouring, automated planning with dose optimization, and quality control checks. The system is intended for use for patients with cancer of the head and neck, cervix, or breast, and metastases to the brain. The RPA system is integrated with the Eclipse Treatment Planning System v15.6 software cleared under K181145. The RPA treatment planning software was trained on hundreds to thousands of CT scans of normal and diseased tissues from patients who received radiation therapy for head and neck, cervical, breast, and whole-brain treatments at MD Anderson Cancer Center.
Here's a breakdown of the acceptance criteria and study information for the Radiation Planning Assistant (RPA) device:
1. Table of Acceptance Criteria and Reported Device Performance
| Criteria Number | Criteria | Reported Device Performance (Overall, across all sites/anatomical locations where available) |
|---|---|---|
| 1. | Assess the safety of using the RPA plan for normal structures for treatment planning by comparing the number of patient plans that pass accepted dosimetric metrics when assessed on the RPA contour with the number that pass when assessed on the clinical contour. The difference should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures with VMAT and 4-field box. Chest Wall: ≤ 7% difference for all assessed structures. Head & Neck: < 5% difference for the majority of assessed structures. Whole Brain: < 6% difference for all assessed structures (right and left lens). |
| 2. | Assess the effectiveness of the RPA plan for normal structures by comparing the dose to RPA normal structures for RPA plans and clinical normal structures for clinical plans. The difference in the number of RPA plans that pass accepted dosimetric metrics and the number of clinical plans that pass accepted dosimetric metrics should be 5% or less. When there are multiple metrics for a single structure, at least one should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all bony structures and critical soft-tissue structures with VMAT and 4-field box. Chest Wall: ≤ 5% difference for all assessed structures. Head & Neck: < 5% difference for the majority of assessed structures. Whole Brain: < 9% difference for all assessed structures (right and left lens). |
| 3. | Assess the effectiveness of the RPA plan for target structures by comparing the number of RPA plans that pass accepted dosimetric metrics (e.g., percentage volume of the PTV receiving 95% of the prescribed dose) with the number of clinical plans that pass. The difference should be 5% or less. When there are multiple metrics used to assess a single structure, at least one coverage and one maximum criterion should pass this criterion. | Cervix: < 5% difference between RPA and clinical plans for all assessed structures. Head & Neck: < 5% difference for the majority of assessed criteria. Whole Brain: < 5% difference for all assessed structures. |
| 4. | Assess the geometric effectiveness of the RPA targets using recall. A low value for this metric represents under-contouring. The 25th percentile of the recall must be 0.7 or greater. | Cervix: 25th percentile of recall > 0.7. Head & Neck: 25th percentile of recall > 0.7. |
| 5. | Assess the quality of body contouring generated by the RPA by comparing primary and secondary body contours generated by the RPA with manual body contours. Surface DSC (2 mm) should be greater than 0.8 for 95% of the CT scans. | Cervix: Surface DSC > 0.8 for 95% of CT scans. Chest Wall: Surface DSC > 0.8 for 95% of CT scans. Head & Neck: Surface DSC > 0.8 for > 95% of CT scans. Whole Brain: Surface DSC > 0.8 for all assessments. |
| 6. | Assess the ability of the RPA to accurately identify the marked isocenter. This is achieved by comparing the automatically generated isocenters with manually generated ones. 95% of automatically generated marked isocenters (primary and verification approaches) should agree with manually generated marked isocenters within 3 mm in all orthogonal directions (AP, lateral, cranial-caudal). | Cervix: < 3 mm difference between automatic and manual isocenters in all orthogonal directions. Head & Neck: < 3 mm difference in all orthogonal directions. Whole Brain: < 3 mm difference in all orthogonal directions. |
Note on Differences in Reported Performance: The document provides specific results for each anatomical location (Cervix, Chest Wall, Head & Neck, Whole Brain). The table above aggregates these where applicable, noting that specific percentages might vary slightly by location for a given criterion.
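The acceptance criteria above reduce to a handful of simple statistical checks: a pass-rate difference bounded at 5 percentage points (criteria 1–3), a percentile threshold on recall (criterion 4), a tolerance-based surface DSC (criterion 5), and per-axis isocenter agreement (criterion 6). The following is an illustrative Python sketch of those checks; the function names, data shapes, and the brute-force surface-DSC implementation are assumptions for illustration, not taken from the submission:

```python
import numpy as np

def pass_rate_difference(rpa_pass: np.ndarray, clinical_pass: np.ndarray) -> float:
    """Criteria 1-3: absolute difference in pass rates (percentage points).

    Each array is boolean: True if that plan met the dosimetric metric.
    The criteria require this difference to be 5 percentage points or less.
    """
    return abs(rpa_pass.mean() - clinical_pass.mean()) * 100.0

def recall_criterion(recalls: np.ndarray, threshold: float = 0.7) -> bool:
    """Criterion 4: the 25th percentile of per-patient recall must be >= 0.7."""
    return np.percentile(recalls, 25) >= threshold

def surface_dsc(s1: np.ndarray, s2: np.ndarray, tol_mm: float = 2.0) -> float:
    """Criterion 5 (simplified): surface DSC at tolerance tol_mm between two
    surfaces given as (N, 3) point clouds, via brute-force nearest neighbour."""
    d12 = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1).min(axis=1)
    d21 = np.linalg.norm(s2[:, None, :] - s1[None, :, :], axis=-1).min(axis=1)
    return ((d12 <= tol_mm).sum() + (d21 <= tol_mm).sum()) / (len(s1) + len(s2))

def isocenter_agreement(auto_iso: np.ndarray, manual_iso: np.ndarray,
                        tol_mm: float = 3.0) -> float:
    """Criterion 6: fraction of cases where every orthogonal component
    (AP, lateral, cranial-caudal) agrees within tol_mm; should be >= 0.95."""
    within = np.all(np.abs(auto_iso - manual_iso) <= tol_mm, axis=1)
    return within.mean()
```

For example, if 48 of 50 RPA plans and 47 of 50 clinical plans pass a metric, the pass-rate difference is 2 percentage points, which satisfies the 5% bound.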
Study Details Proving Acceptance Criteria
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Description: The test datasets were composed of CT images of patients previously treated with radiotherapy. The sampling method involved selecting data prospectively from January 1, 2022, onwards until sufficient data was collected. If needed, data collection was extended back to January 1, 2021. Critically, testing datasets were unique, with no overlap with data used for model creation or previous validation studies.
- Sample Sizes per Anatomical Location/Study:
- Cervix: 50 unique patients for VMAT plans, 47 unique patients for 3D soft tissue plans, and 45 unique patients for 3D bony landmark plans.
- Chest Wall: 46 unique patients.
- Head and Neck: 86 unique patients.
- Whole Brain: 46 unique patients.
- Data Provenance (Source): The document doesn't explicitly state the country of origin for the multicenter clinical data, but the training data largely originated from MD Anderson Cancer Center (Houston, TX, USA), and one of the GYN normal tissue test sets included "30 cervical cancer patients from 3 centers in S. Africa." Given "multicenter clinical data," it implies multiple clinical sites that could be within the US or international, but the primary training and internal validation appear to be from MD Anderson. The studies are retrospective, using previously acquired patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
The document does not explicitly state the number of experts used to establish ground truth for the test set specifically. However, for the training data (which implies the source of the ground truth methodology):
- Ground Truth Treatment Plans (Training): The ground truth treatment plans were generated via the "primary 4-field box automation technique for cervical cancer by Kisling et al." and were "rated by physicians."
- Contouring Ground Truth: The various anatomical location specific training sets mention that the "original clinical contours of anatomic structures and treatment targets" were part of the datasets.
- Qualification: The general qualification stated is "physicians" for plan rating and "trained medical professionals" for the predicate device, implying clinical experts like radiation oncologists for plan assessment and potentially dosimetrists or other clinicians for contouring. No specific number or years of experience are detailed for the ground truth experts for the test set.
4. Adjudication Method for the Test Set:
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1) used for establishing the ground truth for the test set. The ground truth for dose evaluation was based on "accepted dosimetric metrics" compared to "clinical contour" or "clinical plans," suggesting a reference standard already deemed clinically acceptable. For body contouring, manual contours were used as the reference.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
A multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described in this submission. The studies presented focus on the standalone performance of the RPA device in generating contours and treatment plans against established clinical ground truths or conventional clinical plans. The device is intended to be used with human review and editing in the workflow ("All automatically generated contours and plans must be imported into the user's own Treatment Planning System (TPS) for review, edit, and final dose calculation").
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance:
Yes, a standalone performance assessment was conducted. The acceptance criteria and the results presented directly evaluate the performance of the RPA device's generated contours and plans against clinical standards. The metrics (e.g., dosimetric differences, recall, Surface DSC, isocenter agreement) quantify the algorithm's output independently before human intervention. The device's integration into the user's workflow still requires human review, but the reported performance metrics are of the automated output.
7. Type of Ground Truth Used:
- Treatment Plans: "Ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019) with beam apertures based on a patient's bony anatomy. Only the clinically acceptable plans were used (rated by physicians)." This indicates a clinician-adjudicated "clinically acceptable plan" or expert-validated reference plan.
- Contours: "Original clinical contours of anatomic structures and treatment targets" were used for comparison, suggesting expert-drawn clinical contours as ground truth.
- Isocenter: "Manually generated marked isocenters" were used as ground truth.
- Body Contours: "Manual body contours" were used as ground truth.
- Overall, the ground truth is primarily based on expert consensus/clinical practice and reference standards derived from previously treated cases and expert-rated plans/contours.
8. Sample Size for the Training Set:
The training set sizes vary greatly by anatomical location and tissue type:
- Head and Neck:
- Normal Tissue (primary): 3,288 patients (3,495 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 160 patients from MD Anderson (2018-2020).
- Lymph Node CTVs: 61 patients from MD Anderson (2010-2019).
- Whole Brain (Spinal Canal CNN, VB labeling, VB segmentation): 1,966 (CNN), 803 (VB labeling), 107 (VB segmentation) from 930 MDACC patients and 355 external patients.
- GYN:
- Normal Tissue (primary): 1,999 patients (2,254 CT scans) from MD Anderson (Sept 2004 - June 2018).
- Normal Tissue (secondary): 192 patients (316 CT scans) from MD Anderson (2006-2020).
- CTVs (UteroCervix, Nodal CTV, PAN, Vagina, Parametria): 406 to 490 CT scans from 131-388 patients from MD Anderson (2006-2020).
- Liver: 119 patients (169 CT scans) from MD Anderson.
- Chest Wall:
- Whole Body (secondary): 250 patients from MD Anderson (Aug 2016 - June 2021).
9. How the Ground Truth for the Training Set Was Established:
The ground truth for the training set was established through:
- Clinically Accepted Plans: For treatment plans, "only the clinically acceptable plans were used for training (rated by physicians)." This implies a form of expert review and selection of existing clinical plans.
- Existing Clinical Data: The training data largely consisted of "CT Scans of normal and diseased tissues from patients receiving radiation" at MD Anderson Cancer Center. These datasets included "original clinical contours of anatomic structures and treatment targets, and the dose distributions used for patient treatment." This suggests that the ground truth was derived from the routine clinical data, which is implicitly considered the standard of care as performed by clinicians at MD Anderson.
- External Patient Data: Some training data also included external patient data, such as for the Vertebral Bodies model (355 external patients) and publicly available data (MICCAI challenge data).
- Published Methodology: For cervical cancer, the "ground truth treatment plans were generated by the primary 4-field box automation technique for cervical cancer by Kisling et al. (Kisling 2019)," referring to a published methodology that would have defined how these plans (and implicitly their underlying contours and dose calculations) were considered ground truth.
(118 days)
RayStation is a software system for radiation therapy and medical oncology. Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments.
The system functionality can be configured based on user needs.
RayStation is a treatment planning system for planning, analysis and administration of radiation therapy and medical oncology treatment plans. It has a modern user interface and is equipped with fast and accurate dose and optimization engines.
RayStation consists of multiple applications:
- The main RayStation application is used for treatment planning.
- The RayPhysics application is used for commissioning of treatment machines to make them available for treatment planning, and for commissioning of imaging systems.
- The RayTreat application is used for sending plans to treatment delivery devices for treatment and receiving records of performed treatments.
- The RayCommand application is used for treatment session management, including treatment preparation and sending instructions to the treatment delivery devices.
The provided text details the 510(k) summary for RayStation 10.1, a software system for radiation therapy and medical oncology. The document indicates that the determination of substantial equivalence to the primary predicate device (RayStation 9.1) is not based on an assessment of non-clinical performance data. Instead, it relies on the entire system verification and validation specifications and reports.
However, the document does describe the performance data for several new features and explicitly states that these features have been "successfully validated for accuracy in clinically relevant settings according to specification" or "successfully validated according to specification." While these statements imply acceptance criteria were met, the specific numerical acceptance criteria and the reported device performance values are not explicitly provided in a comparative table format within the given text.
Therefore, the following response will extract the implied acceptance criteria and reported performance from the descriptions provided, and note where specific numerical values are absent.
Acceptance Criteria and Device Performance Study for RayStation 10.1
For the new features introduced in RayStation 10.1, however, the document states that the features underwent validation and met their respective specifications. Specific, quantifiable acceptance criteria and reported performance values are not presented in a comparative table in the provided text, but the descriptions imply the following:
1. A table of acceptance criteria and the reported device performance
| Feature | Implied Acceptance Criteria (from text) | Reported Device Performance (from text) |
|---|---|---|
| Brachytherapy TG43 Dose Calculation | Accurately models output from single and combined brachytherapy sources in clinical plans. All doses reported as dose-to-water (Dw,w). | Successfully validated for accuracy in clinically relevant settings according to specification. |
| Medical Oncology Dose Calculation Functions | Appropriate for supporting medical oncology planning workflows when used by qualified users according to IFU. | Validated to be appropriate for supporting medical oncology planning workflows. |
| Proton Ocular Treatment Dose Calculation | Accurately models proton dose calculation for ocular treatments using the single scattering (SS) delivery technique (modeled as double scattering). | Successfully validated for accuracy in clinically relevant settings according to specification. |
| Robust Planning of Organ Motion | Correctly generates deformed image sets to simulate organ motion and uses them for robust planning against intra-fractional or inter-fractional organ motion. | Successfully validated according to specification. |
Note: The provided text does not contain specific numerical acceptance criteria (e.g., "accuracy within X%") or quantitative reported performance data for any of these features. The reported performance essentially states that the criteria were "met" or "validated."
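The TG-43 formalism named in the brachytherapy row computes dose to water from tabulated source parameters (air-kerma strength, dose-rate constant, radial dose function, anisotropy function). The following is a minimal sketch of the 1D (point-source) approximation only; the tabulated values are hypothetical placeholders, not consensus data for any real source, and `dose_rate` is an illustrative name rather than anything from RayStation:

```python
import numpy as np

# TG-43 1D (point-source) approximation:
#   dose_rate(r) = Sk * Lambda * (r0 / r)**2 * g(r) * phi_an(r)
# Sk:      air-kerma strength (U)
# Lambda:  dose-rate constant (cGy / (h * U))
# g(r):    radial dose function, normalized to 1.0 at r0 = 1 cm
# phi_an:  1D anisotropy function
# All tabulated values below are placeholders for illustration only.

R0 = 1.0  # reference distance, cm

_radii = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # cm
_g     = np.array([1.04, 1.00, 0.93, 0.86, 0.70])  # radial dose fn (placeholder)
_phi   = np.array([0.97, 0.96, 0.95, 0.94, 0.93])  # anisotropy fn (placeholder)

def dose_rate(r_cm: float, sk: float, dose_rate_const: float) -> float:
    """Dose rate to water (cGy/h) at distance r_cm from a point source,
    using linear interpolation of the tabulated g(r) and phi_an(r)."""
    g = np.interp(r_cm, _radii, _g)
    phi = np.interp(r_cm, _radii, _phi)
    return sk * dose_rate_const * (R0 / r_cm) ** 2 * g * phi
```

A full line-source TG-43 calculation additionally needs the geometry function and the 2D anisotropy table, which this sketch deliberately omits.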
2. Sample size used for the test set and the data provenance
The document indicates that RayStation 10.1's test specification is a further developed version of RayStation 9.1's, supported by requirements specification. The verification activities included "User validation in cooperation with cancer clinics." However, no specific sample sizes for test sets (e.g., number of patient cases) or data provenance (e.g., country of origin, retrospective/prospective nature) are provided in the given text for any of the validations.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The text mentions "User validation in cooperation with cancer clinics" but does not specify the number of experts, their qualifications, or how ground truth was established for the "test set" (if a distinct clinical test set was used for ground truth establishment).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
No information on adjudication methods is provided in the supplied text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The text does not mention any multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss human readers improving with or without AI assistance. The device is a treatment planning system; while it "proposes treatment plans" based on user input, it does not describe AI-assisted diagnostic or interpretation tasks. It explicitly states, "Related to machine learning, there is no change compared to the primary predicate device," suggesting limited or no direct machine learning components in the new features where such a study would typically be relevant.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
The validations described for brachytherapy, medical oncology, proton ocular treatment, and robust planning of organ motion appear to be standalone algorithm performance assessments against defined specifications. These validations verify the accuracy and appropriateness of the software's calculations and functionalities independently, assuming "intended qualified user" interaction for medical oncology, but not as part of a human-in-the-loop performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the dose calculation features (brachytherapy, proton ocular treatment), the "ground truth" implicitly refers to theoretical models and established physical principles (e.g., accurate modeling of TG43 formalism, proton dose calculations) as compared against the output of the software. For medical oncology functions, the "ground truth" for validation appears to be whether the functions are "appropriate" for planning workflows, likely assessed against clinical guidelines or expert workflows. For robust planning of organ motion, the ground truth relates to the correct generation and application of deformed image sets according to specifications. The document does not explicitly state that ground truth was established through pathology or outcomes data.
8. The sample size for the training set
The document refers to the system as "built on the same software platform" and "developed under the same quality system, by the same development teams." It mentions that "related to machine learning, there is no change compared to the primary predicate device." Given this, and the nature of treatment planning software, the concept of a "training set" in the context of machine learning (e.g., for image classification or prediction models) is not directly applicable or discussed for the validations mentioned. The system's development would involve software engineering and clinical validation rather than machine learning training sets for the described functionalities.
9. How the ground truth for the training set was established
As there is no mention of a training set or machine learning components for the new features, information on how its ground truth was established is not provided.
(164 days)
Ethos Treatment Management is indicated for use in managing and monitoring radiation therapy treatment plans and sessions.
Ethos Treatment Planning is indicated for use in generating and modifying radiation therapy treatment plans.
Halcyon and Ethos Radiotherapy System are indicated for the delivery of stereotactic radiosurgery and precision radiotherapy for lesions, tumors and conditions anywhere in the body where radiation is indicated for adults and pediatric patients.
Ethos Treatment Management is software designed for radiation therapy medical professionals to support them in managing radiation treatments for patients.
Ethos Treatment Planning is software that is designed to generate treatment plans, modify treatment plans, and guide users within adaptive treatment sessions.
Halcyon and Ethos Radiotherapy System are single energy linacs designed to deliver Image Guided Radiation Therapy and radiosurgery, using Intensity Modulated and Volumetric Modulated Arc Therapy techniques. They consist of an accelerator and patient support within a radiation shielded treatment room and a control console outside the treatment room.
I am sorry, but the provided text does not contain the specific information required to answer your request regarding acceptance criteria and a study proving device performance. The document describes a premarket notification for several medical devices and confirms their substantial equivalence to predicate devices, but it does not detail specific performance metrics, clinical studies, or acceptance criteria with reported device performance against those criteria.
Therefore, I cannot provide:
- A table of acceptance criteria and the reported device performance.
- Sample size used for the test set and data provenance.
- Number of experts used to establish ground truth and their qualifications.
- Adjudication method for the test set.
- Whether an MRMC comparative effectiveness study was done or its effect size.
- Whether a standalone performance study was done.
- The type of ground truth used.
- The sample size for the training set.
- How the ground truth for the training set was established.
The document mentions that "Hardware and software verification and validation testing was conducted" and "Test results showed conformance to applicable requirements specifications," but it does not provide the details of these tests or their results against specific criteria. It also states, "No animal studies or clinical tests have been included in this pre-market submission," which indicates that the information you requested about clinical performance studies is not present in this document.