510(k) Data Aggregation (118 days)
RayStation is a software system for radiation therapy and medical oncology. Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments.
The system functionality can be configured based on user needs.
RayStation is a treatment planning system for planning, analysis and administration of radiation therapy and medical oncology treatment plans. It has a modern user interface and is equipped with fast and accurate dose and optimization engines.
RayStation consists of multiple applications:
- The main RayStation application is used for treatment planning.
- The RayPhysics application is used for commissioning of treatment machines, to make them available for treatment planning, and for commissioning of imaging systems.
- The RayTreat application is used for sending plans to treatment delivery devices for treatment and for receiving records of performed treatments.
- The RayCommand application is used for treatment session management, including treatment preparation and sending instructions to the treatment delivery devices.
The provided text details the 510(k) summary for RayStation 10.1, a software system for radiation therapy and medical oncology. The document indicates that the determination of substantial equivalence to the primary predicate device (RayStation 9.1) is not based on an assessment of non-clinical performance data. Instead, it relies on the verification and validation specifications and reports for the system as a whole.
However, the document does describe the performance data for several new features and explicitly states that these features have been "successfully validated for accuracy in clinically relevant settings according to specification" or "successfully validated according to specification." While these statements imply acceptance criteria were met, the specific numerical acceptance criteria and the reported device performance values are not explicitly provided in a comparative table format within the given text.
Therefore, the following response will extract the implied acceptance criteria and reported performance from the descriptions provided, and note where specific numerical values are absent.
Acceptance Criteria and Device Performance Study for RayStation 10.1
The provided document, K210645 for RayStation 10.1, indicates that the determination of substantial equivalence to the primary predicate device (RayStation 9.1) is not based on non-clinical performance data directly comparing the existing features of 10.1 to 9.1. Instead, it relies on the comprehensive system verification and validation reports.
However, for the new features introduced in RayStation 10.1, the document states that these features have undergone validation and met their respective specifications. While specific, quantifiable acceptance criteria and reported performance values are not presented in a direct comparative table within the provided text, the descriptions imply the following:
1. A table of acceptance criteria and the reported device performance
| Feature | Implied Acceptance Criteria (from text) | Reported Device Performance (from text) |
|---|---|---|
| Brachytherapy TG43 Dose Calculation | Accurately models output from single and combined brachytherapy sources in clinical plans. All doses reported as dose-to-water (Dw,w). | Successfully validated for accuracy in clinically relevant settings according to specification. |
| Medical Oncology Dose Calculation Functions | Appropriate for supporting medical oncology planning workflows when used by qualified users according to the IFU. | Validated to be appropriate for supporting medical oncology planning workflows. |
| Proton Ocular Treatment Dose Calculation | Accurately models proton dose calculation for ocular treatments using the single scattering (SS) delivery technique (modeled as double scattering). | Successfully validated for accuracy in clinically relevant settings according to specification. |
| Robust Planning of Organ Motion | Correctly generates deformed image sets to simulate organ motion and uses them for robust planning against intra-fractional or inter-fractional organ motion. | Successfully validated according to specification. |
Note: The provided text does not contain specific numerical acceptance criteria (e.g., "accuracy within X%") or quantitative reported performance data for any of these features. The reported performance essentially states that the criteria were "met" or "validated."
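For context, the "TG43" calculation referenced in the table refers to the AAPM TG-43 brachytherapy dose formalism. The standard line-source form of that formalism, added here for reference and not quoted from the 510(k) summary, is:

$$
\dot{D}(r,\theta) = S_K \,\Lambda\, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
$$

where $S_K$ is the air-kerma strength of the source, $\Lambda$ the dose-rate constant, $G_L$ the line-source geometry function, $g_L(r)$ the radial dose function, and $F(r,\theta)$ the 2D anisotropy function, evaluated relative to the reference point $(r_0 = 1\ \text{cm},\ \theta_0 = 90^\circ)$.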
2. Sample size used for the test set and the data provenance
The document indicates that RayStation 10.1's test specification is a further developed version of RayStation 9.1's, supported by the corresponding requirements specification. The verification activities included "User validation in cooperation with cancer clinics." However, no specific sample sizes for test sets (e.g., number of patient cases) or data provenance details (e.g., country of origin, retrospective or prospective nature) are provided in the given text for any of the validations.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The text mentions "User validation in cooperation with cancer clinics" but does not specify the number of experts, their qualifications, or how ground truth was established for the "test set" (if a distinct clinical test set was used for ground truth establishment).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
No information on adjudication methods is provided in the supplied text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The text does not mention any multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss human readers improving with or without AI assistance. The device is a treatment planning system; while it "proposes treatment plans" based on user input, it does not describe AI-assisted diagnostic or interpretation tasks. It explicitly states, "Related to machine learning, there is no change compared to the primary predicate device," suggesting limited or no direct machine learning components in the new features where such a study would typically be relevant.
6. If a standalone study (i.e., algorithm-only performance without human-in-the-loop) was done
The validations described for brachytherapy, medical oncology, proton ocular treatment, and robust planning of organ motion appear to be standalone algorithm performance assessments against defined specifications. These validations verify the accuracy and appropriateness of the software's calculations and functionalities independently, assuming "intended qualified user" interaction for medical oncology, but not as part of a human-in-the-loop performance study.
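As an illustration of what such a standalone accuracy check can look like (this sketch is not taken from the submission; the function names, data values, and the 3% tolerance are assumptions), a point-dose comparison against reference values within a specification tolerance might be structured as:

```python
# Hypothetical sketch of a standalone dose-accuracy check: compare computed
# point doses against reference values within a relative tolerance.
# All names and the 3% tolerance are illustrative assumptions, not values
# taken from the RayStation 10.1 submission.

def within_tolerance(computed_gy: float, reference_gy: float, rel_tol: float = 0.03) -> bool:
    """Return True if the computed dose agrees with the reference dose
    to within the given relative tolerance."""
    return abs(computed_gy - reference_gy) <= rel_tol * reference_gy

def validate_points(computed: list[float], reference: list[float], rel_tol: float = 0.03):
    """Check every measurement point; the specification is met only if
    all points pass."""
    per_point = [within_tolerance(c, r, rel_tol) for c, r in zip(computed, reference)]
    return all(per_point), per_point

if __name__ == "__main__":
    computed_doses = [2.01, 1.48, 0.76]   # Gy, algorithm output (illustrative)
    reference_doses = [2.00, 1.50, 0.75]  # Gy, reference values (illustrative)
    passed, results = validate_points(computed_doses, reference_doses)
    print(f"Specification met: {passed}; per-point results: {results}")
```

Actual validation of a treatment planning system would typically use far richer comparisons (e.g., full dose distributions and clinically relevant phantom or patient geometries), but the pass/fail-against-specification structure is the same.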
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The type of ground truth varies by feature:

- Dose calculation features (brachytherapy, proton ocular treatment): the "ground truth" implicitly refers to theoretical models and established physical principles (e.g., accurate modeling of the TG43 formalism, proton dose calculations) against which the software's output is compared.
- Medical oncology functions: the "ground truth" for validation appears to be whether the functions are "appropriate" for planning workflows, likely assessed against clinical guidelines or expert workflows.
- Robust planning of organ motion: the ground truth relates to the correct generation and application of deformed image sets according to specifications.

The document does not explicitly state that ground truth was established through pathology or outcomes data.
8. The sample size for the training set
The document refers to the system as "built on the same software platform" and "developed under the same quality system, by the same development teams." It mentions that "related to machine learning, there is no change compared to the primary predicate device." Given this, and the nature of treatment planning software, the concept of a "training set" in the context of machine learning (e.g., for image classification or prediction models) is not directly applicable or discussed for the validations mentioned. The system's development would involve software engineering and clinical validation rather than machine learning training sets for the described functionalities.
9. How the ground truth for the training set was established
As there is no mention of a training set or machine learning components for the new features, information on how its ground truth was established is not provided.