510(k) Data Aggregation
(237 days)
The Monaco system is used to make treatment plans for patients with prescriptions for external beam radiation therapy. The system calculates dose for photon, electron, and proton treatment plans and displays, on-screen and in hard-copy, two- or three-dimensional radiation dose distributions inside patients for given treatment plan set-ups. The Monaco product line is intended for use in radiation treatment planning. It uses generally accepted methods for:
- contouring
- image manipulation
- simulation
- image fusion
- plan optimization
- QA and plan review
The Monaco RTP System accepts patient diagnostic imaging data from CT and MR scans, and source dosimetry data, typically from a linear accelerator. The system then permits the user to display these diagnostic images and to define (contour) the target volume to be treated and the critical structures that must not receive more than a specified radiation dose. Based on the prescribed dose, the user (a dosimetrist or medical physicist) can then create multiple treatment scenarios, varying the number, position(s), and energy of the radiation beams and the use of beam modifiers (MLC, block, etc.) between the radiation source and the patient to shape the beam. The Monaco RTP system then displays the radiation dose distribution within the patient, indicating doses not only to the target volume but also to surrounding tissue and structures. The optimal plan satisfying the prescription is then selected: one that maximizes dose to the target volume while minimizing dose to surrounding healthy tissue.
The parameters of the plan are output for later reference and for inclusion in the patient file. Monaco planning methods and modalities:
- Intensity Modulated Radiation Treatment (IMRT) planning
- Electron, photon and proton treatment planning
- Planning for dynamic delivery methods (e.g., dMLC, dynamic conformal)
- Volumetric Modulated Arc Therapy (VMAT)
- Stereotactic planning and support of cone-based stereotactic
- 3D conformal planning
- Distributed planning configurations (e.g., for conventional linac)
- Adaptive planning capabilities (e.g., for MR-Linac & conventional linac)
- Auto planning features (e.g., for conventional linac)
Monaco basic systems tools, characteristics, and functions:
- Plan review tools
- Manual and automated contouring tools (Segmentation component for MR images)
- DICOM connectivity
- Windows operating system
- Simulation
- Support for a variety of beam modifiers (e.g. MLCs, blocks, etc.)
- Standardized uptake value (SUV)
- Specialty Image Creation (MIP, MinIP, and Avg)
- Monaco dose and Monitor Unit (MU) calculation
- Dose calculation algorithms for electron, photon, proton planning
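Monaco's actual dose and MU algorithms are proprietary, but the basic relationship between a prescribed dose and monitor units can be illustrated with a textbook-style hand calculation. The sketch below is a generic simplification (calibration dose rate, output factor, and TMR are assumed inputs), not Monaco's method:

```python
def monitor_units(prescribed_dose_cgy, dose_rate_cgy_per_mu=1.0,
                  output_factor=1.0, tmr=1.0):
    """Simplified hand-calculation of monitor units (MU).

    MU = D / (k * Scp * TMR), where D is the prescribed dose per fraction,
    k the calibration dose rate (cGy/MU at reference conditions),
    Scp the total output factor, and TMR the tissue-maximum ratio.
    This is a generic textbook formula, not Monaco's algorithm.
    """
    denom = dose_rate_cgy_per_mu * output_factor * tmr
    if denom <= 0:
        raise ValueError("dose rate, output factor and TMR must be positive")
    return prescribed_dose_cgy / denom

# Illustrative values: 200 cGy fraction, 1 cGy/MU calibration,
# Scp = 0.985, TMR = 0.776 at the calculation depth
mu = monitor_units(200.0, 1.0, 0.985, 0.776)
```

A full treatment planning system replaces each of these scalar factors with a 3D model-based dose calculation, but the proportionality between dose and MU is the same.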
Monaco is programmed in the C, C++ and C# programming languages. It runs on the Windows operating system and on off-the-shelf computer server hardware.
The provided FDA 510(k) clearance letter and summary for the Monaco RTP System (6.3) outlines the acceptance criteria and a study supporting the substantial equivalence of the new features. Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
| Changed Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Segmentation component for invoking MR auto-segmentation algorithms (AI-based) | Primary metric: Average Hausdorff Distance (AVD) ≤ 3 mm. Secondary metric (value of interest): DICE or AUC ≥ 0.7 for specific structures. Additionally, qualitative analysis based on a 5-point Likert scale to determine if automatically generated structures provide a valuable starting point for clinical delineation. Investigation of any failures to meet the DICE confidence value of 0.7, with findings included as "Limitations." Sub-group analysis based on patient size, pixel size, slice spacing, and number of slices. | For all evaluated structures across all models (Female Pelvis Intact & Hysterectomy, Male Pelvis, and Head & Neck), the mean Average Hausdorff Distance (AVD) was less than 3 mm. Structure-specific statistical analyses supported this conclusion. Patterns of failure for any structure failing the DICE confidence value of 0.7 were investigated and included as "Limitations." Qualitative analysis concluded that automatically generated structures provided a valuable starting point for clinical delineation. |
| Auto-planning | All pre-defined acceptance criteria related to workflow performance, protocol management, plan creation, interoperability, and error handling must be met. Plans generated must be clinically acceptable for the intended use and not introduce new safety or effectiveness concerns. | All testing met pre-defined acceptance criteria. Treatment plans generated were reviewed within the clinical workflow and determined to be suitable for clinical use, without introducing new safety or effectiveness concerns. |
| Extending adaptive planning capabilities to EMLA for offline adaptive planning | Correct system behavior during image registration, structure propagation, dose recalculation/re-optimization, offline adaptive plan generation, and workflow execution under representative clinical scenarios. No defects, unexpected behavior, or data integrity issues. Plans generated must be clinically acceptable for the intended use. All predefined acceptance criteria for verification and validation must be met. | No defects, unexpected behavior, or data integrity issues were identified during testing. Validation demonstrated that offline adaptive planning using CT‑to‑CBCT supports the creation of clinically acceptable treatment plans for the intended use. Offline adaptive plans were reviewed within the clinical workflow and determined to be suitable for use. Verification and validation testing met pre-defined acceptance criteria. |
| Interoperability with 3rd party software for image management and contouring | Correct DICOM export functionality, preservation of data integrity, and successful creation of an offline adaptive plan using third-party contouring. Third-party contouring outputs must be clinically acceptable and comparable to reference contours produced by qualified users. All planned Solution Interoperability test cases successfully executed and passed. All verification and validation testing met predefined acceptance criteria. | All verification and validation testing met the predefined acceptance criteria. All planned Solution Interoperability test cases have been successfully executed and passed. |
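The two quantitative metrics in the table above, DICE and Average Hausdorff Distance, are standard segmentation-accuracy measures. A minimal NumPy sketch of both (generic definitions, not the submission's evaluation code; the brute-force distance computation assumes small point sets):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def average_hausdorff(pts_a, pts_b):
    """Average Hausdorff distance between two surface point sets of shape
    (N, 3) and (M, 3): the mean of the two directed average
    nearest-neighbour distances."""
    pts_a, pts_b = np.atleast_2d(pts_a), np.atleast_2d(pts_b)
    # pairwise Euclidean distance matrix (N, M)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Against the criteria above, an auto-segmented structure would pass if `average_hausdorff(auto_surface, reference_surface) <= 3.0` (mm) and `dice(auto_mask, reference_mask) >= 0.7`.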
2. Sample size used for the test set and the data provenance
- AI-based segmentation component:
- Female Pelvis Intact & Hysterectomy models: 529 images (joint image set).
- Male Pelvis model: 250 images.
- Head & Neck model: 1862 images.
- Data Provenance: Not explicitly stated regarding country of origin or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- AI-based segmentation component: "reference contours produced by qualified users" and "clinical delineation." The exact number or specific qualifications (e.g., radiologist with X years of experience) of these experts are not specified in the provided document.
4. Adjudication method for the test set
- AI-based segmentation component: For the qualitative analysis, it states "a conclusion that the automatically generated structures provided a valuable starting point for clinical delineation." This implies human review and evaluation. However, a formal adjudication method like "2+1" or "3+1" is not explicitly mentioned.
- For other features (Auto-planning, Adaptive planning, Interoperability), reviews mention evaluation within the "clinical workflow" and determination of "suitability for clinical use," but a specific adjudication method beyond internal reviews is not detailed.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance vs. without it
- A formal MRMC comparative effectiveness study to quantify human reader improvement with AI assistance is not mentioned in the provided text. The AI component was evaluated in a standalone manner for its segmentation accuracy, and qualitatively for its utility as a "starting point for clinical delineation."
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Yes, for the AI-based segmentation component, a standalone algorithm-only performance evaluation was done using metrics like Average Hausdorff Distance (AVD), DICE, and AUC.
7. The type of ground truth used
- For the AI-based segmentation component, the ground truth for the test set involved "reference contours produced by qualified users" and "clinical delineation." This implies expert consensus/delineation rather than pathology or outcomes data.
- For other features, "clinically acceptable treatment plans" and "suitable for clinical use" imply evaluation against accepted clinical standards, likely by qualified personnel.
8. The sample size for the training set
- The training set sample sizes are indicated for the AI-based segmentation models:
- Female Pelvis Intact & Hysterectomy: 529 images (joint image set used for training).
- Male Pelvis: 250 images (used for training).
- Head & Neck: 1862 images (used for training).
9. How the ground truth for the training set was established
- The document implies that the training data for the AI-based segmentation models would have been expertly annotated to establish the ground truth, given the mention of "reference contours produced by qualified users" for evaluation. However, the exact method for establishing ground truth for the training set is not explicitly detailed beyond this inference.
(20 days)
TrueFit Bolus is indicated for and intended to be placed on the patient's skin as an accessory to attenuate and/or compensate external beam (photon or electron) radiation during prescribed radiation therapy for the treatment of cancer or other non-malignant tissue conditions for which radiation therapy is indicated.
The device is for single-patient use only and can be reused throughout the entirety of the treatment course.
The device is designed by the radiation therapy professional using patient imaging data as input and must be verified and approved by the trained radiation therapy professional prior to use.
The device is restricted to sale by or on the order of a physician and is by prescription only.
TrueFit Bolus is a 3D printed patient-matched radiation therapy accessory that expands the application of external beam radiation therapy by providing a patient-specific fit.
Patient imaging data from the treatment planning system (TPS) are used as input to generate a digital design of the radiation therapy bolus (TrueFit) with the 3D Bolus Software Application (K213438), previously developed by Adaptiiv. The resulting output Stereolithography (STL) file is compatible with third-party 3D printers. A TrueFit Bolus can be 3D-printed using Multi Jet Fusion (MJF) with polyamide or polyurethane, or stereolithography (SLA) with methacrylate photopolymer resin, based on the user's preference.
The bolus is used in radiation therapy when a patient requires the total prescription dose to be delivered on or near the skin surface. The bolus acts as a tissue-equivalent material placed on the patient skin to account for the buildup region of the treatment beam.
N/A
(91 days)
ART-Plan+'s indicated target population is cancer patients for whom radiotherapy treatment has been prescribed; within this population, any patient for whom relevant modality imaging data is available.
ART-Plan+ includes several modules:
SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review and approval. This module is available for supported prescriptions for prostate only.
Annotate, which allows automatic generation of contours for organs at risk, lymph nodes and tumors, based on medical practices, on medical images such as CT and MR images.
AdaptBox, which allows generation of synthetic-CTs from CBCT images, dose computation on CT images for external beam irradiation with photon beams, and assisted CBCT-based offline adaptation decision-making for the following anatomies:
- Head & Neck
- Breast / Thorax
- Pelvis (male)
ART-Plan+ is not intended to be used for patients less than 18 years of age.
ART-Plan is a software platform allowing contour regions of interest on 3D images, to provide an automatic treatment plan and to help in the decision for the need for replanning based on contours and doses on daily images. It includes several modules:
Home: tasks and patient monitoring
Annotate including TumorBox: contouring of regions of interest
SmartPlan: creation of an automatic treatment plan based on a planning CT and an RTSS
AdaptBox: a decision-support tool for determining whether replanning is necessary. For this purpose, the module allows the user to generate a synthetic-CT from a CBCT image, auto-delineate regions of interest on the synthetic-CT, compute the dose on both the planning CT and the synthetic-CT, and then decide whether replanning is needed by comparing volume and dose metrics computed on both images over the course of the treatment. Those metrics are defined by the user.
Administration and Settings: preferences management, user account management, etc.
About: information about the software and its use, as well as contact details.
Annotate, TumorBox, SmartPlan and AdaptBox are partially based on a batch mode, which allows the user to launch autocontouring and autoplanning operations without having to use the interface or the viewers. In this way, the software is fully integrated into the radiotherapy workflow and offers the user maximum flexibility.
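AdaptBox's replanning decision compares user-defined volume and dose metrics computed on the planning CT against the same metrics on the daily synthetic-CT. A minimal sketch of that comparison; the metric names and tolerance values are hypothetical illustrations, not taken from the 510(k) summary:

```python
def needs_replanning(planned_metrics, daily_metrics, tolerances):
    """Flag metrics whose daily (synthetic-CT) values deviate from the
    planned values by more than a user-defined tolerance.

    All three arguments are dicts keyed by metric name (hypothetical names
    such as 'PTV_D95_Gy'). Returns the list of metrics out of tolerance;
    a non-empty list suggests replanning should be considered.
    """
    flagged = []
    for name, planned in planned_metrics.items():
        if abs(daily_metrics[name] - planned) > tolerances[name]:
            flagged.append(name)
    return flagged

# Illustrative use: target coverage dropped by 2 Gy, beyond a 1 Gy tolerance
flags = needs_replanning(
    {"PTV_D95_Gy": 60.0, "Rectum_V50_pct": 30.0},
    {"PTV_D95_Gy": 58.0, "Rectum_V50_pct": 31.0},
    {"PTV_D95_Gy": 1.0, "Rectum_V50_pct": 5.0},
)
```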
Annotate which allows automatic generation of contours for organs at risk (OARs), lymph nodes (LNs) and tumors, based on medical practices, on medical images such as CT and MR images:
OARs and LNs:
- Head and neck (on CT images)
- Thorax/breast (on CT images)
- Abdomen (on CT images and, for male patients, on MR images)
- Pelvis male (on CT and MR images)
- Pelvis female (on CT images)
- Brain (on CT images and MR images)
Tumor:
- Brain (on MR images)
SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review and approval. This module is available for supported prescriptions for prostate only.
AdaptBox, which allows generation of synthetic-CTs from CBCT images, dose computation on CT images for external beam irradiation with photon beams, and assisted CBCT-based offline adaptation decision-making for the following anatomies:
- Head & Neck
- Breast / Thorax
- Pelvis (male)
Here's a breakdown of the acceptance criteria and study details for ART-Plan+ (v3.1.0) based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
The ART-Plan+ device consists of three main modules: Annotate, SmartPlan, and AdaptBox. Each module has its own set of acceptance criteria.
Note: The document provides acceptance criteria and implicitly states that "all tests passed their respective acceptance criteria, thus showing ART-Plan + v3.1.0 clinical acceptability." However, it does not provide specific numerical reported device performance for each criterion. It only states that the criteria were met.
Annotate Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Non-regression testing (on CT/MR for existing structures): Mean DSC should not regress negatively between the current and last validated version of Annotate beyond a maximum tolerance margin set to -5% relative error. | Passed (implicitly, as stated all tests passed) |
| Non-regression testing (on synthetic-CT from CBCT for existing structures): Mean DSC (sCT) should be equivalent to Mean DSC (CT) beyond a maximum tolerance margin set to -5% relative error. | Passed (implicitly, as stated all tests passed) |
| Qualitative evaluation (for new structures or those failing non-regression): Clinicians' qualitative evaluation of the auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications / corrections (B) with an A+B % ≥ 85%. | Passed (implicitly, as stated all tests passed) |
| Quantitative evaluation (for new structures): Mean DSC (annotate) ≥ 0.8 | Passed (implicitly, as stated all tests passed) |
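The non-regression criterion in the table expresses a relative-change bound: the new version's mean DSC may not fall more than 5% (relative) below the previously validated version's. A minimal sketch of that check (generic arithmetic, not ART-Plan+'s test harness):

```python
def dsc_non_regression(old_mean_dsc, new_mean_dsc, margin=-0.05):
    """Non-regression check on mean DSC.

    The relative change between the last validated version and the current
    one must not fall below the tolerance margin (-5% relative error by
    default, per the acceptance criterion above). Improvements always pass.
    """
    relative_change = (new_mean_dsc - old_mean_dsc) / old_mean_dsc
    return relative_change >= margin

# Example: 0.80 -> 0.78 is a -2.5% relative change, within the -5% margin
ok = dsc_non_regression(0.80, 0.78)
```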
SmartPlan Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Quantitative evaluation: Effectiveness difference (%) in DVH achieved goals between manual plans and automatic plans ≤ 5%. | Passed (implicitly, as stated all tests passed) |
| Qualitative evaluation: % of clinical acceptable automatic plans ≥ 93% after expert review. | Passed (implicitly, as stated all tests passed) |
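The SmartPlan criterion compares DVH achieved goals between manual and automatic plans. The sketch below shows how two common DVH metrics, Vx (volume receiving at least a dose) and Dx (dose to the hottest fraction of the volume), can be computed from a structure's sampled dose values. This is generic DVH arithmetic, not ART-Plan+'s evaluation code:

```python
import numpy as np

def v_at_dose(dose_gy, threshold_gy):
    """Vx: percentage of the structure volume receiving >= threshold_gy,
    assuming equal-volume dose samples (e.g. uniform voxels)."""
    dose_gy = np.asarray(dose_gy, float)
    return 100.0 * np.mean(dose_gy >= threshold_gy)

def d_at_volume(dose_gy, volume_pct):
    """Dx: minimum dose received by the hottest volume_pct of the structure."""
    dose_gy = np.sort(np.asarray(dose_gy, float))[::-1]  # descending
    idx = max(int(np.ceil(volume_pct / 100.0 * dose_gy.size)) - 1, 0)
    return dose_gy[idx]
```

A plan-comparison test would evaluate such metrics on both the manual and the automatic plan and require the difference in achieved goals to stay within the 5% criterion.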
AdaptBox Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Dosimetric evaluations (Synthetic CT): Median 2%/2mm ≥ 92%. | Passed (implicitly, as stated all tests passed) |
| Dosimetric evaluations (Synthetic CT): Median 3%/3mm ≥ 93.57%. | Passed (implicitly, as stated all tests passed) |
| Dosimetric evaluations (Synthetic CT): A median dose deviation (synthetic-CT compared to standard CT) of ≤2% in ≥76.7% of patients. | Passed (implicitly, as stated all tests passed) |
| Quantitative validation (Synthetic CT): Jacobian determinant = 1 +/- 5%. | Passed (implicitly, as stated all tests passed) |
| Qualitative validation (Deformation of planning CT towards CBCT): Clinicians' qualitative evaluation of the overall registration output (A+B%) ≥ 85%. | Passed (implicitly, as stated all tests passed) |
| Qualitative validation (Deformable propagation of contours): Clinicians' qualitative evaluation of the propagated contours (A+B%) ≥ 85%. | Passed (implicitly, as stated all tests passed) |
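The Jacobian-determinant criterion above (1 ± 5%) checks that the deformable registration is locally volume-preserving. A minimal NumPy sketch of computing the voxel-wise Jacobian determinant from a displacement vector field; the array layout and spacing convention are illustrative assumptions, not ART-Plan+'s implementation:

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a 3-D displacement vector field.

    dvf has shape (3, nx, ny, nz): the displacement of each voxel along each
    axis. The transform is phi(x) = x + u(x), so its Jacobian matrix is
    J_ij = delta_ij + du_i/dx_j. Determinant values near 1 indicate a
    locally volume-preserving deformation.
    """
    grads = np.empty((3, 3) + dvf.shape[1:])
    for i in range(3):                       # displacement component
        for j in range(3):                   # derivative axis
            grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
            if i == j:                       # add the identity part
                grads[i, j] += 1.0
    # determinant of the 3x3 transform gradient at every voxel
    return np.linalg.det(np.moveaxis(grads, (0, 1), (-2, -1)))
```

For a zero displacement field every voxel's determinant is exactly 1; a uniform expansion of 10% along each axis yields 1.1³ everywhere.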
Study Details
1. Sample Sizes for the Test Set and Data Provenance
The document describes the test sets for each module and the overall data provenance:
- Overall Test Dataset: Total of 2040 patients (1413 EU patients and 627 US patients), representing 31% US data.
- Annotate Module: Total of 1844 patients (1254 EU patients and 590 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data (31% overall).
- Minimum sample sizes for specific tests:
- Non-regression testing (autosegmentation on CT/MR, or synthetic-CT from CBCT): Minimum 24 patients.
- Qualitative evaluation of autosegmentation: Minimum 18 patients.
- Quantitative evaluation of autosegmentation: Minimum 24 patients.
- SmartPlan Module: Total of 35 patients (25 EU patients and 10 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data.
- Minimum sample size for quantitative and qualitative evaluation: Minimum 20 patients.
- AdaptBox Module: Total of 161 patients (134 EU patients and 27 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data. An independent dataset composed only of US patients was also used for quantitative validation of synthetic CT.
- Minimum sample sizes for specific tests:
- Dosimetric evaluations of synthetic CT: Minimum 15 patients.
- Quantitative validation of synthetic CT: Minimum 15 patients (plus an independent US dataset).
- Qualitative validation of deformation of planning CT: Minimum 10 patients.
- Qualitative validation of deformable propagation of contours: Minimum 10 patients.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document refers to "medical experts" and "clinicians" for qualitative evaluations and for performing manual contours. However, it does not specify the exact number of experts used for ground truth establishment for each test set or their specific qualifications (e.g., "Radiologist with 10 years of experience"). It only mentions that evaluations were performed by medical experts.
3. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method like "2+1" or "3+1" for creating the ground truth or resolving disagreements among experts. For qualitative evaluations, it describes a rating scale (A, B, C) and acceptance based on a percentage of A+B ratings, implying individual expert review results were aggregated. For quantitative validations, ground truth seems to be established by comparison with "manual contours performed by medical experts" or "direct comparison with manual plans," but the process for defining these manual references is not detailed in terms of adjudication.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to evaluate how human readers improve with AI vs. without AI assistance. The evaluations focus on the standalone performance of the AI modules or the clinical acceptability of outputs generated by the AI (e.g., auto-segmentations, auto-plans, synthetic CTs).
5. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, the studies described are primarily standalone (algorithm only) performance evaluations. The modules (Annotate, SmartPlan, AdaptBox) are tested for their ability to generate contours, treatment plans, or synthetic CTs, and these outputs are then compared against ground truth or evaluated for clinical acceptability by experts. While experts review the outputs for clinical acceptability, this is an evaluation of the algorithm's output, not a comparative study of human performance with and without AI assistance. The document states, "all the steps of the workflow where ART-Plan is involved have been tested independently," emphasizing the standalone nature of the module testing.
6. Type of Ground Truth Used
The ground truth varied depending on the module and test:
- Expert Consensus / Manual Delineation:
- For Annotate's quantitative non-regression and quantitative evaluation, the ground truth was "manual contours performed by medical experts."
- For SmartPlan's quantitative evaluation, the ground truth was "manual plans."
- Qualitative Expert Review:
- For Annotate, SmartPlan, and AdaptBox qualitative evaluations, the ground truth was established by "medical experts" or "clinicians" assessing the clinical acceptability of the device's output against defined scales (A, B, C).
- Comparison to Standard Imaging/Analysis:
- For AdaptBox's dosimetric evaluations, the synthetic CT performance was compared to "standard CT."
- For AdaptBox's quantitative validation of synthetic CTs, a "direct comparison of anatomy and geometry with the associated CBCT" was performed.
7. Sample Size for the Training Set
The document does not specify the sample size for the training set for any of the modules. It only discusses the test set sizes.
8. How the Ground Truth for the Training Set Was Established
The document briefly mentions "retraining or algorithm improvement" for existing structures and new structures for autosegmentation, but it does not describe how the ground truth for the training set was established. It only focuses on the validation of the new version's performance using dedicated test sets with specific ground truth methods. It implies that the underlying AI models (deep learning neural networks) were trained, but the details of that training process, including ground truth establishment, are not provided in this 510(k) summary.
(168 days)
RayStation is a software system for radiation therapy and medical oncology. Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments.
The system functionality can be configured based on user needs.
RayStation is a software system for radiation therapy and medical oncology. Based on user input, RayStation proposes treatment plans. After a proposed treatment plan is reviewed and approved by authorized intended users, RayStation may also be used to administer treatments.
The system functionality can be configured based on user needs.
RayStation consists of multiple applications:
- The main RayStation application is used for treatment planning.
- The RayPhysics application is used for commissioning of treatment machines to make them available for treatment planning and used for commissioning of imaging systems.
- The RayTreat application is used for sending plans to treatment delivery devices for treatment and receiving records of performed treatments.
The device to be marketed, RayStation 2024A SP3, adds the RayTreat application compared with the last cleared version, the predicate RayStation 2024A SP3 (without RayTreat), K240398.
The RayTreat application was previously cleared with RayStation 11B, K220141. Since then some RayTreat functions have been changed:
- RayTreat is now session focused
- Usability improvements
- Bug fixes
The RayStation applications are built on a software platform, containing the radiotherapy domain model and providing GUI, optimization, dose calculation and storage services. The platform uses three Microsoft SQL databases for persistent storage of the patient, machine and clinic settings data.
As a treatment planning system, RayStation aims to be an extensive software toolbox for generating and evaluating various types of radiotherapy treatment plans. RayStation supports a wide variety of radiotherapy treatment techniques and features an extensive range of tools for manual or semi-automatic treatment planning.
The RayStation applications are divided into modules, which are activated through licensing.
The RayTreat application
RayTreat manages treatment delivery. An approved plan can be assigned to fractions in a treatment course and sent to the treatment delivery device. Treatment records from the treatment delivery device are recorded and sent to RayCarePACS.
Note that all real-time monitoring of actual delivery is handled by treatment delivery device software, not by RayStation.
Scientific concepts that form the basis for the device and significant performance characteristics:
RayStation is a stand-alone software medical device intended for radiation therapy. Input to the device is patient, disease and treatment unit information; output from the device is one or more treatment plans. The treatment plans include treatment unit parameter settings for optimal beam arrangements, energies, field sizes, and ultimately fluence patterns, to produce a radiation dose distribution as safe and effective as the predicate's.
The scientific concepts of a treatment planning system are patient and beam modeling, and algorithms for dose calculation and plan parameter optimization.
The patient model is a computerized representation of the patient tissue and densities, identifying the target regions and particular organs at risk. The model is based on medical images of the patient and must have the desired level of accuracy. Likewise, the beam modeling is a computerized representation of the treatment unit, defined by fluence type, energy distribution, machine specific geometry, and beam modifiers such as MLC, flattening filters, wedges etc. The algorithms for dose calculation and plan parameter optimization must take into account all geometries and materials that affect irradiation transport through the treatment unit and the patient. The optimization algorithm iterates treatment plan parameters until the desired treatment plan and dose distribution have been obtained. Also here, all steps must be done to the desired level of accuracy.
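The optimization loop described above, iterating plan parameters until the desired dose distribution is reached, can be illustrated with a toy fluence-weight optimization. The sketch below uses projected gradient descent on a quadratic objective with a precomputed dose-influence matrix; this is a generic teaching example, not RayStation's optimizer:

```python
import numpy as np

def optimize_weights(influence, prescription, iters=500):
    """Toy fluence-weight optimization: projected gradient descent on the
    quadratic objective ||D w - p||^2 with non-negative beamlet weights w.

    influence: (n_voxels, n_beamlets) dose-influence matrix D, where column
               j is the dose per unit weight of beamlet j.
    prescription: (n_voxels,) desired dose p.
    """
    D = np.asarray(influence, float)
    p = np.asarray(prescription, float)
    w = np.zeros(D.shape[1])
    # step size chosen from the spectral norm for guaranteed convergence
    lr = 0.5 / np.linalg.norm(D.T @ D, 2)
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ w - p)       # gradient of the objective
        w = np.maximum(w - lr * grad, 0.0)   # project onto w >= 0
    return w
```

Real optimizers replace the quadratic objective with clinical dose-volume objectives and add machine constraints (MLC sequencing, deliverability), but the iterate-until-converged structure is the same.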
Significant physical characteristics of the device, material used, and physical properties:
The device is a standalone software medical device. It has no physical properties or materials. The device design information can be found in the subsection above "Device design information".
N/A
(101 days)
ClearCalc is intended to assist radiation treatment planners in determining if their treatment planning calculations are accurate using an independent Monitor Unit (MU) and dose calculation algorithm.
The ClearCalc Model RADCA V2.6 device is software that uses treatment data, image data, and structure set data obtained from a supported Treatment Planning System and Application Programming Interfaces (APIs) to perform a dose and/or monitor unit (MU) calculation on the incoming treatment planning parameters, assisting radiation treatment planners in verifying that their treatment planning calculations are accurate.
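The core of an independent secondary check like ClearCalc's is comparing the TPS result against an independently recalculated value and flagging deviations beyond a clinic-defined tolerance. A minimal sketch of that comparison; the 3% default tolerance is a common clinical convention assumed here, not a value stated in the 510(k):

```python
def secondary_check(tps_dose_cgy, independent_dose_cgy, tolerance_pct=3.0):
    """Compare a TPS point dose (or MU) against an independent
    recalculation.

    Returns the signed percent difference relative to the TPS value and a
    pass/fail flag against the tolerance (3% here is an illustrative
    assumption; clinics set their own action levels).
    """
    diff_pct = 100.0 * (independent_dose_cgy - tps_dose_cgy) / tps_dose_cgy
    return diff_pct, abs(diff_pct) <= tolerance_pct

# Example: a 1% agreement passes, a 5% disagreement is flagged for review
diff, ok = secondary_check(200.0, 202.0)
```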
N/A
(234 days)
The PlanOne is a software system used to plan radiotherapy treatments for patients with malignant or benign diseases. PlanOne is used to plan external beam irradiation with photon and proton beams. The intended users of PlanOne shall be clinically qualified radiation therapy staff trained in using the system.
The Cosylab Treatment Planning System (PlanOne) is responsible for creating machine instructions (treatment plans) for radiotherapy. It's a complex piece of software, integrating detailed physics (dose calculation), mathematics (plan optimization) and graphical (contouring) algorithms.
PlanOne imports 3D image datasets of patient anatomy, usually CT images. In the first stage of planning, the user identifies the tumor and critical structures; this process is called contouring. In the second stage, the 3D image and the contours are combined with the prescription input to calculate a treatment plan. The treatment plan includes machine instructions on how to deliver the radiation.
To produce an appropriate treatment plan, PlanOne computes the expected dose distribution in the patient's anatomy, taking into account the relative electron density and particle stopping properties of the material at each voxel. PlanOne also helps the user place beams so as to avoid critical structures that are more sensitive to radiation, reducing collateral damage from the therapy. PlanOne may optimize beam shape and intensity to meet the user-set objectives; this may include automated, complex programming of multi-leaf collimator (MLC) leaf sequencing to shape the beam around critical structures during dose delivery. In particle therapy, instead of shaping an MLC aperture, PlanOne determines the appropriate spot placement and weight in each beam direction.
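The per-voxel density handling described above amounts to converting geometric path length through tissue into water-equivalent (radiological) depth. A minimal sketch of that accumulation along a single ray; the fixed-step sampling is an illustrative simplification, not PlanOne's algorithm:

```python
import numpy as np

def water_equivalent_depth(rel_electron_density, step_mm):
    """Cumulative water-equivalent (radiological) depth along a ray sampled
    at fixed geometric steps through the patient.

    Each step contributes its geometric length scaled by the voxel's
    relative electron density, so dense tissue 'uses up' depth faster
    than lung or air.
    """
    red = np.asarray(rel_electron_density, float)
    return np.cumsum(red * step_mm)

# Example: 2 mm steps through two water-like voxels then two lung-like ones
wed = water_equivalent_depth([1.0, 1.0, 0.5, 0.5], 2.0)
```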
N/A
(237 days)
Dose+ is a software-only medical device intended for use by qualified, trained radiation therapy professionals (including but not limited to medical physicists, radiation oncologists, and medical dosimetrists). The device is intended for male patients with localized prostate cancer or prostate cancer with pelvic lymph node involvement who are undergoing external beam radiation therapy treatment. The software uses machine learning-based algorithms to automatically produce 3D dose distributions from patient-specific anatomical geometry and target dose prescription.
The predicted dose distribution output is required to be transferred to a radiotherapy treatment planning system (TPS) or reviewed by any DICOM-RT compliant software prior to further use in clinical workflows. Dose+ is intended to provide additional information during the treatment planning process facilitating the creation and review of a treatment plan.
Dose+ is not intended to be used for disease diagnosis and treatment decision purposes in clinical workflows.
Dose+ is a software-only medical device that assists radiation oncologists, medical dosimetrists and medical physicists during external beam radiotherapy treatment planning. The software utilizes pre-trained machine learning models that are not modifiable or editable by the end-user. The product provides information during the radiotherapy plan creation but does not replace a treatment planning system (TPS).
The central value proposition of Dose+ is to provide personalized organ-at-risk (OAR) dose optimization goals based on individual patient anatomy, rather than relying solely on population-based protocol templates. The software analyzes patient-specific anatomical geometry to determine achievable dose levels for each OAR relative to target volumes. This helps to ensure:
- Initial optimization objectives are more achievable, reducing the number of iterations needed during plan optimization
- Opportunities for further OAR dose reduction are not missed when standard fixed templates suggest a higher dose
- Inappropriately aggressive goals for one OAR do not compromise target coverage or dose reduction to other OARs
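The bullets above can be sketched as a simple goal-selection rule: tighten an OAR goal when the patient-specific prediction says more sparing is achievable, but never loosen it beyond the protocol template. The rule, the margin, and all dose numbers below are hypothetical; the summary does not describe Dose+'s actual policy.

```python
# Hypothetical per-OAR goal selection: prefer the tighter of the protocol
# template goal and the predicted achievable dose plus a small margin.
template_goal_gy = {"rectum": 50.0, "bladder": 55.0, "bowel": 45.0}
predicted_achievable_gy = {"rectum": 38.0, "bladder": 58.0, "bowel": 44.0}
MARGIN_GY = 2.0  # slack added to the prediction (invented policy)

def personalized_goal(template, predicted, margin=MARGIN_GY):
    # Tighten the goal when the prediction says more sparing is achievable;
    # never loosen beyond the protocol template.
    return min(template, predicted + margin)

goals = {oar: personalized_goal(template_goal_gy[oar],
                                predicted_achievable_gy[oar])
         for oar in template_goal_gy}
print(goals)  # {'rectum': 40.0, 'bladder': 55.0, 'bowel': 45.0}
```

Note how the rectum goal tightens from 50 to 40 Gy (an opportunity a fixed template would miss), while the bladder goal stays at the template value because the prediction says 55 Gy is not achievable for this anatomy.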
The device operates in two deployment modes:
- Cloud-based service with secure HTTPS data transfer
- Local installation in healthcare provider's IT network
Key features include:
- Automated dose prediction using locked machine learning models
- Generation of DICOM RT Dose objects
- Integration with existing treatment planning workflows
- Support for multiple fractionation schemes
- Compatibility with major treatment planning systems
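For context on the "Generation of DICOM RT Dose objects" feature: an RT Dose object stores the dose cube as integer pixel values together with a DoseGridScaling attribute, and consumers recover physical dose in Gy by multiplication. A minimal sketch of that convention with synthetic values (no actual DICOM I/O; the grid and scaling factor are invented):

```python
# RT Dose convention (DICOM PS3.3, RT Dose module): stored pixel values are
# integers; physical dose = stored_value * DoseGridScaling (DoseUnits "GY").
# Synthetic 2x3 grid standing in for one slice of a 3D dose cube.
stored = [
    [0, 12000, 24000],
    [6000, 18000, 30000],
]
dose_grid_scaling = 0.0025  # Gy per stored unit (hypothetical)

dose_gy = [[v * dose_grid_scaling for v in row] for row in stored]
max_dose = max(max(row) for row in dose_gy)
print(dose_gy[1])
print("max dose (Gy):", max_dose)
```

Storing integers plus a per-object scaling factor preserves precision across the whole grid while keeping the payload compact; any DICOM-RT compliant reviewer applies the same multiplication.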
The first release includes two models for male pelvis patients:
- "Prostate" model: For localized prostate treatments
- "PelvisLN" model: Designed for cancers involving lymph nodes
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter for Dose+:
1. Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| OAR Mean Dose Difference: ≤ 10 Gy (Performance Verification) | Both prostate and pelvisLN models met this criterion, showing strong correlation between predicted and ground truth dose distributions. |
| Target Coverage Metrics: Homogeneity & Conformity Indices met (Performance Verification) | Both prostate and pelvisLN models met this criterion, showing strong correlation between predicted and ground truth dose distributions. |
| Reduction in Optimization Iterations: ≥ 20% mean reduction (Clinical Validation) | The study demonstrated a statistically significant reduction in optimization iterations when using Dose+ compared to conventional planning. |
| Non-inferior OAR Mean Doses: ≤ 10 Gy difference compared to conventional planning (Clinical Validation) | The study demonstrated non-inferior OAR mean doses (≤ 10 Gy difference) compared to conventional planning. |
| Non-inferior Target Coverage: No statistically significant differences compared to conventional planning (Clinical Validation) | The study demonstrated non-inferior target coverage (no statistically significant differences) compared to conventional planning. |
| Positive User Acceptance from Validators: Validators indicate willingness to use clinically and report potential time savings (Clinical Validation) | The majority of validators indicated willingness to use the system clinically and reported potential time savings in treatment planning. |
| No Identified Safety Issues (Clinical Validation) | No safety hazards were identified during validation testing. |
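The per-OAR mean-dose criterion from the table reduces to an absolute-difference check per structure. A sketch with invented dose values (the actual verification protocol and OAR list are not detailed in the summary):

```python
# Check the |mean(predicted) - mean(ground truth)| <= 10 Gy criterion per OAR.
# All dose values below are invented for illustration.
THRESHOLD_GY = 10.0

oar_doses = {
    # OAR: (mean predicted dose, mean ground-truth dose), in Gy
    "bladder": (42.1, 45.3),
    "rectum": (38.7, 31.2),
    "femoral_head_left": (18.0, 29.5),  # fails: difference exceeds 10 Gy
}

def meets_criterion(predicted, truth, threshold=THRESHOLD_GY):
    return abs(predicted - truth) <= threshold

results = {oar: meets_criterion(p, t) for oar, (p, t) in oar_doses.items()}
failures = [oar for oar, ok in results.items() if not ok]
print(results)
print("all OARs pass:", not failures)
```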
2. Sample Size Used for the Test Set and Data Provenance
The description distinguishes between a "Performance Verification Dataset" and a "Clinical Validation Dataset", both of which function as test sets for different aspects of performance.
- Performance Verification Dataset: This dataset was used for demonstrating non-inferiority in OAR mean dose predictions and target coverage against manual planning.
- Prostate Model: 25 cases
- PelvisLN Model: 27 cases
- Data Provenance:
- Prostate Model: 100% US dataset, collected from 7 US institutions across 6 US states.
- PelvisLN Model: 96.3% US dataset, collected from 6 institutions across 6 US states.
- Retrospective/Prospective: Not explicitly stated, but the description of "independent dataset" and "multiple US institutions" suggests retrospective collection for this phase.
- Clinical Validation Dataset: This dataset was used for a comparative effectiveness study involving human readers.
- Sample Size: Not explicitly stated as a number of cases, but the document mentions "prospective patient enrollment" at 4 US institutions and a "within-subject comparison". Given the context of a "validation study", this implies a cohort separate from the Verification Dataset.
- Data Provenance: 100% US dataset, conducted at 4 geographically diverse US institutions across 3 states. Patients aged 60 years and older, with demographic representativeness matching the US national median age for prostate cancer diagnosis.
- Retrospective/Prospective: Prospective patient enrollment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Performance Verification: The ground truth for OAR mean dose predictions and target coverage was established against "manual planning." It's implied that these manual plans, considered the ground truth, were created by qualified radiation therapy professionals (medical physicists, radiation oncologists, and medical dosimetrists), but the specific number and qualifications of experts involved in creating this specific ground truth dataset are not detailed. It refers to the ground truth as "ground truth dose distributions," which would typically be the outcome of expert-created and approved treatment plans.
- Clinical Validation: The study involved "Independent validators (ABR-certified medical physicists)" for assessing plan quality, optimization iterations, and user acceptance. The exact number of these physicists is not specified.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1 consensus) for establishing the ground truth or evaluating disagreements between readers or with the AI.
- For Performance Verification, the ground truth appears to be established from existing "manual planning" dose distributions.
- For Clinical Validation, "Independent validators (ABR-certified medical physicists)" were used, but the process for reconciling differences in their assessments or how their assessments contributed to a final ground truth is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Yes, a multi-reader multi-case (MRMC) comparative effectiveness study was done as part of the Clinical Validation.
- Effect Size of Human Readers' Improvement:
- The study demonstrated a "statistically significant reduction in optimization iterations (≥20% mean reduction)" when using Dose+ compared to conventional planning.
- It also showed "non-inferior OAR mean doses (≤10 Gy difference)" and "non-inferior target coverage (no statistically significant differences)" compared to conventional planning, suggesting that AI assistance helps achieve comparable or better plan quality with less effort from human readers.
- Validators also "reported potential time savings in treatment planning."
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
Yes, a standalone performance study was conducted. This is referred to as "Performance Verification."
- The Dose+ models were evaluated on their ability to predict OAR mean doses and target coverage against "ground truth dose distributions" (presumably from expert-generated manual plans) on an independent dataset. This evaluation focuses solely on the algorithm's output without direct human interaction in the generation or real-time review of the predicted dose distributions for the verification metrics themselves.
7. Type of Ground Truth Used
- Performance Verification: The ground truth used was "ground truth dose distributions" and "manual planning," implying expert-approved treatment plans generated through conventional clinical practice.
- Clinical Validation: The ground truth for comparison was "conventional planning" and subjective assessments from "ABR-certified medical physicists" regarding plan quality, optimization iterations, and user acceptance.
8. Sample Size for the Training Set
The document mentions a "Model Development Dataset" (with internal splits for training, validation, and testing during model development). However, the specific sample size for the training set itself is not provided. It only states that samples in all datasets are from distinct and individual patients, so the number of cases is the same as the number of patients.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set (part of the Model Development Dataset) was established by using "patient-specific anatomical geometry and target dose prescription" and training the machine learning models. It's implied that these models learned from a dataset of existing, clinically accepted treatment plans. The document states, "The software analyzes patient-specific anatomical geometry to determine achievable dose levels for each OAR relative to target volumes," indicating that it was trained on examples where optimal dose levels for OARs were determined by human experts. The description "processes input using locked machine learning (ML) models trained on patient-specific anatomical geometry to generate the predicted 3D dose distribution" further supports that the ground truth for training would have been expert-generated or clinical gold-standard dose distributions and associated patient data.
(123 days)
The device is intended for radiation treatment planning for use in stereotactic, conformal, computer planned, Linac based radiation treatment and indicated for cranial, head and neck and extracranial lesions.
RT Elements are computed-based software applications for radiation therapy treatment planning and dose optimization for linac-based conformal radiation treatments, i.e. stereotactic radiosurgery (SRS), fractionated stereotactic radiotherapy (SRT) or stereotactic ablative radiotherapy (SABR), also known as stereotactic body radiation therapy (SBRT) for use in stereotactic, conformal, computer planned, Linac based radiation treatment of cranial, head and neck, and extracranial lesions.
The device consists of the following software modules: Multiple Brain Mets SRS 4.5, Cranial SRS 4.5, Spine SRS 4.5, Cranial SRS w/ Cones 4.5, RT Contouring 4.5, RT QA 4.5, Dose Review 4.5, Brain Mets Retreatment Review 4.5, and Physics Administration 7.5.
Here's the breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for RT Elements 4.5, specifically focusing on the AI Tumor Segmentation feature:
Acceptance Criteria and Reported Device Performance
| Diagnostic Characteristics | Minimum Acceptance Criteria (Lower Bound of 95% Confidence Interval) | Reported Device Performance (Mean 95% CI Lower Bound) |
|---|---|---|
| All Tumor Types | Dice ≥ 0.7 | Dice: 0.74 |
| | Recall ≥ 0.8 | Recall: 0.83 |
| | Precision ≥ 0.8 | Precision: 0.85 |
| Metastases to the CNS | Dice ≥ 0.7 | Dice: 0.73 |
| | Recall ≥ 0.8 | Recall: 0.82 |
| | Precision ≥ 0.8 | Precision: 0.83 |
| Meningiomas | Dice ≥ 0.7 | Dice: 0.73 |
| | Recall ≥ 0.8 | Recall: 0.85 |
| | Precision ≥ 0.8 | Precision: 0.84 |
| Cranial and paraspinal nerve tumors | Dice ≥ 0.7 | Dice: 0.88 |
| | Recall ≥ 0.8 | Recall: 0.93 |
| | Precision ≥ 0.8 | Precision: 0.93 |
| Gliomas and glio-/neuronal tumors | Dice ≥ 0.7 | Dice: 0.76 |
| | Recall ≥ 0.8 | Recall: 0.74 |
| | Precision ≥ 0.8 | Precision: 0.88 |
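The three metrics in the table are standard overlap measures between a predicted segmentation P and a ground-truth segmentation G: Dice = 2·TP / (|P| + |G|), recall = TP / |G|, and precision = TP / |P|, where TP is the number of voxels both label as tumor. A minimal sketch on invented voxel index sets:

```python
# Dice, recall, and precision for binary segmentations, treating each
# segmentation as the set of voxel indices it labels as tumor.
def overlap_metrics(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    tp = len(predicted & truth)               # true-positive voxels
    dice = 2 * tp / (len(predicted) + len(truth))
    recall = tp / len(truth)                  # fraction of the tumor found
    precision = tp / len(predicted)           # fraction of prediction correct
    return dice, recall, precision

# Invented 1D voxel indices: ground truth covers 10 voxels, the prediction
# covers 10 and overlaps on 8 of them.
truth = range(10, 20)
predicted = range(12, 22)

dice, recall, precision = overlap_metrics(predicted, truth)
print(round(dice, 2), round(recall, 2), round(precision, 2))  # 0.8 0.8 0.8
```

Dice rewards overlap symmetrically, while the recall/precision pair separates the two failure modes: missed tumor (low recall) versus over-segmentation into healthy tissue (low precision).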
Note: For "Gliomas and glio-/neuronal tumors," the reported lower bound 95% CI for Recall (0.74) is slightly below the stated acceptance criteria of 0.8. Additional clarification from the submission would be needed to understand how this was reconciled for clearance. However, for all other categories and overall, the reported performance meets or exceeds the acceptance criteria.
Study Details for AI Tumor Segmentation
2. Sample size used for the test set and the data provenance:
- Sample Size: 412 patients (595 scans, 1878 annotations)
- Data Provenance: De-identified 3D CE-T1 MR images from multiple clinical sites in the US and Europe. Data was acquired from adult patients with one or multiple contrast-enhancing tumors. ¼ of the test pool corresponded to data from three independent sites in the USA.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated as a number, but referred to as an "external/independent annotator team."
- Qualifications of Experts: US radiologists and non-US radiologists. No further details on years of experience or specialization are provided in this document.
4. Adjudication method for the test set:
- The document mentions "a well-defined data curation process" followed by the annotator team, but it does not explicitly describe a specific adjudication method (e.g., 2+1, 3+1) for resolving disagreements among annotators.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not reported for the AI tumor segmentation. The study focused on standalone algorithm performance against ground truth.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The validation was conducted quantitatively by comparing the algorithm's automatically-created segmentations with the manual ground-truth segmentations.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus Segmentations: The ground truth was established through "manual ground-truth segmentations, the so-called annotations," performed by the external/independent annotator team of radiologists.
8. The sample size for the training set:
- The sample size for the training set is not explicitly stated in this document. The document mentions that "The algorithm was trained on MRI image data with contrast-enhancing tumors from multiple clinical sites, including a wide variety of scanner models and patient characteristics."
9. How the ground truth for the training set was established:
- How the ground truth for the training set was established is not explicitly stated in this document. It can be inferred that it followed a similar process to the test set, involving expert annotations, but the details are not provided.
(203 days)
RayCare is an oncology information system intended to provide information which is used to take decisions for diagnosis, treatment management, treatment planning, scheduling, treatment and follow-up of radiation therapy, medical oncology and surgical oncology.
For these disciplines, as applicable, RayCare enables the user to define the clinical treatment intent, prescribe treatment, specify the detailed course of treatment delivery, manage the treatment course and monitor the treatment course.
In the context of radiation therapy, the RayCare image viewer can be used for viewing images, annotating images, performing and saving image registrations as well as image fusion to enable offline image review of patient positioning during treatment delivery.
RayCare is not intended for use in diagnostic activities.
As an oncology information system, RayCare supports healthcare professionals in managing cancer care treatments. The system provides functionalities as described briefly in the sections below. These functionalities are not provided separately in different applications and have a joint purpose for the treatment of the patient.
RayCare is Software as a Medical Device with a client part that allows the user to interact with the system and a server part that performs the necessary processing and storage functions. Selected aspects of RayCare are configurable, such as adapting workflow templates to the specific needs of the clinic.
This document describes the premarket notification for RayCare (2024A SP1), an oncology information system. The relevant sections for acceptance criteria and study details are primarily found under "VII. Non-Clinical and/or Clinical Tests Summary" and the tables within it.
Based on the provided text, RayCare (2024A SP1) is not an AI/ML device in the sense of making autonomous diagnostic decisions or image-based classifications. It is an Oncology Information System that supports clinical workflows for radiation therapy and other oncology disciplines. The "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context refer to the software verification and validation (V&V) activities. Therefore, the information provided focuses on demonstrating the software's functional correctness, safety, and effectiveness compared to a predicate device, rather than performance metrics specifically for an AI model (e.g., sensitivity, specificity, AUC).
Here's a breakdown of the requested information based on the provided document:
Acceptance Criteria and Device Performance (Software V&V)
The acceptance criteria for RayCare (2024A SP1) are implicitly defined by the successful completion of various software verification and validation activities designed to demonstrate that the device performs as intended and is as safe and effective as its predicate. These are primarily functional and system-level criteria.
Table of Acceptance Criteria and Reported Device Performance:
Since this is a software verification and validation summary for an oncology information system, the "performance" is demonstrated through successful compliance with system specifications and validated functionality. The "acceptance criteria" are the "Pass criteria" of the specific tests.
| Acceptance Criteria (from "Criteria" or "Pass criteria" of listed V&V) | Reported Device Performance |
|---|---|
| Treatment Course Management (TCM) Workspace: The TCM workspace shall show the treatment course and its related series, treatment fractions, and assigned beam sets for the care plan selected in the global care plan selector. Specific criteria: • The treatment series related to the selected care plan is displayed. • The fractions in the fractions table are only related to the treatment series related to the selected care plan. • The assigned beam set table only displays the beam set related to the selected care plan. | Passed. "The successful validation of this feature demonstrates that the device is as safe and effective as the predicate device." |
| Extended RayCare Scripting Support (Unit Testing): Queries shall only be available for scripting if explicitly declared as scriptable (whitelisted data). | Passed. "The successful validation of this feature demonstrates that the device is as safe and effective as the predicate device." |
| Extended RayCare Scripting Support (System Level Verification): It is possible to run a script by clicking a RayCare script task, and the script has performed the expected action within RayCare. | Passed. "The successful validation of this feature demonstrates that the device is as safe and effective as the predicate device." |
| Offline and Online Recording of Treatment Results: Offline import is requested, received, and possible to sign with device and radiotherapy record selected for import for a selected session. Specific criteria: • Verify treatment course table and beam delivery result table in TC overview gets updated with corresponding data for the first session. • Verify the device selected for offline import is the delivered device on the session. | Passed. "The successful validation of this feature demonstrates that the device is as safe and effective as the predicate device." |
| Treatment Delivery Integration Framework (Varian TrueBeam): The treatment flow for treatment delivery is verified. Specific criteria: • The fraction is fully delivered, and the status of the fraction, session, and beams is set to "Delivered". • Compare the delivered meterset, the couch positions and angles. They should be the same. • The online couch corrections are calculated as the difference between the planned and the delivered couch positions. | Passed. "The successful validation of this feature demonstrates that the device is as safe and effective as the predicate device." |
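The couch-correction pass criterion in the last row is an element-wise difference between planned and delivered couch positions. A minimal sketch with invented coordinates (the axis names and values are hypothetical; real systems also handle couch rotations):

```python
# Online couch correction = planned position minus delivered position,
# computed per axis (invented values in cm).
planned   = {"vrt": 12.50, "lng": 85.20, "lat": -1.30}
delivered = {"vrt": 12.35, "lng": 85.60, "lat": -1.10}

correction = {axis: round(planned[axis] - delivered[axis], 2)
              for axis in planned}
print(correction)  # {'vrt': 0.15, 'lng': -0.4, 'lat': -0.2}
```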
Overall Conclusion:
"From the successful verification and validation activities, the conclusion can be drawn that RayCare 2024A SP1 has met specifications and is as safe, as effective and performs as well as or better than the legally marketed predicate device."
- Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: The document does not specify a numerical sample size for "test sets" in the traditional sense of patient cases or images for evaluating an AI model. Instead, it refers to software verification and validation ("V&V") activities including unit testing, integration testing, system-level testing, cybersecurity testing, usability testing, and regression testing. These involve testing against requirements and specifications, often using simulated data, test cases, or specific user scenarios, rather than a fixed "dataset" of patient images.
- Data Provenance: The document does not explicitly state the country of origin of testing data or if it was retrospective or prospective. Given it's software V&V for an oncology information system, the "data" would primarily be test inputs and expected outputs generated internally during the development process (e.g., test scripts, simulated patient data to exercise specific functionalities). It's not a study on real patient data for diagnostic performance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This concept is not applicable as this is a software verification and validation summary for an oncology information system, not a study evaluating an AI model's diagnostic or prognostic performance against expert-determined ground truth. The "ground truth" for V&V activities is the system's specified behavior and functional requirements. Software engineers and QA professionals establish whether the software meets these pre-defined requirements.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable. Adjudication methods are typically used in clinical studies involving human readers to resolve discrepancies in annotations or diagnoses, especially when establishing ground truth for AI model evaluation. This document describes software V&V.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "No Clinical trials were required to demonstrate substantial equivalence."
- This type of study is relevant for AI-assisted diagnostic devices. RayCare is described as an oncology information system, not an AI diagnostic tool.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- While the system has automated functions, the concept of "standalone performance" as it relates to an AI algorithm making a clinical decision (e.g., classifying a lesion) is not directly applicable here. The V&V described focuses on the system's ability to correctly manage and process information, integrate with other systems, and support user workflows, which are inherent to its "standalone" operation as an information system. The "performance" is demonstrated through successful execution of its intended software functions as per its specifications.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" for the software verification and validation described here is the functional specifications and requirements of the RayCare system. Successful "verification" means the design output meets the requirements, and "validation" means the software conforms to user needs and intended uses. This is established through internal testing against defined expected behaviors.
- The sample size for the training set:
- Not applicable. RayCare (2024A SP1) is an oncology information system, and the document does not indicate that it incorporates a machine learning model that was "trained" on a dataset in the way an AI diagnostic or predictive algorithm would be. The device's "development" involved standard software engineering practices.
- How the ground truth for the training set was established:
- Not applicable. As there is no mention of an AI/ML training set, the concept of establishing ground truth for it does not apply.
In summary, this FDA review document pertains to the clearance of an Oncology Information System (OIS) through the 510(k) pathway, demonstrating substantial equivalence to a predicate device. The "acceptance criteria" and "proof" come from a robust set of software verification and validation activities (unit, integration, system, cybersecurity, usability, regression testing) rather than clinical studies or the evaluation of an AI model's diagnostic performance against a clinical ground truth. The device is not presented as an AI-driven diagnostic tool.