510(k) Data Aggregation (237 days)
Dose+ is a software-only medical device intended for use by qualified, trained radiation therapy professionals (including but not limited to medical physicists, radiation oncologists, and medical dosimetrists). The device is intended for male patients with localized prostate cancer or prostate cancer with pelvic lymph node involvement who are undergoing external beam radiation therapy treatment. The software uses machine learning-based algorithms to automatically produce 3D dose distributions from patient-specific anatomical geometry and target dose prescription.
The predicted dose distribution output is required to be transferred to a radiotherapy treatment planning system (TPS) or reviewed in any DICOM-RT compliant software prior to further use in clinical workflows. Dose+ is intended to provide additional information during the treatment planning process, facilitating the creation and review of a treatment plan.
Dose+ is not intended to be used for disease diagnosis and treatment decision purposes in clinical workflows.
Dose+ is a software-only medical device that assists radiation oncologists, medical dosimetrists and medical physicists during external beam radiotherapy treatment planning. The software utilizes pre-trained machine learning models that are not modifiable or editable by the end-user. The product provides information during the radiotherapy plan creation but does not replace a treatment planning system (TPS).
The central value proposition of Dose+ is to provide personalized organ-at-risk (OAR) dose optimization goals based on individual patient anatomy, rather than relying solely on population-based protocol templates. The software analyzes patient-specific anatomical geometry to determine achievable dose levels for each OAR relative to target volumes (a rough sketch of this idea follows the list below). This helps ensure that:
- Initial optimization objectives are more achievable, reducing the number of iterations needed during plan optimization
- Opportunities for further OAR dose reduction are not missed when standard fixed templates suggest a higher dose
- Inappropriately aggressive goals for one OAR do not compromise target coverage or dose reduction to other OARs
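The clearance summary does not describe how these per-OAR goals are derived internally. Purely as a rough illustration of the general idea, the following Python sketch turns a predicted 3D dose grid and binary OAR masks into mean-dose optimization goals; the arrays, structure names, and margin value are hypothetical.

```python
import numpy as np

def oar_mean_dose_goals(predicted_dose, oar_masks, margin_gy=1.0):
    """Derive per-OAR mean-dose optimization goals from a predicted 3D dose grid.

    predicted_dose : ndarray (z, y, x) of dose values in Gy
    oar_masks      : dict mapping OAR name -> boolean ndarray of the same shape
    margin_gy      : hypothetical slack so the goal stays achievable in the TPS
    """
    goals = {}
    for name, mask in oar_masks.items():
        goals[name] = float(predicted_dose[mask].mean()) + margin_gy
    return goals

# Toy example with a synthetic dose grid and two hypothetical OAR masks.
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 70.0, size=(40, 64, 64))
masks = {
    "Bladder": rng.random((40, 64, 64)) > 0.90,
    "Rectum":  rng.random((40, 64, 64)) > 0.95,
}
print(oar_mean_dose_goals(dose, masks))
```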
The device operates in two deployment modes:
- Cloud-based service with secure HTTPS data transfer (see the sketch after this list)
- Local installation within the healthcare provider's IT network
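The letter does not document the transfer interface beyond "secure HTTPS data transfer." As a purely illustrative sketch of the cloud mode, the snippet below uploads a case over HTTPS with TLS verification enabled; the endpoint URL, credential, and field names are all hypothetical and are not part of any described Dose+ API.

```python
import requests

# Hypothetical endpoint and credential; the actual Dose+ interface is not
# described in the clearance letter.
ENDPOINT = "https://doseplus.example.com/api/v1/predictions"
API_TOKEN = "replace-with-site-credential"

def upload_case(rt_struct_path: str, prescription_gy: float) -> dict:
    """Send an RT Structure Set and target prescription to a cloud prediction service."""
    with open(rt_struct_path, "rb") as f:
        response = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            files={"rtstruct": f},
            data={"prescription_gy": prescription_gy},
            timeout=300,   # dose prediction may take a while
            verify=True,   # enforce TLS certificate validation
        )
    response.raise_for_status()
    return response.json()
```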
Key features include:
- Automated dose prediction using locked machine learning models
- Generation of DICOM RT Dose objects (see the sketch after this list)
- Integration with existing treatment planning workflows
- Support for multiple fractionation schemes
- Compatibility with major treatment planning systems
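The output listed above is a DICOM RT Dose object intended for transfer to a TPS or review in DICOM-RT compliant software. As a minimal sketch of what a downstream consumer might do, the snippet below reads such an object with pydicom and recovers the dose grid in Gy; the file name is hypothetical.

```python
import numpy as np
import pydicom

def load_rtdose(path: str) -> np.ndarray:
    """Read a DICOM RT Dose file and return the dose grid in Gy."""
    ds = pydicom.dcmread(path)
    if ds.Modality != "RTDOSE":
        raise ValueError(f"Expected RTDOSE, got {ds.Modality}")
    # Pixel data is stored as integers; DoseGridScaling converts to dose units.
    return ds.pixel_array.astype(np.float32) * float(ds.DoseGridScaling)

# Hypothetical file name for a predicted dose exported as RTDOSE.
dose_grid = load_rtdose("predicted_dose.dcm")
print(f"Dose grid shape {dose_grid.shape}, max dose {dose_grid.max():.1f} Gy")
```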
The first release includes two models for male pelvis patients:
- "Prostate" model: For localized prostate treatments
- "PelvisLN" model: Designed for cancers involving lymph nodes
Here's a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) Clearance Letter for Dose+:
1. Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| OAR Mean Dose Difference: ≤ 10 Gy (Performance Verification) | Both the Prostate and PelvisLN models met this criterion, showing strong correlation between predicted and ground truth dose distributions. |
| Target Coverage Metrics: Homogeneity and conformity indices met (Performance Verification) | Both the Prostate and PelvisLN models met this criterion, showing strong correlation between predicted and ground truth dose distributions. |
| Reduction in Optimization Iterations: ≥ 20% mean reduction (Clinical Validation) | The study demonstrated a statistically significant reduction in optimization iterations when using Dose+ compared to conventional planning. |
| Non-inferior OAR Mean Doses: ≤ 10 Gy difference compared to conventional planning (Clinical Validation) | The study demonstrated non-inferior OAR mean doses (≤ 10 Gy difference) compared to conventional planning. |
| Non-inferior Target Coverage: No statistically significant differences compared to conventional planning (Clinical Validation) | The study demonstrated non-inferior target coverage (no statistically significant differences) compared to conventional planning. |
| Positive User Acceptance from Validators: Validators indicate willingness to use clinically and report potential time savings (Clinical Validation) | The majority of validators indicated willingness to use the system clinically and reported potential time savings in treatment planning. |
| No Identified Safety Issues (Clinical Validation) | No safety hazards were identified during validation testing. |
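The letter reports the acceptance thresholds but not the exact metric definitions. The sketch below computes an OAR mean-dose difference together with commonly used homogeneity and conformity indices (an ICRU 83-style HI and a simple coverage-based CI); these definitions are assumptions and may differ from those used in the actual verification.

```python
import numpy as np

def mean_dose_difference(pred, truth, oar_mask):
    """Absolute difference in OAR mean dose (Gy) between predicted and ground-truth grids."""
    return abs(float(pred[oar_mask].mean()) - float(truth[oar_mask].mean()))

def homogeneity_index(dose, target_mask):
    """ICRU 83-style HI: (D2% - D98%) / D50% within the target volume."""
    d = dose[target_mask]
    # Dx% = dose received by the hottest x% of the volume = (100 - x)th percentile.
    d2, d50, d98 = np.percentile(d, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(dose, target_mask, prescription_gy):
    """Simple CI: fraction of the target volume receiving at least the prescription dose."""
    covered = np.logical_and(target_mask, dose >= prescription_gy)
    return covered.sum() / target_mask.sum()
```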
2. Sample Size Used for the Test Set and Data Provenance
The description distinguishes between a "Performance Verification Dataset" and a "Clinical Validation Dataset", both of which function as test sets for different aspects of performance.
- Performance Verification Dataset: This dataset was used to demonstrate non-inferiority of the OAR mean dose predictions and target coverage against manual planning.
- Prostate Model: 25 cases
- PelvisLN Model: 27 cases
- Data Provenance:
- Prostate Model: 100% US dataset, collected from 7 US institutions across 6 US states.
- PelvisLN Model: 96.3% US dataset, collected from 6 institutions across 6 US states.
- Retrospective/Prospective: Not explicitly stated, but the description of "independent dataset" and "multiple US institutions" suggests retrospective collection for this phase.
- Clinical Validation Dataset: This dataset was used for a comparative effectiveness study involving human readers.
- Sample Size: Not explicitly stated as a number of cases; the document mentions "prospective patient enrollment" at 4 US institutions and a "within-subject comparison". Given the context of a validation study, this implies a cohort separate from the Performance Verification Dataset.
- Data Provenance: 100% US dataset, conducted at 4 geographically diverse US institutions across 3 states. Patients aged 60 years and older, with demographic representativeness matching the US national median age for prostate cancer diagnosis.
- Retrospective/Prospective: Prospective patient enrollment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Performance Verification: The ground truth for OAR mean dose predictions and target coverage was established against "manual planning." It's implied that these manual plans, considered the ground truth, were created by qualified radiation therapy professionals (medical physicists, radiation oncologists, and medical dosimetrists), but the specific number and qualifications of experts involved in creating this specific ground truth dataset are not detailed. It refers to the ground truth as "ground truth dose distributions," which would typically be the outcome of expert-created and approved treatment plans.
- Clinical Validation: The study involved "Independent validators (ABR-certified medical physicists)" for assessing plan quality, optimization iterations, and user acceptance. The exact number of these physicists is not specified.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1 consensus) for establishing the ground truth or evaluating disagreements between readers or with the AI.
- For Performance Verification, the ground truth appears to be established from existing "manual planning" dose distributions.
- For Clinical Validation, "Independent validators (ABR-certified medical physicists)" were used, but the process for reconciling differences in their assessments or how their assessments contributed to a final ground truth is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Yes, a multi-reader multi-case (MRMC) comparative effectiveness study was conducted as part of the Clinical Validation.
- Effect Size of Human Readers' Improvement:
- The study demonstrated a "statistically significant reduction in optimization iterations (≥20% mean reduction)" when using Dose+ compared to conventional planning.
- It also showed "non-inferior OAR mean doses (≤10 Gy difference)" and "non-inferior target coverage (no statistically significant differences)" compared to conventional planning, suggesting that AI assistance helps achieve comparable or better plan quality with less effort from human readers.
- Validators also "reported potential time savings in treatment planning."
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
Yes, a standalone performance study was conducted. This is referred to as "Performance Verification."
- The Dose+ models were evaluated on their ability to predict OAR mean doses and target coverage against "ground truth dose distributions" (presumably from expert-generated manual plans) on an independent dataset. This evaluation focuses solely on the algorithm's output without direct human interaction in the generation or real-time review of the predicted dose distributions for the verification metrics themselves.
7. Type of Ground Truth Used
- Performance Verification: The ground truth used was "ground truth dose distributions" and "manual planning," implying expert-approved treatment plans generated through conventional clinical practice.
- Clinical Validation: The ground truth for comparison was "conventional planning" and subjective assessments from "ABR-certified medical physicists" regarding plan quality, optimization iterations, and user acceptance.
8. Sample Size for the Training Set
The document mentions a "Model Development Dataset" (with internal splits for training, validation, and testing during model development). However, the specific sample size for the training set itself is not provided. It only states that samples in all datasets are from distinct and individual patients, so the number of cases is the same as the number of patients.
9. How the Ground Truth for the Training Set Was Established
How the ground truth for the training set (part of the Model Development Dataset) was established is not stated explicitly. The machine learning models were trained on "patient-specific anatomical geometry and target dose prescription," and the ground truth they learned to reproduce was, by implication, a set of existing, clinically accepted dose distributions. The document states, "The software analyzes patient-specific anatomical geometry to determine achievable dose levels for each OAR relative to target volumes," indicating that the training examples paired patient anatomy with dose levels determined and approved by human experts. The description "processes input using locked machine learning (ML) models trained on patient-specific anatomical geometry to generate the predicted 3D dose distribution" further supports that the training ground truth consisted of expert-generated or clinical gold-standard dose distributions and the associated patient data.
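The letter does not disclose the model architecture, input encoding, or training procedure. Solely to illustrate the supervised setup implied above (anatomical geometry and prescription in, clinically approved dose out), the sketch below trains a small 3D convolutional network with PyTorch on synthetic tensors; the architecture, shapes, and hyperparameters are assumptions, not details from the submission.

```python
import torch
from torch import nn

# Hypothetical tensors: input channels encode target/OAR masks and prescription
# scaling; the label is the clinically approved 3D dose distribution in Gy.
anatomy = torch.rand(8, 4, 32, 64, 64)            # (batch, channels, z, y, x)
clinical_dose = torch.rand(8, 1, 32, 64, 64) * 70.0

model = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=1),              # voxel-wise dose prediction in Gy
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5):  # a few illustrative steps, not a real training schedule
    optimizer.zero_grad()
    predicted = model(anatomy)
    loss = loss_fn(predicted, clinical_dose)
    loss.backward()
    optimizer.step()
    print(f"step {step}: MSE loss = {loss.item():.3f}")
```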