Search Results
Found 2 results
510(k) Data Aggregation
(91 days)
ART-Plan+'s indicated target population is cancer patients for whom radiotherapy treatment has been prescribed. Within this population, the device may be used for any patient for whom relevant modality imaging data is available.
ART-Plan+ includes several modules:
SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.
Annotate, which allows automatic generation of contours for organs at risk, lymph nodes, and tumors, based on medical practices, on medical images such as CT and MR images.
AdaptBox, which allows generation of synthetic-CT from CBCT images, dose computation on CT images for external beam irradiation with photon beams, and assisted CBCT-based off-line adaptation decision-making for the following anatomies:
- Head & Neck
- Breast / Thorax
- Pelvis (male)
ART-Plan+ is not intended to be used for patients less than 18 years of age.
ART-Plan is a software platform that allows users to contour regions of interest on 3D images, to generate an automatic treatment plan, and to help decide whether replanning is needed based on contours and doses computed on daily images. It includes several modules:
Home: tasks and patient monitoring
Annotate including TumorBox: contouring of regions of interest
SmartPlan: creation of an automatic treatment plan based on a planning CT and an RTSS
AdaptBox: a decision-support tool for determining whether replanning is necessary. For this purpose, the module allows the user to generate a synthetic-CT from a CBCT image, auto-delineate regions of interest on the synthetic-CT, compute the dose on both the planning CT and the synthetic-CT, and then decide whether replanning is needed by comparing volume and dose metrics computed on both images and over the course of the treatment. Those metrics are defined by the user (see the sketch after this list).
Administration and Settings: preferences management, user account management, etc.
About: information about the software and its use, as well as contact details.
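The replanning decision that AdaptBox supports can be made concrete with a short sketch. The following is a minimal illustration, not the vendor's implementation: the `Metric` class, the metric names, tolerances, and values are all assumptions standing in for the user-defined volume and dose metrics described above.

```python
# Minimal sketch of a tolerance-based replanning decision (illustrative only).
from dataclasses import dataclass

@dataclass
class Metric:
    name: str          # e.g. "PTV D95 (Gy)" or "bladder volume (cc)" -- hypothetical names
    tolerance: float   # allowed absolute deviation from the planning-CT value

def needs_replanning(planning: dict, daily: dict, metrics: list) -> bool:
    """Flag replanning if any user-defined metric drifts beyond its tolerance."""
    return any(abs(daily[m.name] - planning[m.name]) > m.tolerance
               for m in metrics)

# Hypothetical values: D95 has drifted by 1.6 Gy against a 1.0 Gy tolerance.
metrics = [Metric("PTV D95 (Gy)", 1.0), Metric("bladder volume (cc)", 50.0)]
planning = {"PTV D95 (Gy)": 57.0, "bladder volume (cc)": 200.0}
daily = {"PTV D95 (Gy)": 55.4, "bladder volume (cc)": 230.0}
print(needs_replanning(planning, daily, metrics))  # True
```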
Annotate, TumorBox, SmartPlan and AdaptBox are partially based on a batch mode, which allows the user to launch autocontouring and autoplanning operations without having to use the interface or the viewers. In this way, the software integrates fully into the radiotherapy workflow and offers the user maximum flexibility.
Annotate, which allows automatic generation of contours for organs at risk (OARs), lymph nodes (LNs), and tumors, based on medical practices, on medical images such as CT and MR images:
OARs and LNs:
- Head and neck (on CT images)
- Thorax/breast (on CT images)
- Abdomen (on CT images, and for male patients on MR images)
- Pelvis male (on CT and MR images)
- Pelvis female (on CT images)
- Brain (on CT images and MR images)
Tumor:
- Brain (on MR images)
SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.
AdaptBox, which allows generation of synthetic-CT from CBCT images, dose computation on CT images for external beam irradiation with photon beams, and assisted CBCT-based off-line adaptation decision-making for the following anatomies:
- Head & Neck
- Breast / Thorax
- Pelvis (male)
Here's a breakdown of the acceptance criteria and study details for ART-Plan+ (v3.1.0) based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
The ART-Plan+ device consists of three main modules: Annotate, SmartPlan, and AdaptBox. Each module has its own set of acceptance criteria.
Note: The document provides acceptance criteria and states that "all tests passed their respective acceptance criteria, thus showing ART-Plan + v3.1.0 clinical acceptability." However, it does not report specific numerical device performance for each criterion; it only states that the criteria were met.
Annotate Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Non-regression testing (on CT/MR for existing structures): Mean DSC should not regress between the current and last validated version of Annotate beyond a maximum tolerance margin of -5% relative error. | Passed (the summary states all tests passed) |
| Non-regression testing (on synthetic-CT from CBCT for existing structures): Mean DSC (sCT) should be equivalent to Mean DSC (CT) within a maximum tolerance margin of -5% relative error. | Passed (the summary states all tests passed) |
| Qualitative evaluation (for new structures or those failing non-regression): Clinicians' qualitative evaluation of the auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications/corrections (B), with A+B % ≥ 85%. | Passed (the summary states all tests passed) |
| Quantitative evaluation (for new structures): Mean DSC (Annotate) ≥ 0.8 | Passed (the summary states all tests passed) |
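The DSC-based criteria in this table are concrete enough to sketch. The following is a minimal illustration, not the manufacturer's validation code: the masks are random and the previous-version mean DSC of 0.85 is an assumed value.

```python
# Minimal sketch of the mean-DSC and -5% non-regression checks (illustrative only).
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) between two binary masks."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def non_regression_ok(mean_dsc_new: float, mean_dsc_prev: float,
                      margin: float = -0.05) -> bool:
    """Pass if the relative change in mean DSC is no worse than -5%."""
    return (mean_dsc_new - mean_dsc_prev) / mean_dsc_prev >= margin

# Per-patient DSCs for one structure across a hypothetical 24-patient test set.
rng = np.random.default_rng(0)
dscs = [dice(rng.random((64, 64, 64)) > 0.5,
             rng.random((64, 64, 64)) > 0.5) for _ in range(24)]
mean_dsc = float(np.mean(dscs))
print(mean_dsc >= 0.8)                    # quantitative criterion
print(non_regression_ok(mean_dsc, 0.85))  # non-regression vs. assumed prior score
```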
SmartPlan Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Quantitative evaluation: Effectiveness difference (%) in DVH achieved goals between manual plans and automatic plans ≤ 5%. | Passed (the summary states all tests passed) |
| Qualitative evaluation: % of clinically acceptable automatic plans ≥ 93% after expert review. | Passed (the summary states all tests passed) |
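As a rough illustration of the quantitative criterion, the sketch below compares the fraction of DVH goals achieved by a manual plan and an automatic plan. The `v_dose` helper, the dose arrays, and the single V20Gy ≤ 30% goal are illustrative assumptions, not the cleared device's DVH engine.

```python
# Minimal sketch of the DVH achieved-goals comparison (illustrative only).
import numpy as np

def v_dose(dose: np.ndarray, mask: np.ndarray, level_gy: float) -> float:
    """VxGy: fraction of the structure volume receiving >= level_gy."""
    voxels = dose[mask.astype(bool)]
    return float((voxels >= level_gy).mean())

def goals_achieved(dose, structures, goals) -> float:
    """Fraction of (structure, VxGy <= limit) goals met by a plan."""
    met = [v_dose(dose, structures[s], level) <= limit
           for s, level, limit in goals]
    return sum(met) / len(met)

# Hypothetical example: one OAR goal, V20Gy <= 30% of the structure volume.
rng = np.random.default_rng(0)
dose_manual = rng.uniform(0, 60, (32, 32, 32))
dose_auto = rng.uniform(0, 60, (32, 32, 32))
structures = {"lung": np.ones((32, 32, 32), dtype=bool)}
goals = [("lung", 20.0, 0.30)]
diff = abs(goals_achieved(dose_manual, structures, goals)
           - goals_achieved(dose_auto, structures, goals))
print(diff <= 0.05)  # acceptance: effectiveness difference <= 5%
```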
AdaptBox Module Performance Criteria
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Dosimetric evaluations (Synthetic CT): Median 2%/2mm gamma pass rate ≥ 92%. | Passed (the summary states all tests passed) |
| Dosimetric evaluations (Synthetic CT): Median 3%/3mm gamma pass rate ≥ 93.57%. | Passed (the summary states all tests passed) |
| Dosimetric evaluations (Synthetic CT): Median dose deviation (synthetic-CT compared to standard CT) of ≤ 2% in ≥ 76.7% of patients. | Passed (the summary states all tests passed) |
| Quantitative validation (Synthetic CT): Jacobian determinant = 1 ± 5%. | Passed (the summary states all tests passed) |
| Qualitative validation (Deformation of planning CT towards CBCT): Clinicians' qualitative evaluation of the overall registration output (A+B %) ≥ 85%. | Passed (the summary states all tests passed) |
| Qualitative validation (Deformable propagation of contours): Clinicians' qualitative evaluation of the propagated contours (A+B %) ≥ 85%. | Passed (the summary states all tests passed) |
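Two of these criteria lend themselves to a short sketch: the Jacobian-determinant bound on the deformation field and the median-based aggregation of per-patient gamma pass rates. Everything below (the displacement field, the pass-rate values) is an illustrative assumption, not the manufacturer's code.

```python
# Minimal sketch of the Jacobian-determinant and median pass-rate checks (illustrative only).
import numpy as np

def jacobian_determinant(dvf: np.ndarray) -> np.ndarray:
    """Voxel-wise Jacobian determinant of a displacement field.

    dvf has shape (3, Z, Y, X): displacement (in voxels) along each axis.
    The transform is x -> x + u(x), so J = I + grad(u).
    """
    grads = np.stack([np.stack(np.gradient(dvf[i]), axis=0)
                      for i in range(3)], axis=0)   # (3, 3, Z, Y, X): du_i/dx_j
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)  # add the identity
    # Move the 3x3 axes last so np.linalg.det broadcasts over voxels.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

rng = np.random.default_rng(0)
dvf = rng.uniform(-0.01, 0.01, (3, 16, 16, 16))  # small synthetic deformation
jd = jacobian_determinant(dvf)
print(np.all(np.abs(jd - 1.0) <= 0.05))  # criterion: determinant = 1 ± 5%

# Median aggregation of hypothetical per-patient 2%/2mm gamma pass rates.
gamma_2_2 = np.array([95.1, 93.4, 97.0, 91.8, 94.2])
print(np.median(gamma_2_2) >= 92.0)      # criterion: median 2%/2mm ≥ 92%
```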
Study Details
1. Sample Sizes for the Test Set and Data Provenance
The document describes the test sets for each module and the overall data provenance:
- Overall Test Dataset: Total of 2040 patients (1413 EU patients and 627 US patients), representing 31% US data.
- Annotate Module: Total of 1844 patients (1254 EU patients and 590 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data (31% overall).
- Minimum sample sizes for specific tests:
- Non-regression testing (autosegmentation on CT/MR, or synthetic-CT from CBCT): Minimum 24 patients.
- Qualitative evaluation of autosegmentation: Minimum 18 patients.
- Quantitative evaluation of autosegmentation: Minimum 24 patients.
- SmartPlan Module: Total of 35 patients (25 EU patients and 10 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data.
- Minimum sample size for quantitative and qualitative evaluation: Minimum 20 patients.
- AdaptBox Module: Total of 161 patients (134 EU patients and 27 US patients).
- Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data. An independent dataset composed only of US patients was also used for quantitative validation of synthetic CT.
- Minimum sample sizes for specific tests:
- Dosimetric evaluations of synthetic CT: Minimum 15 patients.
- Quantitative validation of synthetic CT: Minimum 15 patients (plus an independent US dataset).
- Qualitative validation of deformation of planning CT: Minimum 10 patients.
- Qualitative validation of deformable propagation of contours: Minimum 10 patients.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document refers to "medical experts" and "clinicians" for qualitative evaluations and for performing manual contours. However, it does not specify the exact number of experts used for ground truth establishment for each test set or their specific qualifications (e.g., "Radiologist with 10 years of experience"). It only mentions that evaluations were performed by medical experts.
3. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method like "2+1" or "3+1" for creating the ground truth or resolving disagreements among experts. For qualitative evaluations, it describes a rating scale (A, B, C) and acceptance based on a percentage of A+B ratings, implying individual expert review results were aggregated. For quantitative validations, ground truth seems to be established by comparison with "manual contours performed by medical experts" or "direct comparison with manual plans," but the process for defining these manual references is not detailed in terms of adjudication.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to evaluate how human readers improve with AI vs. without AI assistance. The evaluations focus on the standalone performance of the AI modules or the clinical acceptability of outputs generated by the AI (e.g., auto-segmentations, auto-plans, synthetic CTs).
5. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Study
Yes, the studies described are primarily standalone (algorithm only) performance evaluations. The modules (Annotate, SmartPlan, AdaptBox) are tested for their ability to generate contours, treatment plans, or synthetic CTs, and these outputs are then compared against ground truth or evaluated for clinical acceptability by experts. While experts review the outputs for clinical acceptability, this is an evaluation of the algorithm's output, not a comparative study of human performance with and without AI assistance. The document states, "all the steps of the workflow where ART-Plan is involved have been tested independently," emphasizing the standalone nature of the module testing.
6. Type of Ground Truth Used
The ground truth varied depending on the module and test:
- Expert Consensus / Manual Delineation:
- For Annotate's quantitative non-regression and quantitative evaluation, the ground truth was "manual contours performed by medical experts."
- For SmartPlan's quantitative evaluation, the ground truth was "manual plans."
- Qualitative Expert Review:
- For Annotate, SmartPlan, and AdaptBox qualitative evaluations, the ground truth was established by "medical experts" or "clinicians" assessing the clinical acceptability of the device's output against defined scales (A, B, C).
- Comparison to Standard Imaging/Analysis:
- For AdaptBox's dosimetric evaluations, the synthetic CT performance was compared to "standard CT."
- For AdaptBox's quantitative validation of synthetic CTs, a "direct comparison of anatomy and geometry with the associated CBCT" was performed.
7. Sample Size for the Training Set
The document does not specify the sample size for the training set for any of the modules. It only discusses the test set sizes.
8. How the Ground Truth for the Training Set Was Established
The document briefly mentions "retraining or algorithm improvement" for existing structures and new structures for autosegmentation, but it does not describe how the ground truth for the training set was established. It only focuses on the validation of the new version's performance using dedicated test sets with specific ground truth methods. It implies that the underlying AI models (deep learning neural networks) were trained, but the details of that training process, including ground truth establishment, are not provided in this 510(k) summary.
(160 days)
ART-Plan+'s indicated target population is cancer patients for whom radiotherapy treatment has been prescribed. Within this population, the device may be used for any patient for whom relevant modality imaging data is available.
ART-Plan+ includes several modules:
- SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.
- Annotate, which allows automatic generation of contours for organs at risk, lymph nodes, and tumors, based on medical practices, on medical images such as CT and MR images.
ART-Plan+ is not intended to be used for patients less than 18 years of age.
The indicated users are trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process.
The indicated use environments include, but are not limited to, hospitals, clinics and any health facility offering radiation therapy.
ART-Plan+ is a software platform that allows users to contour regions of interest on 3D images and to generate an automatic treatment plan. It includes several modules:
- Home: tasks
- Annotate and TumorBox: contouring of regions of interest
- SmartPlan: creation of an automatic treatment plan based on a planning CT and an RTSS
- Administration and Settings: preferences management, user account management, etc.
- Institute Management: institute information, including licenses, list of users, etc.
- About: information about the software and its use, as well as contact details.
Annotate, TumorBox and SmartPlan are partially based on a batch mode, which allows the user to launch autocontouring and autoplanning operations without having to use the interface or the viewers. In this way, the software integrates fully into the radiotherapy workflow and offers the user maximum flexibility.
ART-Plan+ offers deep-learning-based automatic segmentation of OARs and LNs for the following localizations:
- Head and neck (on CT images)
- Thorax/breast (on CT images)
- Abdomen (on CT images, and for male patients on MR images)
- Pelvis male (on CT and MR images)
- Pelvis female (on CT images)
- Brain (on CT and MR images)
ART-Plan+ offers deep-learning-based automatic segmentation of targets for the following localizations:
- Brain (on MR images)
Based on the provided text, here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and the Reported Device Performance:
The document describes five distinct types of evaluations with their respective acceptance criteria. While the exact "reported device performance" (i.e., the specific numerical results obtained for each metric) is not explicitly stated, the document uniformly concludes, "All validation tests were carried out using datasets representative of the worldwide population receiving radiotherapy treatments. Finally, all tests passed their respective acceptance criteria, thus showing ART-Plan + v3.0.0 clinical acceptability." This implies all reported device performances met or exceeded the criteria.
| Study Type | Acceptance Criteria | Reported Device Performance (Implied) |
|---|---|---|
| Non-regression Testing of Autosegmentation of ORs | Mean DSC should not regress between the current and last validated version of Annotate beyond a maximum tolerance margin of -5% relative error. | Met |
| Qualitative Evaluation of Autosegmentation of ORs | Clinicians' qualitative evaluation of the auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications/corrections (B) with an A+B % above or equal to 85%. | Met |
| Quantitative Evaluation of Autosegmentation of ORs | Mean DSC (annotate) ≥ 0.8 | Met |
| Inter-expert Variability Evaluation of Autosegmentation of ORs | Mean DSC (annotate) ≥ Mean DSC (inter-expert) with a tolerance margin of -5% of relative error. | Met |
| Quantitative Evaluation of Autosegmentation of Brain Metastasis | Lesion-wise sensitivity ≥ 0.86 AND Lesion-wise precision ≥ 0.70 AND Lesion-wise DSC ≥ 0.78 AND Patient-wise DSC ≥ 0.83 AND Patient-wise false positive (FP) ≤ 2.1 | Met |
| Quantitative Evaluation of Autosegmentation of Glioblastoma | Sensitivity ≥ 0.80 AND DSC ≥ 0.76 | Met |
| Quantitative and Qualitative Evaluation of Automatic Treatment Plans Generations | Quantitative: effectiveness difference (%) in DVH achieved goals between manual plans and automatic plans ≤ 5% AND Qualitative: % of clinical acceptable automatic plans ≥ 93% after expert review. | Met |
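The lesion-wise brain-metastasis metrics in this table can be sketched with connected-component matching. The overlap rule (any shared voxel) and the masks below are illustrative assumptions; the 510(k) summary does not describe the exact matching rule used.

```python
# Minimal sketch of lesion-wise sensitivity, precision, and FP count (illustrative only).
import numpy as np
from scipy import ndimage

def lesion_stats(pred: np.ndarray, truth: np.ndarray):
    """Match predicted and true lesions by any voxel overlap."""
    pred_lab, n_pred = ndimage.label(pred.astype(bool))
    true_lab, n_true = ndimage.label(truth.astype(bool))
    # A true lesion is detected if any predicted voxel overlaps it.
    detected = sum(1 for i in range(1, n_true + 1)
                   if pred[true_lab == i].any())
    # A predicted lesion is a true positive if it touches any true lesion.
    tp_pred = sum(1 for j in range(1, n_pred + 1)
                  if truth[pred_lab == j].any())
    sens = detected / n_true if n_true else 1.0
    prec = tp_pred / n_pred if n_pred else 1.0
    fp = n_pred - tp_pred  # false-positive lesions for this patient
    return sens, prec, fp

# Toy example: one predicted lesion fully inside one true lesion.
pred = np.zeros((32, 32, 32), bool); pred[5:8, 5:8, 5:8] = True
truth = np.zeros((32, 32, 32), bool); truth[5:9, 5:9, 5:9] = True
print(lesion_stats(pred, truth))  # compare against: sensitivity ≥ 0.86, precision ≥ 0.70
```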
2. Sample Sizes Used for the Test Set and the Data Provenance:
- Non-regression Testing (Autosegmentation of ORs): Minimum sample size of 24 patients.
- Qualitative Evaluation (Autosegmentation of ORs): Minimum sample size of 18 patients.
- Quantitative Evaluation (Autosegmentation of ORs): Minimum sample size of 24 patients.
- Inter-expert Variability Evaluation (Autosegmentation of ORs): Minimum sample size of 13 patients.
- Quantitative Evaluation (Brain Metastasis, MR images): Minimum sample size of 51 patients.
- Quantitative Evaluation (Glioblastoma, MR images): Minimum sample size of 43 patients.
- Quantitative and Qualitative Evaluation (Automatic Treatment Plans): Minimum sample size of 20 patients.
Data Provenance: The document states, "All validation tests were carried out using datasets representative of the worldwide population receiving radiotherapy treatments." It does not specify the country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:
The document refers to "medical experts" or "clinicians" for establishing ground truth and performing evaluations.
- For the non-regression testing of autosegmentation, "manual contours performed by medical experts" were used.
- For qualitative evaluation of autosegmentation, "medical experts" performed the qualitative evaluation.
- For inter-expert variability evaluation of autosegmentation, "two independent medical experts" were asked to contour the same images.
- For brain metastasis and glioblastoma segmentation, "contours provided by medical experts" were used for comparison.
- For the evaluation of automatic treatment plans, "medical experts" determined the clinical acceptability.
The specific number of experts beyond "two independent" for inter-expert variability is not consistently provided, nor are their exact qualifications (e.g., specific specialties like "radiation oncologist" or years of experience). However, the stated users of the device include "trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process," implying these are the types of professionals who would serve as experts.
4. Adjudication Method for the Test Set:
- For the inter-expert variability test, it involved comparing contours between two independent medical experts and with the software's contours. This implies a comparison rather than an explicit formal adjudication method (like 2+1 voting).
- For other segmentation evaluations, the ground truth was "manual contours performed by medical experts" or "contours provided by medical experts." It's not specified if these were consensus readings, or if an adjudication method was used if multiple experts contributed to a single ground truth contour for a case.
- For the automatic treatment plan qualitative evaluation, "expert review" is mentioned, but the number of reviewers or their adjudication process is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
The document describes studies that evaluate the standalone performance of the AI for segmentation and treatment planning, and how its performance compares to expert-generated contours/plans or inter-expert variability. It does not explicitly describe an MRMC comparative effectiveness study designed to measure the improvement of human readers with AI assistance versus without AI assistance. The focus is on the AI's performance relative to expert-defined ground truths or benchmarks.
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) Performance Study was done:
Yes, the studies are largely focused on standalone algorithm performance.
- The "Non-regression testing," "Quantitative evaluation," and "Inter-expert variability evaluation" of autosegmentation explicitly compare the software's generated contours (algorithm only) against manual contours or inter-expert contours.
- The "Quantitative evaluation of autosegmentation of Brain metastasis" and "Glioblastoma" assess the algorithm's performance (sensitivity, precision, DSC, FP) against expert-provided contours.
- For "Automatic Treatment Plan Generations," the quantitative evaluation compares the algorithm's plans to manual plans, and the qualitative evaluation assesses the acceptance of the automatic plans by experts.
7. The Type of Ground Truth Used:
The primary ground truth relied upon in these studies is:
- Expert Consensus/Manual Contours: This is repeatedly stated as "manual contours performed by medical experts" or "contours provided by medical experts."
- Inter-expert Variability: For one specific study, the variability between two independent experts was used as a benchmark for comparison.
- Manual Treatment Plans: For the treatment plan evaluation, manual plans served as a benchmark for quantitative comparison.
No mention of pathology or outcomes data as ground truth is provided.
8. The Sample Size for the Training Set:
The document does not specify the sample size for the training set. It only mentions the training of the algorithm (e.g., "retraining or algorithm improvement").
9. How the Ground Truth for the Training Set Was Established:
The document does not explicitly describe how the ground truth for the training set was established. It only states that the device uses "deep-learning based automatic segmentation," implying that it would have been trained on curated data with established ground truth, likely also generated by medical experts, but the specifics are not detailed in this excerpt.