
510(k) Data Aggregation

    K Number: K242822
    Manufacturer:
    Date Cleared: 2025-02-25 (160 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices
    Reference Devices: K213628, K234068

    Intended Use

    ART-Plan+'s indicated target population is cancer patients for whom radiotherapy treatment has been prescribed; within this population, the device is indicated for any patient for whom relevant modality imaging data is available.

    ART-Plan+ includes several modules:

    • SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.

    • Annotate, which allows automatic generation of contours for organs at risk, lymph nodes, and tumors, based on medical practices, on medical images such as CT and MR images.

    ART-Plan+ is not intended to be used for patients less than 18 years of age.

    The indicated users are trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process.

    The indicated use environments include, but are not limited to, hospitals, clinics and any health facility offering radiation therapy.

    Device Description

    ART-Plan+ is a software platform for contouring regions of interest on 3D images and for providing an automatic treatment plan. It includes several modules:

    - Home: tasks

    - Annotate and TumorBox: contouring of regions of interest

    - SmartPlan: creation of an automatic treatment plan based on a planning CT and an RTSS

    - Administration and settings: preferences management, user account management, etc.

    - Institute Management: institute information, including licenses, list of users, etc.

    - About: information about the software and its use, as well as contact details.

    Annotate, TumorBox, and SmartPlan are partially based on a batch mode, which allows the user to launch autocontouring and autoplanning operations without having to use the interface or the viewers. In this way, the software is fully integrated into the radiotherapy workflow and offers the user maximum flexibility.

    ART-Plan+ offers deep-learning based automatic segmentation of OARs and LNs for the following localizations:

    - Head and neck (on CT images)

    - Thorax/breast (on CT images)

    - Abdomen (on CT images and, for male patients, MR images)

    - Pelvis male (on CT and MR images)

    - Pelvis female (on CT images)

    - Brain (on CT and MR images)

    ART-Plan+ offers deep-learning based automatic segmentation of targets for the following localizations:

    - Brain (on MR images)

    AI/ML Overview

    Based on the provided text, here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and the Reported Device Performance:

    The document describes five distinct types of evaluations with their respective acceptance criteria. While the exact "reported device performance" (i.e., the specific numerical results obtained for each metric) is not explicitly stated, the document uniformly concludes, "All validation tests were carried out using datasets representative of the worldwide population receiving radiotherapy treatments. Finally, all tests passed their respective acceptance criteria, thus showing ART-Plan + v3.0.0 clinical acceptability." This implies all reported device performances met or exceeded the criteria.

    | Study Type | Acceptance Criteria | Reported Device Performance (Implied) |
    | --- | --- | --- |
    | Non-regression testing of autosegmentation of OARs | Mean DSC should not regress between the current and last validated version of Annotate beyond a maximum tolerance margin of -5% relative error. | Met |
    | Qualitative evaluation of autosegmentation of OARs | Clinicians' qualitative evaluation rates the auto-segmentation acceptable for clinical use without modifications (A) or with minor modifications/corrections (B), with A+B ≥ 85%. | Met |
    | Quantitative evaluation of autosegmentation of OARs | Mean DSC (Annotate) ≥ 0.8 | Met |
    | Inter-expert variability evaluation of autosegmentation of OARs | Mean DSC (Annotate) ≥ mean DSC (inter-expert), with a tolerance margin of -5% relative error. | Met |
    | Quantitative evaluation of autosegmentation of brain metastasis | Lesion-wise sensitivity ≥ 0.86 AND lesion-wise precision ≥ 0.70 AND lesion-wise DSC ≥ 0.78 AND patient-wise DSC ≥ 0.83 AND patient-wise false positives (FP) ≤ 2.1 | Met |
    | Quantitative evaluation of autosegmentation of glioblastoma | Sensitivity ≥ 0.80 AND DSC ≥ 0.76 | Met |
    | Quantitative and qualitative evaluation of automatic treatment plan generation | Quantitative: effectiveness difference (%) in DVH achieved goals between manual and automatic plans ≤ 5%; Qualitative: ≥ 93% of automatic plans clinically acceptable after expert review. | Met |
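Most of these criteria are expressed in terms of the Dice similarity coefficient (DSC) and a -5% relative-error tolerance. As a point of reference only, here is a minimal sketch of how those two quantities might be computed; the function names, the handling of empty masks, and the toy data are illustrative assumptions, not details from the submission.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: assumed perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def passes_non_regression(dsc_new: float, dsc_prev: float,
                          tolerance: float = -0.05) -> bool:
    """-5% relative-error tolerance check between software versions."""
    relative_change = (dsc_new - dsc_prev) / dsc_prev
    return relative_change >= tolerance

# Toy example: two overlapping 2D masks.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels, 4 shared
print(dice_coefficient(a, b))             # 2*4/(4+6) = 0.8
print(passes_non_regression(0.82, 0.85))  # True: -3.5% is within -5%
```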

    2. Sample Sizes Used for the Test Set and the Data Provenance:

    • Non-regression Testing (Autosegmentation of OARs): Minimum sample size of 24 patients.
    • Qualitative Evaluation (Autosegmentation of OARs): Minimum sample size of 18 patients.
    • Quantitative Evaluation (Autosegmentation of OARs): Minimum sample size of 24 patients.
    • Inter-expert Variability Evaluation (Autosegmentation of OARs): Minimum sample size of 13 patients.
    • Quantitative Evaluation (Brain Metastasis, MR images): Minimum sample size of 51 patients.
    • Quantitative Evaluation (Glioblastoma, MR images): Minimum sample size of 43 patients.
    • Quantitative and Qualitative Evaluation (Automatic Treatment Plans): Minimum sample size of 20 patients.

    Data Provenance: The document states, "All validation tests were carried out using datasets representative of the worldwide population receiving radiotherapy treatments." It does not specify the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:

    The document refers to "medical experts" or "clinicians" for establishing ground truth and performing evaluations.

    • For the non-regression testing of autosegmentation, "manual contours performed by medical experts" were used.
    • For qualitative evaluation of autosegmentation, "medical experts" performed the qualitative evaluation.
    • For inter-expert variability evaluation of autosegmentation, "two independent medical experts" were asked to contour the same images.
    • For brain metastasis and glioblastoma segmentation, "contours provided by medical experts" were used for comparison.
    • For the evaluation of automatic treatment plans, "medical experts" determined the clinical acceptability.

    The specific number of experts beyond "two independent" for inter-expert variability is not consistently provided, nor are their exact qualifications (e.g., specific specialties like "radiation oncologist" or years of experience). However, the stated users of the device include "trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process," implying these are the types of professionals who would serve as experts.

    4. Adjudication Method for the Test Set:

    • The inter-expert variability test involved comparing contours between two independent medical experts, and comparing the software's contours against both. This implies a comparison rather than an explicit formal adjudication method (such as 2+1 voting).
    • For the other segmentation evaluations, the ground truth was "manual contours performed by medical experts" or "contours provided by medical experts." It is not specified whether these were consensus readings, or whether an adjudication method was used when multiple experts contributed to a single ground-truth contour for a case.
    • For the automatic treatment plan qualitative evaluation, "expert review" is mentioned, but the number of reviewers or their adjudication process is not detailed.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance:

    The document describes studies that evaluate the standalone performance of the AI for segmentation and treatment planning, and how its performance compares to expert-generated contours/plans or inter-expert variability. It does not explicitly describe an MRMC comparative effectiveness study designed to measure the improvement of human readers with AI assistance versus without AI assistance. The focus is on the AI's performance relative to expert-defined ground truths or benchmarks.

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:

    Yes, the studies are largely focused on standalone algorithm performance.

    • The "Non-regression testing," "Quantitative evaluation," and "Inter-expert variability evaluation" of autosegmentation explicitly compare the software's generated contours (algorithm only) against manual contours or inter-expert contours.
    • The "Quantitative evaluation of autosegmentation of Brain metastasis" and "Glioblastoma" assess the algorithm's performance (sensitivity, precision, DSC, FP) against expert-provided contours.
    • For "Automatic Treatment Plan Generations," the quantitative evaluation compares the algorithm's plans to manual plans, and the qualitative evaluation assesses the acceptance of the automatic plans by experts.

    7. The Type of Ground Truth Used:

    The primary ground truth relied upon in these studies is:

    • Expert Consensus/Manual Contours: This is repeatedly stated as "manual contours performed by medical experts" or "contours provided by medical experts."
    • Inter-expert Variability: For one specific study, the variability between two independent experts was used as a benchmark for comparison.
    • Manual Treatment Plans: For the treatment plan evaluation, manual plans served as a benchmark for quantitative comparison (see the sketch at the end of this section).

    No mention of pathology or outcomes data as ground truth is provided.
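For the treatment-plan criterion, "effectiveness difference (%) in DVH achieved goals" can plausibly be read as the percentage-point difference in the share of dose-volume histogram (DVH) goals met by manual versus automatic plans. A minimal sketch under that assumed reading (the function name and the example goal counts are hypothetical):

```python
def dvh_effectiveness_check(manual_goals_met: list[bool],
                            auto_goals_met: list[bool],
                            tolerance_pct: float = 5.0) -> bool:
    """Check the <=5% difference in DVH achieved goals (assumed reading).

    Each list flags whether a plan met each of its DVH goals, e.g.
    "PTV D95 >= 95% of prescription" (goal wording is illustrative).
    """
    manual_pct = 100.0 * sum(manual_goals_met) / len(manual_goals_met)
    auto_pct = 100.0 * sum(auto_goals_met) / len(auto_goals_met)
    return (manual_pct - auto_pct) <= tolerance_pct

# Toy example: manual plan meets 19/20 goals, automatic plan 18/20.
print(dvh_effectiveness_check([True] * 19 + [False],
                              [True] * 18 + [False] * 2))  # True (5.0% diff)
```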

    8. The Sample Size for the Training Set:

    The document does not specify the sample size for the training set. It only mentions the training of the algorithm (e.g., "retraining or algorithm improvement").

    9. How the Ground Truth for the Training Set Was Established:

    The document does not explicitly describe how the ground truth for the training set was established. It only states that the device uses "deep-learning based automatic segmentation," implying that it would have been trained on curated data with established ground truth, likely also generated by medical experts, but the specifics are not detailed in this excerpt.
