
510(k) Data Aggregation

    K Number: K253091
    Manufacturer:
    Date Cleared: 2025-12-23 (91 days)
    Product Code:
    Regulation Number: 892.5050
    Age Range: 18 - 120

    Reference & Predicate Devices
    Predicate For: N/A

    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Pediatric, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    ART-Plan+'s indicated target population is cancer patients for whom radiotherapy treatment has been prescribed. Within this population, it may be used for any patient for whom relevant modality imaging data is available.

    ART-Plan+ includes several modules:

    SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.

    Annotate, which allows automatic generation of contours for organs at risk, lymph nodes, and tumors, based on medical practices, on medical images such as CT and MR images.

    AdaptBox, which allows generation of synthetic-CT from CBCT images, dose computation on CT images for external-beam irradiation with photon beams, and assisted CBCT-based off-line adaptation decision-making for the following anatomies:

    • Head & Neck
    • Breast / Thorax
    • Pelvis (male)

    ART-Plan+ is not intended to be used for patients less than 18 years of age.

    Device Description

    ART-Plan is a software platform that allows the user to contour regions of interest on 3D images, to generate an automatic treatment plan, and to help decide whether replanning is needed based on contours and doses computed on daily images. It includes several modules:

    Home: tasks and patient monitoring

    Annotate including TumorBox: contouring of regions of interest

    Smartplan: creation of an automatic treatment plan based on a planning CT and an RTSS (RT Structure Set)

    AdaptBox: a helper tool for deciding whether replanning is necessary. The module allows the user to generate a synthetic-CT from a CBCT image, auto-delineate regions of interest on the synthetic-CT, compute the dose on both the planning CT and the synthetic-CT, and then determine whether replanning is needed by comparing volume and dose metrics computed on both images over the course of the treatment. Those metrics are defined by the user.

    Administration and Settings: preferences management, user account management, etc.

    About: information about the software and its use, as well as contact details.

    Annotate, TumorBox, Smartplan, and AdaptBox are partially based on a batch mode, which allows the user to launch autocontouring and autoplanning operations without using the interface or the viewers. In this way, the software integrates fully into the radiotherapy workflow and offers the user maximum flexibility.

    Annotate, which allows automatic generation of contours for organs at risk (OARs), lymph nodes (LNs), and tumors, based on medical practices, on medical images such as CT and MR images:

    OARs and LNs:

    • Head and neck (on CT images)
    • Thorax/breast (on CT images)
    • Abdomen (on CT images; male on MR images)
    • Pelvis male (on CT and MR images)
    • Pelvis female (on CT images)
    • Brain (on CT images and MR images)

    Tumor:

    • Brain (on MR images)

    SmartPlan, which allows automatic generation of a radiotherapy treatment plan that users import into their own Treatment Planning System (TPS) for dose calculation, review, and approval. This module is available only for supported prostate prescriptions.

    AdaptBox, which allows generation of synthetic-CT from CBCT images, dose computation on CT images for external-beam irradiation with photon beams, and assisted CBCT-based off-line adaptation decision-making for the following anatomies:

    • Head & Neck
    • Breast / Thorax
    • Pelvis (male)
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for ART-Plan+ (v3.1.0) based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    The ART-Plan+ device consists of three main modules: Annotate, SmartPlan, and AdaptBox. Each module has its own set of acceptance criteria.

    Note: The document provides acceptance criteria and implicitly states that "all tests passed their respective acceptance criteria, thus showing ART-Plan + v3.1.0 clinical acceptability." However, it does not provide specific numerical reported device performance for each criterion. It only states that the criteria were met.

    Annotate Module Performance Criteria

    Acceptance criteria (the summary reports all as passed, without per-criterion numbers):

    • Non-regression testing (on CT/MR for existing structures): Mean DSC should not regress between the current and last validated version of Annotate beyond a maximum tolerance margin of -5% relative error.
    • Non-regression testing (on synthetic-CT from CBCT for existing structures): Mean DSC (sCT) should be equivalent to Mean DSC (CT) within a maximum tolerance margin of -5% relative error.
    • Qualitative evaluation (for new structures, or structures failing non-regression): clinicians' qualitative evaluation of the auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications/corrections (B), with A+B % ≥ 85%.
    • Quantitative evaluation (for new structures): Mean DSC (Annotate) ≥ 0.8.
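The DSC and relative non-regression checks above reduce to a few lines of NumPy. This is an illustrative sketch using the summary's stated thresholds; `dice` and `passes_non_regression` are hypothetical helper names, not code from the submission.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def passes_non_regression(dsc_new: float, dsc_ref: float,
                          margin: float = -0.05) -> bool:
    """New mean DSC must not fall more than 5% (relative) below the reference."""
    return (dsc_new - dsc_ref) / dsc_ref >= margin

# Hypothetical masks: a 2x2 reference fully contained in a 2x3 prediction.
ref = np.zeros((4, 4), bool); ref[1:3, 1:3] = True
pred = np.zeros((4, 4), bool); pred[1:3, 1:4] = True
print(round(dice(ref, pred), 3))  # 0.8 -- exactly the quantitative threshold
```

With a reference mean DSC of 0.80, a new version scoring 0.77 would still pass (relative change -3.75%), while 0.70 would fail (-12.5%).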

    SmartPlan Module Performance Criteria

    Acceptance criteria (the summary reports both as passed):

    • Quantitative evaluation: effectiveness difference (%) in DVH achieved goals between manual plans and automatic plans ≤ 5%.
    • Qualitative evaluation: percentage of clinically acceptable automatic plans ≥ 93% after expert review.
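The two SmartPlan thresholds amount to simple arithmetic on percentages. The sketch below uses hypothetical numbers and helper names (`smartplan_criteria_met`) that do not appear in the summary.

```python
def effectiveness_difference(manual_pct: float, auto_pct: float) -> float:
    """Difference (percentage points) in DVH goals achieved:
    manual plans vs. automatic plans."""
    return manual_pct - auto_pct

def smartplan_criteria_met(manual_pct: float, auto_pct: float,
                           acceptable_plans_pct: float,
                           max_diff: float = 5.0,
                           min_acceptable: float = 93.0) -> bool:
    """Both SmartPlan acceptance criteria from the 510(k) summary:
    effectiveness difference <= 5% and >= 93% clinically acceptable plans."""
    return (effectiveness_difference(manual_pct, auto_pct) <= max_diff
            and acceptable_plans_pct >= min_acceptable)

# Hypothetical run: manual plans meet 95% of DVH goals, automatic plans 92%,
# and 94% of automatic plans are judged clinically acceptable on review.
print(smartplan_criteria_met(95.0, 92.0, 94.0))  # True
```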

    AdaptBox Module Performance Criteria

    Acceptance criteria (the summary reports all as passed):

    • Dosimetric evaluations (synthetic-CT): median 2%/2mm ≥ 92%.
    • Dosimetric evaluations (synthetic-CT): median 3%/3mm ≥ 93.57%.
    • Dosimetric evaluations (synthetic-CT): a median dose deviation (synthetic-CT compared to standard CT) of ≤ 2% in ≥ 76.7% of patients.
    • Quantitative validation (synthetic-CT): Jacobian determinant = 1 ± 5%.
    • Qualitative validation (deformation of planning CT towards CBCT): clinicians' qualitative evaluation of the overall registration output, with A+B % ≥ 85%.
    • Qualitative validation (deformable propagation of contours): clinicians' qualitative evaluation of the propagated contours, with A+B % ≥ 85%.
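The Jacobian-determinant criterion (= 1 ± 5%) bounds local volume change in the deformable registration behind the synthetic-CT. A minimal NumPy sketch for a dense displacement field might look like this; `jacobian_determinant` is an illustrative helper (not ART-Plan+ code), and the tolerance is checked voxel-wise here since the summary does not state how the determinant is aggregated.

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray,
                         spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Per-voxel Jacobian determinant of the deformation x -> x + disp(x).

    disp has shape (3, Z, Y, X): displacement along each axis at each voxel.
    """
    # grads[i, j] = d(disp_i)/d(axis_j), shape (3, 3, Z, Y, X)
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)
    jac = grads + np.eye(3)[:, :, None, None, None]  # identity + gradient
    jac = np.moveaxis(jac, (0, 1), (-2, -1))         # (Z, Y, X, 3, 3)
    return np.linalg.det(jac)

def within_tolerance(det: np.ndarray, tol: float = 0.05) -> bool:
    """Check the 1 +/- 5% acceptance band voxel-wise."""
    return bool(np.all(np.abs(det - 1.0) <= tol))

# A zero displacement field is the identity transform: determinant 1 everywhere.
det = jacobian_determinant(np.zeros((3, 8, 8, 8)))
print(within_tolerance(det))  # True
```

A determinant near 1 means the registration locally preserves volume; values outside the band flag implausible expansion or compression.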

    Study Details

    1. Sample Sizes for the Test Set and Data Provenance

    The document describes the test sets for each module and the overall data provenance:

    • Overall Test Dataset: Total of 2040 patients (1413 EU patients and 627 US patients), representing 31% US data.
    • Annotate Module: Total of 1844 patients (1254 EU patients and 590 US patients).
      • Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data (31% overall).
      • Minimum sample sizes for specific tests:
        • Non-regression testing (autosegmentation on CT/MR, or synthetic-CT from CBCT): Minimum 24 patients.
        • Qualitative evaluation of autosegmentation: Minimum 18 patients.
        • Quantitative evaluation of autosegmentation: Minimum 24 patients.
    • SmartPlan Module: Total of 35 patients (25 EU patients and 10 US patients).
      • Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data.
      • Minimum sample size for quantitative and qualitative evaluation: Minimum 20 patients.
    • AdaptBox Module: Total of 161 patients (134 EU patients and 27 US patients).
      • Provenance: Retrospective, worldwide population receiving radiotherapy treatments, with a specific effort to include US data. An independent dataset composed only of US patients was also used for quantitative validation of synthetic CT.
      • Minimum sample sizes for specific tests:
        • Dosimetric evaluations of synthetic CT: Minimum 15 patients.
        • Quantitative validation of synthetic CT: Minimum 15 patients (plus an independent US dataset).
        • Qualitative validation of deformation of planning CT: Minimum 10 patients.
        • Qualitative validation of deformable propagation of contours: Minimum 10 patients.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document refers to "medical experts" and "clinicians" for qualitative evaluations and for performing manual contours. However, it does not specify the exact number of experts used for ground truth establishment for each test set or their specific qualifications (e.g., "Radiologist with 10 years of experience"). It only mentions that evaluations were performed by medical experts.

    3. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method like "2+1" or "3+1" for creating the ground truth or resolving disagreements among experts. For qualitative evaluations, it describes a rating scale (A, B, C) and acceptance based on a percentage of A+B ratings, implying individual expert review results were aggregated. For quantitative validations, ground truth seems to be established by comparison with "manual contours performed by medical experts" or "direct comparison with manual plans," but the process for defining these manual references is not detailed in terms of adjudication.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to evaluate how human readers improve with AI vs. without AI assistance. The evaluations focus on the standalone performance of the AI modules or the clinical acceptability of outputs generated by the AI (e.g., auto-segmentations, auto-plans, synthetic CTs).

    5. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, the studies described are primarily standalone (algorithm only) performance evaluations. The modules (Annotate, SmartPlan, AdaptBox) are tested for their ability to generate contours, treatment plans, or synthetic CTs, and these outputs are then compared against ground truth or evaluated for clinical acceptability by experts. While experts review the outputs for clinical acceptability, this is an evaluation of the algorithm's output, not a comparative study of human performance with and without AI assistance. The document states, "all the steps of the workflow where ART-Plan is involved have been tested independently," emphasizing the standalone nature of the module testing.

    6. Type of Ground Truth Used

    The ground truth varied depending on the module and test:

    • Expert Consensus / Manual Delineation:
      • For Annotate's quantitative non-regression and quantitative evaluation, the ground truth was "manual contours performed by medical experts."
      • For SmartPlan's quantitative evaluation, the ground truth was "manual plans."
    • Qualitative Expert Review:
      • For Annotate, SmartPlan, and AdaptBox qualitative evaluations, the ground truth was established by "medical experts" or "clinicians" assessing the clinical acceptability of the device's output against defined scales (A, B, C).
    • Comparison to Standard Imaging/Analysis:
      • For AdaptBox's dosimetric evaluations, the synthetic CT performance was compared to "standard CT."
      • For AdaptBox's quantitative validation of synthetic CTs, a "direct comparison of anatomy and geometry with the associated CBCT" was performed.

    7. Sample Size for the Training Set

    The document does not specify the sample size for the training set for any of the modules. It only discusses the test set sizes.

    8. How the Ground Truth for the Training Set Was Established

    The document briefly mentions "retraining or algorithm improvement" for existing structures and new structures for autosegmentation, but it does not describe how the ground truth for the training set was established. It only focuses on the validation of the new version's performance using dedicated test sets with specific ground truth methods. It implies that the underlying AI models (deep learning neural networks) were trained, but the details of that training process, including ground truth establishment, are not provided in this 510(k) summary.
