K Number
K234068
Manufacturer
Date Cleared
2024-04-22

(122 days)

Product Code
Regulation Number
892.5050
Panel
RA
Reference & Predicate Devices
Intended Use

ART-Plan's indicated target population is cancer patients for whom radiotherapy treatment has been prescribed. Within this population, the device may be used for any patient for whom relevant modality imaging data is available.

ART-Plan is not intended to be used for patients less than 18 years of age.

The indicated users are trained medical professionals including, but not limited to, radiotherapists, radiation oncologists, medical physicists, dosimetrists and medical professionals involved in the radiation therapy process.

The indicated use environments include, but are not limited to, hospitals, clinics, and any health facility involved in radiation therapy.

Device Description

ART-Plan is a software platform that allows users with an account to contour regions of interest on 3D images, perform multi-modal registration of images, and support the decision on whether replanning is needed based on contours and doses computed on daily images. It includes several modules:

  • Home,
  • Annotate,
  • SmartFuse,
  • AdaptBox,
  • Administration,
  • About

ART-Plan organizes work by project. The user must create a "project" entity on the Patient page and associate it with a reference volume (preferably the positioning CT or positioning MR) in order to contour that volume in the Annotate module or to use it as the reference for comparison with CBCT images in the AdaptBox module. Several projects can be created for a given patient.

The Home module allows the user to search for a patient already present in the software's database, to import a patient from the facility's imaging servers or another external source, and to manage that patient's projects.

The Annotate module allows the user to contour regions of interest on the reference volume. It also allows generation of pseudo-CTs from MRI images. Users are able to visualize, evaluate and modify the HU values of the associated structures on the pseudo-CT, if needed. After validation, manual and automatic contours can be generated on the pseudo-CT images. Registration can also be performed using the pseudo-CT either as a target or source image. All results can be exported upon approval.

The SmartFuse module allows the user to fuse the primary volume of a project with secondary volumes.

The AdaptBox module helps the user decide whether replanning is necessary. For this purpose, the module allows the user to generate a pseudo-CT from a CBCT image, auto-delineate regions of interest on the pseudo-CT, compute the dose on both the planning CT and the pseudo-CT, and then determine whether replanning is needed by comparing volume and dose metrics computed on both images over the course of the treatment. Those metrics are defined by the user.
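
The replanning decision relies on user-defined volume and dose metrics. As a rough illustration of that kind of comparison, the Python sketch below flags structures whose metrics drift beyond a tolerance between the planning CT and a daily pseudo-CT. The dataclass, metric names, and tolerances are hypothetical assumptions for illustration only and do not represent ART-Plan's actual decision logic.

```python
# Illustrative sketch only: compares user-defined volume and dose metrics
# between the planning CT and a daily pseudo-CT to flag a possible replan.
# Metric names and thresholds are hypothetical, not ART-Plan's actual logic.
from dataclasses import dataclass


@dataclass
class StructureMetrics:
    volume_cc: float   # contoured volume in cubic centimetres
    d95_gy: float      # dose received by 95% of the volume (Gy)
    dmean_gy: float    # mean dose (Gy)


def relative_change(reference: float, daily: float) -> float:
    """Relative change of the daily value with respect to the reference."""
    return (daily - reference) / reference if reference else 0.0


def needs_replanning(planning: dict[str, StructureMetrics],
                     daily: dict[str, StructureMetrics],
                     volume_tol: float = 0.10,
                     dose_tol: float = 0.05) -> list[str]:
    """Return the structures whose volume or dose metrics exceed tolerance."""
    flagged = []
    for name, ref in planning.items():
        cur = daily.get(name)
        if cur is None:
            continue
        if (abs(relative_change(ref.volume_cc, cur.volume_cc)) > volume_tol
                or abs(relative_change(ref.d95_gy, cur.d95_gy)) > dose_tol
                or abs(relative_change(ref.dmean_gy, cur.dmean_gy)) > dose_tol):
            flagged.append(name)
    return flagged


# Example: a shrinking target whose volume and D95 both exceed tolerance.
planning = {"PTV": StructureMetrics(volume_cc=120.0, d95_gy=58.0, dmean_gy=60.0)}
daily = {"PTV": StructureMetrics(volume_cc=104.0, d95_gy=54.0, dmean_gy=59.0)}
print(needs_replanning(planning, daily))  # ['PTV']
```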

The Administration module allows users with specific rights to manage the platform's usage parameters. Some more restricted rights are also accessible in the drop-down menu linked to the user account through the Settings menu.

The About module allows the user to obtain information about the software and its use, as well as to contact TheraPanacea.

AI/ML Overview

Here's a summary of the acceptance criteria and study details for ART-Plan (v2.2.0), based on the provided text:

Acceptance Criteria and Reported Device Performance

Note: The provided text lists multiple acceptance criteria for different aspects of the device (auto-segmentation, synthetic-CT generation, dose engine validation), and not all reported performance metrics are explicitly linked one-for-one to a single acceptance criterion in a consolidated table within the document. The table below attempts to synthesize the acceptance criteria and the general statement of "all tests passed their respective acceptance criteria" for the reported performance.

| Test Type / Metric | Acceptance Criterion | Reported Device Performance |
| --- | --- | --- |
| Auto-segmentation (quantitative, DSC) | DSC (mean) ≥ 0.8 (AAPM) OR DSC (mean) ≥ 0.54 OR DSC (mean) ≥ mean inter-expert DSC + 5% | All organs included in the model passed at least one acceptance criterion; all tests passed their respective acceptance criteria. |
| Auto-segmentation (quantitative, HD95) | HD95 (mean) ≤ 5.75 mm | All tests passed their respective acceptance criteria. |
| Auto-segmentation (qualitative) | A+B ≥ 85% (A: acceptable without modification; B: acceptable with minor modification) | All tests passed their respective acceptance criteria. |
| Auto-segmentation (non-regression) | Mean DSC must not decrease by more than 5% relative error | All tests passed their respective acceptance criteria. |
| Auto-segmentation (US vs. non-US data) | Mean DSC (US) ≥ mean DSC (non-US) AND/OR mean HD95 (US) ≤ mean HD95 (non-US) | All tests passed their respective acceptance criteria. |
| Contour propagation (qualitative) | A+B ≥ 85% (deformable) / ≥ 50% (rigid) | All tests passed their respective acceptance criteria. |
| Synthetic-CT dosimetric validation | DVH parameters (PTV): 76.7%; median gamma index 2%/2 mm ≥ 92%; median gamma index 3%/3 mm ≥ 93.57% | All tests passed their respective acceptance criteria. |
| Synthetic-CT geometric/anatomical validation | Jacobian determinant = 1 ± 5% | All tests passed their respective acceptance criteria. |
| Dose engine validation | Relative differences on DVH parameters (PTV/OARs) ≤ 4.4% (lungs ≤ 24.4%); median gamma index 2%/2 mm ≥ 86.3%; median gamma index 3%/3 mm ≥ 91.75% | All tests passed their respective acceptance criteria. |
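
For reference, the two quantitative auto-segmentation metrics in the table, DSC and HD95, can be computed from binary masks roughly as follows. This is a minimal sketch using NumPy and SciPy; the function names, the surface extraction, and the voxel-spacing handling are illustrative assumptions and do not represent the sponsor's validation code.

```python
# Minimal illustrative sketch of the DSC and HD95 metrics referenced above;
# not the sponsor's validation code. Assumes binary masks on the same voxel
# grid and a caller-supplied voxel spacing in millimetres.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0


def _surface_distances(a: np.ndarray, b: np.ndarray, spacing) -> np.ndarray:
    """Distances (mm) from the surface voxels of mask `a` to the surface of `b`."""
    a, b = a.astype(bool), b.astype(bool)
    a_surface = a & ~binary_erosion(a)   # boundary voxels of a
    b_surface = b & ~binary_erosion(b)   # boundary voxels of b
    # Euclidean distance of every voxel to the nearest surface voxel of b.
    dist_to_b = distance_transform_edt(~b_surface, sampling=spacing)
    return dist_to_b[a_surface]


def hd95(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance (HD95), in mm."""
    d = np.concatenate([_surface_distances(pred, ref, spacing),
                        _surface_distances(ref, pred, spacing)])
    return float(np.percentile(d, 95))
```

The two criteria are complementary: the DSC threshold rewards volumetric overlap with the expert contour, while the HD95 threshold bounds the near-worst-case surface disagreement in millimetres.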

Study Details

  1. Sample size used for the test set and the data provenance:

    • Auto-segmentation (Quantitative & Non-regression): Minimum sample size of 17 patients per anatomical region (where applicable).
    • Auto-segmentation (Qualitative): Minimum sample size of 15 patients per anatomical region.
    • Auto-segmentation (US patient data performance comparison): Minimum sample size of 17 patients.
    • Contour Propagation: Minimum sample size of 15 patients.
    • Synthetic-CT Dosimetric Validation: 19 patients per supported anatomy.
    • Synthetic-CT Geometric and Anatomic Validation: 19 patients per supported anatomy.
    • Dose Engine Validation: 45 patients per supported anatomy.
    • Data Provenance: "datasets representative of the worldwide population receiving radiotherapy treatments." (No specific countries or retrospective/prospective information is given, beyond "US patient data" for one specific comparison.)
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The term "medical experts" is used generally. No specific number of experts or detailed qualifications (e.g., years of experience, subspecialty certification) are provided for the ground truth establishment.
  3. Adjudication method for the test set:

    • Not explicitly stated. The ground truth seems to be established by "manual contours performed by medical experts," implying a single expert creation or a consensus if multiple experts were involved, but the method for consensus (e.g., 2+1, 3+1) is not detailed.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No MRMC comparative effectiveness study involving human readers improving with AI assistance is described. The studies focus on the performance of the AI algorithm (auto-segmentation, synthetic-CT generation, dose calculation) against expert-generated ground truth or established clinical benchmarks.
  5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Yes, the described studies are primarily standalone performance evaluations of the ART-Plan (v2.2.0) algorithms. This includes auto-segmentation, synthetic-CT generation, and dose engine validation, all assessed against a ground truth. While there are "qualitative evaluations" by clinicians, these assess the output of the auto-segmentation or contour propagation, not a measure of human performance with or without the device.
  6. The type of ground truth used:

    • Expert Consensus: "manual contours performed by medical experts" is the primary ground truth for auto-segmentation and contour propagation assessments.
    • Clinical Dosimetric Criteria: For dose engine validation and synthetic-CT dosimetric validation, comparisons are made against established clinical dosimetric criteria and dose distributions from the planning CT.
    • Reference Imaging: For geometric/anatomic validation of synthetic-CT, comparisons seem to be made against the original CBCT using metrics like the Jacobian determinant.
  7. The sample size for the training set:

    • Not specified in the provided text. The document focuses on the validation of the device (ART-Plan v2.2.0) and mentions "retraining or algorithm improvement" but does not give details about the training set size or composition.
  8. How the ground truth for the training set was established:

    • Not specified in the provided text.

§ 892.5050 Medical charged-particle radiation therapy system.

(a) Identification. A medical charged-particle radiation therapy system is a device that produces by acceleration high energy charged particles (e.g., electrons and protons) intended for use in radiation therapy. This generic type of device may include signal analysis and display equipment, patient and equipment supports, treatment planning computer programs, component parts, and accessories.

(b) Classification. Class II. When intended for use as a quality control system, the film dosimetry system (film scanning system) included as an accessory to the device described in paragraph (a) of this section, is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.