ART-Plan is software designed to assist in the contouring of target anatomical regions on 3D images of cancer patients for whom radiotherapy treatment has been planned.
The SmartFuse module allows the user to register combinations of anatomical and functional images and display them in fused and non-fused views to facilitate the comparison and delineation of image data.
The images created with rigid or elastic registrations require verification, potential modifications, and then validation by a trained user with professional qualifications in anatomy and radiotherapy.
With the Annotate module, users can manually and semi-automatically edit the contours of regions of interest. The module can also automatically generate, based on medical practice, the contours of organs at risk and healthy lymph nodes on CT images.
The contours created automatically, semi-automatically, or manually require verification, potential modifications, and then validation by a trained user with professional qualifications in anatomy and radiotherapy.
The device is intended to be used in a clinical setting, by trained professionals only.
The ART-Plan application comprises two key modules, SmartFuse and Annotate, which allow the user to display and visualize 3D multi-modal medical image data. The user may process, render, review, store, display, and distribute DICOM 3.0-compliant datasets within the system and/or across computer networks. Supported modalities include static and gated CT (computerized tomography), PET (positron emission tomography), and MR (magnetic resonance).
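The summary does not name the I/O stack used by ART-Plan. As an illustration only, a minimal sketch of how a DICOM 3.0 CT series might be loaded and converted for display, assuming the open-source pydicom and numpy packages:

```python
# Minimal sketch: loading a DICOM CT series for display.
# pydicom/numpy are assumptions; ART-Plan's actual I/O stack is not disclosed.
from pathlib import Path

import numpy as np
import pydicom

def load_ct_series(series_dir: str) -> np.ndarray:
    """Read a directory of DICOM slices, keep CT images, and stack them by position."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    ct = [s for s in slices if s.Modality == "CT"]
    # Sort along the patient axis using the z component of ImagePositionPatient.
    ct.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Convert stored pixel values to Hounsfield units via the rescale tags.
    volume = np.stack(
        [s.pixel_array * float(s.RescaleSlope) + float(s.RescaleIntercept) for s in ct]
    )
    return volume  # shape: (slices, rows, cols)
```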
The following is an overview of the product in terms of inputs/outputs, functionalities, and integration within the current clinical workflow for radiation therapy planning.
The ART-Plan technical functionalities claimed by TheraPanacea are the following:
- Proposing automatic solutions to the user, such as automatic delineation and automatic multimodal image fusion, toward improving standardization of processes and performance and reducing tedious, time-consuming user involvement.
- Offering the user a set of tools for semi-automatic delineation and semi-automatic registration, allowing manual modification/editing of automatically generated structures, adding/removing new/undesired structures, or imposing user-provided correspondence constraints on the fusion of multimodal images.
- Presenting to the user a set of visualization methods for the delineated structures and registration fusion maps.
- Saving the delineated structures and fusion results for use in the dosimetry process.
- Enabling rigid and deformable registration of patient image sets to combine information contained in the same or different modalities (see the sketch after this list).
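The summary does not disclose ART-Plan's registration engine. Purely as a hedged illustration, a rigid multimodal registration along these lines can be sketched with the open-source SimpleITK toolkit; all parameter choices below are assumptions, not values from the submission:

```python
# Illustrative rigid registration of two volumes with SimpleITK.
# Library and parameters are assumptions; they are not from the 510(k) summary.
import SimpleITK as sitk

def rigid_register(fixed: sitk.Image, moving: sitk.Image) -> sitk.Transform:
    """Estimate a rigid (rotation + translation) transform aligning moving to fixed."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    # Mutual information handles differing intensity scales across modalities.
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200
    )
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY,
    )
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(fixed, moving)

# The resulting transform can then resample `moving` onto the grid of `fixed`
# for a fused display:
#   resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```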
Here's an analysis of the acceptance criteria and supporting studies for the ART-Plan device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document details various performance tests but does not explicitly state quantitative "acceptance criteria" alongside "reported device performance" in a structured table. Instead, it describes tests performed and their general outcome ("Passed"). The "results" column indicates whether the device met the implicit expectations for each test.
Therefore, I've created a table summarizing the tests described and their reported outcomes, which implicitly serve as the device meeting performance expectations:
| Test Name | Test Description | Reported Device Performance (Implicit Acceptance) |
|---|---|---|
| Usability Testing | Assessment for compliance with IEC 62366. | Passed |
| Autosegmentation performances (European data) | Study gathering the results of 3 tests of automatic segmentation performance on European data. | Passed |
| Autosegmentation performances according to AAPM requirements | Demonstrated that the auto-segmentation algorithm of the Annotate module provides acceptable contours for the concerned structures on an image of a patient. | Passed |
| Autosegmentation performances against MIM | Demonstrated that the auto-segmentation algorithm of the Annotate module provides acceptable contours for the concerned structures on an image of a patient. | Passed |
| Qualitative validation of autosegmentation performances | Demonstrated that the auto-segmentation algorithm of the Annotate module provides acceptable contours for the concerned structures on an image of a patient. | Passed |
| External Contour performances according to AAPM requirements | Demonstrated that the External Contour Automatic Segmentation algorithm of the Annotate module provides acceptable contours for the patient's body on an image of a patient. | Passed |
| Fusion performances according to AAPM recommendations | Evaluated the quality of the rigid and deformable registration tools of the SmartFuse module on retrospective intra-patient images and inter-patient images of different modalities, to ensure the safety of the device for clinical use. | Passed |
| Registration performances on POPI-model | Evaluated the quality of the deformable registration tools of the SmartFuse module on intra-patient CT images. Testing was conducted according to the POPI-model protocol on the corresponding public data. | Passed |
| Autosegmentation performances on US data | Demonstrated that the autosegmentation algorithm of the Annotate module provides clinically acceptable contours for the concerned structures when applied to US patients. | Passed |
| Pilot study for sample size estimation - literature review | Pilot study estimating a suitable dataset sample size for performance testing, based on state-of-the-art studies in image registration and segmentation. The literature review covered the most cited articles in the field of medical vision. | Passed |
| System Verification and Validation Testing | Verification and validation testing performed to verify the ART-Plan software, following the FDA guidance "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The software was considered a "major" level of concern due to the potential for serious injury or death from failure or misapplication. | Passed |
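The summary reports only pass/fail outcomes. In the auto-segmentation literature, quantitative geometric metrics such as the Dice similarity coefficient are standard for judging contour acceptability; the metric below is illustrative and is not taken from the submission:

```python
# Minimal sketch: Dice similarity coefficient between an automatic and a
# reference contour mask. The 510(k) summary does not state which
# quantitative criteria were actually used.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for two binary masks of equal shape."""
    auto = auto_mask.astype(bool)
    ref = ref_mask.astype(bool)
    denom = auto.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: vacuous perfect agreement
    return float(2.0 * np.logical_and(auto, ref).sum() / denom)
```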
2. Sample Size Used for the Test Set and Data Provenance
- Autosegmentation performances (European data): The document mentions "3 tests performed on the automatic segmentation performances on European data." No specific sample size (number of patients or images) is given for these tests. The provenance is explicitly stated as European data. The retrospective/prospective nature is not specified, but typically such performance evaluations on existing data are retrospective.
- Autosegmentation performances on US data: This test explicitly states that the algorithm was applied to US patients. No specific sample size (number of patients or images) is provided. The retrospective/prospective nature is not specified.
- Fusion performances according to AAPM recommendations & Registration performances on POPI-model: These studies evaluated the quality of rigid and deformable registration. No specific sample size (number of patients or images) is provided for either test, although the POPI-model test indicates "corresponding public data." The provenance for the AAPM fusion test includes "retrospective intra-patient images and inter-patient images," suggesting retrospective data. A sketch of the landmark-based metric typically used in POPI-model evaluations follows below.
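For context, the POPI-model is a public 4D CT thorax dataset with expert-identified anatomical landmarks, and registration quality on it is conventionally scored as target registration error (TRE), the residual distance between transformed and reference landmarks. A minimal sketch; the submission does not disclose its exact scoring:

```python
# Minimal sketch: target registration error (TRE) over paired landmarks, as
# used in POPI-model-style evaluations. ART-Plan's actual test protocol is
# not disclosed in the summary.
import numpy as np

def target_registration_error(
    warped_landmarks: np.ndarray,     # (N, 3) landmark positions after registration, in mm
    reference_landmarks: np.ndarray,  # (N, 3) expert-identified reference positions, in mm
) -> tuple[float, float]:
    """Return mean and max Euclidean landmark error in millimetres."""
    errors = np.linalg.norm(warped_landmarks - reference_landmarks, axis=1)
    return float(errors.mean()), float(errors.max())
```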
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the number of experts used to establish ground truth for the test sets or their specific qualifications (e.g., "radiologist with 10 years of experience").
It does mention that the device's output (contours created automatically, semi-automatically, or manually) "require verifications, and then the validation of a trained user with professional qualifications in anatomy and radiotherapy." This implies that qualified professionals are expected to review and validate results, which is a key part of the human-in-the-loop workflow. However, this is about clinical use, not the ground truth generation for the performance studies themselves.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth of the test sets in the performance studies.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is not explicitly mentioned or described in the provided non-clinical testing section. The studies focus on the standalone performance of the AI algorithms (autosegmentation, fusion).
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, standalone performance studies were done. The "Autosegmentation performances" tests, "External Contour performances," "Fusion performances," and "Registration performances" all evaluate the algorithm's output directly. The device description explicitly states it is a software designed "to assist the contouring process," and the contours generated "require verifications, potential modifications, and then the validation of a trained user." This confirms that the AI provides an initial output (standalone performance) that is then subject to human review.
7. The Type of Ground Truth Used
The document refers to the AI providing "acceptable contours" or "clinically acceptable contours." While it doesn't explicitly state the method of ground truth generation (e.g., expert consensus, pathology, outcomes data), the context of "acceptability" for radiotherapy planning strongly implies that the ground truth for these contoured structures would be established by expert consensus (likely radiation oncologists or dosimetrists) or highly correlated with clinical consensus/guidelines for radiotherapy planning. Pathology or outcomes data are less likely to be used directly for geometric contour ground truth.
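If expert consensus was indeed the approach, one common way to fuse several experts' contours into a single reference is voxel-wise majority voting (probabilistic estimators such as STAPLE are another option). The following is a hypothetical sketch only; the submission describes no such procedure:

```python
# Hypothetical sketch: voxel-wise majority vote over several expert contour
# masks to form a consensus reference. This only illustrates the
# expert-consensus idea; it is not described in the submission.
import numpy as np

def majority_vote(expert_masks: list[np.ndarray]) -> np.ndarray:
    """Voxels contoured by more than half of the experts form the consensus mask."""
    stacked = np.stack([m.astype(bool) for m in expert_masks])
    return stacked.sum(axis=0) > (len(expert_masks) / 2)
```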
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set of the AI models.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).