510(k) Data Aggregation

ART-Plan (88 days)
ART-Plan is indicated for cancer patients for whom radiation treatment has been planned. It is intended to be used by trained medical professionals including, but not limited to, radiation oncologists, dosimetrists, and medical physicists.
ART-Plan is a software application intended to display and visualize 3D multi-modal medical image data. The user may import, define, display, transform and store DICOM3.0 compliant datasets (including regions of interest structures). These images, contours and objects can subsequently be exported/distributed within the system, across computer networks and/or to radiation treatment planning systems. Supported modalities include CT, PET-CT, CBCT, 4D-CT and MR images.
ART-Plan supports AI-based contouring on CT and MR images and offers semi-automatic and manual tools for segmentation.
To help the user assess changes in image data and to obtain combined multi-modal image information, ART-Plan allows the registration of anatomical and functional images and display of fused and non-fused images to facilitate the comparison of patient image data by the user.
With ART-Plan, users are also able to generate, visualize, evaluate and modify pseudo-CT from MRI images.
The ART-Plan application comprises two key modules, SmartFuse and Annotate, which allow the user to display and visualize 3D multi-modal medical image data. The user may process, render, review, store, display and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities cover static and gated CT (computerized tomography, including CBCT and 4D-CT), PET (positron emission tomography) and MR (magnetic resonance).
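As an aside for readers unfamiliar with DICOM handling, the sketch below shows one minimal way to load a DICOM CT series into a Hounsfield-unit volume with pydicom and NumPy. The folder path and file layout are hypothetical; this is a generic illustration, not ART-Plan code.

```python
import numpy as np
import pydicom
from pathlib import Path

def load_ct_series(folder):
    """Read one CT series from `folder`, sort the slices, and return a HU volume."""
    slices = [pydicom.dcmread(p) for p in Path(folder).glob("*.dcm")]
    # Sort slices by their position along the patient z-axis.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored pixel values to Hounsfield units via the DICOM rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

hu = load_ct_series("ct_series/")   # hypothetical folder of one CT series
print(hu.shape, hu.min(), hu.max())
```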
Compared to ART-Plan v1.6.1 (primary predicate), the following features have been added in ART-Plan v1.10.0:

- an improved version of the existing automatic segmentation tool
- automatic segmentation for more anatomies and organs-at-risk
- image registration on 4D-CT and CBCT images
- automatic segmentation on MR images
- generation of synthetic CT from MR images
- a cloud-based deployment
The ART-Plan technical functionalities claimed by TheraPanacea are the following:

- Proposing automatic solutions to the user, such as automatic delineation and automatic multimodal image fusion, towards improving standardization of processes and performance and reducing tedious, time-consuming user involvement.
- Offering the user a set of tools for semi-automatic delineation and semi-automatic registration, towards manually modifying or editing automatically generated structures, adding or removing structures, or imposing user-provided correspondence constraints on the fusion of multimodal images.
- Presenting the user with a set of visualization methods for the delineated structures and registration fusion maps.
- Saving the delineated structures and fusion results for use in the dosimetry process.
- Enabling rigid and deformable registration of patient image sets to combine information contained in different or identical modalities (a minimal registration sketch follows this list).
- Allowing users to generate, visualize, evaluate and modify pseudo-CT from MRI images.
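As referenced in the registration bullet above, here is a minimal sketch of rigid, mutual-information-based multimodal registration using SimpleITK. File names and parameter values are assumptions, and this is a generic example of the technique, not TheraPanacea's implementation; deformable registration would use a different transform model (e.g., B-spline).

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)   # hypothetical files
moving = sitk.ReadImage("followup_mr.nii.gz", sitk.sitkFloat32)

# Initialize a rigid transform by aligning the geometric centers of the two images.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

rigid = reg.Execute(fixed, moving)

# Resample the moving image onto the fixed grid for fused display or contour transfer.
aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0, moving.GetPixelID())
```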
ART-Plan offers deep-learning based automatic segmentation for the following localizations (a generic inference sketch follows this list):

- head and neck (on CT images)
- thorax/breast (male and female, on CT images)
- abdomen (on CT and MR images)
- pelvis, male (on CT and MR images)
- pelvis, female (on CT images)
- brain (on CT and MR images)
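For orientation only, the sketch below shows the general shape of deep-learning organ segmentation at inference time in PyTorch: a volume goes in, per-voxel class probabilities come out, and an argmax yields one mask per organ. The tiny placeholder network, the intensity normalization, and the organ names are assumptions for illustration and bear no relation to ART-Plan's actual models.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for a real 3D segmentation network (e.g., a U-Net)."""
    def __init__(self, n_organs):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_organs + 1, 1),      # +1 output channel for background
        )

    def forward(self, x):
        return self.net(x)

def segment(hu_volume, model, organs):
    """hu_volume: (D, H, W) float tensor of Hounsfield units; returns one mask per organ."""
    x = hu_volume.clamp(-1000, 1000) / 1000.0   # crude intensity normalization (assumed)
    logits = model(x[None, None])               # add batch and channel dimensions
    labels = logits.argmax(dim=1)[0]            # per-voxel organ index
    return {name: labels == i + 1 for i, name in enumerate(organs)}

model = TinySegNet(n_organs=3).eval()
with torch.no_grad():
    masks = segment(torch.zeros(32, 64, 64), model, ["heart", "lung_left", "lung_right"])
```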
ART-Plan offers deep-learning based synthetic CT generation from MR images for the following localizations:

- pelvis, male
- brain
Here's a summary of the acceptance criteria and study details for the ART-Plan device, extracted from the provided text:
Acceptance Criteria and Device Performance
| Criterion Category | Acceptance Criteria | Reported Device Performance |
|---|---|---|
Auto-segmentation - Dice Similarity Coefficient (DSC) | DSC (mean) ≥ 0.8 (AAPM standard) OR DSC (mean) ≥ 0.54 or DSC (mean) ≥ mean(DSC inter-expert) + 5% (inter-expert variability) | Multiple tests passed demonstrating acceptable contours, exceeding AAPM standards in some cases (e.g., Abdo MRI auto-segmentation), and meeting or exceeding inter-expert variability for others (e.g., Brain MR, Pelvis MRI). For Brain MRI, initially some organs did not meet 0.8 but eventually passed with further improvements and re-evaluation against inter-expert variability. All organs for all anatomies met at least one acceptance criterion. |
Auto-segmentation - Qualitative Evaluation | Clinicians' qualitative evaluation of auto-segmentation is considered acceptable for clinical use without modifications (A) or with minor modifications/corrections (B), with A+B % ≥ 85%. | For all tested organs and anatomies, the qualitative evaluation resulted in A+B % ≥ 85%, indicating that clinicians found the contours acceptable for clinical use with minor or no modifications. For example, Pelvis Truefisp model achieved ≥ 85% A or B, and H&N Lymph nodes also met this. |
Synthetic-CT Generation | A median 2%/2mm gamma passing criteria of ≥ 95% OR A median 3%/3mm gamma passing criteria of ≥ 99.0% OR A mean dose deviation (pseudo-CT compared to standard CT) of ≤ 2% in ≥ 88% of patients. | For both pelvis and brain synthetic-CT, the performance met these acceptance criteria and demonstrated non-inferiority to previously cleared devices. |
Fusion Performance | Not explicitly stated with numerical thresholds, but evaluated qualitatively. | Both rigid and deformable fusion algorithms provided clinically acceptable results for major clinical use cases in radiotherapy workflows, receiving "Passed" in all relevant studies. |
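To make the DSC thresholds in the table concrete, here is a minimal NumPy sketch of the Dice similarity coefficient, with an illustrative criterion check that mirrors the thresholds quoted above. This is an explanatory aid only, not the validation code used in the submission.

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def meets_dsc_criterion(mean_dsc, mean_inter_expert_dsc=None):
    """Illustrative check against the thresholds quoted in the table above."""
    if mean_dsc >= 0.8:                                    # AAPM-style threshold
        return True
    if mean_inter_expert_dsc is not None:                  # inter-expert variability route
        return mean_dsc >= mean_inter_expert_dsc + 0.05
    return mean_dsc >= 0.54                                # fallback threshold quoted above
```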
Study Details
- Sample Size used for the test set and the data provenance:
  - Test Set Sample Size: The exact number of patients in the test set is not given as a single number; the document states that, for structures of a given anatomy and modality, two non-overlapping datasets were separated into test patients and training data, and that the number of test patients was "selected based on thorough literature review and statistical power."
  - Data Provenance: Real-world retrospective data, originally acquired for the treatment of cancer patients and pseudo-anonymized by the providing centers before transfer. Data was sourced from both non-US and US populations.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - Number of Experts: Varies. For some tests (e.g., Abdo MRI, Brain MRI, and Pelvis MRI auto-segmentation), at least 3 different experts were involved in the inter-expert variability calculations. The qualitative evaluations imply multiple clinicians or medical physicists.
  - Qualifications of Experts: Clinical experts and medical physicists (for validation of usability and performance tests) with an expertise level comparable to a junior US medical physicist and responsibilities in the radiotherapy clinical workflow.
- Adjudication method for the test set:
  - The document describes a "truthing process [that] includes a mix of data created by different delineators (clinical experts) and assessment of intervariability, ground truth contours provided by the centers and validated by a second expert of the center, and qualitative evaluation and validation of the contours." This suggests a multi-reader approach, potentially with consensus or an adjudicator, but a specific "2+1" or "3+1" method is not detailed. The inter-expert variability calculation implies direct comparison between multiple experts' delineations of the same cases.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:
  - A direct MRMC comparative effectiveness study comparing human readers with and without AI assistance is not described in the provided text. The studies focus on the standalone performance of the AI algorithm against established criteria (AAPM, inter-expert variability, qualitative acceptance) and on non-inferiority to other cleared devices.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
  - Yes, a standalone performance evaluation of the algorithm was done. The acceptance criteria and performance data are based entirely on the algorithm's output (e.g., DSC, gamma passing criteria, dose deviation) compared to ground truth or existing standards, together with expert qualitative assessment of the generated contours.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - The ground truth primarily involved:
    - Expert Consensus/Delineation: Contours created by different clinical experts and assessed for inter-expert variability.
    - Validated Ground Truth Contours: Contours provided by the centers and validated by a second expert from the same center.
    - Qualitative Evaluation: Clinical review and validation of contours.
    - Dosimetric Measures: For synthetic CT, comparison to standard CT dose calculations.
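The synthetic-CT dosimetric criteria quoted earlier (e.g., a 2%/2mm gamma passing rate ≥ 95%) rest on gamma analysis. The sketch below is a simplified global gamma computation in NumPy for two dose grids already resampled to a common voxel grid; clinical gamma tools add sub-voxel interpolation and proper edge handling, so treat this only as an illustration of the metric.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_pct=2.0, dta_mm=2.0, cutoff_pct=10.0):
    """Percentage of voxels with gamma <= 1 (global normalization, no interpolation).

    dose_ref, dose_eval : 3D dose arrays (Gy) on the same voxel grid.
    spacing_mm          : voxel spacing (dz, dy, dx) in millimetres.
    """
    dose_tol = dose_pct / 100.0 * dose_ref.max()               # global dose tolerance
    mask = dose_ref >= cutoff_pct / 100.0 * dose_ref.max()     # skip very low-dose voxels

    radii = [int(np.ceil(dta_mm / s)) for s in spacing_mm]     # search window in voxels
    gamma_sq = np.full(dose_ref.shape, np.inf)

    for dz in range(-radii[0], radii[0] + 1):
        for dy in range(-radii[1], radii[1] + 1):
            for dx in range(-radii[2], radii[2] + 1):
                dist_sq = ((dz * spacing_mm[0]) ** 2 +
                           (dy * spacing_mm[1]) ** 2 +
                           (dx * spacing_mm[2]) ** 2)
                if dist_sq > dta_mm ** 2:
                    continue
                # np.roll wraps at the edges; acceptable for a sketch, not for clinical use.
                shifted = np.roll(dose_eval, shift=(dz, dy, dx), axis=(0, 1, 2))
                cand = (shifted - dose_ref) ** 2 / dose_tol ** 2 + dist_sq / dta_mm ** 2
                gamma_sq = np.minimum(gamma_sq, cand)

    return 100.0 * np.mean(gamma_sq[mask] <= 1.0)

# Per-patient check in the spirit of the quoted criterion (the criterion itself applies
# to the median passing rate across patients):
# passed = gamma_pass_rate(dose_ct, dose_pseudo_ct, (2.0, 1.0, 1.0)) >= 95.0
```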
- The sample size for the training set:
  - Training Patients: 8,736 patients.
  - Training Samples (images/anatomies/structures): 299,142 samples. (One patient can have multiple images, and each image can have multiple delineated structures.)
- How the ground truth for the training set was established:
  - "The contouring guidelines followed to produce the contours were confirmed with the centers which provided the data. Our truthing process includes a mix of data created by different delineators (clinical experts) and assessment of intervariability, ground truth contours provided by the centers and validated by a second expert of the center, and qualitative evaluation and validation of the contours." This indicates that the training ground truth was established through a combination of expert delineation, internal validation by a second expert, adherence to established guidelines, and assessment of variability among experts.
syngo.via RT Image Suite (48 days)
syngo.via RT Image Suite is a 3D and 4D image visualization, multi-modality manipulation and contouring tool that helps the preparation and response assessment of treatments such as, but not limited to those performed with radiation (for example, Brachytherapy, Particle Therapy, External Beam Radiation Therapy).
It provides tools to efficiently view existing contours, create, edit, modify, copy contours of the body, such as but not limited to, skin outline, targets and organs-at-risk. It also provides functionalities to create and modify simple treatment plans. Contours, images and treatment plans can subsequently be exported to a Treatment Planning System.
The software combines the following digital image processing and visualization tools:

- Multi-modality viewing and contouring of anatomical, functional, and multi-parametric images such as, but not limited to, CT, PET, PET/CT, MRI, Linac Cone Beam CT (CBCT) images and dose distributions
- Multiplanar reconstruction (MPR) thin/thick, minimum intensity projection (MIP), volume rendering technique (VRT)
- Freehand and semi-automatic contouring of regions of interest on any orientation, including oblique
- Creation of contours on any type of image without prior assignment of a planning CT
- Manual and semi-automatic registration using rigid and deformable registration
- Supports the user in comparing, contouring, and adapting contours based on datasets acquired with different imaging modalities and at different time points
- Supports the user in comparing images and contours of different patients
- Supports multi-modality image fusion
- Visualization and contouring of moving tumors and organs
- Management of points of interest, including but not limited to the isocenter
- Management of simple treatment plans
- Generation of a synthetic CT based on multiple pre-defined MR acquisitions
The subject device with the current software version SOMARIS/8 VB40 is an image analysis software for viewing, manipulation, 3D and 4D visualization, comparison of medical images from multiple imaging modalities and for the segmentation of tumors and organs-at-risk, prior to dosimetric planning and response assessment in radiation therapy. syngo.via RT Image Suite combines routine and advanced digital image processing and visualization tools for easy manual and software assisted contouring of volumes of interest, identification of points of interest, sending isocenter points to an external laser system, registering images and exporting final results. syngo.via RT Image Suite supports the medical professional with tools to use during different steps in radiation therapy case preparation.
Here's a breakdown of the acceptance criteria and study details for the syngo.via RT Image Suite, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Predicate vs. Subject) | Reported Device Performance (Subject Device) |
|---|---|---|
| Detection Rate (OARs) | Improved over predicate device | 100% for Brain, Liver, Kidney Left, Kidney Right |
| Mean Average Symmetric Surface Distance (ASSD) | Improved over predicate device | Ranged from 0.64 mm (Femur Head Right) to 3.04 mm (Heart) |
| Mean DICE Coefficient | Improved over predicate device | Ranged from 0.85 (Prostate and Rectum) to 0.97 (Heart) |
Note: The document states the subject device "improved for all evaluated OAR" for both ASSD and DICE coefficient compared to the predicate device, but does not provide the specific predicate device values for direct comparison in this summary.
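For readers unfamiliar with the surface-distance metric in the table, here is a minimal SciPy/NumPy sketch of the average symmetric surface distance (ASSD) between two binary masks. Masks are assumed non-empty and on a common voxel grid; this is an illustration of the metric, not the code used in the submission (the Dice coefficient was sketched earlier).

```python
import numpy as np
from scipy import ndimage

def _surface(mask):
    """Surface voxels: mask voxels removed by one binary erosion."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def assd(mask_a, mask_b, spacing_mm):
    """Average symmetric surface distance (mm) between two non-empty binary masks."""
    surf_a, surf_b = _surface(mask_a), _surface(mask_b)
    # Distance from every voxel to the nearest surface voxel of the *other* mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing_mm)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing_mm)
    d_ab = dist_to_b[surf_a]          # A-surface -> B-surface distances
    d_ba = dist_to_a[surf_b]          # B-surface -> A-surface distances
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```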
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 32 datasets
- Data Provenance: Not explicitly stated in the provided text (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not explicitly stated in the provided text.
- Qualifications of Experts: The ground truth was "manually annotated." While this implies expert review, the specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The document mentions "manually annotated ground truth," but does not specify if multiple annotators were used and how discrepancies were resolved (e.g., 2+1, 3+1).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No. The study described is a comparison of the subject device (algorithm) performance against manually annotated ground truth, and an improvement over a predicate device's algorithm. It does not assess human reader performance with or without AI assistance.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study: Yes. The clinical testing summary describes the performance of the algorithm itself, specifically focusing on "detection rate" and "segmentation quality" based on metrics like ASSD and DICE coefficient. It compares the subject device's algorithm (SOMARIS/8 VB40) to the predicate device's algorithm (SOMARIS/8 VB30).
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus (specifically, "manually annotated ground truth").
8. Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document mentions a "new deep learning-based approach that uses an adversarial network" and that its "segmentation algorithm were validated separately using a testing set of 32 datasets," but does not provide details on the training set size.
9. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth Establishment: Not explicitly stated. While the test set ground truth was "manually annotated," the method for establishing the training set ground truth, especially for a deep learning model, is not described in this summary.