510(k) Data Aggregation
(179 days)
Trained medical professionals use Contour ProtégéAI as a tool to assist in the automated processing of digital medical images of modalities CT and MR, as supported by ACR/NEMA DICOM 3.0. In addition, Contour ProtégéAI supports the following indications:
• Creation of contours using machine-learning algorithms for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, aiding image registration, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.
• Segmenting structures across a variety of CT and MR anatomical locations.
Appropriate image visualization software must be used to review and, if necessary, edit results automatically generated by Contour ProtégéAI.
Contour ProtégéAI+ is an accessory to MIM software that automatically creates contours on medical images through the use of machine-learning algorithms. It is designed for use in the processing of medical images and operates on Windows, Mac, and Linux computer systems. Contour ProtégéAI+ is deployed either on a remote server using the MIMcloud service for data management and transfer, or locally on the workstation or server running MIM software.
Compared to the predicate device, the intended use and indications for use for the subject device include minor modifications to improve clarity and completeness.
The upcoming 2.0.0 release of Contour ProtégéAI+, which serves as the subject device in this 510(k) submission, includes one new 4.3.0 neural network model (MR Brain) that uses the existing architecture cleared by the predicates, as well as one 5.0.0 neural network model (CT Male Pelvis) that uses a new architecture allowing the training of smaller networks for individual structures or groups of adjacent structures.
This 510(k) submission also includes plans for further development activities to Contour ProtégéAI+. Proposed modifications in the PCCP are categorized as follows:
● New CT models or MR models
● New CBCT models for CBCT IRIS imaging data (cleared in K252188) acquired from Elekta's Evo, Versa HD, and Harmony Pro systems
● Re-training models due to improvements in training data
● Re-training models on cleared architecture
● Re-applying CT models for CBCT IRIS imaging data (cleared in K252188) acquired from Elekta's Evo, Versa HD, and Harmony Pro systems
Here's a breakdown of the acceptance criteria and the study demonstrating that the Contour ProtégéAI+ device meets them, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
Table 1: Acceptance Criteria and Reported Device Performance for Contour ProtégéAI+
| Criteria Category | Specific Metric | Acceptance Criteria | Reported Device Performance (Contour ProtégéAI+) |
|---|---|---|---|
| Per-Structure Performance | Dice Score (Non-Inferiority) | Lower bound of the 95% confidence interval of the difference between Contour ProtégéAI+ mean Dice and MIM Atlas mean Dice > -0.1 | Demonstrated equivalence or better performance than MIM Maestro atlas segmentation (many indicated by *) |
| | MDA Score (Non-Inferiority) | Upper bound of the 95% confidence interval of the difference between Contour ProtégéAI+ mean MDA and MIM Atlas mean MDA < 2 mm | Demonstrated equivalence or better performance than MIM Maestro atlas segmentation (many indicated by *) |
| | User Evaluation Score (Average) | Average score of 3 or higher on a five-point scale (3 = minor edits, in less time than starting from scratch; 4 = minor edits not necessary; 5 = can be used as-is) | Scores ranged from 2.6 to 4.75 across structures (most above 3; some below, but these passed the other criteria) |
| Model-Level Performance | Cumulative Added Path Length (APL) (Non-Inferiority) | Statistically non-inferior cumulative APL compared to the reference predicate | 4.3.0 MR Brain: 36.87 ± 72.40 (3.63%)*; 5.0.0 CT Male Pelvis: 165.44 ± 235.96 (-21.5%)* (non-inferiority demonstrated for both) |
| Overall Acceptance | Inclusion in Final Models | Structures must pass two or more of the three per-structure tests (Dice, MDA, User Evaluation). | All included structures passed this criterion. |
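The Dice and MDA metrics underlying the per-structure tests can be sketched for binary segmentation masks. This is an illustrative implementation only, not the one used in the submission; the function names, the 2-D example, and the symmetric surface-distance definition of MDA are assumptions.

```python
import numpy as np
from scipy import ndimage

def dice_score(pred, ref):
    """Dice overlap between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def mean_distance_to_agreement(pred, ref, spacing=(1.0, 1.0)):
    """Symmetric mean surface distance (in mm, given voxel spacing)."""
    def surface(mask):
        # boundary voxels: in the mask but not in its erosion
        return mask & ~ndimage.binary_erosion(mask)
    sp, sr = surface(pred.astype(bool)), surface(ref.astype(bool))
    if not sp.any() or not sr.any():
        return float("nan")  # undefined when a mask is empty
    # distance from each surface voxel of one mask to the other's surface
    d_to_ref = ndimage.distance_transform_edt(~sr, sampling=spacing)[sp]
    d_to_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)[sr]
    return float(np.concatenate([d_to_ref, d_to_pred]).mean())
```

In 3-D use, `spacing` would carry the DICOM voxel dimensions so that MDA is reported in millimeters, matching the 2 mm acceptance threshold above.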
Study Details
1. Sample Size Used for the Test Set and Data Provenance:
- Test Set Size: 189 individual patient images.
- Data Provenance: All testing data originated from the United States.
- Regional breakdown: Midwest (18.5%), South (54.0%), West (12.7%), and Northeast (14.8%).
- Sex distribution: 28.0% female, 46.8% male, and 25.4% unknown.
- Age distribution: 6.9% between 20-40 years, 17.5% between 40-60 years, 51.3% over 60 years, and 24.3% unknown.
- Manufacturer representation: GE (46.6%), Siemens (36.0%), Philips (4.8%), Accuray (5.8%), and TomoTherapy (6.9%).
- Nature of data: Retrospective, obtained from clinical treatment plans for patients prescribed external beam or molecular radiotherapy.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
- The document implies a team-based approach for ground truth establishment. For "Re-segmented" data, segmentation is performed by a dosimetrist, then reviewed by a team of dosimetrists, and separately reviewed by a radiation oncologist. Segmentations that failed review were re-contoured by a dosimetrist and re-reviewed. The exact number of individual experts (dosimetrists, radiation oncologists) involved is not explicitly stated.
- Qualifications: Dosimetrists and Radiation Oncologists are "trained medical professionals" and "consultants (physicians and dosimetrists)". Specific years of experience are not provided, but their roles in clinical treatment planning and review imply significant expertise.
3. Adjudication Method for the Test Set:
- The document describes a review process where segmentations are reviewed by a team of dosimetrists and separately reviewed by a radiation oncologist. If segmentations fail review, they are referred for re-contouring and re-reviewed. This suggests a form of consensus or expert adjudication, but a specific "2+1" or "3+1" method is not detailed. It's a qualitative review leading to re-contouring if disagreements are significant enough to "fail review."
4. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance:
- No MRMC comparative effectiveness study was explicitly described in terms of human readers improving with AI assistance.
- User beta testing was conducted to evaluate "time savings compared to contouring from scratch," which relates to AI assistance. However, it measured the quality of the AI-generated contour (on a 5-point scale), not the improvement in human reader performance when using the AI.
- The primary comparative effectiveness study was Contour ProtégéAI+ (AI standalone) vs. MIM Maestro Atlas Segmentation (reference device), not AI-assisted human vs. unassisted human.
5. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:
- Yes, a standalone performance study was done. The Dice and MDA scores presented in Table 2 are direct comparisons of the Contour ProtégéAI+ algorithm's output against the ground truth, and against the MIM Maestro atlas segmentation (another automated method). The user evaluation scores also reflect the quality of the algorithm's output, which would then be reviewed by a human.
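The per-structure non-inferiority test summarized in Table 1 (lower 95% confidence bound of the paired Dice difference above -0.1) can be sketched with a paired t-based bound. The clearance letter does not specify the statistical method used; the one-sided t-interval, the function name, and the margin handling here are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def dice_noninferior(ai_dice, atlas_dice, margin=0.1, alpha=0.05):
    """One-sided lower confidence bound on the mean paired Dice
    difference (AI minus atlas); non-inferior if the bound > -margin."""
    diff = np.asarray(ai_dice, float) - np.asarray(atlas_dice, float)
    n = diff.size
    mean = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(n)
    lower = mean - stats.t.ppf(1 - alpha, df=n - 1) * se
    return lower, bool(lower > -margin)
```

An analogous test for MDA would compute an upper confidence bound on the paired difference and pass when it falls below the 2 mm margin.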
6. The Type of Ground Truth Used:
- Expert Consensus / Clinically Established Guidelines: The ground truth for the test set consisted of contours derived from clinical treatment plans. These contours were either retained from the original treatment plan (implicitly treated as ground truth) or "Re-segmented" and then reviewed and approved by a team of dosimetrists and a radiation oncologist, ensuring adherence to established clinical guidelines.
- Outcome Data: Not explicitly mentioned as a source for ground truth.
- Pathology: Not explicitly mentioned as a source for ground truth.
7. The Sample Size for the Training Set:
- The document states, "The CT images for this training set were obtained from clinical treatment plans for patients prescribed external beam or molecular radiotherapy and were re-segmented by consultants (physicians and dosimetrists) specifically for this purpose." However, the specific sample size (number of patients/images) for the training set is not provided. The document mentions only that the verification data (189 images) are independent of the training data.
8. How the Ground Truth for the Training Set Was Established:
- The ground truth for the training set was established by "consultants (physicians and dosimetrists)" who re-segmented clinical treatment plans. This implies expert-driven manual contouring or correction to create the reference data used to train the machine learning models. The process for internal review and quality assurance of these training contours is not detailed to the same extent as for the test set ground truth.