Trained medical professionals use Contour ProtégéAI as a tool to assist in the automated processing of digital medical images of modalities CT and MR, as supported by ACR/NEMA DICOM 3.0. In addition, Contour ProtégéAI supports the following indications:
· Creation of contours using machine learning algorithms for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.
· Segmenting normal structures across a variety of CT anatomical locations.
· Segmenting normal structures of the prostate, seminal vesicles, and urethra within T2-weighted MR images.
Appropriate image visualization software must be used to review and, if necessary, edit results automatically generated by Contour ProtégéAI.
Contour ProtégéAI is an accessory to MIM software that automatically creates contours on medical images through the use of machine-learning algorithms. It is designed for use in the processing of medical images and operates on Windows, Mac, and Linux computer systems. Contour ProtégéAI is deployed either on a remote server using the MIMcloud service for data management and transfer, or locally on the workstation or server running MIM software.
The 510(k) summary for Contour ProtégéAI focuses on establishing substantial equivalence to predicate devices; it does not detail specific acceptance criteria or a comprehensive study report with numerical performance metrics against those criteria.
Based on the limited "Testing and Performance Data" information (page 9), some aspects of the acceptance criteria and the supporting study can be reconstructed below, with the missing details called out where the document does not provide them.
Acceptance Criteria and Device Performance Study for Contour ProtégéAI
The provided 510(k) summary for Contour ProtégéAI states that "Equivalence is defined such that the lower 95th percentile confidence bound of the Contour ProtégéAI segmentation is greater than 0.1 Dice lower than the mean MIM Maestro atlas segmentation reference device performance." This is a non-inferiority criterion: for each structure, the lower confidence bound of the Contour ProtégéAI Dice coefficient must exceed the mean Dice coefficient of the MIM Maestro atlas segmentation reference device minus a 0.1 margin. It compares Contour ProtégéAI against a reference device rather than setting absolute performance thresholds for the contours themselves.
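The summary does not describe how the confidence bound was computed. As a minimal sketch of how such a non-inferiority check could be evaluated, the Python below computes per-case Dice coefficients from binary masks and uses a percentile bootstrap for the lower confidence bound of the mean; the function names, the bootstrap approach, and the one-sided 5th-percentile choice are illustrative assumptions, not details taken from the submission.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, ref).sum() / denom)

def lower_confidence_bound(scores, n_boot=10_000, seed=0) -> float:
    """Percentile-bootstrap lower bound for the mean Dice (one-sided 95%)."""
    scores = np.asarray(scores, dtype=float)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    return float(np.percentile(scores[idx].mean(axis=1), 5))

def meets_noninferiority(protege_dice, atlas_dice, margin=0.1) -> bool:
    """Check the stated criterion for one structure: the lower confidence
    bound of the Contour ProtégéAI Dice must exceed the mean reference-device
    Dice minus the 0.1 margin."""
    return lower_confidence_bound(protege_dice) > np.mean(atlas_dice) - margin
```

In the study as described, a check of this form would be applied per structure for each neural network model, with the MIM Maestro atlas segmentations supplying the reference Dice values.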
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Inferred from Text) | Reported Device Performance |
|---|---|
| For each structure of each neural network model, the lower 95% confidence bound of the Contour ProtégéAI Dice coefficient must be greater than the mean Dice coefficient of the MIM Maestro atlas segmentation reference device minus 0.1. | Stated outcome: "Contour ProtégéAI results were equivalent or had better performance than the MIM atlas segmentation reference device." |
| Specific numerical performance for each structure (Dice coefficient) | Not provided in the document. The summary gives only the qualitative conclusion of "equivalent or better performance," without the actual mean Dice coefficients or confidence bounds for either Contour ProtégéAI or MIM Maestro. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document implies that the "test subjects" were used for evaluation, but the specific number of cases or patients in the test set is not explicitly stated.
- Data Provenance: The text mentions that neural network models were trained on data that "did not include any patients from the same institution as the test subjects." This implies that the test set data originated from institutions different from the training data, suggesting a form of independent validation. The countries of origin for the data are not specified. The text indicates the study was retrospective as it involved evaluating pre-existing patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The document states that "multiple atlases were created over the test subjects" for the MIM Maestro reference device. It does not explicitly state how the ground truth for the test set was established for Contour ProtégéAI's evaluation results. Instead, it refers to the MIM Maestro's performance as a reference. There is no information provided on the number or qualifications of experts who established any ground truth used in this comparison.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the test set. It mentions the "leave-one-out analysis" for creating atlases for MIM Maestro, which is a method of data splitting/resampling, not an adjudication process.
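To make that distinction concrete, a leave-one-out analysis of this kind can be sketched as below; `segment_with_atlases` is a hypothetical placeholder for the atlas-based segmentation step, since the summary does not describe how the MIM Maestro atlases were built or applied.

```python
from typing import Callable, Dict, List, Sequence

def leave_one_out_reference(subjects: Sequence[str],
                            segment_with_atlases: Callable[[str, List[str]], object]) -> Dict[str, object]:
    """Illustrative leave-one-out loop: each test subject is segmented
    using atlases drawn only from the remaining subjects, so no subject
    serves as its own atlas. This resamples the data; it does not
    adjudicate between competing expert contours."""
    results: Dict[str, object] = {}
    for held_out in subjects:
        atlas_pool = [s for s in subjects if s != held_out]
        results[held_out] = segment_with_atlases(held_out, atlas_pool)
    return results
```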
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Based on the provided text, there is no indication that a multi-reader multi-case (MRMC) comparative effectiveness study was conducted to evaluate how much human readers improve with AI vs. without AI assistance. The study described focuses on the comparison of the algorithm's performance (Contour ProtégéAI) against an existing atlas-based segmentation method (MIM Maestro).
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? Yes, the described study appears to be a standalone (algorithm-only) performance evaluation. The comparison is between the Contour ProtégéAI algorithm's output and the MIM Maestro atlas segmentation reference device, with Dice coefficients calculated directly from these automated segmentations. The "Indications for Use" explicitly state: "Appropriate image visualization software must be used to review and, if necessary, edit results automatically generated by Contour ProtégéAI," implying that human modification is expected in clinical use, but the reported study does not include this human-in-the-loop performance.
7. Type of Ground Truth Used
The "ground truth" for the comparison appears to be the segmentation contours generated by the MIM Maestro atlas segmentation reference device. The study aims to demonstrate non-inferiority to this existing, cleared technology rather than a human expert-defined anatomical ground truth or pathology/outcomes data.
8. Sample Size for the Training Set
The document mentions a "pool of training data" but the specific sample size for the training set is not provided.
9. How the Ground Truth for the Training Set Was Established
The document states that "neural network models were trained for each modality (CT and MR) on a pool of training data." However, it does not describe how the ground truth (i.e., the "correct" contours) for this training data was established. It refers to the models being trained "on a pool of training data" without detailing the annotation or ground truth generation process for this training data.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).