Search Results
Found 2 results
510(k) Data Aggregation
(179 days)
Trained medical professionals use Contour ProtégéAI as a tool to assist in the automated processing of digital medical images of modalities CT and MR, as supported by ACR/NEMA DICOM 3.0. In addition, Contour ProtégéAI supports the following indications:
• Creation of contours using machine-learning algorithms for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, aiding image registration, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.
• Segmenting structures across a variety of CT and MR anatomical locations.
Appropriate image visualization software must be used to review and, if necessary, edit results automatically generated by Contour ProtégéAI.
Contour ProtégéAI+ is an accessory to MIM software that automatically creates contours on medical images through the use of machine-learning algorithms. It is designed for use in the processing of medical images and operates on Windows, Mac, and Linux computer systems. Contour ProtégéAI+ is deployed either on a remote server using the MIMcloud service for data management and transfer, or locally on the workstation or server running MIM software.
Compared to the predicate device, the intended use and indications for use for the subject device include minor modifications to improve clarity and completeness.
The upcoming 2.0.0 release of Contour ProtégéAI+, which serves as the subject device in this 510(k) submission, includes one new 4.3.0 neural network model (MR Brain) that uses the existing architecture cleared by the predicates, as well as one 5.0.0 neural network model (CT Male Pelvis) that uses a new architecture allowing smaller networks to be trained for individual structures or groups of adjacent structures.
This 510(k) submission also includes plans for further development activities to Contour ProtégéAI+. Proposed modifications in the PCCP are categorized as follows:
● New CT models or MR models
● New CBCT models for CBCT IRIS imaging data (cleared in K252188) acquired from Elekta's Evo, Versa HD, and Harmony Pro systems
● Re-training models due to improvements in training data
● Re-training models on cleared architecture
● Re-applying CT models for CBCT IRIS imaging data (cleared in K252188) acquired from Elekta's Evo, Versa HD, and Harmony Pro systems
Here's a breakdown of the acceptance criteria and the study that proves the Contour ProtégéAI+ device meets them, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
Table 1: Acceptance Criteria and Reported Device Performance for Contour ProtégéAI+
| Criteria Category | Specific Metric | Acceptance Criteria | Reported Device Performance (Contour ProtégéAI+) |
|---|---|---|---|
| Per-Structure Performance | Dice Score (Non-Inferiority) | Lower bound of the 95% confidence interval of the difference between Contour ProtégéAI+ mean Dice and MIM Atlas mean Dice > -0.1 | Demonstrated equivalence or better performance than MIM Maestro atlas segmentation (many indicated by *) |
| | MDA Score (Non-Inferiority) | Upper bound of the 95% confidence interval of the difference between Contour ProtégéAI+ mean MDA and MIM Atlas mean MDA < 2 mm | Demonstrated equivalence or better performance than MIM Maestro atlas segmentation (many indicated by *) |
| | User Evaluation Score (Average) | Average score of 3 or higher (on a five-point scale, where 3 = minor edits in less time than starting from scratch, 4 = minor edits not necessary, 5 = can be used as-is) | Scores ranged from 2.6 to 4.75 across structures (many above 3; some below 3 passed on the other criteria) |
| Model-Level Performance | Cumulative Added Path Length (APL) (Non-Inferiority) | Statistically non-inferior cumulative APL compared to the reference predicate | 4.3.0 MR Brain: 36.87 ± 72.40 (3.63%)*, non-inferiority demonstrated; 5.0.0 CT Male Pelvis: 165.44 ± 235.96 (-21.5%)*, non-inferiority demonstrated |
| Overall Acceptance | Inclusion in Final Models | Structures must pass two or more of the three per-structure tests (Dice, MDA, User Evaluation). | All included structures passed this criterion. |
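To make the Dice non-inferiority criterion in the table concrete, here is a minimal sketch of a paired-bootstrap confidence bound. The clearance letter does not specify how the confidence bound was actually computed, so the bootstrap method and the per-case Dice scores below are illustrative assumptions, not values from the submission.

```python
import numpy as np

rng = np.random.default_rng(0)

def noninferiority_lower_bound(ai_scores, atlas_scores, n_boot=10_000):
    """Paired-bootstrap lower bound of the 95% CI of mean(AI) - mean(atlas)."""
    ai = np.asarray(ai_scores, dtype=float)
    atlas = np.asarray(atlas_scores, dtype=float)
    n = ai.size
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases, keeping pairs intact
        diffs[i] = ai[idx].mean() - atlas[idx].mean()
    return float(np.percentile(diffs, 2.5))

# Hypothetical per-case Dice scores for one structure (not from the letter)
ai_dice = [0.85, 0.88, 0.90, 0.86, 0.87]
atlas_dice = [0.82, 0.84, 0.86, 0.83, 0.85]

# Per the table's criterion: pass if the lower confidence bound exceeds -0.1
passes = noninferiority_lower_bound(ai_dice, atlas_dice) > -0.1
```

The -0.1 margin means the AI's mean Dice may trail the atlas-based reference by up to 0.1 and still be declared non-inferior; the analogous MDA test uses an upper bound against a 2 mm margin.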
Study Details
- Sample Size Used for the Test Set and Data Provenance:
- Test Set Size: 189 individual patient images.
- Data Provenance: All testing data originated from the United States.
- Regional breakdown: Midwest (18.5%), South (54.0%), West (12.7%), and Northeast (14.8%).
- Sex distribution: 28.0% female, 46.8% male, and 25.4% unknown.
- Age distribution: 6.9% between 20-40 years, 17.5% between 40-60 years, 51.3% over 60 years, and 24.3% unknown.
- Manufacturer representation: GE (46.6%), Siemens (36.0%), Philips (4.8%), Accuray (5.8%), and TomoTherapy (6.9%).
- Nature of data: Retrospective, obtained from clinical treatment plans for patients prescribed external beam or molecular radiotherapy.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
- The document implies a team-based approach for ground truth establishment. For "Re-segmented" data, segmentation is performed by a dosimetrist, then reviewed by a team of dosimetrists, and separately reviewed by a radiation oncologist. Segmentations that failed review were re-contoured by a dosimetrist and re-reviewed. The exact number of individual experts (dosimetrists, radiation oncologists) involved is not explicitly stated.
- Qualifications: Dosimetrists and Radiation Oncologists are "trained medical professionals" and "consultants (physicians and dosimetrists)". Specific years of experience are not provided, but their roles in clinical treatment planning and review imply significant expertise.
- Adjudication Method for the Test Set:
- The document describes a review process where segmentations are reviewed by a team of dosimetrists and separately reviewed by a radiation oncologist. If segmentations fail review, they are referred for re-contouring and re-reviewed. This suggests a form of consensus or expert adjudication, but a specific "2+1" or "3+1" method is not detailed. It's a qualitative review leading to re-contouring if disagreements are significant enough to "fail review."
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, what was the effect size of improvement for human readers with AI versus without AI assistance:
- No MRMC comparative effectiveness study was explicitly described in terms of human readers improving with AI assistance.
- A "user beta testing" was conducted to evaluate "time savings compared to contouring from scratch," which is related to AI assistance. However, it measured the quality of the AI-generated contour (on a 5-point scale), not the improvement in human reader performance with the AI.
- The primary comparative effectiveness study was Contour ProtégéAI+ (AI standalone) vs. MIM Maestro Atlas Segmentation (reference device), not AI-assisted human vs. unassisted human.
- If a Standalone (i.e., algorithm-only, without human-in-the-loop) Performance Study was done:
- Yes, a standalone performance study was done. The Dice and MDA scores presented in Table 2 are direct comparisons of the Contour ProtégéAI+ algorithm's output against the ground truth, and against the MIM Maestro atlas segmentation (another automated method). The user evaluation scores also reflect the quality of the algorithm's output, which would then be reviewed by a human.
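For readers unfamiliar with the two standalone metrics, the sketch below shows one way Dice and mean distance to agreement (MDA, a symmetric mean surface distance) can be computed on binary segmentation masks. This is an illustrative implementation, not MIM's; clinical evaluations use validated tooling, and the brute-force surface-distance step here is only practical for small masks.

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def boundary_points(mask):
    """Coordinates of mask voxels with at least one background face-neighbor."""
    mask = np.asarray(mask, bool)
    padded = np.pad(mask, 1)
    interior = np.ones_like(padded)
    for axis in range(mask.ndim):
        interior &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    boundary = padded & ~interior
    return np.argwhere(boundary[(slice(1, -1),) * mask.ndim])

def mean_distance_to_agreement(a, b, spacing=1.0):
    """Symmetric mean surface distance between two masks, in spacing units."""
    pa, pb = boundary_points(a), boundary_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

A Dice of 1.0 and an MDA of 0 mm indicate perfect agreement with the reference contour; the acceptance tests above compare these per-structure means against the MIM Atlas baseline rather than against fixed absolute thresholds.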
- The Type of Ground Truth Used:
- Expert Consensus / Clinically Established Guidelines: The ground truth for the test set consisted of contours derived from clinical treatment plans. These contours were either not re-segmented (original treatment-plan segmentations, implicitly treated as ground truth) or "Re-segmented" and then meticulously reviewed and approved by a team of dosimetrists and a radiation oncologist, ensuring adherence to established clinical guidelines.
- Outcome Data: Not explicitly mentioned as a source for ground truth.
- Pathology: Not explicitly mentioned as a source for ground truth.
- The Sample Size for the Training Set:
- The document states, "The CT images for this training set were obtained from clinical treatment plans for patients prescribed external beam or molecular radiotherapy and were re-segmented by consultants (physicians and dosimetrists) specifically for this purpose." However, the specific sample size (number of patients/images) for the training set is not provided. It only mentions that the images for the verification data (189 images) are independent from the training data.
- How the Ground Truth for the Training Set was Established:
- The ground truth for the training set was established by "consultants (physicians and dosimetrists)" who re-segmented clinical treatment plans. This implies expert-driven manual contouring or correction to create the reference data used to train the machine learning models. The process for internal review and quality assurance of these training contours is not detailed to the same extent as for the test set ground truth.
(174 days)
MIM software is used by trained medical professionals as a tool to aid in evaluation and information management of digital medical images. The medical image modalities include, but are not limited to, CT, MR, CR, DX, MG, US, SPECT, PET, and XA as supported by ACR/NEMA DICOM 3.0. MIM assists in the following indications:
- Receive, transmit, store, retrieve, display, print, and process medical images and DICOM objects.
- Create, display, and print reports from medical images.
- Registration, fusion display, and review of medical images for diagnosis, staging, treatment planning, monitoring treatment response, and treatment evaluation.
- Evaluation of cardiac left ventricular function and perfusion, including left ventricular end-diastolic volume, end-systolic volume, and ejection fraction.
- Localization and definition of objects such as tumors and normal tissues in medical images.
- Creation, transformation, and modification of contours for applications including, but not limited to, quantitative analysis, aiding adaptive therapy, transferring contours to radiation therapy treatment planning systems, and archiving contours for patient follow-up and management.
- Quantitative and statistical analysis of PET/SPECT brain scans by comparing to other registered PET/SPECT brain scans.
- Planning and evaluation of permanent implant brachytherapy procedures (not including radioactive microspheres).
- Calculating absorbed radiation dose as a result of administering a radionuclide.
- Assist with the planning and evaluation of ablation procedures by providing visualization and analysis, including energy zone visualization through the placement of virtual ablation devices validated for inclusion in MIM-Ablation. The software is not intended to predict specific ablation zone volumes or predict ablation success.
When using the device clinically, within the United States, the user should only use FDA approved radiopharmaceuticals. If used with unapproved ones, this device should only be used for research purposes.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Images that are printed to film must be printed using an FDA-approved printer for the diagnosis of digital mammography images. Mammographic images must be viewed on a display system that has been cleared by the FDA for the diagnosis of digital mammography images. The software is not to be used for mammography CAD.
When used for diagnostic purposes, the mobile thin client is not intended to replace a full workstation and should only be used when there is no access to a workstation.
The subject MIM – LesionID Pro device is a standalone software application that extends the functionality of the MIM software device. It is a modification to the predicate MIM software application (K243012) for incorporating updates to the LesionID Pro option that is commercially available in the currently distributed version of MIM software.
LesionID Pro assists users with the evaluation of PSMA PET/CT and SPECT/CT studies by automating hotspot segmentation and physiological uptake removal, to help reduce manual processing and streamline generation of Total Tumor Burden (TTB) statistics. It is provided via MIM Workflows that allow automation using scripts constructed of MIM software modular functions and commands.
LesionID Pro does not determine the final hotspot segmentation for TTB generation, and requires users to review, edit, and confirm the segmentation before generating TTB statistics. The modifications made to LesionID Pro optimize the identification and removal of physiological uptake, automate the processing for a more streamlined workflow, and introduce enhancements related to user interface and experience.
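As a rough illustration of the kind of Total Tumor Burden statistics described above, the sketch below aggregates volume and SUV figures from a user-confirmed lesion mask. The function name, inputs, and returned fields are assumptions for the sake of the example; they do not reflect MIM's actual API or the specific statistics LesionID Pro reports.

```python
import numpy as np

def ttb_statistics(suv_volume, lesion_mask, voxel_volume_ml):
    """Aggregate hotspot statistics from a confirmed lesion segmentation.

    suv_volume: array of SUV values; lesion_mask: boolean array of the
    user-confirmed segmentation; voxel_volume_ml: volume of one voxel in mL.
    """
    mask = np.asarray(lesion_mask, bool)
    suv = np.asarray(suv_volume, float)[mask]
    if suv.size == 0:
        return {"ttb_ml": 0.0, "suv_mean": 0.0, "suv_max": 0.0}
    return {
        "ttb_ml": float(mask.sum()) * voxel_volume_ml,  # total tumor burden
        "suv_mean": float(suv.mean()),
        "suv_max": float(suv.max()),
    }
```

The key workflow point the letter emphasizes is that such statistics are only generated after the user has reviewed, edited, and confirmed the segmentation; the automated hotspot output is an initial proposal, not a final result.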
Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for MIM - LesionID Pro:
The clearance letter primarily focuses on the device's substantial equivalence to predicate devices and does not detail specific quantitative acceptance criteria or a comprehensive study plan with statistical results in the provided sections. Instead, it describes general performance testing and qualitative clinical reader evaluation.
Acceptance Criteria and Reported Device Performance
The document describes the acceptance criteria as ensuring that the "initial TTB segmentation generated by LesionID Pro was of acceptable quality for clinical use in the context of PSMA PET and SPECT TTB segmentation and evaluation" and that it "reduce[s] user need for manual editing."
The reported device performance indicates that LesionID Pro "successfully completed performance testing on a clinically representative dataset to verify that the generated segmentations are adequate for use as an initial segmentation, helping to reduce user need for manual editing."
Given the information, a table of specific quantitative acceptance criteria and corresponding reported device performance values is not available in the provided text. The evaluation appears to be qualitative and aimed at verifying adequacy and reduction in manual editing.
| Acceptance Criteria (Inferred from study description) | Reported Device Performance |
|---|---|
| Initial TTB segmentation adequate for clinical use in PSMA PET and SPECT | Successfully completed performance testing verifying adequacy |
| Initial TTB segmentation reduces user need for manual editing | Successfully completed performance testing verifying reduced manual editing needs |
| Segmentations adequate for use as an initial segmentation | Segmentations verified as adequate for initial use |
| Generated segmentations aligning with physician-approved segmentation Agreement Standard | Test evaluated initial TTB segmentation against a pre-defined segmentation Agreement Standard based on physician-approved segmentation (Result: "successfully completed") |
Study Details
Based on the provided text, the study focuses on performance testing and clinical reader evaluation of LesionID Pro.
1. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated as a number. The text mentions "a clinically representative dataset" and "clinically representative PSMA PET/CT and SPECT/CT patient clinical studies."
- Data Provenance: Not explicitly stated (e.g., country of origin). The studies spanned various factors relevant to the evaluation of LesionID Pro's segmentation performance (e.g., radiotracers, disease burden, imaging systems). The readers were "United States board certified NM physicians," suggesting the clinical context is within the US. The information does not specify whether the data was retrospective or prospective.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated as a number. The "pre-defined segmentation Agreement Standard based on physician approved segmentation" implies expert consensus or approval was used to define the ground truth for comparison.
- Qualifications of Experts (for ground truth): The "physician approved segmentation" implies qualified medical professionals, but their specific qualifications (e.g., years of experience) are not detailed here.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The "pre-defined segmentation Agreement Standard based on physician approved segmentation" suggests a form of consensus or expert-defined standard, but the specific adjudication method (e.g., "2+1") is not described.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of improvement for human readers with AI versus without AI assistance:
- MRMC Study: A "qualitative clinical reader evaluation" was performed where readers assessed the "initial segmentation generated by LesionID Pro." This indicates a reader study, but it's not explicitly framed as an MRMC comparative effectiveness study measuring reader improvement with AI vs. without AI assistance. It seems to be an evaluation of the AI's output itself for clinical acceptability rather than a comparison of human performance with and without the AI.
- Effect Size: No effect size or quantitative measure of human reader improvement with AI assistance is reported.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the "performance testing on a clinically representative dataset" compared the initial TTB segmentation generated by LesionID Pro (presumably standalone algorithm output) against a "pre-defined segmentation Agreement Standard." The qualitative clinical reader evaluation was of the initial segmentation generated by LesionID Pro, further supporting standalone evaluation. The device also "requires users to review, edit, and confirm the segmentation before generating TTB statistics," indicating the AI provides an initial segmentation, which is a standalone function.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for comparison was a "pre-defined segmentation Agreement Standard based on physician approved segmentation." This points towards expert-defined or expert-approved segmentation.
7. The sample size for the training set:
- Not specified. The document only mentions testing and verification.
8. How the ground truth for the training set was established:
- Not specified. The document only discusses the ground truth for the test set.