inHEART MODELS comprises a suite of medical imaging software modules intended to provide qualified medical professionals with tools to aid them in reading, interpreting, and treatment planning. inHEART MODELS accepts DICOM-compliant medical images acquired from a variety of imaging devices, including CT and MR. The software is designed to be used by qualified medical professionals (including physicians, cardiologists, radiologists, and technicians), and users are solely responsible for making all final patient management decisions.
inHEART MODELS is a suite of medical image processing software tools that enables 3D visualization and analysis of anatomical structures. Specifically, the software modules read DICOM-compatible, anonymized pre-operative CT and MR images acquired by commercially available imaging devices. These images are then processed to generate 3D models of the anatomy, allowing qualified medical professionals to display, review, analyze, annotate, interpret, export, and plan therapeutic interventions. inHEART MODELS also includes two non-device Medical Device Data Systems (MDDS) modules intended only to transfer, store, and convert formats.
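The core of the pipeline described above is reading a series of 2D image slices and assembling them into a 3D volume from which anatomical structures can be segmented. The following is a minimal, hypothetical sketch of that idea using NumPy only; the function names, the synthetic data, and the intensity threshold are illustrative assumptions, not inHEART's actual implementation (a real pipeline would use a DICOM reader such as pydicom and far more sophisticated segmentation).

```python
import numpy as np

# Hypothetical sketch: stack per-slice pixel arrays (as a DICOM series
# reader would yield them, sorted by slice position) into a 3D volume,
# then threshold to isolate an intensity range of interest.
# All names and values here are illustrative, not the device's method.

def slices_to_volume(slices):
    """Stack 2D slice arrays into a single 3D volume (slice axis first)."""
    return np.stack(slices, axis=0)

def segment_by_threshold(volume, low, high):
    """Return a binary mask of voxels whose intensity lies in [low, high]."""
    return (volume >= low) & (volume <= high)

# Synthetic stand-in for a short CT series: 4 slices of 8x8 pixels.
rng = np.random.default_rng(0)
slices = [rng.integers(0, 200, size=(8, 8)) for _ in range(4)]
volume = slices_to_volume(slices)
mask = segment_by_threshold(volume, 100, 200)
print(volume.shape, mask.dtype)  # (4, 8, 8) bool
```

The binary mask is the kind of voxel-level segmentation output that can then be converted to a surface mesh for the 3D display and measurement functions the summary describes.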
Here's a breakdown of the acceptance criteria and study details for the inHEART MODELS device, based on the provided FDA 510(k) summary:
Overview:
The inHEART MODELS software is intended to provide tools for qualified medical professionals to aid in reading, interpreting, reporting, and treatment planning using DICOM compliant CT and MR images. The performance study aimed to demonstrate the accuracy of segmentations produced by inHEART MODELS compared to ground truth and previously cleared comparable software tools, establishing substantial equivalence to the predicate device (K200973 - Synapse 3D Cardiac Tools).
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't detail specific quantitative acceptance criteria (e.g., minimum dice score, maximum Hausdorff distance) or the precise numerical performance metrics achieved by the inHEART MODELS. Instead, it states a qualitative outcome for the primary performance evaluation.
| Acceptance Criterion (Implicit) | Reported Device Performance |
| --- | --- |
| Accuracy of segmentations compared to ground truth | All validation criteria were met. The performance evaluation study demonstrated that output segmentations from inHEART MODELS were substantially equivalent to previously cleared, legally marketed software devices. |
| Accuracy of measurements based on segmentations | All validation criteria were met. The performance evaluation study demonstrated that measurements on these segmentations from inHEART MODELS were substantially equivalent to previously cleared, legally marketed software devices. |
| Absence of new safety or efficacy issues | "No new safety or efficacy issues were introduced by inHEART MODELS compared to the predicate device." |
| Functional equivalence to predicate device | "Performance data demonstrate that the functionality, output and clinical usage of inHEART MODELS is substantially equivalent to the predicate device." |
| Software V&V according to FDA guidance | "Software verification and validation testing were conducted on all inHEART MODELS software modules and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices.'" The device was considered a "moderate" level of concern, implying V&V was performed to the standard appropriate for that level. |
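The summary names Dice score and Hausdorff distance as the kinds of quantitative segmentation metrics one would expect but does not report. For reference, here is a minimal sketch of both metrics computed on small synthetic binary masks; the masks and helper names are illustrative assumptions, not data from the submission (production code would typically use an optimized library such as SciPy for the distance computation).

```python
import numpy as np

# Illustrative implementations of two common segmentation-accuracy
# metrics mentioned (but not reported) in the 510(k) summary.

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the voxel sets of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # "ground truth" mask
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True  # "predicted" mask
print(round(dice_score(gt, pred), 3))       # 0.562
print(round(hausdorff_distance(gt, pred), 3))
```

In a study like the one described, metrics of this kind would be computed between the device's segmentations and the radiologist-annotated ground truth, with acceptance thresholds set in advance.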
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state the number of images or cases in the test set; it mentions only that "CT and MR images were selected for the performance study."
- Data Provenance: Not specified in the provided text (e.g., country of origin, retrospective or prospective collection).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified. The document states, "Radiologists were designated as ground-truth annotators." This implies more than one, but the exact number isn't provided.
- Qualifications of Experts: "Radiologists." Specific experience levels (e.g., "10 years of experience") are not detailed.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. It only states that "Radiologists were designated as ground-truth annotators," but doesn't explain if this involved consensus, majority vote, or individual expert review.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done: No. The study described compares the device's segmentations and measurements to ground truth and other cleared software tools, not explicitly studying the effect of AI assistance on human readers' performance. The focus is on the software's standalone output being substantially equivalent.
- Effect Size of Human Improvement with AI: Not applicable, as an MRMC comparative effectiveness study involving human readers' performance with and without AI was not described.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study Done: Yes. The performance study directly assessed the "accuracy of segmentations for inHEART MODELS" compared to ground truth and other software tools, indicating a standalone evaluation of the algorithm's output. The statement "output segmentations and measurements on these segmentations for inHEART MODELS were substantially equivalent" confirms this.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus / Expert annotation. The document states that "Radiologists were designated as ground-truth annotators" who generated 3D heart models. This indicates manual review and annotation by medical experts as the basis for ground truth.
8. Sample Size for the Training Set
- Training Set Sample Size: Not specified in the provided document. The document focuses on the validation or test set performance.
9. How Ground Truth for the Training Set Was Established
- Ground Truth for Training Set Establishment: Not specified in the provided document. This information would typically be relevant for machine learning models but is not detailed for inHEART MODELS in this summary. The process for generating ground truth for the test set is described (radiologists as annotators), but not for any potential training set.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).