The Emprint™ Visualization Application is a stand-alone software product that allows physicians to visualize and compare CT and MRI imaging data. The display, annotation, and volume rendering of medical images aids in ablation procedures conducted using Emprint™ ablation systems. The software is not intended for diagnosis.
The Emprint™ Visualization Application is a software product that achieves its medical purpose without being part of the hardware of a medical device (SaMD). The Emprint™ Visualization Application is used to support Emprint™-system ablation procedures by displaying patient CT and MRI images with modeled ablation zones/volumes. The application is a Windows™ desktop program that is installed on a hospital computer with local storage and a network connection. The software receives CT and MRI images by supporting DICOM connections with CT/MRI scanners and hospital PACS. The software's DICOM image viewer does not in any way alter the medical images. The device is designed to meet the procedure planning and evaluation needs of physicians conducting soft tissue ablation procedures using Emprint™-branded systems only.
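As context for the DICOM handling described above, the following is a minimal sketch of the kind of series import and 3D stacking a viewer like this performs, written with the open-source pydicom library purely for illustration (the submission does not disclose the vendor's implementation, and the directory path is hypothetical):

```python
# Minimal sketch of DICOM series import and 3D stacking (illustrative only).
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str) -> np.ndarray:
    """Read every DICOM file in a directory and return a z-ordered volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the patient z-axis using the standard DICOM attribute.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    # Stack the pixel data unchanged; a compliant viewer never alters the images.
    return np.stack([ds.pixel_array for ds in slices])


volume = load_ct_series("/path/to/ct_series")  # hypothetical path
print(volume.shape)  # (num_slices, rows, columns)
```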
Based on the provided text, the Emprint™ Visualization Application is a standalone software product (Software as a Medical Device, SaMD) that allows physicians to visualize and compare CT and MRI imaging data to aid in ablation procedures; it is not intended for diagnosis. The performance testing described focuses on software quality and usability rather than on a comparative effectiveness study with human readers or a standalone AI performance evaluation for diagnostic purposes.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, according to the provided document:
Acceptance Criteria and Reported Device Performance
The document describes performance testing that focused on software verification and human factors engineering. Explicit quantitative acceptance criteria are not presented in a table format with corresponding performance metrics for features like sensitivity, specificity, or accuracy in a diagnostic sense. Instead, the performance is described in terms of compliance with standards and functional verification.
Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criterion (Implicit/Explicit) | Reported Device Performance |
|---|---|
| **Functional performance & accuracy** | |
| Import and view standard DICOM images in 3 dimensions | The software receives CT and MRI images via DICOM connections with CT/MRI scanners and hospital PACS; the user can import standard DICOM images and view them in 3 dimensions. |
| Select and view specific anatomical features | The user can select and view specific anatomical features (e.g., soft-tissue lesions, anatomical landmarks). |
| Measure and mark critical anatomical features/areas | The user can measure and mark critical anatomical features/areas of interest. System-level testing verified the application's measurement accuracy (±2 voxels); see the sketch below this table. |
| Overlay and position virtual images (antenna/ablation zone) | The user can overlay and position virtual images of the Emprint™ ablation antenna and the anticipated thermal ablation zone onto the medical image. The device references zone charts (look-up tables) that characterize Emprint™ Ablation System performance for sizing and displaying predicted ablation zones. |
| Add textual annotations | The user can add textual annotations to images. |
| Export annotated plans | The user can export annotated plans for the patient's medical record or for use in a radiology or operating suite. |
| View and compare imported images simultaneously | The user can view and compare any 2 of the imported images simultaneously, including images from different patients. |
| No alteration of medical images | The software's DICOM image viewer does not in any way alter the medical images. |
| **Software quality & compliance** | |
| Compliance with NEMA PS 3.1-3.20:2016 (DICOM standard) | Demonstrated compliance; the device is a DICOM image viewer and supports DICOM connections. |
| Compliance with IEC 62304:2006 (medical device software life cycle processes) | Demonstrated compliance. |
| **Usability/human factors** | |
| Meets user needs and expectations | A human-factors engineering (HFE) process was followed, and simulated-use validation testing confirmed that the visualization application met user needs and expectations. Workflow and user-interface optimizations were made based on intraprocedural use in the CT suite. |
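The ±2 voxel measurement-accuracy criterion is the one quantitative claim in the summary. As a minimal sketch, a verification test of this kind could compare a distance measured in the volume against a value known by construction in a synthetic test object; all names and values below are hypothetical, since the submission does not describe the actual test harness:

```python
# Hypothetical check of the +/-2 voxel measurement-accuracy criterion:
# compare a tool-reported distance against the engineered ground-truth
# distance of a synthetic test object.
import numpy as np

TOLERANCE_VOXELS = 2.0  # acceptance criterion reported in the summary


def measure_distance(p1, p2) -> float:
    """Euclidean distance between two voxel coordinates."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))


true_distance = 50.0  # known by construction (hypothetical), in voxels
measured = measure_distance((10, 10, 10), (10, 10, 61))  # tool output, say 51.0

assert abs(measured - true_distance) <= TOLERANCE_VOXELS, "accuracy criterion not met"
```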
Study Details:
**1. Sample sizes used for the test set and data provenance:**
- The document does not specify a sample size for a "test set" in the context of image data for diagnostic performance.
- It mentions "extensive software verification testing" including "software subsystem and system-level verification" and "simulated-use, validation testing" for human factors.
- The data provenance of the images used in these tests (e.g., country of origin, retrospective or prospective collection) is not explicitly stated. The device uses patient CT and MRI images, which would come from clinical practice and would implicitly be retrospective for testing purposes unless generated specifically for the study.
**2. Number of experts used to establish the ground truth for the test set and their qualifications:**
- This information is not provided. Since the device is "not intended for diagnosis" and the testing focused on functional verification and usability (measurement accuracy, workflow), the concept of "ground truth" established by experts for diagnostic performance (e.g., presence/absence of disease) is neither applicable nor described in this submission summary. The "ground truth" for the measurement-accuracy test (±2 voxels) would likely be based on known geometric properties of test objects or simulated environments rather than expert interpretation of pathology.
**3. Adjudication method for the test set:**
- Not applicable; the document does not describe a diagnostic performance study requiring expert adjudication of cases.
**4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:**
- No, an MRMC comparative effectiveness study was not done.
- The device is a visualization and planning tool, not an AI for diagnosis or a system designed to directly improve human reader performance in interpreting images for diagnostic tasks. Its purpose is to aid in ablation procedures by displaying, annotating, and volume rendering medical images.
**5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:**
- The document does not describe a standalone performance evaluation in the typical sense of an AI algorithm making a diagnostic decision. Although the device is itself a "stand-alone software product" (SaMD), its function is to aid visualization and planning, not autonomous decision-making; "standalone" here means the software is distinct from a hardware device. Performance testing focused on software functions, DICOM compliance, and measurement accuracy, which are "algorithm-only" in the sense that the software must correctly perform its programmed tasks.
**6. The type of ground truth used:**
- For measurement accuracy: the document states that "system-level testing was conducted to verify the application's measurement accuracy (±2 voxels)." The ground truth for this would likely be an engineered or known value within test objects or simulated datasets, not expert consensus or pathological findings from real patients.
- For functional correctness: The ground truth is the expected behavior and output of the software as per its design specifications and standard compliance (e.g., DICOM standard conformance, correct display of images, faithful representation of ablation zones based on look-up tables).
- For usability: The ground truth is user needs and expectations, assessed through human factors engineering and simulated-use validation testing.
**7. The sample size for the training set:**
- The document does not mention a "training set" in the context of machine learning or AI models that learn from data. The device's description suggests it primarily uses rule-based logic, such as the "zone charts (look-up tables)" it references for ablation zones (see the sketch after this list), and established imaging principles, rather than a deep-learning model requiring a large training dataset.
**8. How the ground truth for the training set was established:**
- Not applicable, as no training set for a machine learning model is described. The "zone charts" mentioned for ablation zone prediction are likely derived from preclinical studies or physical principles of the ablation systems, not from ground truth established by experts interpreting images.
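To make the rule-based behavior concrete, here is a minimal sketch of a zone-chart look-up of the kind described above; the power/time settings and zone dimensions are invented for illustration, as the actual Emprint™ zone charts come from the manufacturer's system characterization and are not reproduced in the document:

```python
# Hypothetical zone-chart look-up: map a (power_W, time_min) setting to a
# predicted ablation-zone size (length x diameter, in cm). Illustrative
# values only; the real tables are the manufacturer's characterization data.
ZONE_CHART: dict[tuple[float, float], tuple[float, float]] = {
    (100.0, 2.5): (2.9, 2.6),   # (length_cm, diameter_cm) -- made-up numbers
    (100.0, 5.0): (3.6, 3.2),
    (100.0, 10.0): (4.2, 3.8),
}


def predicted_zone(power_w: float, time_min: float) -> tuple[float, float]:
    """Return the charted ablation-zone dimensions for a supported setting."""
    try:
        return ZONE_CHART[(power_w, time_min)]
    except KeyError:
        raise ValueError("setting not characterized in the zone chart") from None


print(predicted_zone(100.0, 5.0))  # (3.6, 3.2)
```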
§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).