MIM 3.5 (CIRCA) is a software package that provides the physician with the means to display, register, and fuse medical images from multiple modalities. Additionally, it evaluates cardiac left ventricular function and perfusion, including left ventricular end-diastolic volume, end-systolic volume, and ejection fraction. Finally, the ROI feature reduces the time necessary for the physician to define objects in medical image volumes by providing an initial definition of object contours. The objects include, but are not limited to, tumors and organs.
The MIM software program should be used for the registration, fusion, and display of medical images from multiple modalities, such as SPECT, PET, CT, and MRI. MIM assists in the definition of structures in medical images, including tumors, organs, and the cardiac left ventricular cavity.
MIM 3.5 (CIRCA) is a software package designed for use in diagnostic imaging. It is a standalone software package which operates on Windows 2000/XP. Its intended function and use is to provide the physician with the means to display, register, and fuse medical images from multiple modalities including DICOM PET, ECAT PET, SPECT, CT, and MRI. Additionally, it evaluates cardiac left ventricular function and perfusion, including left ventricular end-diastolic volume, end-systolic volume, ejection fraction, volumetric curve, and ROI contouring.
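The 510(k) summary does not describe how MIM computes these cardiac metrics, but the standard clinical relationship among them is well established: ejection fraction is the fraction of the end-diastolic volume expelled per beat. A minimal sketch of that textbook formula in Python (the function name and input validation are illustrative, not taken from the submission):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (percent).

    Standard formula: EF = (EDV - ESV) / EDV * 100,
    where EDV is end-diastolic volume and ESV is
    end-systolic volume, both in milliliters.
    """
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example with typical values: EDV = 120 mL, ESV = 50 mL
ef = ejection_fraction(120.0, 50.0)  # ~58.3%
```

Any software reporting EF, such as the device described here, would derive EDV and ESV from segmented ventricular volumes across the cardiac cycle before applying this relationship.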
The provided text makes a general statement that "MIMvista has conducted performance and functional testing on the MIM 3.5 (CIRCA) software. In all cases, the software passed its performance requirements and met specifications." However, it does not provide specific acceptance criteria or details about the study that proves the device meets those criteria.
Therefore, I cannot populate the table or answer most of your questions based on the given document.
Here's a breakdown of what can be inferred or directly stated from the document regarding the study, and where information is missing:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (e.g., Accuracy, Precision, Recall) | Reported Device Performance |
|---|---|
| Not provided in the document | Not provided in the document |
| (The document only states the software "passed its performance requirements and met specifications.") | (No specific metrics or quantitative results are given.) |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not specified.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Not specified.
- Qualifications of experts: Not specified.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Adjudication method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC study: Not specified as having been performed.
- Effect size: Not applicable, as an MRMC study is not mentioned. The device's stated function is to provide tools for display, registration, fusion, and evaluation, and to assist in defining structures, implying a human-in-the-loop context, but no comparative study vs. human readers alone is described.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone study: The document implies standalone software functionality ("standalone software package"), and the general statement that the software "passed its performance requirements and met specifications" refers to the software itself. However, specific details of a standalone performance study with metrics are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of ground truth: Not specified.
8. The sample size for the training set
- Training set size: Not specified. (It's worth noting that at the time of this 510(k), the term "training set" in the context of machine learning was not as prevalent or explicitly required for device submissions as it is now. This device is described as "Medical Imaging Software" rather than an AI/ML algorithm in the modern sense.)
9. How the ground truth for the training set was established
- Ground truth establishment for training set: Not specified.
Summary of available information regarding the study:
- What was done: Performance and functional testing.
- Outcome: The software passed its performance requirements and met specifications.
- Details missing: Specific metrics, study design, sample sizes, ground truth methodology, expert involvement, and comparative studies are all absent from this 510(k) summary. The submission focuses on substantial equivalence to predicate devices, which was primarily based on intended use and technical characteristics, not necessarily detailed clinical performance studies as would be expected for novel AI devices today.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).