K Number
K960071
Device Name
IMAGEFUSION
Date Cleared
1996-04-17

(103 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
N/A
Intended Use

Image processing and comparing an MR and a CT image set or two different CT image sets.

Device Description

The ImageFusion system, addressed in this premarket notification, has the same intended use and technological characteristics as the commercially available StereoPlan system. Like StereoPlan, the ImageFusion system includes an image processing work station used to evaluate, manipulate, and compare MR and CT image data. In addition, ImageFusion software can reconstruct (fuse) nonstereotactic MR images into the image space of a reference CT stereotactic image set for subsequent stereotactic use, eliminating the need for the localizing hardware required in StereoPlan to define stereotactic locations in MR images. Subsequently, fused images can be used in the treatment planning for stereotactic neurosurgery, radiosurgery and radiotherapy procedures in the same way that supplementary stereotactic MR or CT images are utilized in StereoPlan.
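The submission does not disclose ImageFusion's registration algorithm. For illustration only, the sketch below shows one conventional way to fuse landmark coordinates from a nonstereotactic MR set into a reference CT stereotactic frame: a least-squares rigid fit over paired landmarks (the Kabsch algorithm). All names and the sample landmarks are hypothetical.

```python
import numpy as np

def rigid_register(mr_pts, ct_pts):
    """Estimate the rigid transform (R, t) mapping paired MR landmark
    coordinates onto CT stereotactic coordinates via a least-squares
    fit (Kabsch algorithm). Illustrative only; not the ImageFusion method."""
    mr = np.asarray(mr_pts, float)
    ct = np.asarray(ct_pts, float)
    mr_c, ct_c = mr.mean(axis=0), ct.mean(axis=0)
    H = (mr - mr_c).T @ (ct - ct_c)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct_c - R @ mr_c
    return R, t

# Hypothetical example: four MR landmarks related to CT space by a
# known rotation and translation, which the fit should recover.
mr = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
ct = mr @ R_true.T + np.array([2.0, -1.0, 3.0])

R, t = rigid_register(mr, ct)
fused = mr @ R.T + t                           # MR landmarks expressed in CT space
```

With noise-free landmarks the fit recovers the transform to numerical precision; with real image data, residual landmark error (as reported in the table below) reflects imaging distortion and landmark localization uncertainty.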

AI/ML Overview

This document is a Summary of Safety and Effectiveness for a medical device called ImageFusion. It's a premarket notification (K960071) from 1996, which is quite old. As such, the level of detail regarding study design, ground truth establishment, and contemporary AI/ML evaluation metrics (like specific sensitivities, specificities, AUC) is not present. The language reflects the regulatory expectations of that era.

Here's an attempt to extract and infer the requested information based on the provided text, noting where information is explicitly stated, implied, or absent.


Acceptance Criteria and Device Performance

| Criteria | Reported Device Performance |
| --- | --- |
| Registration Accuracy (MR to CT space) | Average: 1.5 ± 0.6 mm for individual landmarks; Maximum: 2.5 mm for individual landmarks |
| Bone Segmentation Accuracy | Verified as "accurate" during system and unit testing (no specific numerical metric provided) |
| Landmark Alignment Accuracy | Verified as "accurate" during system and unit testing (no specific numerical metric provided) |

Note: The document specifies "registration of MR images in stereotactic CT space is accurate" and provides numerical values for this accuracy. For "bone segmentation and landmark alignment," it only states that these features are "accurate" based on system and unit testing, without providing quantitative metrics or specific acceptance criteria.
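The reported figures (average ± deviation and a maximum over individual landmarks) can be reproduced from per-landmark Euclidean errors. A minimal sketch, with hypothetical landmark positions chosen purely for illustration:

```python
import numpy as np

def registration_error_stats(fused_pts, reference_pts):
    """Per-landmark Euclidean registration error, summarized the way the
    510(k) summary reports it: mean, standard deviation, and maximum."""
    errors = np.linalg.norm(np.asarray(fused_pts, float)
                            - np.asarray(reference_pts, float), axis=1)
    return errors.mean(), errors.std(), errors.max()

# Hypothetical landmark coordinates (mm); errors here are 0.9, 1.2, and 2.0 mm.
fused = np.array([[0.9, 0.0, 0.0], [10.0, 1.2, 0.0], [0.0, 10.0, 2.0]])
ref   = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
mean_e, std_e, max_e = registration_error_stats(fused, ref)
```

The summary's 1.5 ± 0.6 mm average and 2.5 mm maximum would be the outputs of such a computation over the (unreported) set of test landmarks.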

Study Information

2. Sample size used for the test set and the data provenance:

  • Test Set Sample Size: Not explicitly stated. The text mentions "system testing" and "unit testing" but does not specify the number of cases or landmarks used in these tests.
  • Data Provenance: Not specified. It's an older submission, and such details (country of origin, retrospective/prospective nature) were often not explicitly required or documented in this section.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Not specified. The document does not describe the involvement of human experts in establishing ground truth for the "system testing" mentioned. It's possible the accuracy was determined by comparing the device's output to a known phantom or a previously established manual registration, but this is not detailed.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

  • Not specified. The description of testing is very high-level and does not include details on adjudication methods.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:

  • No, an MRMC comparative effectiveness study was not done. The device (ImageFusion) is described as an "image processing and comparing" system for MR and CT images used in treatment planning for stereotactic neurosurgery, radiosurgery, and radiotherapy. The focus here is on the accuracy of the image fusion itself, not on aiding human readers in diagnosis or interpretation compared to a baseline. The device's role is to merge images for subsequent use, not to make or assist in making a diagnostic interpretation typically associated with MRMC studies in AI.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

  • Yes, a standalone performance assessment was done. The described "system testing" verifies the registration accuracy of MR images into CT space (1.5 ± 0.6 mm on average, 2.5 mm maximum) and the accuracy of features like bone segmentation and landmark alignment. This assessment focuses purely on the algorithm's output (the accuracy of the fused images) without explicitly describing a human-in-the-loop component for this specific accuracy verification. The device is used by humans, but the performance metrics provided are for the algorithmic output.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

  • Not explicitly stated, but likely based on precise measurements or phantoms. Given the nature of "registration accuracy" for "individual landmarks," the ground truth was most likely established through highly precise physical phantoms (where landmark positions are known) or possibly through meticulous manual registration performed by an expert against a known reference, followed by quantitative measurement of the difference. Pathology or outcomes data are not relevant for assessing image registration accuracy.

8. The sample size for the training set:

  • Not applicable / Not specified. The document describes a traditional software system, not a machine learning or AI algorithm in the modern sense that typically requires a distinct "training set." While software development involves testing and iterative refinement, the concept of a separate "training set" for model learning is absent from this type of regulatory submission from 1996. The device's functionality is based on deterministic algorithms for image processing and registration.

9. How the ground truth for the training set was established:

  • Not applicable. As a traditional software system, there isn't a "training set" in the machine learning context. Therefore, the establishment of ground truth for such a set is not relevant.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).