K Number: K120734
Manufacturer: (not stated)
Date Cleared: 2012-08-02 (146 days)
Product Code: (not stated)
Regulation Number: 892.2050
Panel: RA
Reference & Predicate Devices: (not stated)
Intended Use

The Microsoft® Amalga™ UIS Image Processing Module (IPM) is used in conjunction with Microsoft® Amalga™ UIS and Microsoft® Amalga™ UIS Medical Imaging Module to receive medical images from acquisition devices and imaging systems adhering to DICOM protocol. Medical images received from volumetric or planar imaging modalities are processed to derive certain information. The information thus derived is transmitted using the DICOM or HTTP protocol to other devices supporting these standard protocols.
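The summary states that IPM ingests images "adhering to DICOM protocol." The product's actual ingestion code is not described; as a minimal illustration of how a receiver can recognize a DICOM Part 10 file, the standard check is a 128-byte preamble followed by the magic bytes `DICM` (this sketch is not part of the cleared device):

```python
def is_dicom_part10(path: str) -> bool:
    """Check whether a file looks like a DICOM Part 10 file:
    a 128-byte preamble followed by the magic bytes b'DICM'
    at offset 128, as defined in DICOM PS3.10."""
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"
```

Real ingestion would go on to parse the File Meta Information group, but the magic-byte check is the conventional first gate before handing a file to a full DICOM parser.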

The IPM enables physicians and other healthcare providers to rapidly navigate across images by selecting tagged organs and review them through multi-planar reconstruction (MPR). IPM is not intended to be used for primary diagnosis.

This device is not intended to be used for mammography.

Device Description

The Microsoft® Amalga™ UIS Image Processing Module (IPM) is used in conjunction with Microsoft® Amalga™ UIS and Microsoft® Amalga™ UIS Medical Imaging Module (MIM). These two software products (Amalga™ UIS and Amalga™ MIM) are Class I devices that perform consolidation of disparate health information (Amalga™ UIS) and medical image communication (Amalga™ MIM). Amalga™ IPM enables healthcare providers to rapidly navigate across images by selecting tagged organs and to review these images using multi-planar reconstruction (MPR).

Amalga™ IPM delivers a basic MPR rendering platform that constructs a three-dimensional view from a set of CT, MR, or PET images. Amalga™ IPM also enables organ-label-based navigation through DICOM CT scans in both 2D image display and MPR viewing mode.
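The core idea behind MPR is that a stack of axial slices forms a 3D volume from which orthogonal planes can be resampled. The document does not describe IPM's rendering internals; the following sketch only illustrates the resampling concept, assuming a volume stored as nested lists indexed `volume[z][y][x]`:

```python
def coronal_slice(volume, y):
    """Extract a coronal plane from an axial stack volume[z][y][x]:
    hold y fixed, vary slice index z and column x."""
    return [[volume[z][y][x] for x in range(len(volume[0][0]))]
            for z in range(len(volume))]

def sagittal_slice(volume, x):
    """Extract a sagittal plane: hold x fixed, vary z and y."""
    return [[volume[z][y][x] for y in range(len(volume[0]))]
            for z in range(len(volume))]
```

A production renderer would also handle anisotropic voxel spacing and interpolation for oblique planes, which this sketch omits.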

AI/ML Overview

Here’s a summary of the acceptance criteria and study details for the Microsoft® Amalga™ UIS Image Processing Module:

1. Table of Acceptance Criteria and Reported Device Performance

| Feature/Metric | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Accuracy of image tagging algorithm | Exceeding threshold of 81% | Ranged between 84% and 99% for 21 organs |
| Multiplanar Reconstruction (MPR) function | Operates properly | Confirmed to operate properly |
| Navigation using semantic tagging of organs | Operates properly | Confirmed to operate properly |

2. Sample Size Used for the Test Set and Data Provenance

The exact sample size (number of images or patients) for the test set is not explicitly stated. The document mentions "human images from a variety of databases."

  • Data Provenance: The data came from "a variety of databases" comprising patients with "a wide variety of medical conditions and body shapes." Scans exhibited "large differences in image cropping, resolution, scanner type and use of contrast agents." The country of origin is not specified, and the data are retrospective (drawn from existing databases).

3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

  • Number of Experts: The document refers to "physician identified" boundaries for comparison, implying that multiple physicians were involved, but the exact number is not specified.
  • Qualifications of Experts: The experts are referred to as "physicians." Their specific specialties (e.g., radiologists) or years of experience are not provided.

4. Adjudication Method for the Test Set

The adjudication method used to establish ground truth is not specified. It mentions "physician identified" boundaries as the basis for comparison with the software's performance, but the process for resolving discrepancies among physicians or establishing a consensus is not detailed.
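Since the document does not describe how disagreements between physicians were resolved, no method can be attributed to this submission. For context only, one common adjudication approach for segmentation ground truth is a pixel-wise majority vote across readers' masks; a hypothetical sketch, with masks represented as sets of pixel coordinates:

```python
from collections import Counter

def majority_vote(masks):
    """Consensus region from several readers' masks (each a set of
    pixel coordinates): keep pixels marked by a strict majority."""
    counts = Counter(px for mask in masks for px in mask)
    threshold = len(masks) / 2
    return {px for px, n in counts.items() if n > threshold}
```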

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on the standalone performance of the algorithm for image tagging.

6. Standalone (Algorithm Only) Performance Study

Yes, a standalone study was conducted. The "image tagging algorithm" was validated for its accuracy, sensitivity, specificity, and precision in identifying organ boundaries. The reported performance metrics (accuracy, sensitivity, specificity, precision) were determined for the algorithm itself.
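The summary names four metrics but gives no formulas or confusion-matrix counts. Using the standard definitions (not figures from the submission), the metrics derive from true/false positive and negative counts as follows:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard definitions of the four metrics named in the summary,
    computed from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision":   tp / (tp + fp),   # positive predictive value
    }
```

For example, `classification_metrics(90, 5, 80, 10)` yields an accuracy of about 0.92 on those hypothetical counts; the actual counts underlying the 84%–99% per-organ accuracies are not reported.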

7. Type of Ground Truth Used

The type of ground truth used for the test set was expert consensus / expert identification. Specifically, "physician identified" boundaries were used as the reference against which the software's identified boundaries were compared.
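The document does not say which overlap metric was used to compare the software's boundaries against the physician-identified ones. A widely used choice for this kind of region agreement is the Dice coefficient; a sketch, assuming regions are represented as sets of pixel coordinates:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two regions given as sets of pixel
    coordinates: 2*|A ∩ B| / (|A| + |B|). 1.0 means identical."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```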

8. Sample Size for the Training Set

The sample size for the training set is not explicitly stated. The document only mentions using "human images from a variety of databases" for validation. It doesn't differentiate between training and validation/test sets in terms of sample size.

9. How Ground Truth for the Training Set Was Established

The document does not explicitly state how the ground truth for the training set was established. It only refers to "physician identified" boundaries for the validation/test phase. It's implied that the training data would also have had some form of expert-derived ground truth, but the method is not detailed.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).