K Number
K032186
Device Name
AUTOALIGN
Date Cleared
2003-10-14

(89 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
N/A
Intended Use

AutoAlign™ Atlas-Based Image Registration software is intended to provide an output registration matrix that may be utilized to align an MRI brain scan to a known and consistent anatomic orientation, a process known as image registration. AutoAlign™ Atlas-Based Image Registration software is intended to be marketed as a software device that can provide improvements to the manual processes of image registration. The dominant use of AutoAlign™ Atlas-Based Image Registration software is its integration into proprietary MR image software packages by MRI scanner manufacturers to allow users to generate consistent patient image registrations for image acquisition, a process otherwise known as AutoSlice Prescriptioning.

Device Description

AutoAlign™ Atlas-Based Image Registration software is intended to provide an output registration matrix that may be utilized to align an MRI brain scan to a known and consistent anatomic orientation, a process known as image registration. AutoAlign™ Atlas-Based Image Registration software is intended to be marketed as a software device that can provide improvements to the manual processes of image registration. The dominant use of AutoAlign™ Atlas-Based Image Registration software is its integration into proprietary MR image software packages by MRI scanner manufacturers to allow users to generate consistent patient image registrations for image acquisition, a process otherwise known as AutoSlice Prescriptioning.
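The device's output is a registration matrix rather than a resampled image; applying that matrix is left to the integrating MR software. As a rough illustration of what a downstream consumer of such a matrix might do, the sketch below resamples a volume with a 4x4 homogeneous matrix. The function name, matrix convention, and use of scipy are assumptions for illustration, not details from the 510(k) summary.

```python
# Minimal sketch (not the vendor's code): applying a 4x4 output registration
# matrix to resample an MR volume into an atlas-aligned orientation.
import numpy as np
from scipy.ndimage import affine_transform

def apply_registration(volume: np.ndarray, registration: np.ndarray) -> np.ndarray:
    """Resample `volume` with a 4x4 homogeneous registration matrix.

    Assumes `registration` maps atlas-aligned (output) voxel coordinates to
    the original scan's voxel coordinates, the convention scipy expects.
    """
    rotation = registration[:3, :3]   # rotation/scale/shear block
    offset = registration[:3, 3]      # translation component
    # Trilinear interpolation (order=1) is a common choice for MR volumes.
    return affine_transform(volume, rotation, offset=offset, order=1)

# Example: an identity registration leaves a synthetic volume unchanged.
if __name__ == "__main__":
    vol = np.random.rand(64, 64, 64).astype(np.float32)
    aligned = apply_registration(vol, np.eye(4))
    assert np.allclose(aligned, vol)
```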

AutoAlign™ Atlas-Based Image Registration has a feedback mechanism that measures and reports alignments that may fall outside the stated specifications. This is reported as a "Measurement Index" value, the average Mahalanobis distance between the voxel intensities at all atlas points and the patient images supplied for alignment.
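The summary gives only that one-line description of the Measurement Index. The sketch below shows the general form of an average Mahalanobis distance over atlas points; the array shapes, names, and per-point covariance representation are assumptions for illustration, not the vendor's implementation.

```python
# Illustrative form of an "average Mahalanobis distance over atlas points";
# the actual AutoAlign computation is not published in the 510(k) summary.
import numpy as np

def measurement_index(patient_intensities: np.ndarray,
                      atlas_means: np.ndarray,
                      atlas_covariances: np.ndarray) -> float:
    """Average Mahalanobis distance over N atlas points.

    patient_intensities: (N, C) multispectral intensities sampled at the
                         aligned atlas point locations.
    atlas_means:         (N, C) expected intensities at each atlas point.
    atlas_covariances:   (N, C, C) intensity covariance at each atlas point.
    """
    diffs = patient_intensities - atlas_means            # (N, C)
    inv_covs = np.linalg.inv(atlas_covariances)          # (N, C, C)
    # Quadratic form d^T * Sigma^-1 * d for every atlas point at once.
    sq = np.einsum('ni,nij,nj->n', diffs, inv_covs, diffs)
    return float(np.mean(np.sqrt(sq)))
```

A higher index would indicate that patient intensities at the atlas points sit farther from the expected distribution, flagging a potentially out-of-specification alignment for operator review.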

AI/ML Overview

The provided document describes the validation of the AutoAlign™ Atlas-Based Image Registration software. Here's a breakdown of the acceptance criteria and the study that proves the device meets them:

1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criterion (Intended Use Claims) | Reported Device Performance (Mean ± Standard Deviation) |
|---|---|
| a) Inter-subject variability of AC: ≤ 15 mm | Mean distance between individual AC and reference: 3.90 mm (± 3.38 mm) |
| b) Inter-subject variability of PC: ≤ 13 mm | Mean distance between individual PC and reference: 2.69 mm (± 1.34 mm) |
| c) Inter-subject variability of IHP (sagittal views): ≤ 6 mm | Mean position of the IHP: -0.285 mm (standard deviation not explicitly stated against the 6 mm criterion; IHP position is part of the overall dispersion calculation) |
| d) Inter-subject variability of angle formed by IHP and anterior-posterior line (axial views): ≤ 5 degrees | Mean angle (beta): 0.789 degrees (± 1.13 degrees) |
| e) Inter-subject variability of angle formed by IHP and superior-inferior line (coronal views): ≤ 7 degrees | Mean angle (gamma): -0.465 degrees (± 0.717 degrees) |
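Purely as an illustration of reading the table above, the sketch below compares each reported mean against its stated limit. The numbers are copied from the table; the pass/fail summary logic is an assumption, not part of the submission.

```python
# Reported means vs. stated inter-subject variability limits (values from the
# table above); the comparison itself is an illustrative assumption.
criteria = {
    # name: (reported mean, acceptance limit)
    "AC distance (mm)":  (3.90,  15.0),
    "PC distance (mm)":  (2.69,  13.0),
    "IHP position (mm)": (-0.285, 6.0),
    "beta angle (deg)":  (0.789,  5.0),
    "gamma angle (deg)": (-0.465, 7.0),
}

for name, (mean, limit) in criteria.items():
    status = "within limit" if abs(mean) <= limit else "exceeds limit"
    print(f"{name}: |{mean}| <= {limit} -> {status}")
```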

2. Sample Size for the Test Set and Data Provenance

  • Sample Size: 259 MR image volumes.
  • Data Provenance: Retrospective, anonymous, low-resolution multispectral MR scans from actual adult subjects (ages 15-89) with both normal and abnormal pathologies, supplied by Siemens AG, Erlangen, Germany.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

  • Number of Experts: One expert.
  • Qualifications: Ph.D. trained in neurosciences.

4. Adjudication Method for the Test Set

The document does not describe an explicit adjudication method involving multiple experts for the ground truth. Instead, it states that "Post alignment measurements were made by an expert" (Ph.D. trained in neurosciences). Thus, the ground truth was established by a single expert.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The study focused on the standalone performance of the algorithm.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

Yes, a standalone study was done. The "Effectiveness" section describes testing the AutoAlign software's ability to align MR Neuro images. The measurements were made post-alignment by a single expert, indicating an evaluation of the algorithm's output. The "Measurement Index" serves as a safety mechanism for operator review but is not part of the core performance validation against the specified criteria, which are purely algorithmic accuracy metrics.

7. The Type of Ground Truth Used

The ground truth consisted of anatomical landmark and angle measurements made by a single expert. The expert measured the aligned images to determine the positions of the anterior commissure (AC), posterior commissure (PC), and inter-hemispheric plane (IHP), as well as the specified angles.
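The two reported angles (beta and gamma) describe how far the estimated inter-hemispheric plane deviates from the anterior-posterior and superior-inferior directions. A minimal geometric sketch of how such angles could be derived from an IHP normal vector follows; the coordinate convention and function are assumptions, since the summary does not describe the expert's measurement protocol.

```python
# Illustrative geometry only (an assumption, not the expert's protocol):
# deriving the beta and gamma angles from an estimated IHP normal vector.
import numpy as np

def ihp_angles(ihp_normal: np.ndarray) -> tuple:
    """Return (beta, gamma) in degrees.

    beta:  angle between the IHP and the anterior-posterior line,
           measured in the axial plane.
    gamma: angle between the IHP and the superior-inferior line,
           measured in the coronal plane.
    Assumes axes ordered (left-right, anterior-posterior, superior-inferior)
    and that a perfectly aligned IHP has normal (1, 0, 0).
    """
    n = ihp_normal / np.linalg.norm(ihp_normal)
    # Tilt of the normal within the axial plane (L-R vs. A-P components).
    beta = np.degrees(np.arctan2(n[1], n[0]))
    # Tilt of the normal within the coronal plane (L-R vs. S-I components).
    gamma = np.degrees(np.arctan2(n[2], n[0]))
    return float(beta), float(gamma)

# Example: a plane tilted ~1 degree toward anterior gives beta of ~1 degree.
print(ihp_angles(np.array([1.0, np.tan(np.radians(1.0)), 0.0])))
```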

8. The Sample Size for the Training Set

The document does not explicitly state the sample size for the training set. It mentions the "embedded reference neuroanatomic Atlas" but does not detail its creation or the data used to train the AutoAlign algorithm. The 259 cases were used to "validate and test the efficacy" of the software, implying they were a test set, not a training set.

9. How the Ground Truth for the Training Set Was Established

The document does not describe how the ground truth for the training set (i.e., the "embedded reference neuroanatomic Atlas") was established.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).