K Number
K183019
Date Cleared
2019-03-19

(139 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
Intended Use

SIS Software is an application intended for use in the viewing and presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).

Device Description

SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).

SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The version of the software that is the subject of the current submission (Version 3.3.0) can also be employed to co-register a post-operative CT scan with the pre-operative clinical scan of the same patient (on which the software has already visualized the STN) and, where needed, to perform segmentation in the CT image, to further assist with visualization.
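The co-registration step described above (aligning a post-operative CT with the pre-operative MRI on which the STN has already been visualized) is a standard multimodal rigid-registration task. Below is a minimal sketch of that general workflow, assuming SimpleITK and placeholder file names; it is illustrative only and not the SIS Software implementation.

```python
# Illustrative multimodal rigid registration of a post-operative CT to a
# pre-operative MRI, assuming SimpleITK. File names are placeholders; this is
# a generic sketch, not the SIS Software pipeline.
import SimpleITK as sitk

fixed = sitk.ReadImage("preop_mri.nii.gz", sitk.sitkFloat32)   # pre-op clinical MRI
moving = sitk.ReadImage("postop_ct.nii.gz", sitk.sitkFloat32)  # post-op CT

# Rough initial alignment of the two volumes' geometric centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
# Mutual information handles the different intensity characteristics of CT vs. MRI.
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.05)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)

# Resample the CT into the MRI space so both volumes can be viewed in one frame.
ct_in_mri_space = sitk.Resample(moving, fixed, transform,
                                sitk.sitkLinear, 0.0, moving.GetPixelID())
sitk.WriteImage(ct_in_mri_space, "postop_ct_in_preop_space.nii.gz")
```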

AI/ML Overview

Here is a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria and performance data are presented for three main functionalities: STN Visualization, Co-Registration, and Segmentation.

Functionality: STN Visualization
Acceptance Criteria: 90% of center of mass distances and surface distances not greater than 2.0 mm, and a rate of successful visualizations significantly greater than the conservative literature estimate of 20%.
Reported Device Performance: 98.3% of center of mass distances were not greater than 2.0 mm (95% CI: 91-100%) and 100% of surface distances were not greater than 2.0 mm (95% CI: 94-100%). 90% of center of mass distances were below 1.66 mm and 90% of surface distances were below 0.63 mm. The rate of successful visualizations (98.3%), with success defined as a distance to the expert-derived ground truth of not greater than 2.0 mm, was significantly greater than 20%.
  • STN Smoothing Functionality:
    • Metric-Based (Derived from STN Visualization GT): Ground truth for evaluating smoothing was based on "COM, SD and DC" relative to the STN visualization ground truth.
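The STN visualization criteria above are expressed as center-of-mass (COM) and surface distances between the automated visualization and the expert-derived ground truth, with a 2.0 mm threshold, plus a comparison of the success rate against a 20% literature estimate. Below is a minimal sketch of how such metrics and a one-sided test could be computed for binary masks, assuming NumPy/SciPy; the arrays, voxel spacing, and counts are placeholders, not the submission's actual data or evaluation code.

```python
# Generic sketch of COM-distance, surface-distance, and success-rate checks for
# binary segmentation masks. Assumes NumPy/SciPy; not the sponsor's evaluation code.
import numpy as np
from scipy import ndimage
from scipy.stats import binomtest

def com_distance_mm(pred, gt, spacing):
    """Euclidean distance (mm) between the centers of mass of two boolean masks."""
    com_pred = np.array(ndimage.center_of_mass(pred)) * spacing
    com_gt = np.array(ndimage.center_of_mass(gt)) * spacing
    return float(np.linalg.norm(com_pred - com_gt))

def mean_surface_distance_mm(pred, gt, spacing):
    """Symmetric mean surface distance (mm) between two boolean masks."""
    surf_pred = pred & ~ndimage.binary_erosion(pred)   # boundary voxels of prediction
    surf_gt = gt & ~ndimage.binary_erosion(gt)         # boundary voxels of ground truth
    dist_to_gt = ndimage.distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    d = np.concatenate([dist_to_gt[surf_pred], dist_to_pred[surf_gt]])
    return float(d.mean())

spacing = (1.0, 1.0, 1.0)            # voxel size in mm (placeholder)
# pred, gt = ...                     # boolean 3D arrays: automated vs. expert STN masks
# success = (com_distance_mm(pred, gt, spacing) <= 2.0 and
#            mean_surface_distance_mm(pred, gt, spacing) <= 2.0)

# One-sided test that an observed success rate exceeds the 20% literature estimate.
# The counts here are hypothetical, chosen only to illustrate the call.
n_success, n_total = 59, 60
print(binomtest(n_success, n_total, p=0.20, alternative="greater").pvalue)
```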

8. The Sample Size for the Training Set

  • The document states that the STN visualization validation data set (68 STNs) was "completely separate from the data set that was used for development" and "none were used to optimize or design the company's software."
  • Regarding the anomaly detection component, it mentions "two separate commonly used outlier detection machine learning models were trained using the brains from the training set." The specific sample size for this training set is not provided.
  • For co-registration, there's no mention of a training set as it appears to be a direct registration process, not a machine learning model.
  • For segmentation, it's not explicitly stated if a training set was used for the automated segmentation; the validation focuses on the comparison to expert ground truth.

9. How the Ground Truth for the Training Set Was Established

  • For the anomaly detection component, it states the models were "trained using the brains from the training set, from which the same brain geometry characteristics were extracted," and it describes how the resulting anomaly scores were combined (a generic sketch of this kind of score combination follows this list). However, the method for establishing the ground truth on this training set (i.e., what constituted an "anomaly" vs. a "non-anomaly" during training) is not detailed in the provided text. It presumably involved similar principles of accurate vs. inaccurate visualizations, but the source and method of that ground truth for training are not specified.
  • For any other machine learning components (like the core STN visualization algorithm), the document states the methodology "relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI)." The algorithm "uses the 7T images from a database to find regions of interest within the brain (e.g., the STN) on a patient's clinical (1.5 or 3T MRI) image." This implies the 7T MRI data serves as a form of ground truth for training the algorithm to identify STNs on clinical MRI, but the specific process of creating that ground truth from the 7T data (e.g., manual segmentation by experts on 7T) is not detailed.
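As noted above, the anomaly detection component reportedly trains two commonly used outlier detection models on brain-geometry features and combines their anomaly scores. Below is a minimal sketch of that general pattern, assuming scikit-learn; the feature matrix, model choices, and averaging scheme are illustrative assumptions, not the company's documented method.

```python
# Generic sketch of training two common outlier detectors on brain-geometry
# features and combining their anomaly scores. Assumes scikit-learn; the
# models, features, and averaging scheme are illustrative, not SIS's method.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))   # placeholder geometry features from training brains
X_new = rng.normal(size=(5, 6))       # placeholder features from new cases

scaler = StandardScaler().fit(X_train)
Xt, Xn = scaler.transform(X_train), scaler.transform(X_new)

iso = IsolationForest(random_state=0).fit(Xt)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(Xt)

def to_unit_anomaly(scores, train_scores):
    """Map raw scores to ~[0, 1] using the training-score range; higher = more anomalous."""
    # For both models, lower score_samples values indicate more anomalous cases.
    s, t = -np.asarray(scores, float), -np.asarray(train_scores, float)
    return np.clip((s - t.min()) / (t.max() - t.min() + 1e-12), 0.0, 1.0)

# Combine the two detectors' scores (here, a simple average) into one anomaly score.
combined = 0.5 * (to_unit_anomaly(iso.score_samples(Xn), iso.score_samples(Xt)) +
                  to_unit_anomaly(ocsvm.score_samples(Xn), ocsvm.score_samples(Xt)))
print(combined)
```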

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).