K Number
K241083
Device Name
SIS System
Date Cleared
2024-06-14

(56 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

SIS System is intended for use in the viewing, presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing, visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN), globus pallidus externa and interna (GPe and GPi, respectively) and the ventral intermediate nucleus (Vim) in neurological procedures.

The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement.

Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.

Device Description

The SIS System is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia and thalamus areas of the brain, specifically the subthalamic nucleus (STN), globus pallidus externa and interna (GPe/GPi), and ventral intermediate nucleus (Vim). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.

The SIS System provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. The model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which is comprised of clinical (1.5T and 3T) MR images. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. Where leads have been implanted, the SIS System is further able to locate and identify them in post-operative CT images and place them in relation to the brain structures of interest from the preoperative processing.

The proposed device is a modification to the SIS System version 6.0.0 that was cleared under K230977. The primary change is the addition of a new brain structure to the prediction output: the Vim is now supported, based on pre-trained deep learning neural network models.

AI/ML Overview

The SIS System, incorporating new brain structures for visualization, underwent performance testing to demonstrate substantial equivalence to its predicate device.

1. Table of Acceptance Criteria and Reported Device Performance

| Performance Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Center of Mass Distance (Vim) | 90% of distances $\leq$ 2.00 mm | 90% of distances $\leq$ 1.83 mm |
| Mean Surface Distance (Vim) | 90% of distances $\leq$ 2.00 mm | 90% of distances $\leq$ 0.86 mm |
| Dice Coefficient (Vim) | Mean Dice coefficient $\geq$ 0.6 | Mean Dice coefficient = 0.7 |
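The three reported metrics are standard segmentation-accuracy measures. The sketch below is illustrative only (not the SIS System's actual validation code): it assumes binary voxel masks for the predicted and ground-truth structures, an isotropic-or-anisotropic voxel spacing in millimetres, and NumPy/SciPy implementations of the Dice coefficient, center-of-mass distance, and symmetric mean surface distance, plus a hypothetical `passes_criterion` helper mirroring the "90% of distances $\leq$ 2.00 mm" acceptance rule.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def center_of_mass_distance(pred, truth, spacing=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance (mm) between the two mask centroids."""
    spacing = np.asarray(spacing, dtype=float)
    com_p = np.array(np.nonzero(pred)).mean(axis=1) * spacing
    com_t = np.array(np.nonzero(truth)).mean(axis=1) * spacing
    return float(np.linalg.norm(com_p - com_t))

def _surface_voxels(mask) -> np.ndarray:
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~ndimage.binary_erosion(mask))

def mean_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean of nearest-neighbour distances between surfaces (mm)."""
    spacing = np.asarray(spacing, dtype=float)
    sp = _surface_voxels(pred) * spacing
    st = _surface_voxels(truth) * spacing
    d_pt = cKDTree(st).query(sp)[0]  # pred surface -> truth surface
    d_tp = cKDTree(sp).query(st)[0]  # truth surface -> pred surface
    return float(np.concatenate([d_pt, d_tp]).mean())

def passes_criterion(distances, threshold_mm=2.0, pct=90) -> bool:
    """Hypothetical check of the '90% of distances <= threshold' criterion."""
    return bool(np.percentile(distances, pct) <= threshold_mm)
```

With identical predicted and ground-truth masks, the Dice coefficient is 1.0 and both distance metrics are 0.0; in practice the per-subject distances across the validation cohort would be pooled before applying the percentile criterion.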

2. Sample Size and Data Provenance for Test Set

The document states that a "series of images from clinical subjects were collected" for the visualization accuracy testing. The specific sample size (number of subjects/images) for this test set is not explicitly provided in the given text.

The data provenance for the test set is clinical, described as "images from clinical subjects"; the country of origin is not specified. The document does not state whether collection was retrospective or prospective, but it does establish that the validation data were independent of development: "None of the images from this pivotal validation set were part of the company's database for algorithm development and none were used to optimize or design the SIS's software. This pivotal validation data set was separate from the data set that was used for development. The software development was frozen and labeled before testing on this validation set."

3. Number of Experts and Qualifications for Ground Truth - Test Set

The ground truth for the test set was established by manual segmentation of the Vim. The document does not specify the number of experts who performed these manual segmentations. It also does not specify the qualifications of these experts.

4. Adjudication Method for Test Set

The document does not specify any adjudication method used for establishing the ground truth on the test set. It mentions "manually segmented (as ground truth)" but does not describe any process involving multiple readers or consensus.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

According to the provided text, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. The study focused on the standalone performance of the device's visualization accuracy.

6. Standalone Performance Study

Yes, a standalone performance study was done. The visualization accuracy testing for the Vim was conducted to compare the SIS System's output directly against expert-derived ground truth without human-in-the-loop assistance. The reported performance metrics (Center of Mass Distance, Mean Surface Distance, Dice Coefficient) are all measures of the algorithm's standalone accuracy.

7. Type of Ground Truth Used

The type of ground truth used for the validation of the Vim visualization was expert manual segmentation (as described in point 3; the document does not describe a consensus process). Specifically, the Vim was "manually segmented" on High Field (7T) MRI and DiMANI images.

8. Sample Size for Training Set

The sample size for the training set is not explicitly provided in the text. The document mentions that the "model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which is comprised of clinical (1.5T and 3T) MR images." However, the number of images or subjects in this training dataset is not given.

9. How Ground Truth for Training Set Was Established

The ground truth for the training set was established using ultra-high resolution 7T Magnetic Resonance (MR) images. The document states that these 7T images were used "to determine ground truth for the training dataset." This implies that experts likely performed precise annotations or segmentations on these high-resolution images to create the reference for training the deep learning neural network models.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).