510(k) Data Aggregation (79 days)
The SIS System is an application intended for use in the viewing and presentation of medical images, including different modules for image processing, image fusion, and intraoperative functional planning, where the 3D output can be used with stereotactic image-guided surgery or other processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nucleus (STN) and the globus pallidus externa and interna (GPe and GPi, respectively).
The SIS System version 5.1.0, a software-only device based on machine learning and image processing, is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and the globus pallidus externa and interna (GPe/GPi). The output of the SIS System supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System version 5.1.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image, using pretrained deep learning neural network models. This method incorporates ultra-high-resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set used to train the deep learning models. These pretrained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of that patient's specific brain structures of interest. Where leads have been implanted and are visible in post-operative CT images, the SIS System is further able to locate and identify them and place them in relation to the brain structures of interest from the preoperative processing.
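The document gives no architecture details for the segmentation models, only that pretrained CNNs are applied to a clinical image to predict structure labels. A minimal, purely illustrative sketch of that inference step in PyTorch, with a hypothetical tiny 3-D conv stack standing in for the real pretrained network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained segmentation network: the document
# describes CNN models architected for brain image segmentation but gives no
# architecture details, so this tiny 3-D conv stack is illustrative only.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 4, kernel_size=1),  # 4 hypothetical classes: background, STN, GPe, GPi
)
model.eval()

# Hypothetical clinical MR volume: 1 channel, 32^3 voxels (real volumes are larger)
volume = torch.randn(1, 1, 32, 32, 32)

with torch.no_grad():
    logits = model(volume)         # shape (1, 4, 32, 32, 32): per-voxel class scores
    labels = logits.argmax(dim=1)  # per-voxel predicted structure label

print(labels.shape)  # torch.Size([1, 32, 32, 32])
```

The argmax over the class dimension yields a per-voxel label map, which is the kind of 3D anatomical model the text describes being presented to the surgeon.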
The proposed device is a modification to the SIS Software version 3.6.0 that was cleared under K192304. The changes made to the SIS System include (1) an updated algorithm that is based on deep learning Convolutional Neural Network models that were architected and optimized for brain image segmentation; (2) the addition of new targets for visualization, specifically the globus pallidus externa and interna (GPe/GPi); and (3) the addition of a functionality to determine the orientation of a directional lead, following its segmentation from the post-operative CT image.
The provided text describes the acceptance criteria and the study conducted for the SIS System (Version 5.1.0), a software-only device designed to enhance the visualization of specific brain structures (subthalamic nucleus - STN, and globus pallidus externa and interna - GPe/GPi) using deep learning and image processing.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states that "visualization accuracy testing was conducted for the STN and GPi/GPe structures using the same test methods and acceptance criteria for the previously cleared predicate device." However, the specific numerical acceptance criteria for visualization accuracy (e.g., minimum Dice similarity coefficient, maximum distance errors) are not explicitly provided in the text. The only specific performance metric reported is related to electrode orientation detection.
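The Dice similarity coefficient mentioned above is a standard overlap metric for segmentation accuracy; the document does not report its thresholds or values, so the following is a generic sketch with hypothetical masks, not the device's actual test method:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical 1-D example masks (real use: 3-D voxel masks of the STN/GPi/GPe)
pred = np.array([0, 1, 1, 1, 0])
truth = np.array([0, 0, 1, 1, 1])
print(round(dice_coefficient(pred, truth), 3))  # 2*2 / (3+3) = 0.667
```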
Table: Acceptance Criteria and Reported Device Performance

| Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| STN visualization accuracy | Same as predicate; specific numerical criteria not provided | "Performed similarly to the predicate device" (no specific numbers given) |
| GPi/GPe visualization accuracy | Same as predicate; specific numerical criteria not provided | "Performed similarly to the predicate device" (no specific numbers given) |
| MRI-to-CT registration accuracy | Requirement that registration remain accurate | "Ensure that 3D transformation remains accurate" (no specific numbers) |
| CT image processing (lead segmentation) | Validation of lead segmentation | "Validate the lead segmentation" (no specific numbers) |
| Electrode orientation detection accuracy (trusted detections) | >90% accurate within ±30° | 91% of cases correct within ±30° |
| Electrode orientation detection accuracy (untrusted detections) | Not explicitly stated; mentioned as characterized | Not explicitly reported |
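The MRI-to-CT registration criterion refers to a 3D transformation between image spaces. The document gives no numbers or method details, but such registrations are conventionally expressed as a 4x4 homogeneous transform; a minimal sketch with hypothetical values:

```python
import numpy as np

# Hypothetical rigid transform from post-operative CT space to pre-operative MR
# space: a 90-degree rotation about z plus a translation (illustrative values only).
theta = np.pi / 2
T = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  5.0],
    [np.sin(theta),  np.cos(theta), 0.0, -2.0],
    [0.0,            0.0,           1.0, 10.0],
    [0.0,            0.0,           0.0,  1.0],
])

def transform_point(T: np.ndarray, p) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    return (T @ np.array([x, y, z, 1.0]))[:3]

# A hypothetical lead-tip coordinate in CT space (mm)
ct_point = (1.0, 0.0, 0.0)
mr_point = transform_point(T, ct_point)
print(np.round(mr_point, 3))  # [ 5. -1. 10.]
```

Verifying that such a transform "remains accurate" after a software change typically means re-checking that known corresponding points map to each other within a tolerance, though the document does not specify the procedure used.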
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size:
- For electrode orientation detection: 43 CT image series that contained 55 leads.
- For visualization accuracy, MRI-to-CT registration, and lead segmentation: The text does not explicitly state the sample sizes for these tests; it mentions only that testing was "repeated to validate that the modified software functions as specified."
- Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- The document does not explicitly state the number of experts used to establish ground truth for the test set.
- It mentions that ultra-high-resolution 7T (7 Tesla) Magnetic Resonance images were used "to determine ground truth for the training data set." It does not directly link this to human experts for the test set.
4. Adjudication Method for the Test Set:
- The document does not mention a specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
- No, an MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not mentioned in the provided text. The study focuses on evaluating the device's performance in segmentation and lead detection, not its impact on human reader performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:
- Yes, the performance data described, particularly for visualization accuracy and electrode orientation detection, appears to be that of the standalone algorithm (without human-in-the-loop performance measurement). The assessments of "accuracy within ± 30°" and "performed similarly to the predicate device" refer to the output of the software itself.
7. The Type of Ground Truth Used:
- Expert Consensus/High-Resolution Imaging: For the training data set (and implicitly for evaluation, though not explicitly stated for the test set), the ground truth for brain structure segmentation was stated to be derived from "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." This implies that experts (e.g., neurologists, radiologists) likely delineated these structures on these high-resolution images to create the ground truth.
- For electrode orientation, it is stated that the "software was characterized by two probabilities: the probability of a trusted detection being accurate (within ± 30° of the ground truth) and the probability of an untrusted detection being accurate." This suggests a human-defined ground truth for lead orientation against which the algorithm's output was compared.
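The "within ±30° of the ground truth" comparison above requires a wrap-around-aware angular difference, since orientations of 355° and 5° are only 10° apart. A minimal sketch with hypothetical angle values (not data from the document):

```python
def angular_error_deg(detected: float, truth: float) -> float:
    """Smallest absolute difference between two angles in degrees (wrap-around aware)."""
    diff = (detected - truth) % 360.0
    return min(diff, 360.0 - diff)

def fraction_within_tolerance(pairs, tol_deg=30.0):
    """Fraction of (detected, truth) pairs whose angular error is within +/- tol_deg."""
    hits = sum(1 for d, t in pairs if angular_error_deg(d, t) <= tol_deg)
    return hits / len(pairs)

# Hypothetical detections: (detected orientation, ground-truth orientation) in degrees
pairs = [(10, 20), (355, 5), (90, 130), (200, 215)]
print(fraction_within_tolerance(pairs))  # 3 of 4 within +/-30 degrees -> 0.75
```

Under this kind of tally, the reported 91% figure would mean that 91% of trusted detections had an angular error of 30° or less against the ground-truth orientation.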
8. The Sample Size for the Training Set:
- The document does not explicitly state the sample size for the training set. It only mentions that "This method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models."
9. How the Ground Truth for the Training Set Was Established:
- The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images". This implies that these high-resolution images served as the reference standard, likely with manual or semi-manual expert annotations of the brain structures (STN, GPe/GPi) to create the ground truth labels for training the deep learning neural network models.