510(k) Data Aggregation (53 days)
SIS System (Version 5.6.0)
SIS System is intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing, visualization and localization. The device can be used in conjunction with other clinical methods as an aid in visualization and location of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively) in neurological procedures. The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement. Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
The SIS System version 5.6.0 is a software-only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS System supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.

The SIS System version 5.6.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. As discussed in more detail below, the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set used to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of that patient's specific brain structures of interest. Where leads are implanted, the SIS System is further able to locate and identify them in post-operative CT images and place them in relation to the brain structures of interest from the preoperative processing.

The proposed device is a modification of the SIS System version 5.1.0 that was cleared under K210071. The primary change is an update to the indications for use statement to clarify that deep brain stimulation (DBS) lead placement is a type of procedure that may be assisted by the information generated by the SIS System. The technological characteristics of the proposed device are fundamentally the same, with minor updates to the backend of the software. The core algorithm that processes patient images has not changed since the prior clearance.
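To make the described pipeline concrete, the following is a minimal, hypothetical sketch of how a pre-trained 3D segmentation network might be applied to a clinical MR image to produce a label map of the STN, GPe, and GPi. The file names, model format, preprocessing, and label conventions are assumptions for illustration; they are not taken from the 510(k) summary and do not represent the SIS System's actual implementation.

```python
# Hypothetical sketch only: the model file, label conventions, and preprocessing
# below are placeholders, not the SIS System's actual pipeline.
import nibabel as nib   # common library for reading/writing NIfTI MR images
import numpy as np
import torch


def segment_basal_ganglia(t1_path: str, model_path: str, out_path: str) -> None:
    """Apply a (hypothetical) pre-trained 3D segmentation network to a clinical T1 MRI."""
    # Load the patient's clinical MR image and normalize intensities.
    img = nib.load(t1_path)
    volume = img.get_fdata().astype(np.float32)
    volume = (volume - volume.mean()) / (volume.std() + 1e-6)

    # Load a pre-trained network exported as TorchScript (placeholder path).
    model = torch.jit.load(model_path)
    model.eval()

    with torch.no_grad():
        # Most 3D CNNs expect a (batch, channel, depth, height, width) tensor.
        x = torch.from_numpy(volume)[None, None]
        logits = model(x)                 # per-voxel class scores
        labels = logits.argmax(dim=1)[0]  # e.g., 0=background, 1=STN, 2=GPe, 3=GPi

    # Save the label map in the same spatial frame (affine) as the input image.
    out = nib.Nifti1Image(labels.cpu().numpy().astype(np.uint8), img.affine)
    nib.save(out, out_path)


# Usage (hypothetical file names):
# segment_basal_ganglia("patient_t1.nii.gz", "pretrained_model.pt", "structures.nii.gz")
```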
The provided text is a 510(k) summary for the SIS System (version 5.6.0). It primarily focuses on demonstrating substantial equivalence to a predicate device (SIS System version 5.1.0) rather than providing a detailed study report with specific acceptance criteria and performance data in the format requested. While it mentions performance data, it doesn't provide the detailed metrics or the specific study setup to prove the device meets acceptance criteria.
However, based on the available information, I can infer and summarize some aspects and note where the information needed to answer your questions fully is not present.
Key Information from the Document:
- Device: SIS System (version 5.6.0)
- Intended Use: Viewing, presentation, and documentation of medical imaging; image processing, fusion, and intraoperative functional planning; aid in visualization and location of STN, GPe, and GPi in neurological procedures; indicated for surgical procedures where anatomical structure locations are identified (including Deep Brain Stimulation Lead Placement).
- Technological Characteristics: Software-only device based on machine learning and image processing. Enhances standard clinical images for visualization of basal ganglia structures (STN, GPe/GPi). Uses pre-trained deep learning neural network models based on ultra-high resolution 7T MR images to determine ground truth for training. Applies these models to patient's clinical MR images to predict shape and position of brain structures. Can locate and identify implanted leads in post-operative CT images.
- Changes from Predicate (v5.1.0): Primary change is an update to the indications for use statement to clarify DBS lead placement. Core algorithm unchanged. Minor backend updates.
- Performance Data Mentioned: "software verification testing was repeated to validate that the software functions as specified and performs similarly to the predicate device using the same test methods and acceptance criteria for the previously cleared predicate device. Visualization accuracy testing was repeated for validation of the STN and GPi/GPe structures. In addition, the company repeated the MRI to CT registration to ensure that 3D transformation remains accurate. The company also repeated the testing for image processing of CT images to validate the lead segmentation, as well as testing for electrode orientation to validate the lead detection functionality." An illustrative sketch of how registration and segmentation accuracy metrics of this kind are commonly computed follows this list.
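The summary does not disclose the actual test methods, data, or thresholds, so the sketch below is only a minimal, hypothetical illustration of two metrics commonly used for this kind of verification: target registration error for an MRI-to-CT rigid transform, and Dice overlap for a lead (or structure) segmentation. All coordinates, transforms, and masks are synthetic placeholders.

```python
# Hypothetical verification metrics; the coordinates, transform, and masks are
# synthetic placeholders (the summary does not disclose actual data or thresholds).
import numpy as np


def target_registration_error(T: np.ndarray, mri_pts: np.ndarray, ct_pts: np.ndarray) -> np.ndarray:
    """Distance (mm) between CT landmarks and MRI landmarks mapped through a 4x4 rigid transform T."""
    homogeneous = np.c_[mri_pts, np.ones(len(mri_pts))]   # (N, 4) homogeneous coordinates
    mapped = (homogeneous @ T.T)[:, :3]                   # MRI landmarks expressed in CT space
    return np.linalg.norm(mapped - ct_pts, axis=1)


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Overlap between a predicted segmentation mask (e.g., a DBS lead) and a reference mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum())


# Synthetic example: an identity transform and identical masks give perfect scores.
T = np.eye(4)
points = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
print(target_registration_error(T, points, points))   # -> [0. 0.]
lead_mask = np.zeros((8, 8, 8), dtype=np.uint8)
lead_mask[2:5, 4, 4] = 1
print(dice(lead_mask, lead_mask))                      # -> 1.0
```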
Addressing Your Specific Questions based on the Provided Text:
1. A table of acceptance criteria and the reported device performance
Based on the provided text, a detailed table with specific acceptance criteria and reported numerical performance metrics is not available. The document generally states that the device "performs similarly to the predicate device" and "performs as intended and is as safe and effective." It does not quantify the "visualization accuracy" or present the results of the "MRI to CT registration" or "lead segmentation/detection validation" in a tabulated format with acceptance thresholds.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set or the provenance of the data (e.g., retrospective/prospective, country of origin). It only refers to a "test set" without explicit details.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide this information for the test set. It mentions that ultra-high resolution 7T (7 Tesla) Magnetic Resonance images were used to "determine ground truth for the training data set," but this detail applies specifically to the training data, not the test set, and even for the training data it does not specify whether experts were involved.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for the test set.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any data on how human readers improve with AI assistance. The study described focuses on technical performance of the device itself and its similarity to the predicate, not human-in-the-loop performance.
If a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop performance) was done
Yes, the information implies that a standalone performance evaluation was done. The "software verification testing" and "visualization accuracy testing," alongside validation of "MRI to CT registration" and "lead segmentation," are inherent evaluations of the algorithm's performance without human interaction during the measurement of these specific metrics.
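As a concrete, entirely hypothetical illustration of what such a standalone evaluation loop can look like, the sketch below aggregates a per-case error metric over a test set and compares it against an assumed acceptance threshold. The metric choice, threshold value, and test cases are assumptions for illustration, not values reported in the submission.

```python
# Hypothetical standalone (algorithm-only) evaluation loop; the metric, threshold,
# and data are illustrative placeholders, not taken from the 510(k) submission.
import numpy as np


def centroid_error_mm(pred: np.ndarray, ref: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Distance (mm) between the centers of mass of a predicted and a reference mask."""
    p = np.argwhere(pred > 0).mean(axis=0) * np.asarray(spacing)
    r = np.argwhere(ref > 0).mean(axis=0) * np.asarray(spacing)
    return float(np.linalg.norm(p - r))


def evaluate_standalone(cases, acceptance_mm=2.0):
    """Aggregate per-case errors and check them against a (hypothetical) acceptance threshold."""
    errors = [centroid_error_mm(pred, ref) for pred, ref in cases]
    return {
        "mean_error_mm": float(np.mean(errors)),
        "max_error_mm": float(np.max(errors)),
        "all_within_threshold": bool(np.max(errors) <= acceptance_mm),
    }


# Synthetic self-check: identical masks yield zero error and pass any threshold.
mask = np.zeros((16, 16, 16), dtype=np.uint8)
mask[5:9, 5:9, 5:9] = 1
print(evaluate_standalone([(mask, mask)]))
```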
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document states: "the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models." This suggests that high-resolution imaging was considered the ground truth for anatomical structure definition. It doesn't explicitly state whether expert consensus or pathology was additionally involved in establishing this ground truth from the 7T images for either training or testing.
8. The sample size for the training set
The document does not specify the sample size for the training set. It only mentions that the deep learning models were trained using 7T MR images for ground truth.
9. How the ground truth for the training set was established
The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." The document implies that these images themselves, due to their high resolution, served as the basis for defining the ground truth for the specific brain structures (STN, GPe, GPi) used to train the deep learning models. It doesn't explicitly detail a human outlining or consensus process on these 7T images, although such a process is commonly implicit when using anatomical imaging as ground truth for segmentation tasks.