Search Results
Found 4 results
510(k) Data Aggregation
(56 days)
SIS System is intended for use in the viewing and presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN), globus pallidus externa and interna (GPe and GPi, respectively) and the ventral intermediate nucleus (Vim) in neurological procedures.
The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement.
Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
The SIS System is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia and thalamus areas of the brain, specifically the subthalamic nucleus (STN), globus pallidus externa and interna (GPe/GPi), and ventral intermediate nucleus (Vim). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. The model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which consists of clinical (1.5T and 3T) MR images. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. The SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
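The submission does not disclose the network architecture, file formats, or programming interfaces, so the workflow described above can only be illustrated generically. Below is a minimal sketch, assuming a hypothetical pre-trained 3D segmentation network, an illustrative NIfTI file name, and an assumed label map; none of these names or values come from the cleared device.

```python
# Minimal, hypothetical sketch of the described workflow: load a clinical MR
# volume, run a pre-trained 3D segmentation network, and extract one binary
# mask per structure. TinySegNet is a random-weight placeholder standing in
# for the vendor's pre-trained model; the file name and label values are
# illustrative assumptions only.
import nibabel as nib
import numpy as np
import torch
import torch.nn as nn

LABELS = {"STN": 1, "GPe": 2, "GPi": 3, "Vim": 4}  # assumed label map

class TinySegNet(nn.Module):
    """Placeholder 3D CNN; not the cleared algorithm."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes + 1, kernel_size=1),  # +1 for background
        )
    def forward(self, x):
        return self.net(x)

def segment(mr_path: str, model: nn.Module) -> dict:
    vol = nib.load(mr_path).get_fdata().astype(np.float32)
    vol = (vol - vol.mean()) / (vol.std() + 1e-6)        # simple intensity normalization
    x = torch.from_numpy(vol)[None, None]                 # shape (1, 1, D, H, W)
    model.eval()
    with torch.no_grad():
        label_map = model(x).argmax(dim=1)[0].numpy()      # per-voxel class index
    return {name: (label_map == lbl) for name, lbl in LABELS.items()}

masks = segment("clinical_t1.nii.gz", TinySegNet(n_classes=len(LABELS)))  # illustrative path
```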
The proposed device is a modification to the SIS System version 6.0.0 that was cleared under K230977. The primary change is the addition of new brain structures to the prediction output: the Vim is now supported, based on pre-trained deep learning neural network models.
The SIS System, incorporating new brain structures for visualization, underwent performance testing to demonstrate substantial equivalence to its predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Center of Mass Distance (Vim) | 90% of distances $\leq$ 2.00mm | 90% of distances $\leq$ 1.83mm |
Mean Surface Distance (Vim) | 90% of distances $\leq$ 2.00mm | 90% of distances $\leq$ 0.86mm |
Dice Coefficient (Vim) | Mean Dice coefficient $\geq$ 0.6 | Mean Dice coefficient = 0.7 |
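The three reported metrics are standard measures of segmentation accuracy. The document does not describe the sponsor's implementation, so the sketch below is only a generic illustration of how these metrics are typically computed from a predicted mask and a ground-truth mask on the same voxel grid; `spacing` (the voxel size in millimetres) is an assumed input.

```python
# Generic illustrations of the three reported metrics, computed from binary
# masks on a common voxel grid. `spacing` is the voxel size in millimetres.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())

def center_of_mass_distance_mm(pred, truth, spacing):
    c_pred = np.asarray(ndimage.center_of_mass(pred)) * spacing
    c_truth = np.asarray(ndimage.center_of_mass(truth)) * spacing
    return float(np.linalg.norm(c_pred - c_truth))

def mean_surface_distance_mm(pred, truth, spacing):
    pred, truth = pred.astype(bool), truth.astype(bool)
    surf_pred = pred & ~ndimage.binary_erosion(pred)       # boundary voxels of prediction
    surf_truth = truth & ~ndimage.binary_erosion(truth)    # boundary voxels of ground truth
    # distance (in mm) from every voxel to the nearest boundary voxel of the other mask
    dt_to_truth = ndimage.distance_transform_edt(~surf_truth, sampling=spacing)
    dt_to_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    return float((dt_to_truth[surf_pred].mean() + dt_to_pred[surf_truth].mean()) / 2.0)
```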
2. Sample Size and Data Provenance for Test Set
The document states that a "series of images from clinical subjects were collected" for the visualization accuracy testing. The specific sample size (number of subjects/images) for this test set is not explicitly provided in the given text.
The data provenance for the test set is clinical, described as "images from clinical subjects." The country of origin is not specified, and the document does not state whether collection was retrospective or prospective. The validation set was, however, independent of development: "None of the images from this pivotal validation set were part of the company's database for algorithm development and none were used to optimize or design the SIS's software. This pivotal validation data set was separate from the data set that was used for development. The software development was frozen and labeled before testing on this validation set."
3. Number of Experts and Qualifications for Ground Truth - Test Set
The ground truth for the test set was established by manual segmentation of the Vim. The document does not specify the number of experts who performed these manual segmentations. It also does not specify the qualifications of these experts.
4. Adjudication Method for Test Set
The document does not specify any adjudication method used for establishing the ground truth on the test set. It mentions "manually segmented (as ground truth)" but does not describe any process involving multiple readers or consensus.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study is described in the provided text. The study focused on the standalone performance of the device's visualization accuracy.
6. Standalone Performance Study
Yes, a standalone performance study was done. The visualization accuracy testing for the Vim was conducted to compare the SIS System's output directly against expert-derived ground truth without human-in-the-loop assistance. The reported performance metrics (Center of Mass Distance, Mean Surface Distance, Dice Coefficient) are all measures of the algorithm's standalone accuracy.
7. Type of Ground Truth Used
The ground truth used for the validation of the Vim visualization was manual segmentation by experts (as noted in point 3, the number and qualifications of the annotators are not specified). Specifically, the Vim was "manually segmented" on High Field (7T) MRI and DiMANI images.
8. Sample Size for Training Set
The sample size for the training set is not explicitly provided in the text. The document mentions that the "model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which is comprised of clinical (1.5T and 3T) MR images." However, the number of images or subjects in this training dataset is not given.
9. How Ground Truth for Training Set Was Established
The ground truth for the training set was established using ultra-high resolution 7T Magnetic Resonance (MR) images. The document states that these 7T images were used "to determine ground truth for the training dataset." This implies that experts likely performed precise annotations or segmentations on these high-resolution images to create the reference for training the deep learning neural network models.
(27 days)
SIS System is intended for use in the viewing and presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other processing, visualization and localization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively) in neurological procedures. The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement.
The SIS System is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. As discussed in more detail below, the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
The proposed device is a modification to the SIS System version 5.6.0 that was cleared under K223032. The primary changes are the addition of two compatible leads, minor modification to image registration algorithm, and a feature to allow users to view post-operative 3D model in a different coordinate system.
The provided text is a 510(k) summary for the SIS System, which is a modification of a previously cleared device. The summary states that "the software verification and validation testing was conducted to validate that the software functions as specified and performs similarly to the predicate device using the same acceptance criteria and the same test designs as used for the previously cleared predicate device." However, the document does not provide the specific acceptance criteria, the detailed results of the performance testing against these criteria, or the methodology of the studies (e.g., sample size, data provenance, ground truth establishment, expert qualifications, etc.) for either the original predicate device or the modified device.
Therefore, I cannot provide a table of acceptance criteria, reported device performance, or details about the studies (sample sizes, ground truth establishment, expert qualifications, MRMC studies, etc.) as the information is not present in the provided document. The document only confirms that such testing was performed and that the results demonstrated substantial equivalence to the predicate.
(53 days)
SIS System is intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing, visualization and localization. The device can be used in conjunction with other clinical methods as an aid in visualization and location of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively) in neurological procedures. The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement. Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
The SIS System version 5.6.0 is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.

The SIS System version 5.6.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. As discussed in more detail below, the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.

The proposed device is a modification to the SIS System version 5.1.0 that was cleared under K210071. The primary change is an update to the indications for use statement to clarify that deep brain stimulation (DBS) lead placement is a type of procedure that may be assisted by the information generated by the SIS System. The technological characteristics of the proposed device are fundamentally the same with minor updates to the backend of the software. The core algorithm that processes patient images has not changed since the prior clearance.
The provided text is a 510(k) summary for the SIS System (version 5.6.0). It primarily focuses on demonstrating substantial equivalence to a predicate device (SIS System version 5.1.0) rather than providing a detailed study report with specific acceptance criteria and performance data in the format requested. While it mentions performance data, it doesn't provide the detailed metrics or the specific study setup to prove the device meets acceptance criteria.
However, based on the available information, I can infer and summarize some aspects and state what information is not present to answer your questions fully.
Key Information from the Document:
- Device: SIS System (version 5.6.0)
- Intended Use: Viewing, presentation, and documentation of medical imaging; image processing, fusion, and intraoperative functional planning; aid in visualization and location of STN, GPe, and GPi in neurological procedures; indicated for surgical procedures where anatomical structure locations are identified (including Deep Brain Stimulation Lead Placement).
- Technological Characteristics: Software-only device based on machine learning and image processing. Enhances standard clinical images for visualization of basal ganglia structures (STN, GPe/GPi). Uses pre-trained deep learning neural network models based on ultra-high resolution 7T MR images to determine ground truth for training. Applies these models to patient's clinical MR images to predict shape and position of brain structures. Can locate and identify implanted leads in post-operative CT images.
- Changes from Predicate (v5.1.0): Primary change is an update to the indications for use statement to clarify DBS lead placement. Core algorithm unchanged. Minor backend updates.
- Performance Data Mentioned: "software verification testing was repeated to validate that the software functions as specified and performs similarly to the predicate device using the same test methods and acceptance criteria for the previously cleared predicate device. Visualization accuracy testing was repeated to validate the STN and GPi/GPe structures. In addition, the company repeated the MRI to CT registration to ensure that 3D transformation remains accurate. The company also repeated the testing for image processing of CT images to validate the lead segmentation, as well as testing for electrode orientation to validate the lead detection functionality."
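The quoted summary does not say how registration accuracy was measured. One generic way to check a 3D rigid MR-to-CT transform is target registration error (TRE) over corresponding landmark points; the sketch below illustrates that idea with assumed landmarks and an assumed 4x4 transform, and is not the sponsor's test method.

```python
# Generic target-registration-error (TRE) check for a 4x4 rigid transform that
# maps MR physical coordinates (mm) into CT physical coordinates (mm).
# The landmark points and transform below are illustrative placeholders.
import numpy as np

def target_registration_error(transform_4x4, mr_points_mm, ct_points_mm):
    """Per-landmark Euclidean error (mm) after applying the MR->CT transform."""
    mr_h = np.hstack([mr_points_mm, np.ones((len(mr_points_mm), 1))])  # homogeneous coords
    mapped = (transform_4x4 @ mr_h.T).T[:, :3]
    return np.linalg.norm(mapped - ct_points_mm, axis=1)

# Illustrative use with made-up landmarks and an identity transform:
mr_pts = np.array([[10.0, -5.0, 30.0], [12.5, 0.0, 28.0]])
ct_pts = mr_pts.copy()
errors = target_registration_error(np.eye(4), mr_pts, ct_pts)
print(errors.max())  # worst-case landmark error in mm
```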
Addressing Your Specific Questions based on the Provided Text:
1. A table of acceptance criteria and the reported device performance
Based on the provided text, a detailed table with specific acceptance criteria and reported numerical performance metrics is not available. The document generally states that the device "performs similarly to the predicate device" and "performs as intended and is as safe and effective." It does not quantify the "visualization accuracy" or present the results of the "MRI to CT registration" or "lead segmentation/detection validation" in a tabulated format with acceptance thresholds.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set or the provenance of the data (e.g., retrospective/prospective, country of origin). It only refers to a "test set" without explicit details.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide this information for the test set. It mentions that ultra-high resolution 7T (7 Tesla) Magnetic Resonance images were used to "determine ground truth for the training data set," but this detail applies specifically to the training data, not the test set, and even there no experts are identified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any data on how human readers improve with AI assistance. The study described focuses on technical performance of the device itself and its similarity to the predicate, not human-in-the-loop performance.
6. If a standalone study (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the information implies that a standalone performance evaluation was done. The "software verification testing" and "visualization accuracy testing," alongside validation of "MRI to CT registration" and "lead segmentation," are inherent evaluations of the algorithm's performance without human interaction during the measurement of these specific metrics.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document states: "the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models." This suggests that high-resolution imaging was considered the ground truth for anatomical structure definition. It doesn't explicitly state whether expert consensus or pathology was additionally involved in establishing this ground truth from the 7T images for either training or testing.
8. The sample size for the training set
The document does not specify the sample size for the training set. It only mentions that the deep learning models were trained using 7T MR images for ground truth.
9. How the ground truth for the training set was established
The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." The document implies that these images themselves, due to their high resolution, served as the basis for defining the ground truth for the specific brain structures (STN, GPe, GPi) used to train the deep learning models. It doesn't explicitly detail a human outlining or consensus process on these 7T images, although such a process is commonly implicit when using anatomical imaging as ground truth for segmentation tasks.
(79 days)
SIS System is an application intended for use in the viewing and presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively).
The SIS System version 5.1.0, a software only device based on machine learning and image processing, is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System version 5.1.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pretrained deep learning neural network models. This method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. The SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
The proposed device is a modification to the SIS Software version 3.6.0 that was cleared under K192304. The changes made to the SIS System include (1) an updated algorithm that is based on deep learning Convolutional Neural Network models that were architected and optimized for brain image segmentation; (2) the addition of new targets for visualization, specifically the globus pallidus externa and interna (GPe/GPi); and (3) the addition of a functionality to determine the orientation of a directional lead, following its segmentation from the post-operative CT image.
The provided text describes the acceptance criteria and the study conducted for the SIS System (Version 5.1.0), a software-only device designed to enhance the visualization of specific brain structures (subthalamic nucleus - STN, and globus pallidus externa and interna - GPe/GPi) using deep learning and image processing.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states that "visualization accuracy testing was conducted for the STN and GPi/GPe structures using the same test methods and acceptance criteria for the previously cleared predicate device." However, the specific numerical acceptance criteria for visualization accuracy (e.g., minimum Dice similarity coefficient, maximum distance errors) are not explicitly provided in the text. The only specific performance metric reported is related to electrode orientation detection.
Table: Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
STN Visualization Accuracy | (Same as predicate, but specific numerical criteria not provided) | "performed similarly to the predicate device" (no specific numbers given) |
GPi/GPe Visualization Accuracy | (Same as predicate, but specific numerical criteria not provided) | "performed similarly to the predicate device" (no specific numbers given) |
MRI to CT Registration Accuracy | (Requirement to remain accurate) | "ensure that 3D transformation remains accurate" (no specific numbers) |
CT Image Processing (Lead Segmentation) | (Validation of lead segmentation) | "validate the lead segmentation" (no specific numbers) |
Electrode Orientation Detection Accuracy (Trusted Detections) | >90% accurate within ± 30° | 91% of cases correct within ± 30° |
Electrode Orientation Detection Accuracy (Untrusted Detections) | (Not explicitly stated or reported, but mentioned as characterized) | (Not explicitly reported) |
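The table only records that lead segmentation in post-operative CT was validated; the method itself is not described. As a heavily hedged, generic illustration: metallic DBS leads are strongly radio-opaque, so one common approach is high-attenuation thresholding followed by connected-component analysis. The threshold, minimum component size, and function below are assumptions for illustration, not the cleared algorithm.

```python
# Generic, hypothetical illustration of lead localization in a post-operative CT:
# threshold very bright (metallic) voxels, then keep sufficiently large connected
# components as candidate lead regions. All numeric values are assumptions.
import numpy as np
from scipy import ndimage

def candidate_lead_components(ct_hu, hu_threshold=2500.0, min_voxels=50):
    """Return a list of boolean masks, one per bright connected component."""
    bright = ct_hu > hu_threshold
    labeled, n_components = ndimage.label(bright)
    masks = []
    for lbl in range(1, n_components + 1):
        mask = labeled == lbl
        if mask.sum() >= min_voxels:      # discard small bright artifacts
            masks.append(mask)
    return masks
```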
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size:
- For electrode orientation detection: 43 CT image series that contained 55 leads.
- For visualization accuracy, MRI to CT registration, and lead segmentation: The text does not explicitly state the sample size for these tests. It only mentions "repeated to validate that the modified software functions as specified."
- Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- The document does not explicitly state the number of experts used to establish ground truth for the test set.
- It mentions that the method "incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set," but it does not directly link this to human experts for the test set.
4. Adjudication Method for the Test Set:
- The document does not mention a specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
- No, an MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not mentioned in the provided text. The study focuses on evaluating the device's performance in segmentation and lead detection, not its impact on human reader performance.
6. If a Standalone Study (i.e., algorithm only without human-in-the-loop performance) was Done:
- Yes, the performance data described, particularly for visualization accuracy and electrode orientation detection, appears to be that of the standalone algorithm (without human-in-the-loop performance measurement). The assessments of "accuracy within ± 30°" and "performed similarly to the predicate device" refer to the output of the software itself.
7. The Type of Ground Truth Used:
- Expert Consensus/High-Resolution Imaging: For the training data set (and implicitly for evaluation, though not explicitly stated for the test set), the ground truth for brain structure segmentation was stated to be derived from "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." This implies that experts (e.g., neurologists, radiologists) likely delineated these structures on these high-resolution images to create the ground truth.
- For electrode orientation, it is stated that the "software was characterized by two probabilities: the probability of a trusted detection being accurate (within ± 30° of the ground truth) and the probability of an untrusted detection being accurate." This suggests a human-defined ground truth for lead orientation against which the algorithm's output was compared.
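The ±30° criterion lends itself to a simple angular-error check. The sketch below is a generic illustration of comparing detected versus ground-truth lead rotation angles; the angle convention and 360° wrapping are assumptions, since the document does not describe how the comparison was computed.

```python
# Generic illustration of the +/-30 degree orientation criterion: wrap the
# difference between detected and ground-truth rotation angles about the lead
# axis into [0, 180] degrees and count detections within tolerance.
import numpy as np

def angular_error_deg(detected_deg, truth_deg):
    diff = np.abs(np.asarray(detected_deg, float) - np.asarray(truth_deg, float)) % 360.0
    return np.minimum(diff, 360.0 - diff)

def fraction_within(detected_deg, truth_deg, tol_deg=30.0):
    return float(np.mean(angular_error_deg(detected_deg, truth_deg) <= tol_deg))

# Example: a fraction_within of 0.91 over the trusted detections would satisfy
# a ">90% accurate within +/-30 degrees" criterion, as reported in the table above.
```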
8. The Sample Size for the Training Set:
- The document does not explicitly state the sample size for the training set. It only mentions that "This method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models."
9. How the Ground Truth for the Training Set Was Established:
- The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images". This implies that these high-resolution images served as the reference standard, likely with manual or semi-manual expert annotations of the brain structures (STN, GPe/GPi) to create the ground truth labels for training the deep learning neural network models.
Ask a specific question about this device