Search Results
Found 7 results
510(k) Data Aggregation
(56 days)
SIS System is intended for use in the viewing, presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing, visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN), globus pallidus externa and interna (GPe and GPi, respectively) and the ventral intermediate nucleus (Vim) in neurological procedures.
The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement.
Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
The SIS System is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia and thalamus areas of the brain, specifically the subthalamic nucleus (STN), globus pallidus externa and interna (GPe/GPi), and ventral intermediate nucleus (Vim). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. The model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which is comprised of clinical (1.5T and 3T) MR images. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
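The summary does not disclose the network architecture or inference pipeline, so the following is only a minimal illustrative sketch of the general pattern described: a pre-trained 3D segmentation network applied to a clinical MR volume to produce per-voxel structure labels. The tiny PyTorch model, volume size, and structure count are placeholders, not the SIS System's actual design.

```python
# Illustrative only: a stand-in for applying a pre-trained 3D segmentation
# model to a clinical MR volume. Not the SIS System's actual architecture.
import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    """A deliberately small placeholder for a pre-trained 3D segmentation model."""
    def __init__(self, n_structures: int = 4):  # e.g. STN, GPe, GPi, Vim (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_structures, kernel_size=1),  # per-voxel class logits
        )

    def forward(self, x):
        return self.net(x)

model = TinySegNet3D()
model.eval()                                  # weights would normally be loaded from disk
mr_volume = torch.randn(1, 1, 64, 64, 64)     # placeholder clinical MR volume
with torch.no_grad():
    logits = model(mr_volume)
    labels = logits.argmax(dim=1)             # per-voxel predicted structure
print(labels.shape)                           # torch.Size([1, 64, 64, 64])
```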
The proposed device is a modification to the SIS System version 6.0.0 that was cleared under K230977. The primary change is the addition of a new brain structure to the prediction output: the Vim is now supported, based on pre-trained deep learning neural network models.
The SIS System, incorporating new brain structures for visualization, underwent performance testing to demonstrate substantial equivalence to its predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Center of Mass Distance (Vim) | 90% of distances $\leq$ 2.00mm | 90% of distances $\leq$ 1.83mm |
Mean Surface Distance (Vim) | 90% of distances $\leq$ 2.00mm | 90% of distances $\leq$ 0.86mm |
Dice Coefficient (Vim) | Mean Dice coefficient $\geq$ 0.6 | Mean Dice coefficient = 0.7 |
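The submission does not spell out how the three metrics in the table above were computed. As a hedged sketch, assuming the standard definitions of these metrics for two binary 3D masks (predicted Vim vs. ground-truth Vim) on a known voxel spacing, they could be computed as follows; the helper names are illustrative, not the company's code.

```python
# Sketch of the three reported metrics for two binary 3D masks (pred vs. ground truth).
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2*|A∩B| / (|A|+|B|) for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def center_of_mass_distance(pred, gt, spacing=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance between the two centers of mass, in mm."""
    c_pred = np.array(ndimage.center_of_mass(pred)) * spacing
    c_gt = np.array(ndimage.center_of_mass(gt)) * spacing
    return float(np.linalg.norm(c_pred - c_gt))

def mean_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean distance between the two mask surfaces, in mm."""
    def surface(mask):
        return np.logical_and(mask, ~ndimage.binary_erosion(mask))
    dt_gt = ndimage.distance_transform_edt(~surface(gt), sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~surface(pred), sampling=spacing)
    d_pred_to_gt = dt_gt[surface(pred)]
    d_gt_to_pred = dt_pred[surface(gt)]
    return float(np.concatenate([d_pred_to_gt, d_gt_to_pred]).mean())

# Example with two hypothetical overlapping spheres (one shifted by one voxel)
z, y, x = np.ogrid[:48, :48, :48]
gt = (z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 <= 8 ** 2
pred = (z - 25) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 <= 8 ** 2
print(dice(pred, gt), center_of_mass_distance(pred, gt), mean_surface_distance(pred, gt))
```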
2. Sample Size and Data Provenance for Test Set
The document states that a "series of images from clinical subjects were collected" for the visualization accuracy testing. The specific sample size (number of subjects/images) for this test set is not explicitly provided in the given text.
The data provenance for the test set is clinical, described as "images from clinical subjects." The country of origin and whether collection was retrospective or prospective are not specified. The validation data were, however, kept separate from the development data, as the document states: "None of the images from this pivotal validation set were part of the company's database for algorithm development and none were used to optimize or design the SIS's software. This pivotal validation data set was separate from the data set that was used for development. The software development was frozen and labeled before testing on this validation set."
3. Number of Experts and Qualifications for Ground Truth - Test Set
The ground truth for the test set was established by manual segmentation of the Vim. The document does not specify the number of experts who performed these manual segmentations. It also does not specify the qualifications of these experts.
4. Adjudication Method for Test Set
The document does not specify any adjudication method used for establishing the ground truth on the test set. It mentions "manually segmented (as ground truth)" but does not describe any process involving multiple readers or consensus.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not performed according to the provided text. The study focused on the standalone performance of the device's visualization accuracy.
6. Standalone Performance Study
Yes, a standalone performance study was done. The visualization accuracy testing for the Vim was conducted to compare the SIS System's output directly against expert-derived ground truth without human-in-the-loop assistance. The reported performance metrics (Center of Mass Distance, Mean Surface Distance, Dice Coefficient) are all measures of the algorithm's standalone accuracy.
7. Type of Ground Truth Used
The type of ground truth used for the validation of the Vim visualization was expert consensus / manual segmentation (as described in point 3). Specifically, the Vim was "manually segmented" on High Field (7T) MRI and DiMANI images.
8. Sample Size for Training Set
The sample size for the training set is not explicitly provided in the text. The document mentions that the "model training method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance (MR) images to determine ground truth for the training dataset, which is comprised of clinical (1.5T and 3T) MR images." However, the number of images or subjects in this training dataset is not given.
9. How Ground Truth for Training Set Was Established
The ground truth for the training set was established using ultra-high resolution 7T Magnetic Resonance (MR) images. The document states that these 7T images were used "to determine ground truth for the training dataset." This implies that experts likely performed precise annotations or segmentations on these high-resolution images to create the reference for training the deep learning neural network models.
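The document implies expert annotation on the 7T images but does not describe how (or whether) multiple annotations were reconciled. Purely as an illustration of one common consensus approach, not the company's documented method, a per-voxel majority vote over several expert masks looks like the sketch below; STAPLE-style probabilistic fusion is another frequently used alternative.

```python
# Hypothetical consensus step: per-voxel majority vote over several expert masks.
import numpy as np

def majority_vote(expert_masks: list) -> np.ndarray:
    """Return a consensus mask: voxels labeled foreground by more than half the raters."""
    stacked = np.stack([m.astype(np.uint8) for m in expert_masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes > (len(expert_masks) / 2.0)

# Example with three hypothetical raters on a small random volume
rng = np.random.default_rng(0)
raters = [rng.random((32, 32, 32)) > 0.5 for _ in range(3)]
consensus = majority_vote(raters)
print(consensus.shape, consensus.dtype)
```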
(27 days)
SIS System is intended for use in the viewing, presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other processing, visualization and localization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively) in neurological procedures. The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement.
The SIS System is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. As discussed in more detail below, the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
The proposed device is a modification to the SIS System version 5.6.0 that was cleared under K223032. The primary changes are the addition of two compatible leads, minor modification to image registration algorithm, and a feature to allow users to view post-operative 3D model in a different coordinate system.
The provided text is a 510(k) summary for the SIS System, which is a modification of a previously cleared device. The summary states that "the software verification and validation testing was conducted to validate that the software functions as specified and performs similarly to the predicate device using the same acceptance criteria and the same test designs as used for the previously cleared predicate device." However, the document does not provide the specific acceptance criteria, the detailed results of the performance testing against these criteria, or the methodology of the studies (e.g., sample size, data provenance, ground truth establishment, expert qualifications, etc.) for either the original predicate device or the modified device.
Therefore, I cannot provide a table of acceptance criteria, reported device performance, or details about the studies (sample sizes, ground truth establishment, expert qualifications, MRMC studies, etc.) as the information is not present in the provided document. The document only confirms that such testing was performed and that the results demonstrated substantial equivalence to the predicate.
(53 days)
SIS System is intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing, visualization and localization. The device can be used in conjunction with other clinical methods as an aid in visualization and location of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively) in neurological procedures. The system is indicated for surgical procedures in which anatomical structure locations are identified in images, including Deep Brain Stimulation Lead Placement. Typical users of the SIS Software are medical professionals, including but not limited to surgeons, neurologists and radiologists.
The SIS System version 5.6.0 is a software only device based on machine learning and image processing. The device is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output. The SIS System version 5.6.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pre-trained deep learning neural network models. As discussed in more detail below, the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing. The proposed device is a modification to the SIS System version 5.1.0 that was cleared under K210071. The primary change is an update to the indications for use statement to clarify that deep brain stimulation (DBS) lead placement is a type of procedure that may be assisted by the information generated by the SIS System. The technological characteristics of the proposed device are fundamentally the same with minor updates to the backend of the software. The core algorithm that processes patient images has not changed since the prior clearance.
The provided text is a 510(k) summary for the SIS System (version 5.6.0). It primarily focuses on demonstrating substantial equivalence to a predicate device (SIS System version 5.1.0) rather than providing a detailed study report with specific acceptance criteria and performance data in the format requested. While it mentions performance data, it doesn't provide the detailed metrics or the specific study setup to prove the device meets acceptance criteria.
However, based on the available information, I can infer and summarize some aspects and state what information is not present to answer your questions fully.
Key Information from the Document:
- Device: SIS System (version 5.6.0)
- Intended Use: Viewing, presentation, and documentation of medical imaging; image processing, fusion, and intraoperative functional planning; aid in visualization and location of STN, GPe, and GPi in neurological procedures; indicated for surgical procedures where anatomical structure locations are identified (including Deep Brain Stimulation Lead Placement).
- Technological Characteristics: Software-only device based on machine learning and image processing. Enhances standard clinical images for visualization of basal ganglia structures (STN, GPe/GPi). Uses pre-trained deep learning neural network models based on ultra-high resolution 7T MR images to determine ground truth for training. Applies these models to patient's clinical MR images to predict shape and position of brain structures. Can locate and identify implanted leads in post-operative CT images.
- Changes from Predicate (v5.1.0): Primary change is an update to the indications for use statement to clarify DBS lead placement. Core algorithm unchanged. Minor backend updates.
- Performance Data Mentioned: "software verification testing was repeated to validate that the software functions as specified and performs similarly to the predicate device using the same test methods and acceptance criteria for the previously cleared predicate device. Visualization accuracy testing was repeated to validation of the STN and GPi/GPe structures. In addition, the company repeated the MRI to CT registration to ensure that 3D transformation remains accurate. The company also repeated the testing for image processing of CT images to validate the lead segmentation, as well as testing for electrode orientation to validate the lead detection functionality."
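The summary states only that MRI-to-CT registration testing was repeated "to ensure that 3D transformation remains accurate," without describing the protocol. A common way to quantify rigid-registration accuracy, shown here as a hypothetical sketch rather than the company's method, is target registration error (TRE) over paired anatomical landmarks.

```python
# Hypothetical TRE check: apply an estimated 4x4 rigid transform to MR landmarks
# and measure residual distance to the corresponding CT landmarks (in mm).
import numpy as np

def target_registration_error(T_mr_to_ct: np.ndarray,
                              mr_points_mm: np.ndarray,
                              ct_points_mm: np.ndarray) -> np.ndarray:
    """Per-landmark residual (mm) after applying a 4x4 homogeneous transform."""
    homog = np.hstack([mr_points_mm, np.ones((len(mr_points_mm), 1))])
    mapped = (T_mr_to_ct @ homog.T).T[:, :3]
    return np.linalg.norm(mapped - ct_points_mm, axis=1)

# Hypothetical example: identity rotation with a small residual translation
T = np.eye(4)
T[:3, 3] = [0.3, -0.2, 0.1]
mr_pts = np.array([[10.0, 20.0, 30.0], [-5.0, 12.0, 40.0]])
ct_pts = mr_pts.copy()                                   # assumed "true" correspondences
print(target_registration_error(T, mr_pts, ct_pts))      # ~0.37 mm per landmark
```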
Addressing Your Specific Questions based on the Provided Text:
1. A table of acceptance criteria and the reported device performance
Based on the provided text, a detailed table with specific acceptance criteria and reported numerical performance metrics is not available. The document generally states that the device "performs similarly to the predicate device" and "performs as intended and is as safe and effective." It does not quantify the "visualization accuracy" or present the results of the "MRI to CT registration" or "lead segmentation/detection validation" in a tabulated format with acceptance thresholds.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify the sample size for the test set or the provenance of the data (e.g., retrospective/prospective, country of origin). It only refers to a "test set" without explicit details.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide this information for the test set. It mentions that ultra-high resolution 7T (7 Tesla) Magnetic Resonance images were used to "determine ground truth for the training data set," but that detail applies specifically to the training data, not the test set, and no experts are specified even for that.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any data on how human readers improve with AI assistance. The study described focuses on technical performance of the device itself and its similarity to the predicate, not human-in-the-loop performance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the information implies that a standalone performance evaluation was done. The "software verification testing" and "visualization accuracy testing," alongside validation of "MRI to CT registration" and "lead segmentation," are inherent evaluations of the algorithm's performance without human interaction during the measurement of these specific metrics.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document states: "the method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models." This suggests that high-resolution imaging was considered the ground truth for anatomical structure definition. It doesn't explicitly state whether expert consensus or pathology was additionally involved in establishing this ground truth from the 7T images for either training or testing.
8. The sample size for the training set
The document does not specify the sample size for the training set. It only mentions that the deep learning models were trained using 7T MR images for ground truth.
9. How the ground truth for the training set was established
The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." The document implies that these images themselves, due to their high resolution, served as the basis for defining the ground truth for the specific brain structures (STN, GPe, GPi) used to train the deep learning models. It doesn't explicitly detail a human outlining or consensus process on these 7T images, although such a process is commonly implicit when using anatomical imaging as ground truth for segmentation tasks.
(79 days)
SIS System is an application intended for use in the viewing, presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN) and globus pallidus externa and interna (GPe and GPi, respectively).
The SIS System version 5.1.0, a software only device based on machine learning and image processing, is designed to enhance standard clinical images for the visualization of structures in the basal ganglia area of the brain, specifically the subthalamic nucleus (STN) and globus pallidus externa and interna (GPe/GPi). The output of the SIS system supplements the information available through standard clinical methods by providing additional, adjunctive information to surgeons, neurologists, and radiologists for use in viewing brain structures for planning stereotactic surgical procedures and planning of lead output.
The SIS System version 5.1.0 provides a patient-specific, 3D anatomical model of specific brain structures based on the patient's own clinical MR image using pretrained deep learning neural network models. This method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models. These pre-trained deep learning neural network models are then applied to a patient's clinical image to predict the shape and position of the patient's specific brain structures of interest. The SIS System is further able to locate and identify implanted leads, where implanted, visible in post-operative CT images and place them in relation to the brain structure of interest from the preoperative processing.
The proposed device is a modification to the SIS Software version 3.6.0 that was cleared under K192304. The changes made to the SIS System include (1) an updated algorithm that is based on deep learning Convolutional Neural Network models that were architected and optimized for brain image segmentation; (2) the addition of new targets for visualization, specifically the globus pallidus externa and interna (GPe/GPi); and (3) the addition of a functionality to determine the orientation of a directional lead, following its segmentation from the post-operative CT image.
The provided text describes the acceptance criteria and the study conducted for the SIS System (Version 5.1.0), a software-only device designed to enhance the visualization of specific brain structures (subthalamic nucleus - STN, and globus pallidus externa and interna - GPe/GPi) using deep learning and image processing.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states that "visualization accuracy testing was conducted for the STN and GPi/GPe structures using the same test methods and acceptance criteria for the previously cleared predicate device." However, the specific numerical acceptance criteria for visualization accuracy (e.g., minimum Dice similarity coefficient, maximum distance errors) are not explicitly provided in the text. The only specific performance metric reported is related to electrode orientation detection.
Table: Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
STN Visualization Accuracy | (Same as predicate, but specific numerical criteria not provided) | "performed similarly to the predicate device" (no specific numbers given) |
GPi/GPe Visualization Accuracy | (Same as predicate, but specific numerical criteria not provided) | "performed similarly to the predicate device" (no specific numbers given) |
MRI to CT Registration Accuracy | (Requirement to remain accurate) | "ensure that 3D transformation remains accurate" (no specific numbers) |
CT Image Processing (Lead Segmentation) | (Validation of lead segmentation) | "validate the lead segmentation" (no specific numbers) |
Electrode Orientation Detection Accuracy (Trusted Detections) | >90% accurate within ± 30° | 91% of cases correct within ± 30° |
Electrode Orientation Detection Accuracy (Untrusted Detections) | (Not explicitly stated or reported, but mentioned as characterized) | (Not explicitly reported) |
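The document reports that 91% of trusted detections were within ± 30° of ground truth but does not define the angular comparison. Below is a hedged sketch assuming the orientation is expressed as a rotation angle of the directional lead's marker about the lead axis: a wrapped angular difference per lead, and the fraction of leads within the 30° tolerance.

```python
# Hypothetical angular-error evaluation for directional-lead orientation detection.
import numpy as np

def wrapped_angle_diff_deg(pred_deg: float, gt_deg: float) -> float:
    """Smallest absolute difference between two angles on a circle, in degrees."""
    d = abs(pred_deg - gt_deg) % 360.0
    return min(d, 360.0 - d)

# Hypothetical per-lead predicted vs. reference marker angles (degrees)
pred = [12.0, 355.0, 90.0, 200.0]
ref = [20.0, 10.0, 150.0, 195.0]
errors = [wrapped_angle_diff_deg(p, r) for p, r in zip(pred, ref)]
fraction_within_30 = np.mean([e <= 30.0 for e in errors])
print(errors, fraction_within_30)   # [8.0, 15.0, 60.0, 5.0] 0.75
```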
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size:
- For electrode orientation detection: 43 CT image series that contained 55 leads.
- For visualization accuracy, MRI to CT registration, and lead segmentation: The text does not explicitly state the sample size for these tests. It only mentions "repeated to validate that the modified software functions as specified."
- Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- The document does not explicitly state the number of experts used to establish ground truth for the test set.
- It mentions that the method incorporates "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set." It does not directly link this to human experts for the test set.
4. Adjudication Method for the Test Set:
- The document does not mention a specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:
- No, an MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not mentioned in the provided text. The study focuses on evaluating the device's performance in segmentation and lead detection, not its impact on human reader performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done:
- Yes, the performance data described, particularly for visualization accuracy and electrode orientation detection, appears to be that of the standalone algorithm (without human-in-the-loop performance measurement). The assessments of "accuracy within ± 30°" and "performed similarly to the predicate device" refer to the output of the software itself.
7. The Type of Ground Truth Used:
- Expert Consensus/High-Resolution Imaging: For the training data set (and implicitly for evaluation, though not explicitly stated for the test set), the ground truth for brain structure segmentation was stated to be derived from "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images." This implies that experts (e.g., neurologists, radiologists) likely delineated these structures on these high-resolution images to create the ground truth.
- For electrode orientation, it is stated that the "software was characterized by two probabilities: the probability of a trusted detection being accurate (within ± 30° of the ground truth) and the probability of an untrusted detection being accurate." This suggests a human-defined ground truth for lead orientation against which the algorithm's output was compared.
8. The Sample Size for the Training Set:
- The document does not explicitly state the sample size for the training set. It only mentions that "This method incorporates ultra-high resolution 7T (7 Tesla) Magnetic Resonance images to determine ground truth for the training data set to train the deep learning models."
9. How the Ground Truth for the Training Set Was Established:
- The ground truth for the training set was established using "ultra-high resolution 7T (7 Tesla) Magnetic Resonance images". This implies that these high-resolution images served as the reference standard, likely with manual or semi-manual expert annotations of the brain structures (STN, GPe/GPi) to create the ground truth labels for training the deep learning neural network models.
(21 days)
SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures.
The version of the software that is the subject of the current submission (Version 3.6.0) is a modification to the predicate SIS Software version 3.3.0 that was cleared under K183019. The subject and predicate devices rely on the same core technological principles. The only minor changes were modifications to enable the use of a more comprehensive MR to post operation CT registration methodology, and image processing techniques for CT images acquired with gantry tilt. The web user interface has also been enhanced to allow additional options for administrators/supervisors, and has added audit logging functions.
The provided text is a 510(k) summary for SIS Software Version 3.6.0. It describes the device, its intended use, and argues for its substantial equivalence to a predicate device (SIS Software Version 3.3.0). However, it does not provide detailed acceptance criteria or a comprehensive study report with the level of detail requested for each point in the prompt.
The document states that "software verification and validation testing has been repeated to validate that the modified software functions as specified and performs similarly to the predicate device." It also mentions "MRI to CT registration testing using the new methodology, which demonstrated that the software continued to register MR images to the CT space. The error was within the acceptance criteria, and was comparable to that for SIS Software version 3.3.0, which used the same protocol."
Based on the provided text, here is an attempt to address your request, highlighting where information is not provided in the source document.
Description of Acceptance Criteria and Proving Device Meets Criteria (Based on Provided Text)
The SIS Software Version 3.6.0 is a modification of a previously cleared device (Version 3.3.0). The study aims to demonstrate that the updated software continues to function as specified and performs similarly to the predicate device, specifically regarding MRI to CT registration and image processing for gantry-tilted CT scans. The primary acceptance criterion broadly seems to be that performance ("error") for the modified functions remains "within the acceptance criteria" and "comparable" to the predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
MRI to CT Registration: Error of registration between MR images and CT space. | "The error was within the acceptance criteria, and was comparable to that for SIS Software version 3.3.0, which used the same protocol." (Specific numerical acceptance criteria and reported error values are not provided). |
CT Image Processing (Gantry Tilt): Does not affect object segmentation performance compared to the predicate device. | "Results demonstrated that the cropping image processing does not affect the performance of the software as compared to its predicate." (Specific metrics for "performance" or "affect" are not provided). |
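The gantry-tilt handling is described only as a "cropping image processing" step, so the following is not the device's method; it is a generic sketch of how a tilted CT acquisition can be made rectilinear by shifting each axial slice in proportion to its position along the scan axis. All parameter values are hypothetical.

```python
# Generic gantry-tilt correction sketch: shear each axial slice back along y.
import numpy as np
from scipy import ndimage

def correct_gantry_tilt(volume: np.ndarray, tilt_deg: float,
                        slice_spacing_mm: float, pixel_spacing_mm: float) -> np.ndarray:
    """Shift each axial slice (axis 0 = z, axis 1 = y) to undo gantry tilt."""
    corrected = np.empty_like(volume)
    shear_per_slice = np.tan(np.radians(tilt_deg)) * slice_spacing_mm / pixel_spacing_mm
    for k in range(volume.shape[0]):
        # shift the slice along y by the accumulated tilt offset (in pixels)
        corrected[k] = ndimage.shift(volume[k], shift=(-k * shear_per_slice, 0.0),
                                     order=1, mode="nearest")
    return corrected

# Hypothetical 20-slice volume with a 5° tilt, 2 mm slices, 0.5 mm pixels
vol = np.random.rand(20, 64, 64).astype(np.float32)
fixed = correct_gantry_tilt(vol, tilt_deg=5.0, slice_spacing_mm=2.0, pixel_spacing_mm=0.5)
print(fixed.shape)
```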
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document states that the MRI to CT registration testing used a "new methodology," and that the CT image processing for gantry tilt used "the same CT scans that were used in the validation testing for the predicate device." The specific numerical sample size (number of MR and CT scans) for the test sets is not provided.
- Data Provenance: The document does not provide information regarding the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not provided. The document describes software validation and verification testing but does not mention the use of experts or their qualifications for establishing ground truth for the test set.
4. Adjudication Method for the Test Set
- Not provided. The document does not describe any adjudication method.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- No. The document describes a software validation study demonstrating that the modified software performs comparably to its predicate. It does not describe an MRMC comparative effectiveness study involving human readers with and without AI assistance. Therefore, no effect size for human reader improvement is provided.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done
- Yes, implicitly. The performance data section describes "software verification and validation testing" which "demonstrated that the software continued to register MR images to the CT space" and that "the cropping image processing does not affect the performance of the software." This implies standalone algorithm performance testing. No human-in-the-loop studies are mentioned.
7. The Type of Ground Truth Used
- The document implies that the ground truth for registration and segmentation performance was established against results from the predicate device and internal specifications/protocols ("within the acceptance criteria," "comparable to that for SIS Software version 3.3.0," "functions as specified"). It does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcome data, etc.) beyond comparison to the predicate's performance.
8. The Sample Size for the Training Set
- Not provided. The document does not discuss the training set, only the validation/test set. The device uses "proprietary algorithms" and states "minor modifications to the registration and CT image processing techniques are introduced... the basis for the device algorithm remain the same." This suggests the core algorithm was developed previously.
9. How the Ground Truth for the Training Set Was Established
- Not provided. As the training set is not discussed, information on how its ground truth was established is absent.
(139 days)
SIS Software is an application intended for use in the viewing, presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The version of the software that is the subject of the current submission (Version 3.3.0) can also be employed to co-register a post-operative CT scan with the clinical scan of the same patient from before a surgery (on which the software has already visualized the STN) and to segment in the CT image (where needed), to further assist with visualization.
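The summary does not describe how the lead is segmented from the post-operative CT. A generic baseline, shown here only as an assumption-laden sketch rather than the device's algorithm, is to threshold the very bright (metal-density) voxels and keep connected components large enough to plausibly be a lead.

```python
# Generic metal-object segmentation sketch for a post-operative CT volume (HU values).
import numpy as np
from scipy import ndimage

def segment_bright_objects(ct_hu: np.ndarray, hu_threshold: float = 2000.0,
                           min_voxels: int = 20) -> np.ndarray:
    """Return a labeled volume of candidate metal objects above an HU threshold."""
    bright = ct_hu > hu_threshold                 # metal is far brighter than bone
    labels, n = ndimage.label(bright)
    keep = np.zeros_like(labels)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_voxels:         # discard small speckle
            keep[component] = i
    return keep

# Hypothetical CT volume with one synthetic "lead" running along z
ct = np.full((40, 64, 64), -1000.0)
ct[5:35, 32, 32] = 3000.0                         # bright line of voxels
print(np.unique(segment_bright_objects(ct)))      # [0 1]
```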
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria and performance data are presented for three main functionalities: STN Visualization, Co-Registration, and Segmentation.
Functionality | Acceptance Criteria | Reported Device Performance |
---|---|---|
STN Visualization | 90% of center of mass distances and surface distances not greater than 2.0mm. Significantly greater than the conservative literature estimate of 20% successful visualizations. | 98.3% of center of mass distances were not greater than 2.0mm (95% CI: 91-100%). 100% of surface distances were not greater than 2.0mm (95% CI: 94-100%). 90% of center of mass distances were below 1.66mm. 90% of surface distances were below 0.63mm. The rate of successful visualizations (98.3%) was significantly greater than 20%, with success classified by whether the distance to the expert-derived ground truth exceeded 2mm. |
7. The Type of Ground Truth Used
- STN Smoothing Functionality:
- Metric-Based (Derived from STN Visualization GT): Ground truth for evaluating smoothing was based on "COM, SD and DC" relative to the STN visualization ground truth.
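The STN visualization results above are reported as rates with 95% confidence intervals and a significance comparison against a 20% literature estimate, but the statistical method is not stated. One standard way to obtain such numbers is an exact (Clopper-Pearson) binomial interval plus a one-sided binomial test; the counts below are hypothetical placeholders, not the submission's data.

```python
# Sketch: exact binomial CI and one-sided test against a 20% reference rate.
from scipy import stats

def clopper_pearson_ci(successes: int, n: int, alpha: float = 0.05):
    """Exact two-sided confidence interval for a binomial proportion."""
    lower = 0.0 if successes == 0 else stats.beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper

successes, n = 59, 60                      # hypothetical counts for illustration
low, high = clopper_pearson_ci(successes, n)
test = stats.binomtest(successes, n, p=0.20, alternative="greater")
print(f"rate={successes/n:.1%}, 95% CI=({low:.1%}, {high:.1%}), p={test.pvalue:.2e}")
```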
8. The Sample Size for the Training Set
- The document states that the STN visualization validation data set (68 STNs) was "completely separate from the data set that was used for development" and "none were used to optimize or design the company's software."
- Regarding the anomaly detection component, it mentions "two separate commonly used outlier detection machine learning models were trained using the brains from the training set." The specific sample size for this training set is not provided.
- For co-registration, there's no mention of a training set as it appears to be a direct registration process, not a machine learning model.
- For segmentation, it's not explicitly stated if a training set was used for the automated segmentation; the validation focuses on the comparison to expert ground truth.
9. How the Ground Truth for the Training Set Was Established
- For the anomaly detection component, it states the models were "trained using the brains from the training set, from which the same brain geometry characteristics were extracted." It then describes how anomaly scores were combined. However, the method for establishing the ground truth on this training set (i.e., what constituted an "anomaly" vs "non-anomaly" during training) is not detailed in the provided text. It presumably involved similar principles of accurate vs. inaccurate visualizations, but the source and method of that ground truth for training are not specified.
- For any other machine learning components (like the core STN visualization algorithm), the document states the methodology "relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI)." The algorithm "uses the 7T images from a database to find regions of interest within the brain (e.g., the STN) on a patient's clinical (1.5 or 3T MRI) image." This implies the 7T MRI data serves as a form of ground truth for training the algorithm to identify STNs on clinical MRI, but the specific process of creating that ground truth from the 7T data (e.g., manual segmentation by experts on 7T) is not detailed.
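For the anomaly-detection component described in item 9, the summary names neither the two "commonly used" outlier-detection models nor the geometry features. The sketch below uses IsolationForest and a one-class SVM on a random placeholder feature matrix, with a simple average of the two anomaly scores standing in for the undisclosed score-combination step; none of these specific choices are confirmed by the document.

```python
# Hypothetical two-model outlier detection on brain-geometry features.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_features = rng.normal(size=(200, 6))   # placeholder geometry features per brain
new_case = rng.normal(size=(1, 6))           # features of a new visualization to score

iso = IsolationForest(random_state=0).fit(train_features)
ocsvm = OneClassSVM(nu=0.05).fit(train_features)

# Higher combined score = more anomalous; both models return "higher = more normal"
iso_score = -iso.score_samples(new_case)
svm_score = -ocsvm.score_samples(new_case)
combined = 0.5 * iso_score + 0.5 * svm_score  # simple average as an illustrative combination
print(float(combined))
```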
(130 days)
SIS Software is an application intended for use in the viewing, presentation of medical imaging, including different modules for image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).
SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing additional, adjunctive information to surgeons, neurologists and radiologists for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The software makes use of the fact that some structures in the brain are not easily visualized in 1.5T or 3T clinical MRI, but are better visualized using high-resolution and high-contrast 7T MRI.
The company's software methodology relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI). The 7T images allow visualization of anatomical structures that are then used to find regions of interest within the brain (i.e., the STN) on a patient's clinical image.
SIS visualization is incorporated in the standard clinical MR data, thereby not changing the current standard-of-care workflow protocol and does not require any additional visualization software or hardware platforms.
Here's a breakdown of the acceptance criteria and the study that proves the SIS Software meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are focused on the accuracy of the Subthalamic Nuclei (STN) visualization. The study compared the machine-predicted STN to ground truth STN.
Acceptance Criteria (Pre-specified) | Reported Device Performance |
---|---|
90% of Center of Mass Distances not greater than 2.0mm | 95% of Center of Mass Distances were not greater than 2.0mm (95% CI: 86.91 - 98.37%) |
90% of Surface Distances not greater than 2.0mm | 100% of Surface Distances were not greater than 2.0mm (95% CI: 94.25 - 100%) |
Significance vs. Standard of Care (20% successful visualizations) | The rate of successful visualizations from SIS Software (95% of center of mass distances not greater than 2.0mm) is significantly greater than the 20% standard-of-care rate. |