510(k) Data Aggregation

    K Number: K183019
    Date Cleared: 2019-03-19 (139 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K173475

    Intended Use

    SIS Software is an application intended for use in the viewing and presentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning, where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).

    Device Description

    SIS Software is an application intended for use in the viewing, presentation and documentation of medical imaging, including different modules for image processing, image fusion, and intraoperative functional planning where the 3D output can be used with stereotactic image guided surgery or other devices for further processing and visualization. The device can be used in conjunction with other clinical methods as an aid in visualization of the subthalamic nuclei (STN).

    SIS Software uses machine learning and image processing to enhance standard clinical images for the visualization of the subthalamic nucleus ("STN"). The SIS Software supplements the information available through standard clinical methods, providing adjunctive information for use in visualization and planning stereotactic surgical procedures. SIS Software provides a patient-specific, 3D anatomical model of the patient's own brain structures that supplements other clinical information to facilitate visualization in neurosurgical procedures. The version of the software that is the subject of the current submission (Version 3.3.0) can also be employed to co-register a post-operative CT scan with the pre-operative clinical scan of the same patient (on which the software has already visualized the STN) and to perform segmentation in the CT image (where needed), to further assist with visualization.
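
    The co-registration step described above (aligning a post-operative CT with the pre-operative clinical scan) is a standard rigid, multimodal registration problem. The document does not describe how SIS Software implements it; the sketch below is only a generic illustration using SimpleITK with a mutual-information metric, and the file names are placeholders.

```python
# Generic rigid CT-to-MR co-registration sketch (illustrative only; not the SIS Software pipeline).
import SimpleITK as sitk

fixed = sitk.ReadImage("preop_clinical_mr.nii.gz", sitk.sitkFloat32)   # placeholder path
moving = sitk.ReadImage("postop_ct.nii.gz", sitk.sitkFloat32)          # placeholder path

# Rough alignment of image centers, then rigid (6-DOF) refinement.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal similarity metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
ct_in_mr_space = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
```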

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria and performance data are presented for five functionalities: STN Visualization, Co-Registration, Segmentation, Anomaly Detection, and STN Smoothing.

    STN Visualization
      • Acceptance Criteria: 90% of center of mass distances and surface distances not greater than 2.0 mm; rate of successful visualizations significantly greater than the conservative literature estimate of 20%.
      • Reported Device Performance: 98.3% of center of mass distances were not greater than 2.0 mm (95% CI: 91-100%); 100% of surface distances were not greater than 2.0 mm (95% CI: 94-100%). 90% of center of mass distances were below 1.66 mm and 90% of surface distances were below 0.63 mm. The rate of successful visualizations (98.3%) was significantly greater than 20% (p < 0.0001). Dice coefficient was 0.69.

    Co-Registration
      • Acceptance Criteria: 95% confidence that 90% of registrations will have corresponding reference point distances below 2 mm.
      • Reported Device Performance: 95% confidence that the error will be below 0.454 mm 90% of the time (mean of maximum error: 0.242 mm, STD: 0.062 mm). This meets the 2 mm criterion.

    Segmentation
      • Acceptance Criteria (Center of Mass, COM): 95% confidence that 90% of segmentations will have COM distances below 1 mm.
      • Reported Device Performance: 95% confidence that 90% of cases will be within 0.491 mm of the center of mass of the real contact (average mean: 0.30 mm, STD: 0.12 mm). This meets the 1 mm criterion.
      • Acceptance Criteria (Orientation): 95% confidence that 90% of segmentations will have orientation differences below 5 degrees.
      • Reported Device Performance: 95% confidence that 90% of cases will be within 2.486 degrees of the real orientation of the lead (average mean: 1.00 degrees, STD: 0.90 degrees). This meets the 5 degree criterion.

    Anomaly Detection
      • Acceptance Criteria: Minimize false negatives; acceptable sensitivity and specificity; improved overall visualization success compared to version 1.0.0.
      • Reported Device Performance: Version 3.3.0 showed improved sensitivity (50.00% vs 0.00% for 1.0.0) and a marginally decreased specificity (89.39% vs 92.31% for 1.0.0). Overall system performance (success with anomaly detection) improved from 95.24% (1.0.0) to 98.33% (3.3.0).

    STN Smoothing Functionality
      • Acceptance Criteria: The smoothed STN visualizations should produce acceptable results for COM, DC, and SD; overall system performance remains in line with the verification criteria for the predicate device.
      • Reported Device Performance: Testing produced acceptable results for COM, DC, and SD. A significant correlation was found between smoothed and non-smoothed STN objects, demonstrating that overall system performance remains in line with the predicate device's verification criteria.
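
    The "95% confidence that 90% of cases ..." acceptance criteria above are phrased as one-sided statistical tolerance bounds. The reported co-registration bound is consistent with a standard normal one-sided tolerance interval, although the document does not state which method was used; the sketch below assumes that convention.

```python
# One-sided upper tolerance bound sketch (assumes approximately normal error distributions;
# the actual statistical method used in the submission is not stated in the document).
import numpy as np
from scipy.stats import nct, norm

def upper_tolerance_bound(mean, sd, n, coverage=0.90, confidence=0.95):
    """Bound exceeded by at most (1 - coverage) of the population, with the given confidence."""
    k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)
    return mean + k * sd

# Co-registration figures from the table (mean/STD of maximum error; n = 5 phantom MR series):
print(upper_tolerance_bound(0.242, 0.062, n=5))   # ~0.45 mm, in line with the reported 0.454 mm
```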

    2. Sample Size Used for the Test Set and Data Provenance

    • STN Visualization Test Set: 68 STNs (from 34 subjects).
      • Data Provenance: Not explicitly stated regarding country of origin. The data were "completely separate from the data set that was used for development" and "none of the 68 STNs were part of the company's database for algorithm development and none were used to optimize or design the company's software." This indicates a held-out test set that was not used for model development; whether the cases were collected retrospectively or prospectively is not stated.
    • Co-Registration Test Set: 5 MR series and 1 CT series of a phantom brain. This suggests a synthetic, controlled test environment rather than patient data.
    • Segmentation Test Set: 26 post-surgical CT scans that contained leads, with a total sample size of 45 electrodes.
      • Data Provenance: Not explicitly stated regarding country of origin or whether it was retrospective or prospective patient data, but it involved "post-surgical CT scans."
    • Anomaly Detection Test Set: The same 68 cases (68 total STNs, 65 successful/3 failed for v1.0.0 and 66 successful/2 failed for v3.3.0) used for STN Visualization; the sensitivity and specificity figures in Section 1 can be reconstructed from these counts (see the sketch after this list).
      • Data Provenance: Same as STN Visualization.
    • STN Smoothing Functionality Test Set: The shapes of the visualized targets from the "verification accuracy testing" were compared. This likely refers to the same 68 STNs from the STN Visualization study.
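
    For context, the anomaly-detection sensitivity and specificity in Section 1 line up with the 68-case split above. The per-cell counts below are inferred from the reported percentages; the document does not state them explicitly.

```python
# Reconstructing the Section 1 anomaly-detection figures from the 68-case split (inferred counts).
def sens_spec(flagged_failures, total_failures, unflagged_successes, total_successes):
    """Sensitivity = flagged failed visualizations / all failed; specificity = unflagged successes / all successes."""
    return flagged_failures / total_failures, unflagged_successes / total_successes

print(sens_spec(0, 3, 60, 65))   # v1.0.0:  0.0% sensitivity, 92.31% specificity (60/65)
print(sens_spec(1, 2, 59, 66))   # v3.3.0: 50.0% sensitivity, 89.39% specificity (59/66)
```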

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • STN Visualization: The text mentions "ground truth STNs (manually segmented clinical images superimposed)", but it doesn't specify the number or qualifications of experts who performed these manual segmentations.
    • Co-Registration: "6 fiducial points were marked by an expert." The qualification of this expert is not provided.
    • Segmentation: "ground truth segmentations were generated by 2 experts." The qualifications of these experts are not provided.
    • Anomaly Detection: Ground truth for anomaly detection was defined by whether visualizations were "Inaccurate visualization" or "Accurate visualization," based on the STN visualization success criteria (>2mm vs <=2mm distance relative to ground truth). The establishment of this underlying ground truth (manual segmentation of STNs) is not detailed beyond what's mentioned for STN Visualization.
    • STN Smoothing Functionality: Ground truth for accuracy was based on "verification accuracy testing," which likely refers back to the STN visualization ground truth.

    4. Adjudication Method for the Test Set

    • STN Visualization: Not explicitly stated. The "ground truth STNs (manually segmented clinical images superimposed)" implies a reference standard, but how discrepancies or initial ground truth was agreed upon if multiple experts were involved is not mentioned.
    • Co-Registration: A single expert marked points. No adjudication method mentioned.
    • Segmentation: "ground truth segmentations were generated by 2 experts." It does not mention an adjudication process if their segmentations differed (e.g., 2+1, 3+1). It's possible they reached consensus, or one might have corrected the other, but this is not stated.
    • Anomaly Detection: No (applicable) adjudication as the ground truth was based on quantitative metrics from STN visualization.
    • STN Smoothing Functionality: No (applicable) adjudication, as it relies on quantitative comparison to ground truth from STN visualization.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    No MRMC comparative effectiveness study was mentioned. The study focuses on the device's standalone performance in providing aid for visualization and measurement. The claim is that the device provides "adjunctive information" and is an "aid in visualization." No human reader performance data (with or without AI) is provided.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the studies described for STN Visualization, Co-Registration, and Segmentation report the performance of the algorithm itself, without human-in-the-loop interaction for the specific quantitative metrics used. The anomaly detection component also describes the algorithm's performance in identifying anomalies.

    7. The Type of Ground Truth Used

    • STN Visualization:
      • Expert Consensus/Manual Segmentation: The ground truth for STN visualization was "manually segmented clinical images superimposed" and "High Field (7T) MRI." The 7T MRI serves as a high-resolution reference considered superior for STN visualization, and the manual segmentations on these images would form the core of the ground truth.
    • Co-Registration:
      • Expert Marking on Phantom: Ground truth was based on fiducial points marked by an expert on a physical phantom.
    • Segmentation:
      • Expert Segmentation: Ground truth was established by "2 experts" who generated segmentations of electrodes from CT images and manually aligned 3D components to those segmentations.
    • Anomaly Detection:
      • Metric-Based (Derived from STN Visualization GT): Ground truth for anomaly detection was defined by the quantitative "accuracy" of the STN visualization (<2mm vs >2mm distance to the expert-derived ground truth).
    • STN Smoothing Functionality:
      • Metric-Based (Derived from STN Visualization GT): Ground truth for evaluating smoothing was based on "COM, SD and DC" relative to the STN visualization ground truth.
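
    The center of mass (COM) distance, surface distance (SD), and Dice coefficient (DC) used against these ground truths can be computed from binary masks. A minimal sketch with generic definitions follows; the exact variants used in the submission (e.g. mean vs. maximum surface distance) are not specified, so the symmetric mean surface distance below is an assumption.

```python
# Generic segmentation-comparison metrics (definitions assumed; both masks must share one voxel grid).
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """DC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def com_distance_mm(pred, gt, spacing_mm):
    """Euclidean distance between the masks' centers of mass, in millimetres."""
    spacing = np.asarray(spacing_mm, dtype=float)
    com_pred = np.asarray(ndimage.center_of_mass(pred.astype(bool))) * spacing
    com_gt = np.asarray(ndimage.center_of_mass(gt.astype(bool))) * spacing
    return float(np.linalg.norm(com_pred - com_gt))

def mean_surface_distance_mm(pred, gt, spacing_mm):
    """Symmetric mean distance between mask surfaces (brute force; fine for small ROIs like the STN)."""
    spacing = np.asarray(spacing_mm, dtype=float)
    def surface_points(mask):
        mask = mask.astype(bool)
        return np.argwhere(mask & ~ndimage.binary_erosion(mask)) * spacing
    sp, sg = surface_points(pred), surface_points(gt)
    d_pg = np.min(np.linalg.norm(sp[:, None, :] - sg[None, :, :], axis=-1), axis=1)
    d_gp = np.min(np.linalg.norm(sg[:, None, :] - sp[None, :, :], axis=-1), axis=1)
    return float((d_pg.mean() + d_gp.mean()) / 2.0)
```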

    8. The Sample Size for the Training Set

    • The document states that the STN visualization validation data set (68 STNs) was "completely separate from the data set that was used for development" and "none were used to optimize or design the company's software."
    • Regarding the anomaly detection component, it mentions "two separate commonly used outlier detection machine learning models were trained using the brains from the training set." The specific sample size for this training set is not provided.
    • For co-registration, there's no mention of a training set as it appears to be a direct registration process, not a machine learning model.
    • For segmentation, it's not explicitly stated if a training set was used for the automated segmentation; the validation focuses on the comparison to expert ground truth.

    9. How the Ground Truth for the Training Set Was Established

    • For the anomaly detection component, it states the models were "trained using the brains from the training set, from which the same brain geometry characteristics were extracted." It then describes how anomaly scores were combined. However, the method for establishing the ground truth on this training set (i.e., what constituted an "anomaly" vs "non-anomaly" during training) is not detailed in the provided text. It presumably involved similar principles of accurate vs. inaccurate visualizations, but the source and method of that ground truth for training are not specified.
    • For any other machine learning components (like the core STN visualization algorithm), the document states the methodology "relies on a reference database of high-resolution brain images (7T MRI) and standard clinical brain images (1.5T or 3T MRI)." The algorithm "uses the 7T images from a database to find regions of interest within the brain (e.g., the STN) on a patient's clinical (1.5 or 3T MRI) image." This implies the 7T MRI data serves as a form of ground truth for training the algorithm to identify STNs on clinical MRI, but the specific process of creating that ground truth from the 7T data (e.g., manual segmentation by experts on 7T) is not detailed.
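
    The document says two "commonly used outlier detection machine learning models" were trained on brain-geometry characteristics and that their anomaly scores were combined, without naming the models, features, or combination rule. The sketch below only illustrates that general pattern; IsolationForest and OneClassSVM are stand-ins, and the feature matrix and averaging rule are assumptions.

```python
# Illustrative two-detector anomaly scoring (models, features, and combination rule are assumptions).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# X_train: brain-geometry characteristics from the training-set brains
# (feature definitions are not disclosed; random placeholder data is used here).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))

scaler = StandardScaler().fit(X_train)
Xs = scaler.transform(X_train)

iso = IsolationForest(random_state=0).fit(Xs)          # detector 1 (stand-in)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(Xs)    # detector 2 (stand-in)

def combined_anomaly_score(features):
    """Average the two detectors' raw scores (no rescaling in this simple sketch);
    score_samples is higher-is-more-normal, so the sign is flipped to get an anomaly score."""
    x = scaler.transform(np.atleast_2d(features))
    return float(np.mean([-iso.score_samples(x)[0], -ocsvm.score_samples(x)[0]]))

# A new case would be flagged when its combined score exceeds a threshold chosen during
# development; the document does not describe how that threshold was set.
print(combined_anomaly_score(rng.normal(size=10)))
```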