
510(k) Data Aggregation

    K Number: K153660
    Date Cleared: 2016-09-14 (268 days)
    Product Code:
    Regulation Number: 882.4560
    Device Name: StealthStation System with Cranial Software

    Intended Use

    The StealthStation® System, with StealthStation® Cranial Software, is intended to aid in locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial Biopsies
    • Deep Brain Stimulation (DBS) Lead Placement
    • Depth Electrode Placement
    • Tumor Resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF Leak Repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement

    The user should consult the "Navigational Accuracy" section of the User Manual to assess if the accuracy of the system is suitable for their needs.

    Device Description

    The StealthStation® System, with StealthStation® Cranial v3.0 software, helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation® Cranial v3.0 software works in conjunction with an Image Guided System (IGS), which consists of clinical software, surgical instruments, a referencing system, and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation® Cranial v3.0 software functionality is described in terms of its feature sets, which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and is necessary to achieve system performance.
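As an illustration of the navigation concept described above (a sketch only, not Medtronic's implementation): mapping an instrument position from tracker space into image space can be modeled as applying a rigid registration transform (a rotation plus a translation) in homogeneous coordinates. All function names and values here are assumptions for illustration.

```python
import numpy as np

def make_rigid_transform(rotation_deg_z, translation):
    """Build a 4x4 homogeneous rigid transform: rotation about z, then translation."""
    theta = np.radians(rotation_deg_z)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

def tracker_to_image(point_tracker, T):
    """Map a 3D point from tracker coordinates into image coordinates."""
    p = np.append(point_tracker, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]

# A 90° rotation about z sends (1, 0, 0) to (0, 1, 0); adding the
# translation (10, 0, 5) yields (10, 1, 5) in image space.
T = make_rigid_transform(90.0, [10.0, 0.0, 5.0])
tip_image = tracker_to_image(np.array([1.0, 0.0, 0.0]), T)
```

In a real IGS the transform comes from registering fiducials or surface points between the patient and the preoperative images; here it is hard-coded purely to show the coordinate mapping.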

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    3D Positional Accuracy (acceptance criterion: mean error ≤ 2.0 mm)

    Stereotactic frame systems:
    • Cranial 3.0 Stereotactic Frame System: 1.57 mm (mean)
    • Cranial 3.0 with Nexframe® System: 1.65 mm (mean)
    • Cranial 3.0 with STarFix™ System: 1.08 mm (mean)

    Electromagnetic (EM) localization:
    • Cranial 3.0 with EM Localization System: 1.67 mm (mean)

    Trajectory Angle Accuracy (acceptance criterion: mean error ≤ 2.0 degrees)

    Stereotactic frame systems:
    • Cranial 3.0 Stereotactic Frame System: 0.52 degrees (mean)
    • Cranial 3.0 with Nexframe® System: 0.68 degrees (mean)
    • Cranial 3.0 with STarFix™ System: 0.70 degrees (mean)

    Electromagnetic (EM) localization:
    • Cranial 3.0 with EM Localization System: 1.31 degrees (mean)
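The pass/fail logic implied by the table above can be sketched as a simple check: every configuration's reported mean error must not exceed the acceptance limit. The dictionary structure and variable names are assumptions; the numeric values are taken from the reported results.

```python
# Acceptance limits from the 510(k) summary.
POSITIONAL_LIMIT_MM = 2.0
ANGULAR_LIMIT_DEG = 2.0

# Reported mean errors per configuration.
positional_mm = {
    "Cranial 3.0 Stereotactic Frame System": 1.57,
    "Cranial 3.0 with Nexframe System": 1.65,
    "Cranial 3.0 with STarFix System": 1.08,
    "Cranial 3.0 with EM Localization System": 1.67,
}
angular_deg = {
    "Cranial 3.0 Stereotactic Frame System": 0.52,
    "Cranial 3.0 with Nexframe System": 0.68,
    "Cranial 3.0 with STarFix System": 0.70,
    "Cranial 3.0 with EM Localization System": 1.31,
}

# All configurations meet their respective acceptance criteria.
positional_pass = all(v <= POSITIONAL_LIMIT_MM for v in positional_mm.values())
angular_pass = all(v <= ANGULAR_LIMIT_DEG for v in angular_deg.values())
print(positional_pass, angular_pass)  # True True
```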

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the numerical sample size (e.g., number of phantoms, number of trials) used for the performance testing. It mentions using "anatomically representative phantoms" and "a subset of system components and features that represent the worst-case combinations."

    The data provenance is from laboratory and simulated use settings, implying prospective testing conducted by Medtronic Navigation, Inc. The country of origin is not explicitly stated, but the company is located in Louisville, Colorado, USA.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not mention the use of experts to establish ground truth for the performance validation tests. The "ground truth" for positional and trajectory accuracy in this context would likely be established by precise measurements within the engineered phantoms themselves, not by human experts.

    4. Adjudication Method for the Test Set

    Not applicable. The performance validation used objective measurements against engineered phantoms, not human assessment requiring adjudication.

    5. Was a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Done? If So, What Was the Effect Size of Human Reader Improvement With vs. Without AI Assistance?

    No, an MRMC comparative effectiveness study involving human readers or AI assistance in that context was not performed. This device is an image-guided surgery system, not an AI diagnostic tool that assists human readers in interpreting images. Its function is to provide real-time navigation during surgical procedures based on pre-operative imaging.

    6. Was a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Done?

    Yes, the performance validation described is a standalone evaluation of the system's accuracy. The tests measured the system's ability to determine 3D position and trajectory angle accurately, independent of human-in-the-loop performance during the measurement process; they evaluate the mechanical and software accuracy of the navigation system itself.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    The ground truth used was based on engineered phantoms with known anatomical representations and precise target locations. The "ground truth" for accuracy measurements would be the known, precisely defined coordinates and trajectories within these phantoms.
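Measuring accuracy against an engineered phantom of this kind reduces to two standard computations (a sketch under assumed names and sample values, not the filing's actual test procedure): positional error is the Euclidean distance between the navigated point and the phantom's known target, and trajectory error is the angle between the measured and known direction vectors.

```python
import math

def positional_error_mm(measured, truth):
    """Euclidean distance between a navigated point and the known phantom target."""
    return math.dist(measured, truth)

def trajectory_angle_deg(v_measured, v_truth):
    """Angle in degrees between a measured trajectory and the known trajectory."""
    dot = sum(a * b for a, b in zip(v_measured, v_truth))
    norm = math.hypot(*v_measured) * math.hypot(*v_truth)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Hypothetical sample values: a navigated point 1.5 mm off the known target,
# and a measured trajectory perpendicular to the known one.
err = positional_error_mm((10.0, 5.0, 3.0), (10.0, 5.0, 4.5))  # 1.5
ang = trajectory_angle_deg((0.0, 0.0, 1.0), (0.0, 1.0, 0.0))   # 90.0
```

Aggregating such per-target errors across the phantom's known coordinates yields the mean errors reported in the table above.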

    8. The Sample Size for the Training Set

    The document does not explicitly mention a training set or its sample size. This implies that the device's accuracy is being verified against its design specifications, likely through a deterministic system or one with pre-calibrated components, rather than a machine learning model that requires a distinct training phase.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as a distinct "training set" and associated ground truth establishment for a machine learning model are not described in the context of this device's validation. The device's performance is driven by its inherent design and calibration, not by learning from a training dataset.
