510(k) Data Aggregation

Found 18 results

    K Number: K231976
    Date Cleared: 2023-10-19 (108 days)
    Regulation Number: 882.4560
    Device Name: StealthStation Cranial Software, v3.1.5 (9735585)

    Intended Use

    The StealthStation System, with StealthStation Cranial software, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial biopsies (including stereotactic)
    • Deep brain stimulation (DBS) lead placement
    • Depth electrode placement
    • Tumor resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF leak repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement
    Device Description

    The StealthStation System, with StealthStation Cranial software, helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation Cranial Software works in conjunction with an Image Guided System (IGS), which consists of clinical software, surgical instruments, a referencing system, and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation Cranial Software functionality is described in terms of its feature sets, which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.
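
    Of the feature sets listed above, registration is the one that ties image space to the physical patient: fiducial points identified in the images are paired with the same points located on the patient, and a rigid transform is computed between them. The sketch below illustrates one standard way such a paired-point rigid registration can be computed (the SVD-based Kabsch method); it is a generic illustration under that assumption, not the StealthStation algorithm, and the function names and synthetic data are hypothetical.

```python
import numpy as np

def rigid_register(image_pts: np.ndarray, patient_pts: np.ndarray):
    """Paired-point rigid registration via the Kabsch/SVD method.

    image_pts, patient_pts: (N, 3) arrays of corresponding fiducial
    positions in image space and in patient (tracker) space.
    Returns R, t such that R @ p_image + t approximates p_patient.
    """
    ci = image_pts.mean(axis=0)                      # centroid, image space
    cp = patient_pts.mean(axis=0)                    # centroid, patient space
    H = (image_pts - ci).T @ (patient_pts - cp)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Synthetic check: recover a known rotation/translation from 6 fiducials
rng = np.random.default_rng(0)
image_fids = rng.uniform(-50.0, 50.0, size=(6, 3))              # mm
theta = np.deg2rad(20.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
patient_fids = image_fids @ true_R.T + np.array([10.0, -5.0, 2.0])
patient_fids += rng.normal(0.0, 0.2, patient_fids.shape)        # 0.2 mm noise
R, t = rigid_register(image_fids, patient_fids)
fre = np.linalg.norm((image_fids @ R.T + t) - patient_fids, axis=1).mean()
print(f"mean fiducial registration error: {fre:.3f} mm")
```

    In a full navigation system, accuracy also depends on tracking, image resolution, and instrument calibration, which is consistent with the summaries below reporting system-level, worst-case-configuration accuracy against 2.0 mm / 2.0 degree thresholds rather than registration error alone.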

    AI/ML Overview

    The furnished document is a 510(k) premarket notification for the StealthStation Cranial Software, version 3.1.5. It details the device's indications for use and technological characteristics, and it substantiates equivalence to a predicate device through performance testing.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria                                  | Subject Device (StealthStation Cranial Software v3.1.5) | Predicate Device (StealthStation Cranial Software v3.1.4)
    3D positional accuracy (mean error) ≤ 2.0 mm         | 0.824 mm                                                | 1.27 mm
    Trajectory angle accuracy (mean error) ≤ 2.0 degrees | 0.615 degrees                                           | 1.02 degrees
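
    For context, the two metrics in this table are conventionally computed as the mean Euclidean distance between navigated and known target positions, and the mean angle between navigated and planned trajectory directions. The sketch below is a minimal, generic illustration under that assumption, not the validation protocol from the submission; the data values are purely illustrative.

```python
import numpy as np

def positional_error_mm(navigated: np.ndarray, truth: np.ndarray) -> float:
    """Mean 3D positional error: Euclidean distance between navigated
    and known (e.g., phantom) target positions, both (N, 3) arrays in mm."""
    return float(np.linalg.norm(navigated - truth, axis=1).mean())

def trajectory_error_deg(navigated_dirs: np.ndarray, truth_dirs: np.ndarray) -> float:
    """Mean trajectory angle error: angle between navigated and known
    trajectory direction vectors, both (N, 3) arrays."""
    nav = navigated_dirs / np.linalg.norm(navigated_dirs, axis=1, keepdims=True)
    ref = truth_dirs / np.linalg.norm(truth_dirs, axis=1, keepdims=True)
    cos = np.clip((nav * ref).sum(axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

# Illustrative check against the 2.0 mm threshold on toy data
nav_pts  = np.array([[10.2, 5.1, 3.0], [22.4, 7.9, 1.2]])
true_pts = np.array([[10.0, 5.0, 3.5], [22.0, 8.0, 1.0]])
print(positional_error_mm(nav_pts, true_pts) <= 2.0)   # True for this toy data
```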

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions "System accuracy validation testing" was conducted. However, it does not specify the sample size for this test set (e.g., number of cases, images, or measurements).

    Regarding data provenance, the document does not explicitly state the country of origin of the data nor whether the data used for accuracy testing was retrospective or prospective. The study focuses on demonstrating substantial equivalence through testing against predefined accuracy thresholds rather than utilizing patient-specific clinical data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    The document does not provide information on the number of experts used to establish ground truth for the system accuracy validation testing, nor their specific qualifications. It mentions "User exploratory testing to explore clinical workflows, including standard and unusual clinically relevant workflows. This testing will include subject matter experts, internal and field support personnel," but this refers to a different type of testing (usability/workflow exploration) rather than objective ground truth establishment for accuracy measurements.

    4. Adjudication Method for the Test Set:

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth for the system accuracy validation testing. The accuracy measurements appear to be objective, derived from controlled testing environments rather than subjective expert interpretations requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted as part of this submission. The testing described is focused on the standalone performance of the device's accuracy in a controlled environment, not on how human readers perform with or without AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-loop Performance):

    Yes, standalone performance testing was done. The "System accuracy validation testing" directly assesses the algorithm's performance in achieving specific positional and angular accuracy. The reported "Positional Error - 0.824 mm" and "Trajectory Error - 0.615 degrees" are metrics of the standalone algorithm's accuracy without direct human intervention in the measurement process itself, although the device is ultimately used by humans in a clinical context.

    7. Type of Ground Truth Used:

    The ground truth for the system accuracy validation testing appears to be based on objective, controlled measurements within a testing environment, likely involving phantom models or precise physical setups where the true position and orientation are known or can be measured with high precision. This is implied by the nature of "3D positional accuracy" and "trajectory angle accuracy" measurements, which are typically determined against a known, precise reference. It is not expert consensus, pathology, or outcomes data.

    8. Sample Size for the Training Set:

    The document does not provide any information regarding the sample size for a training set. This is because the StealthStation Cranial Software is a navigation system that uses image processing and registration algorithms, rather than a machine learning model that requires a distinct training dataset in the traditional sense. The software's development likely involves engineering principles and rigorous testing against design specifications, not iterative learning from data.

    9. How the Ground Truth for the Training Set Was Established:

    As the device does not appear to be an AI/ML model that undergoes a machine learning "training" phase with a labeled dataset in the conventional understanding for medical imaging analysis, the concept of establishing ground truth for a training set is not applicable in this context. The software's functionality is based on established algorithms for image registration and instrument tracking, which are then validated through performance testing against pre-defined accuracy thresholds.


    K Number: K221087
    Date Cleared: 2022-06-10 (58 days)
    Regulation Number: 882.4560
    Device Name: Synergy Cranial v2.2.9, StealthStation Cranial v3.1.4

    Intended Use

    Synergy Cranial v2.2.9:
    The StealthStation System, with Synergy Cranial software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures:

    • Cranial Biopsies
    • Tumor Resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF Leak Repair
    • Pediatric Catheter Shunt Placement
    • General Catheter Shunt Placement

    StealthStation Cranial Software v3.1.4:
    The StealthStation System, with StealthStation Cranial software, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial biopsies (including stereotactic)
    • Deep brain stimulation (DBS) lead placement
    • Depth electrode placement
    • Tumor resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF leak repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement
    Device Description

    The StealthStation System, with StealthStation Cranial software, helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation Cranial software works in conjunction with an Image Guided System (IGS), which consists of clinical software, surgical instruments, a referencing system, and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation Cranial software functionality is described in terms of its feature sets, which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    The Medtronic Navigation, Inc. StealthStation Cranial Software (v3.1.4) and Synergy Cranial Software (v2.2.9) are image-guided surgery (IGS) systems intended to aid in precisely locating anatomical structures during neurosurgical procedures.

    Here's an analysis of the acceptance criteria and study that proves the device meets them, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The primary acceptance criteria for both software versions are related to system accuracy in 3D positional and trajectory angle measurements.

    Acceptance Criteria (System Accuracy)               | Synergy Cranial v2.2.9 | StealthStation Cranial v3.1.3/v3.0
    3D positional accuracy: mean error ≤ 2.0 mm         | 1.29 mm                | 1.27 mm
    Trajectory angle accuracy: mean error ≤ 2.0 degrees | 0.87 degrees           | 1.02 degrees

    Note: In the testing section, the document refers both to "StealthStation Cranial v3.1.3" and to "StealthStation Cranial v3.0 Software" when reporting the newer version's accuracy. It appears that v3.1.3 is the subject software and v3.0 is a close predecessor or the system version used for the test; the "v3.1.4" named in the 510(k) letter is likely a minor update from v3.1.3, so the reported v3.1.3 performance is considered representative.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the sample size (number of patients or phantom configurations) used for the quantitative accuracy testing (test set). It mentions:

    • "Under representative worst-case configuration"
    • "utilizing a subset of system components and features that represent the worst-case combinations of all potential system components."
    • "Test configurations included CT images with slice spacing and thickness ranging between 0.6 mm to 1.25 mm and T1-weighted MR images with slice spacing and thickness ranging between 1.0 mm to 3.0 mm."

    Data Provenance: The data appears to be prospective as it was generated through laboratory and simulated use settings with "anatomically representative phantoms." The country of origin is not explicitly stated, but given Medtronic Navigation, Inc. is located in Louisville, Colorado, USA, it's highly probable the testing was conducted in the USA.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document indicates that the accuracy was determined using "anatomically representative phantoms." This implies that the ground truth for positional and angular accuracy was engineered and precisely measured within a controlled phantom environment, rather than established by human experts interpreting clinical data. Therefore, human experts were likely involved in designing and validating the phantom setup and measurement methodologies, but not in directly establishing ground truth from patient data. The qualifications of these individuals are not specified but would typically be engineers, physicists, or metrology specialists.

    4. Adjudication Method for the Test Set

    Given that the ground truth was established through a designed phantom and precise measurements, an adjudication method for human interpretation is not applicable here. The measurements are objective and quantitative.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was mentioned for human readers improving with AI vs. without AI assistance. The device is a surgical navigation system, aiding in real-time guidance, not an AI-assisted diagnostic tool that would typically undergo MRMC studies.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance)

    Yes, a standalone performance was done for the system's accuracy. The reported positional and trajectory angle errors are measures of the system's inherent accuracy, independent of a specific human-in-the-loop scenario. The study describes "Design verification and validation was performed using the StealthStation Cranial software in laboratory and simulated use settings."

    7. The Type of Ground Truth Used

    The ground truth used was engineered truth derived from precisely measured anatomical phantoms. This is a highly controlled and quantitative method, suitable for measuring the accuracy of a navigation system.

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of an AI/machine learning model. The device is referred to as "software" for an Image Guided System (IGS), which typically relies on established algorithms for image processing, registration, and tracking, rather than deep learning models that require large training datasets with ground truth labels in the conventional sense. The "training" for such a system would involve rigorous formal verification and validation of these algorithms.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, the concept of a "training set" and its associated ground truth, as typically applied to AI/machine learning, does not appear to be directly applicable to the description of this device's development as presented in the 510(k) summary. The development involved "Software verification and validation testing for each requirement specification" and "System integration performance testing for cranial surgical procedures using anatomical phantoms," suggesting traditional software engineering and testing methodologies rather than machine learning training.


    K Number: K212397
    Date Cleared: 2021-12-22 (142 days)
    Regulation Number: 882.4560
    Device Name: StealthStation S8 Cranial v2.0

    Intended Use

    The StealthStation System, with StealthStation Cranial software, is intended as an aid for locating anatomical structures in either open or percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The StealthStation™ Cranial Software v2.0 works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. During surgery, positions of specialized surgical instruments are continuously updated on these images either by optical tracking or electromagnetic tracking.

    Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    The changes to the currently cleared StealthStation™ S8 Cranial Software are as follows:

    • Addition of white matter tractography (WMT) fiber tract creation for the brain, referred to as diffusion Magnetic Resonance Imaging (dMRI) tractography. dMRI tractography will process diffusion-weighted MRI data into 3D fiber tract models that represent white matter tracts. This will be marketed as a software option called Stealth™ Tractography (a generic sketch of this kind of streamline tracking follows this list).
    • Addition of the Medtronic SenSight™ directional DBS lead to the existing list of view overlays.
    • Minor changes to the software were made to address user preferences and to fix minor anomalies.
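
    dMRI tractography of the kind referenced in the first bullet is commonly implemented as deterministic streamline tracking: starting from a seed point, the algorithm repeatedly steps along the locally dominant diffusion direction until anisotropy drops or the path bends too sharply. The sketch below is a generic illustration under that assumption, not the Stealth™ Tractography implementation; the field functions, thresholds, and names are hypothetical.

```python
import numpy as np

def track_fiber(seed, principal_dir, fa, step_mm=1.0, fa_min=0.2,
                max_angle_deg=45.0, max_steps=500):
    """Deterministic streamline tracking over a diffusion field.

    seed:           (3,) starting position
    principal_dir:  function (x, y, z) -> unit principal diffusion direction (3,)
    fa:             function (x, y, z) -> fractional anisotropy in [0, 1]
    Returns the list of points making up one fiber tract.
    """
    points = [np.asarray(seed, dtype=float)]
    direction = principal_dir(*points[0])
    cos_limit = np.cos(np.deg2rad(max_angle_deg))
    for _ in range(max_steps):
        p = points[-1] + step_mm * direction
        if fa(*p) < fa_min:                          # stop in low-anisotropy tissue
            break
        new_dir = principal_dir(*p)
        if np.dot(new_dir, direction) < 0:           # keep a consistent orientation
            new_dir = -new_dir
        if np.dot(new_dir, direction) < cos_limit:   # stop on sharp bends
            break
        points.append(p)
        direction = new_dir
    return points

# Toy field: fibers run along +x with uniform anisotropy (hypothetical data)
tract = track_fiber(seed=(0.0, 0.0, 0.0),
                    principal_dir=lambda x, y, z: np.array([1.0, 0.0, 0.0]),
                    fa=lambda x, y, z: 0.7 if abs(x) < 20 else 0.0)
print(f"tract with {len(tract)} points, ending at {tract[-1]}")
```
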
    AI/ML Overview

    The provided text describes the performance testing and acceptance criteria for the Medtronic Navigation StealthStation S8 Cranial v2.0 software, particularly focusing on the new white matter tractography (WMT) feature.

    Here's a breakdown of the requested information:

    1. Table of acceptance criteria and the reported device performance:

    Performance Measure | Acceptance Criteria / Target | Reported Device Performance
    System accuracy (3D positional accuracy) | Mean error ≤ 2.0 mm | Mean error ≤ 2.0 mm
    System accuracy (trajectory angle accuracy) | Mean error ≤ 2.0 degrees | Mean error ≤ 2.0 degrees
    Software functionality (dMRI tractography) | Correct creation and rendering of dMRI tracts in views and functionality of the dMRI tractography feature requirements | Performance testing demonstrated the correct creation and rendering of dMRI tracts in views in the application and the functionality of the dMRI tractography feature requirements
    Usability (summative validation) | Safe and effective for intended users, uses, and use environments | Summative evaluations demonstrated that StealthStation™ Cranial Software v2.0 with Stealth™ Tractography is safe and effective for the intended users, uses, and use environments
    Clinical expert evaluation (white matter tracts) | Assessment of the rendering of white matter tracts and their relationship to other key structures with respect to treatment planning, intraoperative navigation, and potential to aid clinical decision making | Clinical experts assessed the rendering of the white matter tracts and their relationship to other key structures with respect to treatment planning, intraoperative navigation, and the potential to aid clinical decision making

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: The document does not specify a numerical sample size for the "datasets" used in summative usability validation and clinical expert evaluation. It states "datasets not used for development, composed of normal and abnormal brains in both pediatric and adult populations."
    • Data Provenance: Not explicitly stated, but the mention of "datasets not used for development" suggests a separate, possibly curated, test set. There is no information on the country of origin or whether the data was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: The document refers to "clinical experts" (plural) but does not specify the exact number.
    • Qualifications of Experts: Not explicitly stated (e.g., "radiologist with 10 years of experience"). It only identifies them as "representative users" and "clinical experts."

    4. Adjudication method for the test set:

    • The document does not describe a formal adjudication method (e.g., 2+1, 3+1). It states that "Clinical expert evaluations included white matter tract generation and editing," implying direct assessment by these experts.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No MRMC comparative effectiveness study is mentioned. The study focuses on the device's performance and validation through usability and clinical expert evaluation of the tractography feature, not on human reader performance improvement with AI assistance. The device functions as an aid for locating anatomical structures and displays information; it doesn't appear to be an AI that assists human interpretation in a comparative effectiveness sense.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance test was done for the "System Accuracy" related to 3D positional accuracy and trajectory angle accuracy. This was determined "using anatomically representative phantoms and utilizing a subset of system components and features that represent the worst-case combinations of all potential system components." This implies assessment of the system's ability to achieve these accuracy metrics independently of human interaction during the measurement. The "correct creation and rendering of dMRI tracts" also implies an algorithm-only assessment of the output.

    7. The type of ground truth used:

    • For System Accuracy (Positional and Trajectory): "Anatomically representative phantoms" were used. The ground truth would be the known, precisely measured dimensions and positions within these phantoms.
    • For Software Functionality (dMRI tractography): The ground truth appears to be based on whether the software correctly creates and renders the dMRI tracts as per established specifications and expectations, as assessed by performance testing. Clinical experts further evaluated the quality and clinical utility of these rendered tracts in relation to other structures.
    • For Usability and Clinical Expert Evaluation: The ground truth is effectively the consensus or expert judgment of the "representative users" and "clinical experts" regarding the safety, effectiveness, and clinical utility of the software and its new tractography feature. This is a form of expert consensus or clinical judgment. No mention of pathology or outcomes data for establishing ground truth is made in this context.

    8. The sample size for the training set:

    • The document does not provide any information about a training set since this is a regulatory submission for a software device, not an AI model that requires a distinct training phase. The new feature, dMRI tractography, processes diffusion-weighted MRI data into 3D fiber models. While the underlying algorithms would have been developed and "trained" (in a broader development sense), this document does not refer to a dedicated "training set" in the context of the device's clearance.

    9. How the ground truth for the training set was established:

    • Not applicable, as a "training set" distinct for an AI model is not described in this regulatory submission. The development and verification of the tractography algorithms would have involved internal processes and known physics/mathematics of dMRI data processing.

    K Number: K211442
    Date Cleared: 2021-07-08 (59 days)
    Regulation Number: 882.4560
    Device Name: StealthStation Spinous Process Clamps

    Intended Use

    The navigated instruments are specifically designed for use with the StealthStation™ System, which is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure such as a skull, a long bone, or vertebra can be identified relative to a CT or MR based model, fluoroscopy images, or digitized landmarks of the anatomy.

    When used with a Medtronic StealthStation™ Navigation System, the Spine Referencing fixation devices are intended to provide rigid attachment between patient and patient reference frame for the duration of the surgery.

    Device Description

    The Spinous Process Clamps are intended to provide rigid attachment between patient and patient reference frame for the duration of the surgery. The subject devices are designed for use with the StealthStation™ System and are intended to be reusable.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a medical device, the StealthStation™ Spinous Process Clamps. This document outlines the device's characteristics, intended use, and a comparison to a predicate device, along with a summary of performance testing.

    However, the document does not describe an AI/ML-driven device or a study involving "human readers" improving with "AI vs without AI assistance." It pertains to a physical stereotaxic instrument used in spinal surgery for rigid attachment to a patient's anatomy for navigation.

    Therefore, many of the specifics addressed in the questions below (e.g., sample size for test/training sets, data provenance, number of experts for ground truth, adjudication method, MRMC studies, standalone performance, type of ground truth for AI, training set details) are not applicable to this type of medical device submission.

    The document discusses performance testing relevant to a mechanical device, such as functional verification, useful life testing, navigation accuracy testing, and packaging verification, as well as biological endpoint testing. These tests are to ensure the device's safety and effectiveness as a physical surgical tool and reference system, not as an AI diagnostic or assistive tool.

    The analysis below extracts the information that is present and explicitly notes when an item is not applicable given the nature of the device.


    Acceptance Criteria and Device Performance for Medtronic StealthStation™ Spinous Process Clamps

    The device in question, the StealthStation™ Spinous Process Clamps, is a physical stereotaxic instrument, not an AI/ML-driven device. Therefore, the "acceptance criteria" and "study" described in the provided text relate to the mechanical and biological performance of this instrument, not to the performance of an AI algorithm or its impact on human reader performance.

    The "studies" are performance tests designed to demonstrate the device's substantial equivalence to a predicate device and its safety and effectiveness for its intended use as a surgical instrument.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of quantitative acceptance criteria with corresponding performance metrics like those typically seen for AI/ML device validations (e.g., sensitivity, specificity, AUC thresholds). Instead, the performance testing described is qualitative or refers to compliance with established standards for mechanical and biological safety.

    Category              | Acceptance Criteria (implied / stated objective)                                                                                     | Reported Device Performance (summary)
    Functional            | Device satisfies functional requirements.                                                                                            | Functional Verification confirms the design satisfies functional requirements.
    Useful Life           | Device operates normally throughout its useful life.                                                                                 | Useful Life Testing confirms normal operation throughout its useful life.
    Navigational Accuracy | Robustness and navigational accuracy are verified.                                                                                   | Navigation Accuracy Testing verifies robustness and navigational accuracy.
    Packaging Integrity   | Device can withstand ship testing per ASTM D4169 and ISTA 2A.                                                                        | Packaging Verification confirms the packaging withstands ship testing per ASTM D4169 and ISTA 2A.
    Biocompatibility      | Non-cytotoxic, non-sensitizing, non-irritating, non-toxic, non-pyrogenic; negligible risk of adverse biological effects to patients. | Biological endpoint testing (per ISO 10993-1:2018) indicates the device is non-cytotoxic, non-sensitizing, non-irritating, non-toxic, and non-pyrogenic.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated as a "sample size" in the context of patient data or algorithm testing. The performance testing likely involved a limited number of physical devices (e.g., clamps) for mechanical and biological evaluations. This is not a data-driven AI model.
    • Data Provenance: Not applicable. The "data" comes from physical testing of the device, not from patient medical records or imaging scans. The testing would have occurred in a laboratory or manufacturing environment.
    • Retrospective/Prospective: Not applicable. The testing is a controlled, experimental assessment of the device's physical properties and performance, not a study on historical or future patient data.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Not applicable. This section is relevant for AI/ML applications where expert labeling is used to create ground truth for image classification, segmentation, etc. For a mechanical device, "ground truth" relates to engineering specifications, physical measurements, and compliance with industry standards, which are evaluated by engineers and technical specialists, not typically "experts" in the context of medical image interpretation.

    4. Adjudication Method for the Test Set

    • Not applicable. Adjudication methods (e.g., 2+1, 3+1 consensus) are used in studies involving human interpretation of complex medical data, especially for establishing ground truth in AI model development. This device's testing involves objective engineering and biological assessments.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • No. An MRMC study is specific to evaluating the impact of an AI algorithm on human reader performance, usually in diagnostics. This device is a physical surgical instrument, not an AI diagnostic tool.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • No, not applicable. This concept pertains to the performance of an AI algorithm by itself. The StealthStation™ Spinous Process Clamps are physical devices that are used with a navigation system and by a human surgeon. Their performance is inherently related to their physical interaction and functionality for surgical navigation.

    7. The Type of Ground Truth Used

    • Engineering Specifications and Standardized Test Methods: For functional verification, useful life, packaging, and navigational accuracy, the "ground truth" would be the pre-defined engineering specifications, design requirements, and objective measurements obtained using established test methodologies (e.g., ASTM, ISTA, internal quality standards).
    • ISO 10993-1:2018 Standards: For biocompatibility, the ground truth is established by the accepted biological safety endpoints and testing protocols outlined in the ISO 10993 series of standards.

    8. The Sample Size for the Training Set

    • Not applicable. This device is not an AI/ML algorithm that requires a "training set" of data.

    9. How the Ground Truth for the Training Set was Established

    • Not applicable. As there is no training set for an AI/ML algorithm, the concept of establishing ground truth for it does not apply.

    K Number: K203639
    Date Cleared: 2021-01-13 (30 days)
    Regulation Number: 882.4560
    Device Name: StealthStation Cranial Software v1.3.2

    Intended Use

    The StealthStation System, with StealthStation Cranial Software, is intended as an aid for locating anatomical structures in either open or percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The StealthStation™ Cranial Software v1.3.2 works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. During surgery, positions of specialized surgical instruments are continuously updated on these images either by optical tracking or electromagnetic tracking.

    Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    The acceptance criteria for the StealthStation™ Cranial Software v1.3.2 are not explicitly detailed in the provided document beyond the general statement of "System Accuracy Requirements" being "Identical" to the predicate device. The performance characteristics of the predicate device, StealthStation™ Cranial Software v1.3.0, are stated as the benchmark for system accuracy.

    Here's the information extracted from the document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Criteria/Feature   | Acceptance Criteria (based on predicate device K201175)                         | Reported Performance (StealthStation™ Cranial Software v1.3.2)
    System Accuracy    | Mean 3D positional error ≤ 2.0 mm; mean trajectory angle accuracy ≤ 2.0 degrees | Identical; no changes were made to the StealthStation™ Cranial Software that would require System Accuracy testing for v1.3.2
    All other features | Function and perform as described for the predicate device.                     | All other features are identical to the predicate device.

    2. Sample size used for the test set and the data provenance:

    • The document states that "Software verification testing for each requirement specification" was conducted and "Design verification was performed using the StealthStation™ System with StealthStation™ Cranial Software v1.3.2" in a laboratory setting.
    • No specific sample size for a test set is mentioned. The testing described is software verification and design verification, not a clinical study on patient data for performance evaluation in the typical sense of AI/ML devices.
    • Data provenance is not applicable or not disclosed as the document indicates "Clinical testing was not considered necessary prior to release as this is not new technology."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. The testing described is software and design verification rather than a clinical performance study requiring expert ground truth establishment from patient data.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable. This information is relevant for clinical studies involving multiple reviewers adjudicating findings, which was not performed.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No. An MRMC comparative effectiveness study was not performed. The device is a navigation system and not an AI-assisted diagnostic tool that would typically involve human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • No. The device is a surgical navigation system, which is inherently a human-in-the-loop tool. The performance evaluation focuses on its accuracy specifications within that use case during design verification.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not applicable. For the system accuracy, the ground truth would be precise measurements taken in a laboratory setting for the navigational accuracy, rather than clinical ground truth from patient data like pathology or outcomes.

    8. The sample size for the training set:

    • Not applicable. The document describes a software update for a stereotaxic instrument, not an AI/ML device that undergoes model training with a dataset.

    9. How the ground truth for the training set was established:

    • Not applicable. As the device is not described as an AI/ML system requiring a training set, the establishment of ground truth for such a set is not relevant.

    K Number: K200723
    Date Cleared: 2020-06-26 (99 days)
    Regulation Number: 882.4560
    Device Name: StealthStation FlexENT (9736242), StealthStation S8 ENT Software (9735762)

    Intended Use

    The StealthStation FlexENT™ System, with the StealthStation™ ENT Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous ENT procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following procedures:

    • Functional Endoscopic Sinus Surgery (FESS)
    • Endoscopic Skull Base procedures
    • Lateral Skull Base procedures

    The Medtronic StealthStation FlexENT™ computer-assisted surgery system and its associated applications are intended as an aid for precisely locating anatomical structures in either open or percutaneous ENT procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    Device Description

    The StealthStation FlexENT™ is an electromagnetic-based surgical guidance platform that supports use of special application software (StealthStation™ S8 ENT Software 1.3) and associated instruments.

    The StealthStation™ S8 ENT Software 1.3 helps guide surgeons during ENT procedures such as functional endoscopic sinus surgery (FESS), endoscopic skull base procedures, and lateral skull base procedures. StealthStation™ S8 ENT Software 1.3 functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    Patient images can be displayed by the StealthStation™ S8 ENT Software 1.3 from a variety of perspectives (axial, sagittal, coronal, oblique), and 3-dimensional (3D) renderings of anatomical structures can also be displayed. During navigation, the system identifies the tip location and trajectory of the tracked instrument on the images and models the user has selected to display. The surgeon may also create and store one or more surgical plan trajectories before surgery and simulate progression along these trajectories. During surgery, the software can display how the actual instrument tip position and trajectory relate to the plan, helping to guide the surgeon along the planned trajectory. While the surgeon's judgment remains the ultimate authority, real-time positional information obtained through the StealthStation™ System can serve to validate and guide that judgment. The StealthStation™ S8 ENT v1.3 Software can be run on both the StealthStation FlexENT™ and StealthStation™ S8 platforms.

    The StealthStation™ System is an Image Guided System (IGS), comprised of a platform (StealthStation FlexENT™ or StealthStation™ S8), clinical software, surgical instruments, and a referencing system (which includes patient and instrument trackers). The IGS tracks the position of instruments in relation to the surgical anatomy, known as localization, and then identifies this position on preoperative or intraoperative images of a patient.
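
    The localization step described above is essentially a composition of rigid transforms: the tracker measures the instrument relative to the patient reference frame, and the image-to-patient registration maps that measurement into image coordinates where it can be drawn. A minimal sketch of that composition is shown below; the 4x4 homogeneous-matrix convention, frame names, and numbers are assumptions for illustration, not StealthStation internals.

```python
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a rotation matrix and translation vector into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def localize_tip(T_image_from_patient, T_patient_from_tool, tip_in_tool, axis_in_tool):
    """Map a calibrated instrument tip and shaft axis into image coordinates.

    T_image_from_patient: 4x4 transform from patient-reference space to image space
    T_patient_from_tool:  4x4 tracker measurement of the tool in patient-reference space
    tip_in_tool:          (3,) calibrated tip offset in the tool's own frame
    axis_in_tool:         (3,) unit vector along the tool shaft in the tool's own frame
    """
    T = T_image_from_patient @ T_patient_from_tool
    tip_img = T[:3, :3] @ tip_in_tool + T[:3, 3]     # point: rotate then translate
    axis_img = T[:3, :3] @ axis_in_tool              # direction: rotate only
    return tip_img, axis_img / np.linalg.norm(axis_img)

# Toy example: identity registration, tool tracker offset 50 mm, tip 100 mm from tracker
T_reg = np.eye(4)
T_tool = make_transform(np.eye(3), np.array([50.0, 0.0, 0.0]))
tip, axis = localize_tip(T_reg, T_tool, np.array([100.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(tip, axis)   # [150. 0. 0.] [1. 0. 0.]
```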

    AI/ML Overview

    1. Table of Acceptance Criteria and Reported Device Performance:

    Performance Metric        | Acceptance Criteria (mean error) | StealthStation FlexENT™ | StealthStation™ S8 | Predicate (StealthStation™ S8 ENT v1.0)
    3D Positional Accuracy    | ≤ 2.0 mm                         | 0.93 mm                 | 1.04 mm            | 0.88 mm
    Trajectory Angle Accuracy | ≤ 2.0 degrees                    | 0.55°                   | 1.31°              | 0.73°

    2. Sample Size Used for the Test Set and Data Provenance:

    The document states that "Testing was performed under the representative worst-case configuration... utilizing a subset of system components and features that represent the worst-case combinations of all potential system components." It does not specify a numerical sample size for the test set (e.g., number of phantoms or trials).

    The data provenance is not explicitly stated in terms of country of origin. The test appears to be a prospective bench study conducted by the manufacturer, Medtronic Navigation, Inc.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    The document does not mention the use of experts to establish ground truth for this accuracy testing. The ground truth for positional and trajectory accuracy would typically be established by precise measurements on the anatomically representative phantoms using highly accurate measurement systems, not by expert consensus.

    4. Adjudication Method for the Test Set:

    Not applicable, as this was a bench accuracy test with directly measurable metrics, not a subjective assessment requiring adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    No, an MRMC comparative effectiveness study was not conducted. The study focuses on the standalone accuracy of the device.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    Yes, a standalone performance study was done. The accuracy testing described ("3D positional accuracy" and "trajectory angle accuracy") measures the device's inherent accuracy in locating anatomical structures and guiding trajectories, independent of human interaction during the measurement process. The system tracks instruments and displays their position and trajectory on images without direct human interpretation being part of the measurement for these accuracy metrics.

    7. The Type of Ground Truth Used:

    The ground truth used for this accuracy study was derived from precise physical measurements taken on "anatomically representative phantoms." This implies that the true position and trajectory were known and used as reference points against which the device's reports were compared.

    8. The Sample Size for the Training Set:

    The document does not provide information about a training set since the study described is a performance validation of a medical device's accuracy, not a machine learning model that would require a dedicated training set. The software likely undergoes extensive internal development and testing, but separate "training set" details are not provided in this context.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set information is provided or relevant for this type of accuracy study.


    K Number: K201175
    Date Cleared: 2020-06-03 (33 days)
    Regulation Number: 882.4560
    Device Name: StealthStation Cranial Software v1.3.0

    Intended Use

    The StealthStation™ System, with StealthStation™ Cranial Software, is intended as an aid for locating anatomical structures in either open or percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The StealthStation™ Cranial Software v1.3.0 works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. During surgery, positions of specialized surgical instruments are continuously updated on these images either by optical tracking or electromagnetic tracking.

    Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    The changes to the currently cleared StealthStation S8 Cranial Software are as follows:

    • Addition of an optional image display that allows the user to see through outer layers to increase the visibility of other models.
    • Update the imaging protocol to support overlapping slices.
    • Minor changes to the software were made to address user preferences and to fix minor anomalies.
    AI/ML Overview

    The provided document is a 510(k) premarket notification summary for Medtronic's StealthStation Cranial Software v1.3.0. It describes the device, its intended use, and a comparison to a predicate device, along with performance testing.

    Here's an analysis to address your specific questions:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria (System Accuracy Requirements)  | Reported Device Performance
    3D positional accuracy: mean error ≤ 2.0 mm         | Mean error ≤ 2.0 mm
    Trajectory angle accuracy: mean error ≤ 2.0 degrees | Mean error ≤ 2.0 degrees

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify a sample size (e.g., number of cases or images) for the performance testing. It states that the performance was determined "using anatomically representative phantoms and utilizing a subset of system components and features that represent the worst-case combinations of all potential system components."

    Regarding data provenance, the testing was conducted in "laboratory and simulated use settings" using "anatomically representative phantoms." This indicates that the data was generated specifically for testing purposes, likely in a controlled environment, rather than being derived from real patient scans. The country of origin for the data is not specified, but the applicant company, Medtronic Navigation Inc., is based in Louisville, Colorado, USA. The testing appears to be prospective in nature, as it was specifically carried out to demonstrate equivalence.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document does not mention the involvement of human experts for establishing ground truth for the performance testing. The accuracy measurements (3D positional and trajectory angle) are typically derived from physical measurements against known ground truth (e.g., phantom dimensions, known instrument positions) in the context of navigation systems, not by expert consensus on image interpretation.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable. The performance testing described is objective measurement against physical phantoms, not subjective assessment by experts requiring adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No. The document explicitly states: "Clinical testing was not considered necessary prior to release as this is not new technology." This device is an image-guided surgery system software, not an AI-assisted diagnostic tool that would typically undergo MRMC studies. The changes in this version (v1.3.0) are described as "minor changes to the software were made to address user preferences and to fix minor anomalies" and "Addition of an optional image display that allows the user to see through outer layers," suggesting incremental updates rather than a fundamentally new AI algorithm.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the performance testing was effectively "standalone" in the sense that the system's accuracy was measured against a known physical ground truth (phantoms) rather than evaluating human performance with the system. The reported accuracy metrics describe the device's inherent precision in tracking and navigation, independent of user interaction during the measurement process itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used was based on known physical properties of anatomically representative phantoms. This means that a physical phantom with precisely known dimensions and features was used, and the device's ability to accurately locate points and trajectories within that known physical structure was measured. This is a common and appropriate method for validating the accuracy of surgical navigation systems.

    8. The sample size for the training set

    Not applicable. This device, as described, is a software for image-guided surgery, not an AI/ML model that would typically have a "training set" in the context of deep learning. The changes are described as minor software updates and an optional display feature, not a new algorithm requiring a training phase from data.

    9. How the ground truth for the training set was established

    Not applicable, as there is no mention of a training set for an AI/ML model in this submission.


    K Number: K201189
    Date Cleared: 2020-05-29 (28 days)
    Regulation Number: 882.4560
    Device Name: StealthStation S8 Spine Software v1.3.0

    Intended Use

    The StealthStation™ System, with StealthStation™ Spine Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous neurosurgical and orthopedic procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine or pelvis, can be identified relative to images of the anatomy. This can include the following spinal implant procedures, such as:

    • Pedicle Screw Placement
    • Iliosacral Screw Placement
    • Interbody Device Placement
    Device Description

    The StealthStation System, also known as an Image Guided System (IGS), is comprised of a platform, clinical software, surgical instruments and a referencing system. The IGS tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of a patient. The StealthStation Spine software helps guide surgeons during spine procedures such as spinal fusion and trauma treatments. StealthStation Spine Software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    Based on the provided text, the acceptance criteria and study proving the device meets these criteria for the StealthStation S8 Spine Software v1.3.0 can be summarized as follows:

    1. A table of acceptance criteria and the reported device performance:

    Acceptance Criteria (System Accuracy) | Reported Device Performance (StealthStation S8 Spine Software v1.3.0)
    Mean positional error ≤ 2.0 mm        | Worst-case configuration: mean positional error ≤ 2.0 mm; StealthAiR Spine (specific feature): positional error 1.01 mm; Overlapping Slices (specific feature): positional error 0.51 mm
    Mean trajectory error ≤ 2.0 degrees   | Worst-case configuration: mean trajectory error ≤ 2.0 degrees; StealthAiR Spine (specific feature): trajectory error 0.37 degrees; Overlapping Slices (specific feature): trajectory error 0.41 degrees

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document does not specify a numerical sample size for the test set used for the accuracy performance. It mentions "anatomically representative phantoms" were used.
    • Data Provenance: The study was conducted using "anatomically representative phantoms." The country of origin of the data is not specified, but the applicant company is Medtronic Navigation, Inc., located in Louisville, Colorado, USA. The study design is implied to be prospective testing on phantoms rather than retrospective patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document. The accuracy testing was performed on phantoms, which typically rely on engineered and measurable ground truth, not expert consensus on anatomical structures or clinical outcomes.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided in the document, as the ground truth for phantom testing is typically established by the design of the phantom and measurement techniques, not human adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    No MRMC comparative effectiveness study was mentioned. The device, StealthStation S8 Spine Software, is an image-guided surgery system, not an AI-assisted diagnostic tool that would typically be evaluated with MRMC studies comparing human reader performance. The software aids surgeons in precisely locating anatomical structures.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    The device's performance was evaluated in terms of its ability to measure positional and trajectory accuracy on phantoms. This can be considered a form of standalone performance assessment as it evaluates the system's inherent accuracy capabilities, albeit in a simulated (phantom) environment, without directly measuring human-in-the-loop clinical workflow improvement. The text refers to "System integration performance testing for spine surgical procedures using anatomical phantoms."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The ground truth was established by the design and measurement capabilities of the "anatomically representative phantoms." This type of ground truth is based on precise, engineered physical properties and known measurements of the phantom. It is not based on expert consensus, pathology, or outcomes data.

    8. The sample size for the training set:

    This information is not provided in the document. The document describes a software update (v1.3.0) to an existing device, and the focus of the 510(k) summary is on performance testing for substantial equivalence, not on the training data used for the algorithm's development.

    9. How the ground truth for the training set was established:

    This information is not provided in the document. Because this is a 510(k) summary for a software update, details about the original model training and ground-truth establishment are typically not included unless significant changes to the algorithm's core functionality or AI components necessitate new training data or re-training.

    K Number
    K190672
    Date Cleared
    2019-07-31

    (138 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    StealthStation Synergy Cranial S7 Software v.2.2.8, StealthStation Cranial Software v3.1.1

    Intended Use

    The Synergy® Cranial software is surgical navigation software that, when used with the StealthStation® System as a planning and intraoperative guidance system, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures:

    • Cranial Biopsies
    • Tumor Resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF Leak Repair
    • Pediatric Catheter Shunt Placement
    • General Catheter Shunt Placement

    The StealthStation® System, with StealthStation® Cranial software, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial biopsies (including stereotactic)
    • Deep brain stimulation (DBS) lead placement
    • Depth electrode placement
    • Tumor resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF leak repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement
    Device Description

    The StealthStation® System, with StealthStation Cranial software helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation® Cranial software works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation® Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.
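
    The "registration" feature set mentioned above pairs points identified on the patient (for example, touched with a tracked probe) with the same points in the image volume. A common textbook way to compute the resulting rigid transform is a least-squares Kabsch/Horn-style fit; the sketch below illustrates that generic approach under those assumptions and is not taken from the cleared software.

```python
import numpy as np

def rigid_fit(src_pts, dst_pts) -> np.ndarray:
    """Least-squares 4x4 rigid transform mapping src_pts onto dst_pts (Kabsch method).

    src_pts, dst_pts : (N, 3) paired points, e.g. fiducials located in tracker
    space and the corresponding fiducials picked in the image.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

    In a setup like this, the residual distances between the transformed source fiducials and their image counterparts are one common way to express a registration error estimate.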

    AI/ML Overview

    The provided document is a 510(k) Premarket Notification from Medtronic Navigation Inc. to the FDA for their StealthStation Synergy Cranial S7 Software v.2.2.8 and StealthStation Cranial Software v3.1.1. The document primarily discusses the substantial equivalence of these devices to previously cleared predicate devices.

    While it mentions system accuracy requirements and some aspects of testing, it does NOT contain the detailed information typically found in a study proving a device meets acceptance criteria for an AI/ML medical device, especially regarding clinical performance, expert ground truth, multi-reader studies, or large-scale data sets.

    The document describes a surgical navigation software, which is a different category from an AI/ML diagnostic or predictive device. The "performance testing" described focuses on 3D positional and trajectory accuracy of the surgical navigation system itself, not on the performance of an AI algorithm in interpreting medical images or making clinical assessments.

    Therefore, many of the requested sections about AI/ML device performance (e.g., ground truth methods, sample sizes for training/test sets in the context of AI, expert adjudication, MRMC studies) are not applicable or not provided in this document.

    Here's what can be extracted and inferred from the text regarding the device's acceptance criteria and the study that "proves" it meets them, framed within the context of a surgical navigation system:


    Device: StealthStation® Synergy Cranial S7 Software v2.2.8 and StealthStation® Cranial Software v3.1.1 (used with the StealthStation® System)

    Function: Surgical navigation software intended to aid in precisely locating anatomical structures in neurosurgical procedures.

    Nature of Device's "Performance": The performance here refers to the accuracy of the navigation system in guiding surgical instruments, not an AI's ability to interpret images or predict outcomes.


    1. Table of Acceptance Criteria and Reported Device Performance

    This information is presented within the "Summary of the Technological Characteristics" section, specifically under the "System Accuracy Requirement" for both software versions.

    Criterion Type | Acceptance Criterion (Predicate Device Performance) | Reported Performance (Subject Device - Synergy Cranial v2.2.8) | Reported Performance (Subject Device - Cranial v3.1.1)
    3D Positional Accuracy | Mean error ≤ 2.0 mm (for both predicates K150216 and K153660) | 0.70 mm | 1.16 mm
    Trajectory Angle Accuracy | Mean error ≤ 2.0 degrees (for both predicates K150216 and K153660) | 0.46 degrees | 0.41 degrees

    Conclusion: Both subject devices (v2.2.8 and v3.1.1) demonstrate positional and trajectory accuracy values better than or equal to the specified acceptance criteria (which are based on the predicate devices' performance).
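
    As a trivial illustration of that conclusion, the reported means can be checked against the shared 2.0 mm / 2.0 degree thresholds; the sketch below simply restates the numbers from the table.

```python
CRITERIA = {"positional_mm": 2.0, "trajectory_deg": 2.0}
RESULTS = {
    "Synergy Cranial v2.2.8": {"positional_mm": 0.70, "trajectory_deg": 0.46},
    "Cranial v3.1.1": {"positional_mm": 1.16, "trajectory_deg": 0.41},
}

for device, metrics in RESULTS.items():
    meets = all(metrics[k] <= CRITERIA[k] for k in CRITERIA)
    print(f"{device}: {'meets' if meets else 'does not meet'} the accuracy criteria")
```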

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated as a numerical 'sample size' of cases or images in the typical AI/ML sense. The document states: "This performance was determined using anatomically representative phantoms and utilizing a subset of system components and features that represent the worst-case combinations of all potential system components." This implies testing was done on physical phantoms rather than patient data.
    • Data Provenance: Not applicable in the sense of patient data origin (e.g., country of origin, retrospective/prospective). The testing used "anatomically representative phantoms."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts/Qualifications: Not applicable. The "ground truth" for a surgical navigation system's accuracy is typically established by direct physical measurements against known values on precise phantoms, not by expert human interpretation of images for diagnosis or outcomes.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not applicable. As the "ground truth" is established by physical measurement on phantoms, or engineering validation, there is no need for expert adjudication.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Study Done?: No. This type of study is relevant for AI systems that assist human readers in tasks like image interpretation or diagnosis. This document pertains to a surgical navigation system, where the 'device performance' is its physical accuracy, not its interpretative assistance to a human reader.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Yes, implicitly. The "System Accuracy" testing is a standalone test of the device's accuracy in a controlled, "worst-case configuration" using phantoms. This measures the device's inherent precision and accuracy independent of direct human-in-the-loop performance during an actual surgery. However, this is not an AI algorithm's standalone performance in a diagnostic sense, but rather an engineering performance metric.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Engineering measurements / Physical reference standard. The document states the performance was determined using "anatomically representative phantoms." The ground truth for positional and trajectory accuracy would be the known, precisely measured positions and angles on these phantoms.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable / Not provided. This device is a software for surgical navigation, not an AI/ML model trained on a large dataset of patient images to perform diagnostic or predictive tasks. The software's functionality is based on algorithms that process imaging data (CT, MR) for registration and guidance, not on a machine learning training paradigm.

    9. How the Ground Truth for the Training Set was Established

    • How Ground Truth for Training Established: Not applicable. As there's no evident "training set" in the AI/ML sense, this question is not relevant. The software's functionality is based on established physics, geometry, and image processing algorithms.

    K Number
    K170018
    Date Cleared
    2017-05-19

    (136 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Device Name :

    StealthStationTM S8 ENT Software

    Intended Use

    The StealthStation™ System, with StealthStation™ ENT software, is intended as an aid for locating anatomical structures in either open or percutaneous ENT procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following procedures:

    • Functional endoscopic sinus surgery (FESS)
    • Endoscopic skull base procedures
    • Lateral skull base procedures
    Device Description

    The StealthStation™ ENT software helps guide surgeons during ENT procedures such as functional endoscopic sinus surgery (FESS), endoscopic skull base procedures, and lateral skull base procedures. The StealthStation™ ENT software runs on the StealthStation™ S8 Platform. The StealthStation system is an Image Guided System (IGS), comprised of a platform, clinical software, surgical instruments, and a referencing system (which includes patient and instrument trackers). The IGS tracks the position of instruments in relation to the surgical anatomy, known as localization, and then identifies this position on preoperative or intraoperative images of a patient.

    The ENT software can display patient images from a variety of perspectives (axial, sagittal, coronal, oblique) and 3-dimensional (3D) renderings of anatomical structures can also be displayed. During navigation, the system identifies the tip location and trajectory of the tracked instrument on images and models the user has selected to display. The surgeon may also use the ENT software to create and store one or more surgical plan trajectories before surgery and simulate progression along these trajectories. During surgery, the software can display how the actual instrument tip position and trajectory relate to the plan, helping to guide the surgeon along the planned trajectory. While the surgeon's judgment remains the ultimate authority, real-time positional information obtained through the StealthStation™ System can serve to validate this judgment as well as guide.
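
    The plan-versus-actual comparison described above is essentially a geometric calculation. The sketch below is a generic illustration, not the vendor's algorithm: it assumes a planned trajectory defined by entry and target points, a tracked tip position, and an instrument axis direction, and reports the tip's perpendicular distance from the planned line together with the angular deviation of the axis.

```python
import numpy as np

def plan_deviation(entry, target, tip, tool_axis):
    """Return (distance_mm, angle_deg) of the tracked instrument relative to a plan.

    entry, target : points (mm) defining the planned trajectory line.
    tip           : tracked instrument tip position (mm).
    tool_axis     : direction vector of the instrument shaft.
    """
    entry, target, tip = (np.asarray(v, dtype=float) for v in (entry, target, tip))
    plan_dir = (target - entry) / np.linalg.norm(target - entry)
    axis = np.asarray(tool_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # Perpendicular offset of the tip from the planned line.
    off_line = (tip - entry) - np.dot(tip - entry, plan_dir) * plan_dir
    dist_mm = float(np.linalg.norm(off_line))
    angle_deg = float(np.degrees(np.arccos(np.clip(np.dot(axis, plan_dir), -1.0, 1.0))))
    return dist_mm, angle_deg
```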

    AI/ML Overview

    Here's an analysis of the provided text, focusing on the acceptance criteria and study information for the StealthStation™ S8 ENT Software:

    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    3D positional accuracy: mean error ≤ 2.0 mm | Positional error: 0.88 mm
    Trajectory angle accuracy: mean error ≤ 2.0 degrees | Trajectory error: 0.73°

    Note: The reported performance metrics meet the acceptance criteria (0.88 mm ≤ 2.0 mm and 0.73° ≤ 2.0°).


    Study Details

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated as a number of "cases" or "patients." The testing involved "anatomically representative phantoms" and "a subset of system components and features that represent the worst-case combinations of all potential system components." The text does not provide a specific numerical sample size for the test set.
      • Data Provenance: The data come from "anatomically representative phantoms" tested in "laboratory and simulated use settings," not from human patients. Country of origin is therefore not applicable in the usual sense, and the data are neither retrospective nor prospective clinical data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The provided text does not mention the use of human experts to establish ground truth for the test set. The ground truth for positional and trajectory accuracy is inherent in the design and measurement capabilities of the "anatomically representative phantoms" and the testing methodology itself, which would involve precise physical measurements.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. The ground truth was established through physical measurements on phantoms, not through expert consensus requiring adjudication.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study was not done. The device is a navigation system that aids surgeons, and the performance testing focuses on its accuracy rather than its impact on human reader (or surgeon) diagnostic/interpretative performance in a comparative study. Clinical testing "was not considered necessary prior to release as this is not new technology."
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, the accuracy testing described (positional error and trajectory error) represents the standalone performance of the algorithm and system, as measured on phantoms. This is the performance of the StealthStation™ S8 ENT Software itself in tracking and displaying anatomical information accurately, independent of direct human judgment during the measurement process.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The ground truth for the performance testing (accuracy) was established through physical measurements and known parameters of "anatomically representative phantoms." The expected or "true" position and trajectory within the phantom were precisely known, allowing for the calculation of measurement errors.
    7. The sample size for the training set:

      • Not applicable. The StealthStation™ S8 ENT Software is an Image Guided System (IGS) that relies on tracking and displaying pre-acquired patient images and instrument positions. It is not an AI/ML algorithm that requires a separate "training set" in the traditional sense of machine learning for classification or prediction tasks. Its functionality is based on established engineering principles and algorithms, not a trained model from a specific data set. The document refers to "Software Verification and Validation testing" which implies testing against requirements, not training data.
    8. How the ground truth for the training set was established:

      • Not applicable, as there is no mention of a "training set" in the context of an AI/ML algorithm that predicts or classifies. The software's functionality is deterministic based on its programming and inputs (like imaging data and tracker signals).