Search Results

Found 31 results

510(k) Data Aggregation

    K Number
    K242464
    Date Cleared
    2025-06-05

    (290 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    Intended Use

    Stealth™ Spine Clamps

    When used with Medtronic computer assisted surgery systems, defined as including the Stealth™ System, the following indications of use are applicable:

    • The spine referencing devices are intended to provide rigid fixation between patient and patient reference frame for the duration of the surgery. The devices are intended to be reusable.
    • The navigated instruments are specifically designed for use with Medtronic computer-assisted surgery systems, which are indicated for any medical condition in which the use of stereotactic surgery may be appropriate or vertebra can be identified relative to a CT or MR based model, fluoroscopy images, or digitized landmarks of the anatomy.
    • The Stealth™ spine clamps are indicated for skeletally mature patients.

    ModuLeX™ Shank Mounts

    When used with Medtronic computer assisted surgery systems, defined as including the Stealth™ System, the following indications of use are applicable:

    • The spine referencing devices are intended to provide rigid fixation between patient and patient reference frame for the duration of the surgery. The devices are intended to be reusable.
    • The navigated instruments are specifically designed for use with Medtronic computer assisted surgery systems, which are indicated for any medical condition in which the use of stereotactic surgery may be appropriate or vertebra can be identified relative to a CT or MR based model, fluoroscopy images, or digitized landmarks of the anatomy.
    • The ModuLeX™ shank mounts are indicated to be used with the CD Horizon™ ModuLeX™ Spinal System during surgery.
    • The ModuLeX™ shank mounts are indicated for skeletally mature patients.
    Device Description

    The Stealth™ Spine Clamps are intended to provide rigid attachment between the patient and patient reference frame for the duration of the surgery. The subject devices are designed for use with the Stealth™ System and are intended to be reusable.

    The ModuLeX™ Shank Mounts are intended to provide rigid attachment between the patient and patient reference frame for the duration of the surgery. The subject devices are designed for use with the Stealth™ System and are intended to be reusable.

    AI/ML Overview

    This document, an FDA 510(k) Clearance Letter, does not contain the specific details about acceptance criteria and study data that would be found in a full submission. 510(k) summary documents typically provide a high-level overview.

    Based on the provided text, here's what can be extracted and what information is not available:

    Information from the document:

    • Device Type: Stealth™ Spine Clamps and ModuLeX™ Shank Mounts, which are orthopedic stereotaxic instruments used with computer-assisted surgery systems (specifically the Medtronic Stealth™ System).
    • Purpose: To provide rigid fixation between the patient and a patient reference frame for the duration of spine surgery, and to serve as navigated instruments for surgical guidance.
    • Predicate Devices:
      • Stealth™ Spine Clamps: StealthStation™ Spinous Process Clamps (K211442)
      • ModuLeX™ Shank Mounts: Rod Clamps (K131425)
    • Testing Summary (XI. Discussion of the Performance Testing):
      • Mechanical Robustness and Navigation Accuracy
      • Functional Verification
      • Useful Life Testing
      • Packaging Verification
      • Design Validation
      • Summative Usability
      • Biocompatibility (non-cytotoxic, non-sensitizing, non-irritating, non-toxic, non-pyrogenic)

    Information NOT available in the provided document (and why):

    This 510(k) summary describes physical medical devices (clamps and mounts) used in conjunction with a computer-assisted surgery system, but it does not describe an AI/software device whose performance is measured in terms of accuracy, sensitivity, or specificity for diagnostic or guidance purposes. Therefore, many of the requested points related to AI performance, ground truth, and reader studies are not applicable or not detailed in this type of submission.

    Specifically, the document does not contain:

    1. A table of acceptance criteria and reported device performance (with specific numerical metrics for "Navigation Accuracy"): While "Navigation Accuracy" is listed as a test conducted, the actual acceptance criteria (e.g., "accuracy must be within X mm") and the quantitative results are not provided in this summary. This would typically be in a detailed test report within the full 510(k) submission.
    2. Sample sizes used for the test set and data provenance: No information on the number of units tested, or if any patient data was used for "Navigation Accuracy" (it's likely bench testing).
    3. Number of experts used to establish ground truth and their qualifications: Not applicable as this is a mechanical device submission, not an AI diagnostic submission. Ground truth for mechanical accuracy would be established by precise measurement tools, not human experts in this context.
    4. Adjudication method for the test set: Not applicable for mechanical/functional testing.
    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study: Not mentioned or applicable. This type of study is for evaluating human performance (e.g., radiologists interpreting images) with and without AI assistance.
    6. Stand-alone (algorithm only) performance: Not applicable; this is not an algorithm for diagnosis or image analysis.
    7. Type of ground truth used (expert consensus, pathology, outcomes data, etc.): For "Navigation Accuracy," the ground truth would be based on highly precise measurement systems (e.g., optical tracking validation) in a lab setting, not clinical outcomes or expert consensus.
    8. Sample size for the training set: Not applicable; there is no "training set" as this is not a machine learning model.
    9. How the ground truth for the training set was established: Not applicable.

    Summary of what is known concerning acceptance criteria and proof of adherence:

    • Acceptance Criteria/Proof (General): The document states that "Testing conducted to demonstrate equivalency of the subject device to the predicate is summarized as follows: Mechanical Robustness and Navigation Accuracy, Functional Verification, Useful Life Testing, Packaging Verification, Design Validation, Summative Usability, Biocompatibility."
    • Implied Acceptance: The FDA's clearance (K242464) indicates that Medtronic successfully demonstrated that the new devices are "substantially equivalent" to predicate devices based on the submitted testing. This means the performance met the FDA's expectations for safety and effectiveness, likely by demonstrating equivalent or better performance against the predicates in the specified tests (e.g., meeting established benchmarks for sterility, material strength, and precision when interfaced with the navigation system). However, the specific numerical criteria for "Navigation Accuracy" are not disclosed in this summary letter.

    Conclusion based on the provided text:

    This 510(k) summary is for a Class II mechanical stereotaxic instrument and, as such, focuses on demonstrating mechanical, functional, and biocompatibility equivalency to predicate devices. It does not contain the detailed performance metrics, ground truth establishment methods, or human reader study results that would be pertinent to an AI/software medical device submission.


    K Number
    K240465
    Date Cleared
    2024-06-21

    (126 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc

    Intended Use

    The O-arm™ O2 Imaging System is a mobile x-ray system, designed for 2D and 3D imaging for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomic structures and objects with high x-ray attenuation such as bony anatomy and metallic objects. The O-arm™ O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm™ O2 Imaging System is a mobile x-ray system that provides 3D and 2D imaging. The O-arm™ O2 Imaging System consists of two main assemblies that are used together: The Image Acquisition System (IAS) and The Mobile View Station (MVS). The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition, the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected. The O-arm™ O2 Imaging System operates off standard line voltage within the following voltages: VAC 100, 120 or 240, Frequency 60Hz or 50Hz, Power Requirements 1440 VA.

    AI/ML Overview

    The Medtronic O-arm™ O2 Imaging System with 4.3.0 software introduces three new features: Medtronic Implant Resolution (MIR) (referred to as KCMAR in the document), 3D Long Scan (3DLS), and Spine Smart Dose (SSD). The device's performance was evaluated through various studies to ensure substantial equivalence to the predicate device (O-arm™ O2 Imaging System 4.2.0 software) and to verify that the new features function as intended without raising new safety or effectiveness concerns.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implicit from Study Design/Results) | Reported Device Performance |
    |---|---|---|
    | Spine Smart Dose (SSD) | Clinical equivalence to predicate 3D acquisition modes (Standard and HD) by board-certified neuroradiologists. | Deemed clinically equivalent to O-arm™ O2 Imaging System 4.2.x Standard and Predicate High-Definition modes by board-certified neuroradiologists in a blinded review of 100 clinical image pairs. |
    | SSD Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, uniformity, and geometric accuracy. | Met all system-level requirements. |
    | SSD Navigational Accuracy (Bench Testing) | Meet system-level requirements in terms of millimeters. | Met all system-level requirements. |
    | Medtronic Implant Resolution (KCMAR) | Clinical utility of KCMAR images to be statistically better than corresponding non-KCMAR images from the predicate device, as judged by board-certified radiologists. | Statistically better clinical value when compared to corresponding images from the predicate device (O-arm O2 Imaging System version 4.2.0) under the specified indications. |
    | KCMAR Metal Artifact Reduction (Bench Testing) | Qualitative comparison to demonstrate metal artifact reduction between non-KCMAR and KCMAR processed images. | Demonstrated metal artifact reduction. |
    | KCMAR Implant Location Accuracy (Bench Testing) | Quantitative assessment of implant location accuracy in millimeters and degrees to meet system requirements. | Met all system-level requirements. |
    | 3D Long Scan (3DLS) Clinical Utility | Clinical utility of Standard 3DLS and SSD 3DLS to be statistically equivalent to the corresponding Standard acquisition mode available in the predicate system, as judged by board-certified radiologists. | Statistically equivalent clinical utility when compared to the corresponding Standard acquisition mode available in the predicate system (version 4.2.0). |
    | 3DLS Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, and geometric accuracy. | Met all system-level requirements. |
    | 3DLS Navigational Accuracy (Bench Testing) | Meet system-level requirements in terms of millimeters. | Met all system-level requirements. |
    | Usability (3DLS, SSD, KCMAR) | Pass summative validation covering critical tasks and new workflows for intended users in simulated use environments. | Passed summative validation, providing objective evidence of safety and effectiveness for intended users, uses, and environments. |
    | Dosimetry (SSD, 3DLS) | Confirm dose accuracy (kV, mA, CTDI, DLP) meets system-level requirements for the new acquisition features. | All dosimetry testing passed system-level requirements. |

    2. Sample Size for the Test Set and Data Provenance

    • Spine Smart Dose (SSD) Clinical Equivalence:
      • Sample Size: 100 clinical image pairs.
      • Data Provenance: "Clinical" images, suggesting retrospective or prospective clinical data. No specific country of origin is mentioned.
    • KCMAR Clinical Equivalence:
      • Sample Size:
        • Initial study: 40 image pairs from four cadavers (small, medium, large, and extra-large habitus).
        • Subsequent study: 33 image pairs from two cadavers (small and extra-large habitus).
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.
    • 3D Long Scan (3DLS) Clinical Utility:
      • Sample Size: 45 paired samples from acquisitions of three cadavers (small, medium, and extra-large habitus). Two cadavers were instrumented with pedicle screw hardware.
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Spine Smart Dose (SSD) Clinical Equivalence: the document refers to a "board-certified neuroradiologist" (singular) in connection with the "100 clinical image pairs"; the exact number of readers is not stated and may have been more than one, as is typical for such study designs.
    • KCMAR Clinical Equivalence: "Board-certified radiologists" (plural).
    • 3D Long Scan (3DLS) Clinical Utility: "Board-certified radiologists" (plural).

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It describes a "blinded review" for SSD and "clinical utility scores (1-5 scale)" for KCMAR and 3DLS, implying individual assessments that were then potentially aggregated or statistically compared.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance?

    The studies for SSD, KCMAR, and 3DLS involved multiple readers (board-certified radiologists/neuroradiologists) evaluating images.

    • SSD: Compared "O-arm™ O2 Imaging System 4.3.0 SSD images" to "O-arm™ O2 Imaging System 4.2.x Standard and Predicate High-Definition modes." The outcome was clinical equivalence, not an improvement in human reader performance with AI assistance. It states the SSD leverages Machine Learning technology to reduce noise.
    • KCMAR: Compared images reconstructed "without KCMAR feature" to images "with KCMAR feature." The outcome was "statistically better" clinical value for KCMAR. This indicates that the feature itself (which uses an algorithm for metal artifact reduction) resulted in better images, which would indirectly benefit the reader, but it doesn't quantify an improvement in human reader performance directly.
    • 3DLS: Compared "Standard 3DLS and SSD 3DLS" to "corresponding Standard acquisition mode." The outcome was "statistically equivalent clinical utility." This specifically relates to the utility of the scan modes, not an AI-assisted interpretation by readers.

    Therefore, while MRMC-like studies were conducted to assess the performance of the features, the focus was on the characteristics of the images produced by the device (clinical equivalence/utility/better value) rather than quantifying an effect size of how much human readers improve with AI versus without AI assistance in their diagnostic accuracy or efficiency.
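    The submission characterizes these reader comparisons only qualitatively ("statistically better," "statistically equivalent") and does not name the statistical procedure used. As a purely illustrative sketch, a paired non-parametric test such as the Wilcoxon signed-rank test is one common way to compare 1-5 clinical utility scores assigned to matched image pairs; the reader scores, sample size, and test choice below are hypothetical and not taken from the submission.

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)

    # Hypothetical paired clinical-utility scores (1-5 scale) for 33 image pairs,
    # reconstructed without and with the artifact-reduction feature.
    scores_without = rng.integers(2, 5, size=33)                                # stand-in for non-KCMAR scores
    scores_with = np.clip(scores_without + rng.integers(0, 2, size=33), 1, 5)   # stand-in for KCMAR scores

    # One-sided paired test of whether the "with" scores are systematically higher.
    stat, p_value = wilcoxon(scores_with, scores_without, alternative="greater")
    print(f"Wilcoxon statistic = {stat}, one-sided p = {p_value:.4f}")
    ```

    An equivalence claim (as made for SSD and 3DLS) would typically be supported by a different procedure, such as two one-sided tests against a pre-specified equivalence margin, rather than a superiority test.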

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done

    Yes, aspects of standalone performance were evaluated through bench testing.

    • SSD Bench Testing: Evaluated image quality parameters (3D Line pair, Contrast, MTF, Uniformity, Geometric accuracy) and navigational accuracy.
    • KCMAR Bench Testing: Qualitatively compared metal artifact reduction and quantitatively assessed implant location accuracy.
    • 3DLS Bench Testing: Verified system-level requirements for image quality (3D Line pair, Contrast, MTF, Geometric accuracy) and navigational accuracy.

    These bench tests assess the algorithmic output directly against defined performance metrics, independent of human interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Clinical Equivalence/Utility for SSD, KCMAR, 3DLS: Ground truth was established by expert assessment/consensus from board-certified neuroradiologists/radiologists providing clinical utility scores and making equivalence/superiority judgments.
    • Bench Testing: Ground truth was based on phantom measurements and objective system-level requirements for image quality, geometric accuracy, and navigational accuracy.

    8. The Sample Size for the Training Set

    The document states that the Spine Smart Dose (SSD) feature "leverages Machine Learning technology with existing O-arm™ images to achieve reduction in dose..." However, it does not specify the sample size of the training set used for this Machine Learning model.

    9. How the Ground Truth for the Training Set was Established

    For the Spine Smart Dose (SSD) feature, which uses Machine Learning, the document mentions "existing O-arm™ images." It does not explicitly state how the ground truth for these training images was established. Typically, for such denoising or image enhancement tasks, the "ground truth" might be considered the higher-dose, higher-quality images, with the ML model learning to reconstruct a similar quality image from lower-dose acquisitions. The document implies that the model's output (low-dose reconstruction) was then validated against expert opinion for clinical equivalence.
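    To make the presumed pairing concrete, here is a minimal, hypothetical sketch of supervised denoising in which low-dose images are the inputs and matched higher-dose images serve as the training targets. Nothing here reflects Medtronic's actual model, data, or training procedure; the architecture, data, and hyperparameters are placeholders.

    ```python
    import torch
    import torch.nn as nn

    # Placeholder paired data: low-dose slices as inputs, matched high-dose slices as targets.
    high_dose = torch.rand(16, 1, 64, 64)                       # stand-in "ground truth" images
    low_dose = high_dose + 0.05 * torch.randn_like(high_dose)   # stand-in noisy low-dose acquisitions

    # A deliberately tiny convolutional denoiser; a real system would use a much
    # larger model trained on real paired acquisitions.
    denoiser = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )

    optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(100):
        optimizer.zero_grad()
        prediction = denoiser(low_dose)
        loss = loss_fn(prediction, high_dose)  # supervise the low-dose output against the high-dose target
        loss.backward()
        optimizer.step()
    ```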


    K Number
    K231976
    Date Cleared
    2023-10-19

    (108 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    Intended Use

    The StealthStation System, with StealthStation Cranial software, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial biopsies (including stereotactic)
    • Deep brain stimulation (DBS) lead placement
    • Depth electrode placement
    • Tumor resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF leak repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement
    Device Description

    The StealthStation System, with StealthStation Cranial software helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation Cranial Software works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation Cranial Software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.
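    The description covers registration and instrument tracking only at a high level. As a rough illustration of how paired-point rigid registration can map a tracked instrument tip onto patient images, the sketch below uses the SVD-based Kabsch/Umeyama solution with made-up fiducial coordinates; the 510(k) does not disclose which registration algorithms the StealthStation software actually uses.

    ```python
    import numpy as np

    def rigid_registration(patient_pts, image_pts):
        """Estimate the rigid transform (R, t) mapping patient-space points to
        image-space points via the SVD-based Kabsch/Umeyama method."""
        p_mean = patient_pts.mean(axis=0)
        i_mean = image_pts.mean(axis=0)
        H = (patient_pts - p_mean).T @ (image_pts - i_mean)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = i_mean - R @ p_mean
        return R, t

    # Hypothetical fiducial pairs: points touched on the patient (tracker space)
    # and the corresponding points picked on the preoperative scan (image space).
    patient_fiducials = np.array([[0.0, 0.0, 0.0],
                                  [50.0, 0.0, 0.0],
                                  [0.0, 40.0, 0.0],
                                  [0.0, 0.0, 30.0]])
    known_rotation = np.array([[0.0, -1.0, 0.0],
                               [1.0,  0.0, 0.0],
                               [0.0,  0.0, 1.0]])
    image_fiducials = patient_fiducials @ known_rotation.T + np.array([10.0, 20.0, 5.0])

    R, t = rigid_registration(patient_fiducials, image_fiducials)

    # Once registered, any tracked instrument tip (patient space) can be drawn
    # at the corresponding location on the patient's images.
    tip_patient = np.array([25.0, 10.0, 5.0])
    tip_image = R @ tip_patient + t
    ```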

    AI/ML Overview

    The furnished document is a 510(k) premarket notification for the StealthStation Cranial Software, version 3.1.5. It details the device's indications for use, technological characteristics, and substantiates its equivalence to a predicate device through performance testing.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criteria | Reported Device Performance (StealthStation Cranial Software Version 3.1.5) | Predicate Device Performance (StealthStation Cranial Software Version 3.1.4) |
    |---|---|---|
    | 3D Positional Accuracy (Mean Error) ≤ 2.0 mm | 0.824 mm | 1.27 mm |
    | Trajectory Angle Accuracy (Mean Error) ≤ 2.0 degrees | 0.615 degrees | 1.02 degrees |
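    For context on what these two metrics measure, the sketch below computes a mean 3D positional error (average Euclidean distance between navigated and reference points) and a trajectory angle error (angle between navigated and reference trajectory directions) from made-up phantom measurements; the actual test method and raw data are not disclosed in the summary.

    ```python
    import numpy as np

    # Hypothetical phantom measurements: positions/trajectories reported by the
    # navigation system versus the known reference values engineered into the phantom.
    reported_pts = np.array([[10.2, 5.1, 20.3], [33.8, 12.0, 7.6], [21.1, 40.2, 15.0]])
    reference_pts = np.array([[10.0, 5.0, 20.0], [34.0, 11.9, 7.5], [21.0, 40.0, 15.3]])

    # Mean 3D positional error: average Euclidean distance between reported and reference points.
    positional_errors = np.linalg.norm(reported_pts - reference_pts, axis=1)
    mean_positional_error = positional_errors.mean()

    # Trajectory angle error: angle between the reported and reference trajectory directions.
    reported_dir = np.array([0.10, 0.02, 0.99])
    reference_dir = np.array([0.08, 0.00, 1.00])
    cos_angle = np.dot(reported_dir, reference_dir) / (
        np.linalg.norm(reported_dir) * np.linalg.norm(reference_dir))
    angle_error_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    print(f"mean positional error = {mean_positional_error:.2f} mm, "
          f"trajectory angle error = {angle_error_deg:.2f} deg")
    ```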

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions "System accuracy validation testing" was conducted. However, it does not specify the sample size for this test set (e.g., number of cases, images, or measurements).

    Regarding data provenance, the document does not explicitly state the country of origin of the data nor whether the data used for accuracy testing was retrospective or prospective. The study focuses on demonstrating substantial equivalence through testing against predefined accuracy thresholds rather than utilizing patient-specific clinical data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    The document does not provide information on the number of experts used to establish ground truth for the system accuracy validation testing, nor their specific qualifications. It mentions "User exploratory testing to explore clinical workflows, including standard and unusual clinically relevant workflows. This testing will include subject matter experts, internal and field support personnel," but this refers to a different type of testing (usability/workflow exploration) rather than objective ground truth establishment for accuracy measurements.

    4. Adjudication Method for the Test Set:

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for establishing ground truth for the system accuracy validation testing. The accuracy measurements appear to be objective, derived from controlled testing environments rather than subjective expert interpretations requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted as part of this submission. The testing described is focused on the standalone performance of the device's accuracy in a controlled environment, not on how human readers perform with or without AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-loop Performance):

    Yes, standalone performance testing was done. The "System accuracy validation testing" directly assesses the algorithm's performance in achieving specific positional and angular accuracy. The reported "Positional Error - 0.824 mm" and "Trajectory Error - 0.615 degrees" are metrics of the standalone algorithm's accuracy without direct human intervention in the measurement process itself, although the device is ultimately used by humans in a clinical context.

    7. Type of Ground Truth Used:

    The ground truth for the system accuracy validation testing appears to be based on objective, controlled measurements within a testing environment, likely involving phantom models or precise physical setups where the true position and orientation are known or can be measured with high precision. This is implied by the nature of "3D positional accuracy" and "trajectory angle accuracy" measurements, which are typically determined against a known, precise reference. It is not expert consensus, pathology, or outcomes data.

    8. Sample Size for the Training Set:

    The document does not provide any information regarding the sample size for a training set. This is because the StealthStation Cranial Software is a navigation system that uses image processing and registration algorithms, rather than a machine learning model that requires a distinct training dataset in the traditional sense. The software's development likely involves engineering principles and rigorous testing against design specifications, not iterative learning from data.

    9. How the Ground Truth for the Training Set Was Established:

    As the device does not appear to be an AI/ML model that undergoes a machine learning "training" phase with a labeled dataset in the conventional understanding for medical imaging analysis, the concept of establishing ground truth for a training set is not applicable in this context. The software's functionality is based on established algorithms for image registration and instrument tracking, which are then validated through performance testing against pre-defined accuracy thresholds.


    K Number
    K221087
    Date Cleared
    2022-06-10

    (58 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    Intended Use

    Synergy Cranial v2.2.9:
    The StealthStation System, with Synergy Cranial software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures:

    • Cranial Biopsies
    • Tumor Resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF Leak Repair
    • Pediatric Catheter Shunt Placement
    • General Catheter Shunt Placement

    StealthStation Cranial Software v3.1.4:
    The StealthStation System, with StealthStation Cranial software, is intended to aid in precisely locating anatomical structures in either open or percutaneous neurosurgical procedures. The system is indicated for any medical condition in which reference to a rigid anatomical structure can be identified relative to images of the anatomy. This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Cranial biopsies (including stereotactic)
    • Deep brain stimulation (DBS) lead placement
    • Depth electrode placement
    • Tumor resections
    • Craniotomies/Craniectomies
    • Skull Base Procedures
    • Transsphenoidal Procedures
    • Thalamotomies/Pallidotomies
    • Pituitary Tumor Removal
    • CSF leak repair
    • Pediatric Ventricular Catheter Placement
    • General Ventricular Catheter Placement
    Device Description

    The StealthStation System, with StealthStation Cranial software helps guide surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The StealthStation Cranial software works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. StealthStation Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    The Medtronic Navigation, Inc. StealthStation Cranial Software (v3.1.4) and Synergy Cranial Software (v2.2.9) are image-guided surgery (IGS) systems intended to aid in precisely locating anatomical structures during neurosurgical procedures.

    Here's an analysis of the acceptance criteria and study that proves the device meets them, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The primary acceptance criteria for both software versions are related to system accuracy in 3D positional and trajectory angle measurements.

    System Accuracy:

    | Acceptance Criteria (Synergy Cranial v2.2.9 & StealthStation Cranial v3.1.3/v3.0) | Reported Device Performance (Synergy Cranial v2.2.9) | Reported Device Performance (StealthStation Cranial v3.1.3/v3.0) |
    |---|---|---|
    | 3D positional accuracy: mean error ≤ 2.0 mm | 1.29 mm | 1.27 mm |
    | Trajectory angle accuracy: mean error ≤ 2.0 degrees | 0.87 degrees | 1.02 degrees |

    Note: The document refers to "StealthStation Cranial v3.1.3" and also to "StealthStation Cranial v3.0 Software" in the testing section for the newer version's accuracy. It is assumed here that v3.1.3 is the subject device and v3.0 is a close predecessor or the system version used for the test. The "v3.1.4" in the 510(k) letter is likely a minor update from v3.1.3, and the reported performance for v3.1.3 is considered representative.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the sample size (number of patients or phantom configurations) used for the quantitative accuracy testing (test set). It mentions:

    • "Under representative worst-case configuration"
    • "utilizing a subset of system components and features that represent the worst-case combinations of all potential system components."
    • "Test configurations included CT images with slice spacing and thickness ranging between 0.6 mm to 1.25 mm and T1-weighted MR images with slice spacing and thickness ranging between 1.0 mm to 3.0 mm."

    Data Provenance: The data appears to be prospective as it was generated through laboratory and simulated use settings with "anatomically representative phantoms." The country of origin is not explicitly stated, but given Medtronic Navigation, Inc. is located in Louisville, Colorado, USA, it's highly probable the testing was conducted in the USA.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document indicates that the accuracy was determined using "anatomically representative phantoms." This implies that the ground truth for positional and angular accuracy was engineered and precisely measured within a controlled phantom environment, rather than established by human experts interpreting clinical data. Therefore, human experts were likely involved in designing and validating the phantom setup and measurement methodologies, but not in directly establishing ground truth from patient data. The qualifications of these individuals are not specified but would typically be engineers, physicists, or metrology specialists.

    4. Adjudication Method for the Test Set

    Given that the ground truth was established through a designed phantom and precise measurements, an adjudication method for human interpretation is not applicable here. The measurements are objective and quantitative.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was mentioned for human readers improving with AI vs. without AI assistance. The device is a surgical navigation system, aiding in real-time guidance, not an AI-assisted diagnostic tool that would typically undergo MRMC studies.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance)

    Yes, a standalone performance was done for the system's accuracy. The reported positional and trajectory angle errors are measures of the system's inherent accuracy, independent of a specific human-in-the-loop scenario. The study describes "Design verification and validation was performed using the StealthStation Cranial software in laboratory and simulated use settings."

    7. The Type of Ground Truth Used

    The ground truth used was engineered truth derived from precisely measured anatomical phantoms. This is a highly controlled and quantitative method, suitable for measuring the accuracy of a navigation system.

    8. The Sample Size for the Training Set

    The document does not describe a "training set" in the context of an AI/machine learning model. The device is referred to as "software" for an Image Guided System (IGS), which typically relies on established algorithms for image processing, registration, and tracking, rather than deep learning models that require large training datasets with ground truth labels in the conventional sense. The "training" for such a system would involve rigorous formal verification and validation of these algorithms.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, the concept of a "training set" and its associated ground truth, as typically applied to AI/machine learning, does not appear to be directly applicable to the description of this device's development as presented in the 510(k) summary. The development involved "Software verification and validation testing for each requirement specification" and "System integration performance testing for cranial surgical procedures using anatomical phantoms," suggesting traditional software engineering and testing methodologies rather than machine learning training.


    K Number
    K211269
    Date Cleared
    2022-01-07

    (255 days)

    Product Code
    Regulation Number
    878.4810
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    Intended Use

    The Visualase MRI-Guided Laser Ablation System is a neurosurgical tool and is indicated for use to ablate, necrotize, or coagulate intracranial soft tissue including brain structures (for example, brain tumor, radiation necrosis and epileptic foci as identified by non-invasive and invasive neurodiagnostic testing, including imaging) through interstitial irradiation or thermal therapy in medicine and surgery in the discipline of neurosurgery with 800 nm through 1064 nm lasers.

    Device Description

    The Visualase MRI-Guided Laser Ablation System comprises hardware and software components used in combination with three MR-compatible (conditional), sterile, single-use, saline-cooled laser applicators with proprietary diffusing tips that deliver controlled energy to the tissue of interest. The system consists of:

    • a diode laser (energy source);
    • a coolant pump to circulate saline through the laser applicator;
    • the Visualase workstation, which interfaces with the MRI scanner's host computer;
    • the Visualase software, which provides the ability to visualize and monitor relative changes in tissue temperature during ablation procedures, set temperature limits, and control the laser output;
    • two monitors to display all system imaging and laser ablation via a graphical user interface, and peripherals for interconnections.

    Remote Presence software provides a non-clinical utility application for use by Medtronic only and is not accessible by the user.

    AI/ML Overview

    The provided text describes specific details about the Visualase MRI-Guided Laser Ablation System (SW 3.4) and its comparison to predicate devices, but it does not contain a table of acceptance criteria or a detailed study description with performance metrics in the format requested.

    The "Testing Summary" section mentions in vivo testing to demonstrate accuracy and performance of MR Thermometry and Thermal Damage Estimate, as well as software and system verification and validation. However, it does not provide:

    • Specific acceptance criteria values (e.g., "accuracy must be within X degrees Celsius").
    • Reported device performance values against these criteria.
    • Sample sizes for the test set.
    • Data provenance.
    • Details about expert involvement or adjudication.
    • Information on MRMC studies or standalone AI performance.
    • Details about the training set.

    Therefore, most of the requested information cannot be extracted from the given text.

    Here's a breakdown of what can be extracted and what is missing based on your request:

    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria: Not explicitly stated with numeric values in the document. The general statement is "Testing demonstrated the accuracy and precision of the Visualase MRI-Guided Ablation System's Thermal Damage Estimate and MR Thermometry for its intended use."
    • Reported Device Performance: Not provided (e.g., no specific accuracy values, precision values, or success rates are given).

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: Not specified.
    • Data Provenance: The testing was "in vivo testing conducted at 1.5T and 3.0T (in accordance with 21 CFR 58)". 21 CFR Part 58 refers to Good Laboratory Practice for nonclinical laboratory studies, which implies prospective in vivo studies, but does not specify the origin of the data (e.g., country).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not specified.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • The document implies the device is a tool used by a neurosurgeon. It does not describe a comparative effectiveness study involving human readers with or without AI assistance, or any effect size for such a study.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    • The document states the system "provides the system's ability to visualize and monitor relative changes in tissue temperature during ablation procedures, set temperature limits and control the laser output." It is an MRI-guided system implying human-in-the-loop operation. No standalone algorithm-only performance is described.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Given it's "in vivo testing" for "Thermal Damage Estimate" and "MR Thermometry," the ground truth likely involved a direct measurement method for temperature or thermal damage in the tissue, possibly through implanted probes or post-ablation pathological assessment, but the specific method is not detailed.
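    The summary names MR Thermometry and a Thermal Damage Estimate but provides no equations. For orientation only, the relations below are the proton-resonance-frequency (PRF) shift thermometry equation and the Arrhenius damage integral commonly used for MR-guided laser ablation monitoring; the document does not confirm that Visualase implements exactly these models or these constants.

    ```latex
    % PRF-shift MR thermometry: temperature change from the phase difference between two images
    \Delta T = \frac{\Delta \phi}{\alpha \, \gamma \, B_0 \, \mathrm{TE}},
    \qquad \alpha \approx -0.01\ \text{ppm}/^{\circ}\mathrm{C}

    % Arrhenius thermal damage estimate accumulated over the temperature history T(t)
    \Omega(\tau) = \int_{0}^{\tau} A \, e^{-E_a / (R\, T(t))} \, dt,
    \qquad \Omega \ge 1 \ \text{commonly taken as irreversible damage}
    ```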

    8. The sample size for the training set

    • Not applicable as this document describes performance of a medical device (laser ablation system with software), not a machine learning model explicitly detailing training data. The software components are verified and validated, but no "training set" in the context of AI/ML is mentioned.

    9. How the ground truth for the training set was established

    • Not applicable for the reasons stated above.

    K Number
    K203639
    Date Cleared
    2021-01-13

    (30 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    Intended Use

    The StealthStation System, with StealthStation Cranial Software, is intended as an aid for locating anatomical structures in either open or percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The StealthStation™ Cranial Software v1.3.2 works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. During surgery, positions of specialized surgical instruments are continuously updated on these images either by optical tracking or electromagnetic tracking.

    Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    The acceptance criteria for the StealthStation™ Cranial Software v1.3.2 are not explicitly detailed in the provided document beyond the general statement of "System Accuracy Requirements" being "Identical" to the predicate device. The performance characteristics of the predicate device, StealthStation™ Cranial Software v1.3.0, are stated as the benchmark for system accuracy.

    Here's the information extracted from the document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Criteria/Feature | Acceptance Criteria (based on Predicate Device K201175) | Reported Device Performance (StealthStation™ Cranial Software v1.3.2) |
    |---|---|---|
    | System Accuracy | Mean 3D positional error ≤ 2.0 mm; mean trajectory angle accuracy ≤ 2.0 degrees | Identical; no changes were made to the StealthStation™ Cranial Software that would require System Accuracy testing for v1.3.2 |
    | All other features | Functions and performs as described for the predicate device. | All other features are identical to the predicate device. |

    2. Sample size used for the test set and the data provenance:

    • The document states that "Software verification testing for each requirement specification" was conducted and that design verification was performed using the StealthStation™ System with StealthStation™ Cranial Software v1.3.2 in a laboratory setting.
    • No specific sample size for a test set is mentioned. The testing described is software verification and design verification, not a clinical study on patient data for performance evaluation in the typical sense of AI/ML devices.
    • Data provenance is not applicable or not disclosed as the document indicates "Clinical testing was not considered necessary prior to release as this is not new technology."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. The testing described is software and design verification rather than a clinical performance study requiring expert ground truth establishment from patient data.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable. This information is relevant for clinical studies involving multiple reviewers adjudicating findings, which was not performed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • No. An MRMC comparative effectiveness study was not performed. The device is a navigation system and not an AI-assisted diagnostic tool that would typically involve human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • No. The device is a surgical navigation system, which is inherently a human-in-the-loop tool. The performance evaluation focuses on its accuracy specifications within that use case during design verification.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not applicable. For the system accuracy, the ground truth would be precise measurements taken in a laboratory setting for the navigational accuracy, rather than clinical ground truth from patient data like pathology or outcomes.

    8. The sample size for the training set:

    • Not applicable. The document describes a software update for a stereotaxic instrument, not an AI/ML device that undergoes model training with a dataset.

    9. How the ground truth for the training set was established:

    • Not applicable. As the device is not described as an AI/ML system requiring a training set, the establishment of ground truth for such a set is not relevant.

    K Number
    K200723
    Date Cleared
    2020-06-26

    (99 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation Inc.

    Intended Use

    The StealthStation FlexENT™ System, with the StealthStation™ ENT Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous ENT procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following procedures:

    • Functional Endoscopic Sinus Surgery (FESS)
    • Endoscopic Skull Base procedures
    • Lateral Skull Base procedures

    The Medtronic StealthStation FlexENT™ computer-assisted surgery system and its associated applications are intended as an aid for precisely locating anatomical structures in either open or percutaneous ENT procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    Device Description

    The StealthStation FlexENT™ is an electromagnetic-based surgical guidance platform that supports use of special application software (StealthStation™ S8 ENT Software 1.3) and associated instruments.

    The StealthStation™ S8 ENT Software 1.3 helps guide surgeons during ENT procedures such as functional endoscopic sinus surgery (FESS), endoscopic skull base procedures, and lateral skull base procedures. StealthStation™ S8 ENT Software 1.3 functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    Patient images can be displayed by the StealthStation™ S8 ENT Software 1.3 from a variety of perspectives (axial, sagittal, coronal, oblique), and 3-dimensional (3D) renderings of anatomical structures can also be displayed. During navigation, the system identifies the tip location and trajectory of the tracked instrument on the images and models the user has selected to display. The surgeon may also create and store one or more surgical plan trajectories before surgery and simulate progression along these trajectories. During surgery, the software can display how the actual instrument tip position and trajectory relate to the plan, helping to guide the surgeon along the planned trajectory. While the surgeon's judgment remains the ultimate authority, real-time positional information obtained through the StealthStation™ System can serve to validate this judgment as well as to guide it. The StealthStation™ S8 ENT v1.3 Software can be run on both the StealthStation FlexENT™ and StealthStation™ S8 Platforms.

    The StealthStation™ System is an Image Guided System (IGS), comprised of a platform (StealthStation FlexENT™ or StealthStation™ S8), clinical software, surgical instruments, and a referencing system (which includes patient and instrument trackers). The IGS tracks the position of instruments in relation to the surgical anatomy, known as localization, and then identifies this position on preoperative or intraoperative images of a patient.

    AI/ML Overview

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Performance Metric | Acceptance Criteria (mean error) | Reported Performance (StealthStation FlexENT™) | Reported Performance (StealthStation™ S8) | Reported Performance (Predicate: StealthStation™ S8 ENT v1.0) |
    |---|---|---|---|---|
    | 3D Positional Accuracy | ≤ 2.0 mm | 0.93 mm | 1.04 mm | 0.88 mm |
    | Trajectory Angle Accuracy | ≤ 2.0 degrees | 0.55° | 1.31° | 0.73° |

    2. Sample Size Used for the Test Set and Data Provenance:

    The document states that "Testing was performed under the representative worst-case configuration... utilizing a subset of system components and features that represent the worst-case combinations of all potential system components." It does not specify a numerical sample size for the test set (e.g., number of phantoms or trials).

    The data provenance is not explicitly stated in terms of country of origin. The test appears to be a prospective bench study conducted by the manufacturer, Medtronic Navigation, Inc.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    The document does not mention the use of experts to establish ground truth for this accuracy testing. The ground truth for positional and trajectory accuracy would typically be established by precise measurements on the anatomically representative phantoms using highly accurate measurement systems, not by expert consensus.

    4. Adjudication Method for the Test Set:

    Not applicable, as this was a bench accuracy test with directly measurable metrics, not a subjective assessment requiring adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    No, an MRMC comparative effectiveness study was not conducted. The study focuses on the standalone accuracy of the device.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    Yes, a standalone performance study was done. The accuracy testing described ("3D positional accuracy" and "trajectory angle accuracy") measures the device's inherent accuracy in locating anatomical structures and guiding trajectories, independent of human interaction during the measurement process. The system tracks instruments and displays their position and trajectory on images without direct human interpretation being part of the measurement for these accuracy metrics.

    7. The Type of Ground Truth Used:

    The ground truth used for this accuracy study was derived from precise physical measurements taken on "anatomically representative phantoms." This implies that the true position and trajectory were known and used as reference points against which the device's reports were compared.

    8. The Sample Size for the Training Set:

    The document does not provide information about a training set since the study described is a performance validation of a medical device's accuracy, not a machine learning model that would require a dedicated training set. The software likely undergoes extensive internal development and testing, but separate "training set" details are not provided in this context.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set information is provided or relevant for this type of accuracy study.


    K Number
    K201175
    Date Cleared
    2020-06-03

    (33 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation Inc.

    Intended Use

    The StealthStation™ System, with StealthStation™ Cranial Software, is intended as an aid for locating anatomical structures in either open or percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The StealthStation™ Cranial Software v1.3.0 works in conjunction with an Image Guided System (IGS) which consists of clinical software, surgical instruments, a referencing system and platform/computer hardware. Image guidance, also called navigation, tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient. During surgery, positions of specialized surgical instruments are continuously updated on these images either by optical tracking or electromagnetic tracking.

    Cranial software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    The changes to the currently cleared StealthStation S8 Cranial Software are as follows:

    • Addition of an optional image display that allows the user to see through outer layers to increase the visibility of other models.
    • Update of the imaging protocol to support overlapping slices.
    • Minor changes to the software to address user preferences and to fix minor anomalies.
    AI/ML Overview

    The provided document is a 510(k) premarket notification summary for Medtronic's StealthStation Cranial Software v1.3.0. It describes the device, its intended use, and a comparison to a predicate device, along with performance testing.

    Here's an analysis to address your specific questions:

    1. A table of acceptance criteria and the reported device performance

    System Accuracy Requirements:

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | 3D Positional Accuracy: mean error ≤ 2.0 mm | Mean error ≤ 2.0 mm |
    | Trajectory Angle Accuracy: mean error ≤ 2.0 degrees | Mean error ≤ 2.0 degrees |

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify a sample size (e.g., number of cases or images) for the performance testing. It states that the performance was determined "using anatomically representative phantoms and utilizing a subset of system components and features that represent the worst-case combinations of all potential system components."

    Regarding data provenance, the testing was conducted in "laboratory and simulated use settings" using "anatomically representative phantoms." This indicates that the data was generated specifically for testing purposes, likely in a controlled environment, rather than being derived from real patient scans. The country of origin for the data is not specified, but the applicant company, Medtronic Navigation Inc., is based in Louisville, Colorado, USA. The testing appears to be prospective in nature, as it was specifically carried out to demonstrate equivalence.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document does not mention the involvement of human experts for establishing ground truth for the performance testing. The accuracy measurements (3D positional and trajectory angle) are typically derived from physical measurements against known ground truth (e.g., phantom dimensions, known instrument positions) in the context of navigation systems, not by expert consensus on image interpretation.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Not applicable. The performance testing described is objective measurement against physical phantoms, not subjective assessment by experts requiring adjudication.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    No. The document explicitly states: "Clinical testing was not considered necessary prior to release as this is not new technology." This device is an image-guided surgery system software, not an AI-assisted diagnostic tool that would typically undergo MRMC studies. The changes in this version (v1.3.0) are described as "minor changes to the software were made to address user preferences and to fix minor anomalies" and "Addition of an optional image display that allows the user to see through outer layers," suggesting incremental updates rather than a fundamentally new AI algorithm.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, the performance testing was effectively "standalone" in the sense that the system's accuracy was measured against a known physical ground truth (phantoms) rather than evaluating human performance with the system. The reported accuracy metrics describe the device's inherent precision in tracking and navigation, independent of user interaction during the measurement process itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used was based on known physical properties of anatomically representative phantoms. This means that a physical phantom with precisely known dimensions and features was used, and the device's ability to accurately locate points and trajectories within that known physical structure was measured. This is a common and appropriate method for validating the accuracy of surgical navigation systems.

    8. The sample size for the training set

    Not applicable. This device, as described, is a software for image-guided surgery, not an AI/ML model that would typically have a "training set" in the context of deep learning. The changes are described as minor software updates and an optional display feature, not a new algorithm requiring a training phase from data.

    9. How the ground truth for the training set was established

    Not applicable, as there is no mention of a training set for an AI/ML model in this submission.


    K Number
    K201189
    Date Cleared
    2020-05-29

    (28 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | Is PCCP Authorized | Third Party | Expedited Review
    Intended Use

    The StealthStation™ System, with StealthStation™ Spine Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous neurosurgical and orthopedic procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine or pelvis, can be identified relative to images of the anatomy. This can include the following spinal implant procedures, such as:

    • Pedicle Screw Placement
    • Iliosacral Screw Placement
    • Interbody Device Placement
    Device Description

    The StealthStation System, also known as an Image Guided System (IGS), is comprised of a platform, clinical software, surgical instruments and a referencing system. The IGS tracks the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of a patient. The StealthStation Spine software helps guide surgeons during spine procedures such as spinal fusion and trauma treatments. StealthStation Spine Software functionality is described in terms of its feature sets which are categorized as imaging modalities, registration, planning, interfaces with medical devices, and views. Feature sets include functionality that contributes to clinical decision making and are necessary to achieve system performance.

    AI/ML Overview

    Based on the provided text, the acceptance criteria and study proving the device meets these criteria for the StealthStation S8 Spine Software v1.3.0 can be summarized as follows:

    1. A table of acceptance criteria and the reported device performance:

    Acceptance Criteria (System Accuracy Requirements) | Reported Device Performance (StealthStation S8 Spine Software v1.3.0)
    Mean positional error ≤ 2.0 mm                     | Worst-case configuration: mean positional error ≤ 2.0 mm
                                                       | StealthAiR Spine (specific feature): positional error 1.01 mm
                                                       | Overlapping Slices (specific feature): positional error 0.51 mm
    Mean trajectory error ≤ 2.0 degrees                | Worst-case configuration: mean trajectory error ≤ 2.0 degrees
                                                       | StealthAiR Spine (specific feature): trajectory error 0.37 degrees
                                                       | Overlapping Slices (specific feature): trajectory error 0.41 degrees
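
    For readers who want to sanity-check the table, the short sketch below compares the reported per-feature errors against the stated criteria. The data structure and script are purely illustrative and are not part of the submission.

```python
# Purely illustrative sanity check (not part of the submission): compare the
# per-feature errors reported in the table above against the stated criteria.
POSITIONAL_LIMIT_MM = 2.0
TRAJECTORY_LIMIT_DEG = 2.0

reported_errors = {
    "StealthAiR Spine":   {"positional_mm": 1.01, "trajectory_deg": 0.37},
    "Overlapping Slices": {"positional_mm": 0.51, "trajectory_deg": 0.41},
}

for feature, errors in reported_errors.items():
    positional_ok = errors["positional_mm"] <= POSITIONAL_LIMIT_MM
    trajectory_ok = errors["trajectory_deg"] <= TRAJECTORY_LIMIT_DEG
    print(f"{feature}: positional {'PASS' if positional_ok else 'FAIL'}, "
          f"trajectory {'PASS' if trajectory_ok else 'FAIL'}")
```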

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document does not specify a numerical sample size for the test set used for the accuracy performance. It mentions "anatomically representative phantoms" were used.
    • Data Provenance: The study was conducted using "anatomically representative phantoms." The country of origin of the data is not specified, but the applicant company is Medtronic Navigation, Inc., located in Louisville, Colorado, USA. The study design is implied to be prospective testing on phantoms rather than retrospective patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document. The accuracy testing was performed on phantoms, which typically rely on engineered and measurable ground truth, not expert consensus on anatomical structures or clinical outcomes.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided in the document, as the ground truth for phantom testing is typically established by the design of the phantom and measurement techniques, not human adjudication.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    No MRMC comparative effectiveness study was mentioned. The device, StealthStation S8 Spine Software, is an image-guided surgery system, not an AI-assisted diagnostic tool that would typically be evaluated with MRMC studies comparing human reader performance. The software aids surgeons in precisely locating anatomical structures.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    The device's performance was evaluated in terms of its ability to measure positional and trajectory accuracy on phantoms. This can be considered a form of standalone performance assessment as it evaluates the system's inherent accuracy capabilities, albeit in a simulated (phantom) environment, without directly measuring human-in-the-loop clinical workflow improvement. The text refers to "System integration performance testing for spine surgical procedures using anatomical phantoms."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The ground truth was established by the design and measurement capabilities of the "anatomically representative phantoms." This type of ground truth is based on precise, engineered physical properties and known measurements of the phantom. It is not based on expert consensus, pathology, or outcomes data.

    8. The sample size for the training set:

    This information is not provided in the document. The document describes a software update (v1.3.0) to an existing device, and the focus of the 510(k) summary is on performance testing for substantial equivalence, not on the training data used for the algorithm's development.

    9. How the ground truth for the training set was established:

    This information is not provided in the document. As this document is a 510(k) summary for a software update, details about the original model training and ground truth establishment are typically not included unless significant changes related to the algorithm's core functionality or AI components are introduced which necessitate new data for training or re-training.


    K Number
    K191597
    Date Cleared
    2019-11-01

    (137 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Medtronic Navigation, Inc.

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | Is PCCP Authorized | Third Party | Expedited Review
    Intended Use

    The Stealth Autoguide™ System is a positioning and guidance system intended for the spatial positioning and orientation of instrument holders or tool guides to be used by neurosurgeons to guide standard neurosurgical instruments, based on a pre-operative plan and feedback from an image-guided navigation system with three-dimensional imaging software.
    The Stealth Autoguide™ System is a remotely-operated positioning and guidance system, indicated for any neurological condition in which the use of stereotactic surgery may be appropriate (for example, stereotactic EEG, laser tissue ablation, etc.).
    The Midas Rex™ Legend™ depth stop attachment and tools are intended for incision, cutting, removal, and drilling of soft and hard tissue during cranial surgical procedures, with the intent to create a hole through the cranium to allow surgeons access to desired surgical locations and/or to facilitate insertion and placement of other surgical devices during such procedures.

    Device Description

    Stealth Autoguide™ System: The Stealth Autoguide System is a robotic positioning and guidance system intended to interpret navigation tracker coordinates and surgical plan coordinates from the StealthStation to robotically position and orient instrument holders or tool guides to be used by neurosurgeons to guide standard neurosurgical instruments to pre-defined plans.
    Midas Rex™ Legend™ Depth Stop System: The Midas Rex™ Legend™ Depth Stop System consists of a Depth Stop Attachment and specific surgical dissecting tools that will be used in conjunction with the Stealth Autoguide System to create cranial access holes for neurosurgical procedures.
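
    To illustrate the kind of geometry a remotely operated positioner such as the one described above must solve, the sketch below derives a guide axis and a standoff position from a planned entry and target point. This is a generic assumption for illustration only, not the Stealth Autoguide algorithm, and all values are hypothetical.

```python
# Generic geometry sketch (an assumption for illustration, not the Stealth
# Autoguide algorithm): derive the guide axis and a standoff position that a
# robotic positioner would align to for a planned entry/target pair.
import numpy as np

def guide_pose(entry_mm: np.ndarray, target_mm: np.ndarray, standoff_mm: float = 50.0):
    """Return (guide_position, guide_axis) for a planned trajectory.

    guide_axis points from the entry point toward the target; guide_position
    sits standoff_mm proximal to the entry point along that axis.
    """
    axis = target_mm - entry_mm
    axis = axis / np.linalg.norm(axis)        # unit trajectory direction
    position = entry_mm - standoff_mm * axis  # hold the guide above the entry point
    return position, axis

entry = np.array([42.0, 10.0, 85.0])   # hypothetical plan in image space (mm)
target = np.array([40.0, 30.0, 45.0])
position, axis = guide_pose(entry, target)
print("guide position (mm):", position)
print("guide axis (unit vector):", axis)
```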

    AI/ML Overview

    The provided text describes the Medtronic Stealth Autoguide System and Midas Rex Legend Depth Stop System. It includes information on performance testing for the Stealth Autoguide System, but lacks specific details on acceptance criteria and a study to prove the device meets all acceptance criteria in a comprehensive format. It also doesn't contain the requested information about training sets, expert ground truth development, MRMC studies, or standalone performance.

    However, based on the provided text, I can extract the following information concerning the performance testing for the Stealth Autoguide System's accuracy:

    Acceptance Criteria and Reported Device Performance for Stealth Autoguide™ System

    Acceptance Criterion                                            | Reported Device Performance (Mean) | Standard Deviation | 99% CI* Upper
    3D Positional Accuracy: mean error ≤ 2.0 mm                     |                                    |                    |
      Biopsy Needle Accuracy Validation - StealthStation S7         | 0.92 mm                            | 0.47 mm            | 3.03 mm
      Biopsy Needle Accuracy Validation - StealthStation S8         | 0.97 mm                            | 0.26 mm            | 1.70 mm
      sEEG bolts/Visualase Accuracy Validation - StealthStation S7  | 1.50 mm                            | 0.68 mm            | 3.08 mm
      sEEG bolts/Visualase Accuracy Validation - StealthStation S8  | 1.48 mm                            | 0.48 mm            | 2.60 mm
    Trajectory Angle Accuracy: mean error ≤ 2.0 degrees             |                                    |                    |
      Biopsy Needle Accuracy Validation - StealthStation S7         | 1.22 degrees                       | 0.51 degrees       | 2.41 degrees
      Biopsy Needle Accuracy Validation - StealthStation S8         | 0.59 degrees                       | 0.23 degrees       | 1.11 degrees
      sEEG bolts/Visualase Accuracy Validation - StealthStation S7  | 1.04 degrees                       | 0.76 degrees       | 2.81 degrees
      sEEG bolts/Visualase Accuracy Validation - StealthStation S8  | 0.42 degrees                       | 0.17 degrees       | 0.82 degrees
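
    The submission excerpt does not report the raw error data or the exact method behind the "99% CI* Upper" column, so the sketch below only shows how summary statistics of this kind could be computed under one plausible convention (a one-sided normal-approximation upper bound). Treat both the data and the interval method as assumptions.

```python
# Assumed, illustrative summary statistics like those tabulated above, computed
# from made-up raw phantom errors. The one-sided normal-approximation bound is
# only one plausible convention and will not reproduce the table's values.
import numpy as np

errors_mm = np.array([0.6, 0.8, 1.1, 0.9, 1.3, 0.7, 1.0])   # hypothetical positional errors

mean_error = errors_mm.mean()
std_dev = errors_mm.std(ddof=1)            # sample standard deviation
z_99 = 2.326                               # one-sided 99% quantile of the standard normal
upper_bound = mean_error + z_99 * std_dev  # assumed convention, not necessarily the sponsor's

print(f"mean = {mean_error:.2f} mm, SD = {std_dev:.2f} mm, "
      f"assumed 99% upper bound = {upper_bound:.2f} mm")
```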

    Details of the Accuracy Study:

    1. Sample size used for the test set and the data provenance: The document states that performance was determined using "overall end-to-end worst-case system level accuracy testing which incorporated clinically relevant anatomical phantoms." Further specifics about the sample size (e.g., number of phantoms, number of measurement points per phantom) and data provenance (e.g., country of origin, retrospective or prospective) are not provided in this document.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not provided in the document. The accuracy testing seems to be based on direct physical measurements against defined targets on phantoms rather than expert interpretation of images.

    3. Adjudication method for the test set: This information is not provided. Given the nature of the accuracy testing (physical measurements), traditional adjudication methods for image interpretation would likely not apply.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: This information is not provided. The assessment described is a technical accuracy validation of the device's navigation and positioning capabilities, not a study involving human readers or AI assistance.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done: The Stealth Autoguide™ System is described as a "robotic positioning and guidance system," and the accuracy validation focuses on its 3D positional accuracy and trajectory angle accuracy. This implies standalone technical performance testing of the system's ability to achieve planned trajectories before a human surgeon uses it to guide instruments. The system is designed to "robotically position and orient instrument holders or tool guides," suggesting its core function is algorithm-driven positioning. However, the evaluation here concerns the accuracy of the guidance provided, which a surgeon would then use.

    6. The type of ground truth used: The ground truth for the accuracy study was established by defining "clinically relevant anatomical phantoms" and measuring the device's "performance in 3D positional accuracy" and "trajectory angle accuracy" against the known positions and trajectories on these phantoms. This is a phantom-based measurement ground truth.

    7. The sample size for the training set: This information is not provided. The document describes an accuracy validation study, not the development or training of an AI algorithm.

    8. How the ground truth for the training set was established: This information is not provided, as details about a training set are not included in the document.

