510(k) Data Aggregation

Found 3 results

    K Number: K253379
    Date Cleared: 2026-03-26 (177 days)
    Product Code:
    Regulation Number: 882.4560
    Age Range: All

    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: SureTune4 Software (DEN210003)
    Intended Use

    The Stealth AXIS Surgical System, with the Stealth AXIS Cranial clinical application, is intended for precise positioning of surgical instruments and as an aid for locating anatomical structures in open, minimally invasive, and percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.

    This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):

    • Tumor resections
    • General ventricular catheter placement
    • Pediatric ventricular catheter placement
    • Depth electrode, lead, and probe placement
    • Cranial biopsies
    Device Description

    The Stealth AXIS Cranial clinical application works in conjunction with the Stealth AXIS Surgical System. The Stealth AXIS Cranial clinical application helps guide surgeons during cranial procedures such as biopsies, tumor resections, shunt placements and depth electrode and probe placement. The system tracks the position of instruments in relation to surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient.

    Patient images are transferred to the system, and the Stealth AXIS Cranial clinical application displays the image of the patient anatomy from a variety of perspectives (axial, sagittal, coronal, oblique) and 3-dimensional (3D) renderings of anatomical structures. During navigation, the Stealth AXIS Surgical System identifies the tip location and trajectory of the tracked instrument on images and models the user has selected to display on the monitor. The surgeon may also create and store one or more surgical plan trajectories before surgery and simulate progression along these trajectories. During surgery, the Stealth AXIS Cranial clinical application can display how the actual instrument tip position and trajectory relate to the pre-surgical plan, helping to guide the surgeon along the planned trajectory.
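    As a hedged illustration of the geometry described above, the sketch below computes how an instrument tip and axis relate to a planned trajectory: the perpendicular distance (mm) from the tip to the planned entry-to-target line, and the angle (degrees) between the instrument axis and the planned direction. None of this comes from the submission; the function names, the point-to-line formulation, and the sample coordinates are purely illustrative.

    ```python
    import numpy as np

    def tip_deviation(tip, entry, target):
        """Perpendicular distance (mm) from the instrument tip to the
        planned trajectory line through the entry and target points."""
        d = target - entry
        d = d / np.linalg.norm(d)            # unit vector along the planned path
        v = tip - entry
        return np.linalg.norm(v - np.dot(v, d) * d)

    def angle_deviation(instrument_dir, entry, target):
        """Angle (degrees) between the tracked instrument axis and the
        planned trajectory direction."""
        d = target - entry
        cos = np.dot(instrument_dir, d) / (np.linalg.norm(instrument_dir) * np.linalg.norm(d))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Example: planned path along +z, tip offset 1.5 mm laterally
    entry  = np.array([0.0, 0.0, 0.0])
    target = np.array([0.0, 0.0, 80.0])
    tip    = np.array([1.5, 0.0, 40.0])
    print(tip_deviation(tip, entry, target))                          # -> 1.5
    print(angle_deviation(np.array([0.0, 0.0, 1.0]), entry, target))  # -> 0.0
    ```

    A navigation display like the one described would refresh these two numbers continuously as the tracked instrument moves.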

    AI/ML Overview

    The FDA 510(k) clearance letter and summary for the Stealth AXiS Cranial clinical application offer limited detail on the specific acceptance criteria and the study demonstrating that the device meets them, especially for the AI/ML components. The analysis below extracts and infers as much as possible from the provided text.

    It is important to note that the clearance letter does not present full study details; it summarizes the findings that Medtronic provided to the FDA. Many of the specific details requested (such as exact test-set sample sizes, data provenance, the number and qualifications of experts, and adjudication methods for ground truth) are not explicitly stated in this public document. What is present and what is missing are highlighted below.


    Acceptance Criteria and Device Performance Study for Stealth AXiS Cranial Clinical Application

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on overall system accuracy requirements and general software validation. For the AI-enabled "Autotracts" feature, the acceptance criteria are less formally quantified in the provided text.

    System Accuracy (Non-AI Component)
    • Criterion: 3D positional accuracy ≤ 2.0 mm (mean error)
      Performance: Mean 3D positional error ≤ 2.0 mm demonstrated under a representative worst-case configuration.
    • Criterion: Trajectory angle accuracy ≤ 2.0 degrees (mean error)
      Performance: Mean trajectory angle error ≤ 2.0 degrees demonstrated under a representative worst-case configuration.

    Software Functionality (General)
    • Criterion: Product requirements met; device performs as intended
      Performance: Software verification and validation testing verified that the product requirements are met and the device performs as intended.
    • Criterion: Usability for intended users, uses, and use environments
      Performance: Summative usability validation was performed by representative users and demonstrated the Stealth AXiS™ Cranial clinical application to be substantially equivalent for the intended users, uses, and use environments.

    AI-enabled Autotracts Feature (Implicit Criteria)
    • Criterion: Reliability in generating patient-specific white matter tracts
      Performance: "Performance was assessed leveraging expert review to ensure reliability." No quantitative reliability metrics (e.g., sensitivity, specificity, Dice score, which are common for segmentation or tractography models) are provided in the summary; expert satisfaction with the output is implied.
    • Criterion: User control over tract appearance
      Performance: "Users retain control by adjusting tract appearance via probability thresholds, manually cropping tracts as needed, and ultimately verifying tracts before proceeding." This is a design feature supporting user acceptance rather than a quantifiable performance metric, but it is important for clinical integration.
    • Criterion: Coverage of normal and pathological cases
      Performance: "Training and validation used hundreds of images from internal studies and public datasets, spanning normal and pathological cases." The model is implicitly expected to perform adequately across patient presentations, though performance differences between normal and pathological cases are not detailed.
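    The accuracy criteria above reduce to simple aggregate checks. As a hedged illustration only (the actual test protocol and bench data are not public), a pass/fail evaluation over hypothetical measured-versus-true fiducial positions and trajectory angles might look like:

    ```python
    import numpy as np

    # Hypothetical bench data: measured vs. true 3D positions (mm) and
    # measured trajectory-angle errors (degrees). All values illustrative.
    measured = np.array([[10.1, 20.0, 30.2], [5.0, 15.3, 25.1], [0.2, 8.9, 12.0]])
    truth    = np.array([[10.0, 20.1, 30.0], [5.2, 15.0, 25.0], [0.0, 9.0, 12.3]])
    angle_errors = np.array([0.4, 1.1, 0.7])

    # Per-measurement Euclidean error, then mean-error acceptance checks
    positional_errors = np.linalg.norm(measured - truth, axis=1)
    mean_pos = positional_errors.mean()
    mean_ang = angle_errors.mean()

    print(f"mean 3D positional error: {mean_pos:.2f} mm "
          f"-> {'PASS' if mean_pos <= 2.0 else 'FAIL'}")
    print(f"mean trajectory angle error: {mean_ang:.2f} deg "
          f"-> {'PASS' if mean_ang <= 2.0 else 'FAIL'}")
    ```

    Note that a mean-error criterion like this says nothing about worst-case single measurements, which is presumably why the summary specifies a "representative worst-case configuration."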

    2. Sample Size for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated in the document for either the general system accuracy or the AI Autotracts feature. The document only mentions "withheld datasets" for validation of the AI model.
    • Data Provenance:
      • For System Accuracy: "representative worst-case configuration" implies laboratory testing, not patient data in this context.
      • For AI Autotracts: "hundreds of images from internal studies and public datasets, spanning normal and pathological cases." The country of origin and whether the data was retrospective or prospective are not specified.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The document mentions "expert-reviewed gold standard annotations" for the training/validation of the Autotracts and "expert review" to assess performance.
    • Qualifications of Experts: Not explicitly stated. It is implied that these are experts in brain anatomy, neuroimaging, and neuronavigation, but specific qualifications (e.g., "Radiologist with 10 years of experience") are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. For the expert-reviewed ground truth, it is unknown if a single expert provided the ground truth, if consensus was reached among multiple experts (e.g., 2+1, 3+1), or if there was no formal adjudication process described beyond "expert-reviewed."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No. A multi-reader multi-case (MRMC) comparative effectiveness study was not performed; the document states, "No clinical testing was performed." The usability validation was done by "representative users," but this is distinct from a clinical MRMC study designed to measure the effect size of AI assistance on human reader performance.
    • Effect Size of AI vs. Without AI Assistance: Since no clinical testing or MRMC study was performed, there is no reported effect size of how much human readers improve with AI vs. without AI assistance in this document.

    6. Standalone (Algorithm Only) Performance

    • Standalone Performance: For the AI-enabled "Autotracts" feature, the description of "Performance was assessed leveraging expert review to ensure reliability" suggests some form of standalone evaluation against expert-derived ground truth. However, specific quantitative standalone metrics (e.g., sensitivity, Dice coefficient for segmentation, average distance error for tractography) are not provided. The phrase "Users retain control by adjusting tract appearance... and ultimately verifying tracts before proceeding" also highlights that the AI's output is intended for human-in-the-loop verification, not necessarily as a standalone diagnostic or planning output without review.

    7. Type of Ground Truth Used

    • For System Accuracy: Physical measurements against known standards in a "worst-case configuration."
    • For AI Autotracts: "expert-reviewed gold standard annotations" for training and validation, and "expert review" for performance assessment. This implies expert consensus on the definition of white matter tracts based on diffusion MRI images. It does not mention pathology or outcomes data for ground truth for this specific AI feature.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The document mentions "Training and validation used hundreds of images from internal studies and public datasets." It does not separate the exact number for training versus validation.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: The ground truth for the AI Autotracts training set was established through "expert-reviewed gold standard annotations." This indicates that human experts manually identified or delineated the white matter tracts on the diffusion MRI images, and their work was considered the "gold standard" for the AI model to learn from.

    K Number: K242054
    Device Name: OptimMRI (v2)
    Manufacturer:
    Date Cleared: 2024-08-12 (28 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All

    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match? Reference Devices: K213930, DEN210003
    Intended Use

    OptimMRI is a software application intended to aid qualified medical professionals in processing, visualizing, and interpreting anatomical structures from medical images. The software can be used to process pre-operative DICOM compatible MR images to generate 3D annotated models of the brain that aid the user in neurosurgical functional planning. The annotated MR images can further be used in conjunction with other clinical methods as an aid in localization of the Subthalamic Nuclei (STN) and Ventral Intermediate Nucleus (VIM) regions of interest.

    Device Description

    OptimMRI (v2) is a software application for processing medical images of the brain that enables 3D visualization and analysis of anatomical structures. Specifically, the software can be used to read DICOM compatible pre-operative MR images acquired by commercially available imaging devices. These images can be processed to generate 3D markers in specific regions of the brain to allow qualified medical professionals to display, review, analyze, annotate, interpret, export, and plan neurosurgical functional procedures. OptimMRI (v2) is used as an aid to localize regions of the brain such as Subthalamic Nuclei (STN) and Ventral Intermediate Nucleus (VIM) using advanced image processing techniques and machine learning models trained on a proprietary clinical database. The software supports workflow for creating pre-operative plans prior to carrying out the intraoperative procedure. OptimMRI (v2) is configured as web-based software and its output is compatible with neurosurgical planning software supporting 3D DICOM format. Three models have been implemented within OptimMRI (v2) to segment the following regions of interest of the brain: -STN region of interest (STN itself) -Inferior part of the Ventral Intermediate Nucleus (VIM) and Zona Incerta -Inferior-lateral part of the Ventral Intermediate Nucleus (VIM)

    AI/ML Overview

    Acceptance Criteria and Study for OptimMRI (v2)

    1. Table of Acceptance Criteria and Reported Device Performance:

    • Criterion: At least 90% of surface distances of inferior-lateral regions of the VIM structure not greater than 2.0 mm compared to reference devices (Guide XT and SureTune4).
      Performance: The performance evaluation studies demonstrated that at least 90% of surface distances of inferior-lateral regions of the VIM structure were not greater than 2.0 mm compared to reference devices Guide XT (K213930) and SureTune4 (DEN210003). This directly matches the acceptance criterion.
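    The 90%-within-2.0 mm surface-distance criterion can be sketched concretely. The snippet below is not the manufacturer's protocol: it approximates each structure surface as a point cloud, uses brute-force nearest-neighbor distances from the subject-device surface to the reference-device surface, and runs on synthetic data.

    ```python
    import numpy as np

    def surface_distances(surf_a, surf_b):
        """For each point on surface A, Euclidean distance (mm) to the
        nearest point on surface B (point-cloud approximation)."""
        diffs = surf_a[:, None, :] - surf_b[None, :, :]
        return np.linalg.norm(diffs, axis=2).min(axis=1)

    def meets_criterion(surf_a, surf_b, tol_mm=2.0, fraction=0.90):
        """True if at least `fraction` of A-to-B surface distances are <= tol_mm."""
        d = surface_distances(surf_a, surf_b)
        return bool(np.mean(d <= tol_mm) >= fraction)

    # Synthetic example: subject surface is the reference surface plus small jitter
    rng  = np.random.default_rng(0)
    ref  = rng.uniform(0.0, 10.0, size=(200, 3))          # reference-device surface points (mm)
    subj = ref + rng.normal(0.0, 0.3, size=ref.shape)     # subject-device surface, slight jitter
    print(meets_criterion(subj, ref))                     # -> True (jitter well under 2.0 mm)
    ```

    For real meshes one would typically sample both directions (A to B and B to A) and use a spatial index such as a k-d tree rather than the O(n²) distance matrix shown here.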

    2. Sample Size for Test Set and Data Provenance:

    The document does not explicitly state the test-set sample size for this OptimMRI (v2) submission. It mentions using "the same test methods as used for the previously cleared predicate device," and the statement that "performance evaluation studies demonstrated..." implies a study was conducted, but the number of cases is not provided.

    The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only states that the machine learning models were "trained on a proprietary clinical database."

    3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set):

    The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It refers to comparing the device's output to "reference devices Guide XT (K213930) and SureTune4 (DEN210003)," which are presumably established and validated tools for VIM localization. This implies that the ground truth for the test set was established by these reference devices' outputs, rather than by direct expert consensus on each individual case of the test set for this submission.

    4. Adjudication Method (Test Set):

    The document does not describe a specific adjudication method like 2+1 or 3+1 for the test set. The evaluation was based on a direct comparison of the device's output (3D markers / surface distances) to the "reference devices" (Guide XT and SureTune4).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No MRMC comparative effectiveness study is described in the provided text. The submission focuses on the standalone performance of the updated OptimMRI (v2) model against established reference devices, rather than evaluating human reader improvement with or without AI assistance.

    6. Standalone Performance Study:

    Yes, a standalone performance study was done. The document states, "STN and VIM region of interest (ROI) annotation accuracy for the subject device was validated using the same performance test protocol and acceptance criteria as the predicate OptimMRI (K230150)." This indicates that the algorithm's performance was evaluated on its own, comparing its output to that of the reference devices.

    7. Type of Ground Truth Used:

    The ground truth used for the performance evaluation appears to be based on the output of reference devices (Guide XT (K213930) and SureTune4 (DEN210003)). The comparison was made by measuring "surface distances of inferior-lateral regions of VIM structure" against these established tools.

    8. Sample Size for Training Set:

    The document mentions that the machine learning models were "trained on a proprietary clinical database," but it does not specify the sample size of this training set.

    9. How Ground Truth for Training Set Was Established:

    The document states that the machine learning models were "trained on a proprietary clinical database" and used "advanced image processing techniques and machine learning models." However, it does not explicitly detail how the ground truth for this training set was established. While it can be inferred that expert annotations or validated reference methods would have been crucial for training, the specific methodology is not described in this document.


    K Number: K230150
    Device Name: OptimMRI
    Manufacturer:
    Date Cleared: 2023-07-21 (183 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All

    Reference & Predicate Devices
    Predicate For:
    Why did this record match? Reference Devices: K213930, DEN210003
    Intended Use

    OptimMRI is a software application intended to aid qualified medical professionals in processing, visualizing, and interpreting anatomical structures from medical images. The software can be used to process pre-operative DICOM compatible MR images to generate 3D annotated models of the brain that aid the user in neurosurgical functional planning. The annotated MR images can further be used in conjunction with other clinical methods as an aid in localization of the Subthalamic Nuclei (STN) and Ventral Intermediate Nucleus (VIM) regions of interest.

    Device Description

    OptimMRI is a software application for processing medical images of the brain that enables 3D visualization and analysis of anatomical structures. Specifically, the software can be used to read DICOM compatible pre-operative MR images acquired by commercially available imaging devices. These images can be processed to generate 3D markers in specific regions of the anatomy to allow qualified medical professionals to display, review, analyze, annotate, interpret, export, and plan neurosurgical functional procedures. OptimMRI is used as an aid to localize regions of the brain such as Subthalamic Nuclei (STN) and Ventral Intermediate Nucleus (VIM).

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for OptimMRI, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Criterion: STN localization accuracy: at least 90% of surface distances not greater than 2.0 mm (relative to comparable software tools).
      Performance: Performance evaluation studies against reference devices Guide XT (K213930) and SureTune4 (DEN210003) demonstrated that at least 90% of surface distances of the STN were not greater than 2.0 mm when using segmentation tools for OptimMRI. This result is stated to be "identical to the predicate SIS Software that used high-resolution 7T MRIs of the brain."
    • Criterion: VIM localization accuracy: at least 90% of surface distances not greater than 2.0 mm (relative to comparable software tools).
      Performance: The same studies demonstrated that at least 90% of surface distances of the VIM were not greater than 2.0 mm when using segmentation tools for OptimMRI. (The document implies this criterion was met, as with the STN, through the phrasing "demonstrated that at least 90% of surface distances of STN or VIM were not greater than 2.0mm when using segmentation tools for OptimMRI.")

    2. Sample Size Used for the Test Set and Data Provenance

    • STN Study Test Set Sample Size: 44 cerebral MRIs (88 hemispheres)
    • VIM Study Test Set Sample Size: 31 cerebral MRIs (62 hemispheres)
    • Data Provenance: Retrospective (specified as "retrospectively annotated"). Country of origin is not specified in the provided text.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • The document states: "Qualified and experienced medical professionals performed the segmentation".
    • The exact number of experts is not specified.
    • The specific qualifications of the experts (e.g., "radiologist with 10 years of experience") are not specified beyond "Qualified and experienced medical professionals."

    4. Adjudication Method for the Test Set

    • The adjudication method is not explicitly stated. The document mentions that "Qualified and experienced medical professionals performed the segmentation, and all validation criteria were met." This suggests consensus or an internal process but doesn't detail the specific method (e.g., 2+1, 3+1).

    5. If a Multi-reader Multi-case (MRMC) Comparative Effectiveness Study Was Done

    • No MRMC comparative effectiveness study involving human readers and AI assistance is described. The study compared OptimMRI's segmentation accuracy against other cleared commercially available comparable software tools (Guide XT and SureTune4), not against human reader performance with or without AI assistance.

    6. If a Standalone Performance (Algorithm Only Without Human-in-the-loop) Was Done

    • Yes. The performance study describes the standalone accuracy of OptimMRI's segmentation tools (which are semi-automatic, as noted in the "Comparison to Predicate Device" section). The comparison is made directly between OptimMRI's output and the outputs of other software (Guide XT and SureTune4), indicating an algorithm-only performance evaluation against established benchmarks. The description of the segmentation process as "semi-automatic" implies the algorithm generates the segmentation, which is then validated by the user.

    7. The Type of Ground Truth Used

    • The ground truth was the segmentations produced by other cleared, commercially available comparable software tools (Guide XT and SureTune4), which are themselves presumably validated against more fundamental ground truth. The document states that "Accuracy of segmentations for OptimMRI was compared to previously cleared commercially available comparable software tools," implying these tools served as the reference for ground truth in this study.

    8. The Sample Size for the Training Set

    • The sample size for the training set is not provided in the document. The text focuses on the performance evaluation study used for 510(k) clearance.

    9. How the Ground Truth for the Training Set Was Established

    • How the ground truth for the training set was established is not provided in the document.