K Number
K191285
Manufacturer
Date Cleared
2019-06-11

(29 days)

Product Code
Regulation Number
882.4560
Panel
OR
Reference & Predicate Devices
Predicate For
N/A
Intended Use

Spine & Trauma Navigation System is intended as an intraoperative image-guided localization system to enable minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system, to virtual computer image space on a patient's preoperative or intraoperative 2D or 3D image data.

Spine & Trauma Navigation System enables computer-assisted navigation of medical image data, which can either be acquired preoperatively or intraoperatively by an appropriate image acquisition system.

The software offers screw implant size planning and navigation on rigid bone structures with precalibrated and additional individually-calibrated surgical tools.

The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, the pelvis, a long bone or vertebra can be identified relative to the acquired image (CT, MR, 2D fluoroscopic image or 3D fluoroscopic image reconstruction) and/or an image data based model of the anatomy.
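For illustration, the "linking" of a tracked probe to virtual image space amounts to applying a patient registration transform to the probe position reported by the tracking camera. The following minimal C++ sketch shows that mapping; the matrix values, names, and the use of a single 4x4 rigid transform are assumptions for illustration only and do not represent the actual implementation of the Spine & Trauma Navigation System.

```cpp
#include <array>
#include <cstdio>

// Homogeneous 4x4 transform (row-major) and a 3D point.
using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec3 = std::array<double, 3>;

// Apply a rigid registration transform that maps tracker-space
// coordinates into image-space coordinates.
Vec3 trackerToImage(const Mat4& registration, const Vec3& tipInTracker) {
    Vec3 out{};
    for (int r = 0; r < 3; ++r) {
        out[r] = registration[r][0] * tipInTracker[0] +
                 registration[r][1] * tipInTracker[1] +
                 registration[r][2] * tipInTracker[2] +
                 registration[r][3];  // translation component
    }
    return out;
}

int main() {
    // Hypothetical registration: identity rotation plus a translation.
    Mat4 registration = {{
        {1, 0, 0, 10},
        {0, 1, 0, -5},
        {0, 0, 1, 20},
        {0, 0, 0, 1},
    }};
    Vec3 tip = {12.5, 3.0, 7.5};  // probe tip as reported by the passive marker tracking
    Vec3 imageTip = trackerToImage(registration, tip);
    std::printf("probe tip in image space: %.1f %.1f %.1f\n",
                imageTip[0], imageTip[1], imageTip[2]);
    return 0;
}
```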

Device Description

This device is an image guided surgery system for navigated treatments in the fields of spine and trauma surgery, where the user may use 3D image data based on CT, MR or XT. The software supports the surgeon in clinical procedures by displaying tracked instruments in patient image data.
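As a rough sketch of how a tracked instrument tip could drive the slices shown to the surgeon, the snippet below converts an image-space position (in mm) into axial/coronal/sagittal slice indices using an assumed volume geometry (origin, spacing, dimensions). None of these names or values are taken from the device's actual rendering pipeline.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>

// Assumed volume geometry: origin and voxel spacing in image space (mm).
struct VolumeGeometry {
    std::array<double, 3> originMm;    // image-space position of voxel (0, 0, 0)
    std::array<double, 3> spacingMm;   // voxel size along x, y, z
    std::array<int, 3>    dimensions;  // number of voxels along each axis
};

// Map an image-space position (mm) to the slice indices to display,
// clamped so the views never leave the volume.
std::array<int, 3> sliceIndicesAt(const VolumeGeometry& vol,
                                  const std::array<double, 3>& positionMm) {
    std::array<int, 3> idx{};
    for (int axis = 0; axis < 3; ++axis) {
        int i = static_cast<int>(
            (positionMm[axis] - vol.originMm[axis]) / vol.spacingMm[axis] + 0.5);
        idx[axis] = std::clamp(i, 0, vol.dimensions[axis] - 1);
    }
    return idx;  // {sagittal, coronal, axial} slice numbers
}

int main() {
    VolumeGeometry vol = {{-100.0, -100.0, 0.0}, {0.5, 0.5, 1.0}, {400, 400, 300}};
    std::array<double, 3> tip = {12.5, 3.0, 7.5};  // tracked tip in image space (mm)
    std::array<int, 3> idx = sliceIndicesAt(vol, tip);
    std::printf("sagittal %d, coronal %d, axial %d\n", idx[0], idx[1], idx[2]);
    return 0;
}
```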

AI/ML Overview

Here's a breakdown of the requested information based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:

1. A table of acceptance criteria and the reported device performance

The provided document describes a "Special 510(k) Corrective Action" related to a software modification to fix a display issue. Therefore, the "acceptance criteria" are implicitly tied to confirming the successful correction of this specific issue and ensuring no new issues were introduced, with the "reported device performance" demonstrating this success.

- Correction of Display Issue
  - Acceptance Criterion: The software modification should resolve the erroneous orientation of anatomical slices displayed with a projected instrument representation within axial and coronal/sagittal views when changing navigation workflows.
  - Reported Performance: Interactive tests specifically designed for the missing test scenario (the display issue itself) were performed and successfully demonstrated the correction; the issue was no longer observed during testing.

- No Introduction of New Issues
  - Acceptance Criterion: The software modification should not negatively impact unchanged software parts or introduce new errors, especially regarding critical safety-related functions and general navigation accuracy.
  - Reported Performance:
    - Regression tests: interactive tests on unchanged software parts were conducted, ensuring no unintended side effects.
    - Code review: a code review of the software change was performed.
    - Software memory leakage tests: VLD (Visual Leak Detector) was used to check for memory leaks (a minimal usage sketch follows this table).
    - Static code analysis: Lint was used for static code analysis of changed software parts.
    These tests collectively confirmed that the modification did not introduce new issues.

- Maintenance of Design & Performance Requirements
  - Acceptance Criterion: The device, with the modification, must continue to meet all existing design and performance requirements.
  - Reported Performance: Design verification testing conducted to support the modification "met all design and performance requirements."
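For context on the memory-leakage testing listed above, the following is a minimal sketch of how VLD (Visual Leak Detector) is typically enabled in a C++ debug build: including vld.h in one translation unit causes VLD to report unfreed allocations when the process exits. The leaking function here is hypothetical and is not taken from the submission.

```cpp
// Minimal VLD usage sketch (Windows, debug build with Visual Leak Detector installed).
// Including vld.h in one translation unit is enough; at process exit VLD writes a
// report of any unfreed allocations, with call stacks, to the debugger output.
#ifdef _DEBUG
#include <vld.h>
#endif

// Hypothetical leak: memory allocated but never released.
void leakOnce() {
    int* buffer = new int[256];
    buffer[0] = 42;
    // missing: delete[] buffer;  -> VLD reports this block with its allocation call stack
}

int main() {
    leakOnce();
    return 0;  // VLD prints its leak report after main returns
}
```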

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

The document relates to a software modification and its verification. It does not mention a "test set" in the context of patient data, clinical trial participants, or images for accuracy testing. Instead, the testing described is primarily software verification on the device itself. Therefore, information regarding sample size for a test set and data provenance (country, retrospective/prospective) is not applicable in the context of this submission, as it's not a study involving patient data.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

This information is not applicable as the submission describes software verification of a fix for a display issue, not an assessment of diagnostic accuracy requiring expert interpretation of medical images or patient outcomes. The "ground truth" for the software correction was likely the expected correct display behavior defined by the system's specifications.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

This information is not applicable for the same reasons as points 2 and 3. There was no test set requiring adjudication by multiple experts.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document focuses on a software modification to correct a display issue, not on the comparative effectiveness of the device with or without AI assistance, or its impact on human reader performance.

6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

This document primarily describes software verification of a fix for a display issue within an existing navigation system. It does not discuss "standalone" algorithm performance in the sense of a new AI algorithm's diagnostic capability without human interaction. The device is an image-guided surgery system, inherently involving human operators.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

For this specific software modification, the "ground truth" for the verification was the expected correct behavior of the software display, specifically the correct orientation of anatomical slices relative to the instrument representation during different navigation workflows. This "ground truth" would have been established based on the system's design specifications for accurate image-guided navigation.
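A hedged sketch of how such an expected-behavior check could be expressed as an automated test is shown below; the enum, struct, and function names are assumptions used only to illustrate comparing the post-workflow-change view state against the specified orientation.

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical orientation labels for the displayed anatomical slices.
enum class SliceOrientation { Axial, Coronal, Sagittal };

// Hypothetical view state; a real test would query the navigation software itself.
struct NavigationView {
    SliceOrientation orientation;
    bool instrumentProjected;
};

// Stand-in for "switch the navigation workflow and re-read the view state".
NavigationView viewAfterWorkflowChange() {
    return {SliceOrientation::Axial, true};
}

int main() {
    // Expected behavior per the design specification: after changing the
    // navigation workflow, the axial view must still be rendered as axial
    // with the projected instrument representation intact.
    NavigationView view = viewAfterWorkflowChange();
    assert(view.orientation == SliceOrientation::Axial);
    assert(view.instrumentProjected);
    std::printf("slice orientation matches the expected display behavior\n");
    return 0;
}
```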

8. The sample size for the training set

This information is not applicable. The document describes a software correction to an existing device, not the development or training of a new algorithm (e.g., an AI model) that would require a "training set."

9. How the ground truth for the training set was established

This information is not applicable for the same reasons as point 8.

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).