K Number: K021980
Date Cleared: 2002-11-19 (155 days)
Product Code:
Regulation Number: 882.4560
Panel: NE
Reference & Predicate Devices: N/A
Intended Use

The StealthStation® System is intended as an aid for precisely locating anatomical structures in either open or percutaneous procedures. The StealthStation® System is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, a long bone, or vertebra, can be identified relative to a CT or MR based model or fluoroscopy images of the anatomy.

The Hip Module for the StealthStation is intended to precisely position instruments and implants in example procedures such as but not limited to:

Orthopedic Procedures:

Minimally Invasive Orthopedic Procedures

Total Hip Replacement (Primary and Revision)

Tumor Resection and Bone/Joint Reconstruction

Placement of Iliosacral Screws

Femoral Revision

Stabilization and Repair of Pelvic Fractures (Including But Not Limited To Acetabular Fractures)

Device Description

This submission allows a surgeon to utilize a modified version of the FluoroNav™ Software to place hip implants and repair and/or stabilize trauma sustained to the pelvic area. The Hip Module for the StealthStation® System is technically equivalent to the StealthStation® System, and the FluoroNav™ Module for the StealthStation®. All systems use either active or passive infrared markers to track a reference frame attached to the anatomy and to track surgical instruments. This information is correlated to the patient's CT, MR or fluoroscopic images of the anatomy.
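The summary does not disclose how the StealthStation® correlates tracked marker positions with the patient's images. A standard approach for this kind of optical navigation is rigid point-set registration, which can be sketched with the Kabsch/SVD method. This is an illustrative example only, not Medtronic's actual algorithm; the function name and tolerances are assumptions.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rigid transform (R, t) that best maps the source
    points onto the destination points (dst ~ R @ src + t), using the
    Kabsch/SVD method common in surgical-navigation registration."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance matrix between the centered sets.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a navigation workflow, `src` would be fiducial or marker positions in image coordinates (CT/MR) and `dst` the same points as localized by the infrared tracker, so the transform lets instrument positions be displayed on the patient's images.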

AI/ML Overview

This submission, K021980, pertains to the Hip Module Software for the StealthStation® System. The information provided is limited and focuses on establishing substantial equivalence to predicate devices rather than detailing specific performance studies with acceptance criteria for the new module. This is typical for a 510(k) submission where the primary goal is to demonstrate that a new device is as safe and effective as existing legally marketed devices.

Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, and expert validation are not explicitly present in the provided document. The document primarily states that "all verification and validation activities were performed by designated individual(s) and the results demonstrated substantial equivalence." This is a high-level statement that doesn't provide the granular detail requested.

Here's a breakdown based on the available information and what we can infer:

1. Table of Acceptance Criteria and Reported Device Performance:

Based on the provided document, specific, quantitative acceptance criteria and detailed device performance metrics for the Hip Module software itself are not explicitly stated. The document focuses on demonstrating substantial equivalence to predicate devices. This implies that the performance of the Hip Module is considered acceptable if it is comparable to that of the predicate devices.

| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Not explicitly defined for the Hip Module in this summary. The primary criterion is "Substantial Equivalence" to predicate devices. | The document states, "all verification and validation activities were performed by designated individual(s) and the results demonstrated substantial equivalence." This implies that the functional and safety performance of the Hip Module was deemed comparable to the predicate devices. No specific numerical performance metrics are provided. |

2. Sample Size Used for the Test Set and Data Provenance:

  • Sample Size for Test Set: Not explicitly stated.
  • Data Provenance: Not explicitly stated. Given it's a 510(k) submission focused on equivalence, it's possible that internal testing data (e.g., in-house verification and validation) might have been used, but no details are provided about the nature or origin of this data.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

  • Number of Experts: Not explicitly stated.
  • Qualifications of Experts: Not explicitly stated. The document mentions "designated individual(s)" performed verification and validation, but their specific qualifications (e.g., surgeon experience, engineering expertise) are not detailed.

4. Adjudication Method for the Test Set:

  • Adjudication Method: Not explicitly stated.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

  • MRMC Study Conducted: Not stated and highly unlikely for this type of 510(k) submission focused on a software module's equivalence. MRMC studies are generally used for assessing the impact of new diagnostic tools on reader performance, which isn't the primary focus here.
  • Effect Size of Human Readers Improvement with AI vs. Without AI Assistance: Not applicable, as no such study is indicated. This device is a navigation aid, not an AI for image interpretation.

6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

  • Standalone Study Conducted: Not explicitly detailed in terms of performance metrics. The submission focuses on device functionality and equivalence within the surgical navigation workflow. It's implied that the software's algorithms function correctly and accurately when used as intended, but no specific "standalone" performance study results (e.g., accuracy of algorithmic calculations against a ground truth without human intervention as a separate study) are provided. The "verification and validation activities" would likely have covered this in a general sense.

7. Type of Ground Truth Used:

  • Type of Ground Truth: Not explicitly stated. For a surgical navigation system, ground truth during verification and validation would typically involve:
    • Phantom studies: Using precisely manufactured anatomical phantoms with known fiducial marker locations and instrument paths to assess accuracy.
    • Cadaveric studies: Using cadavers to simulate surgical scenarios and verify instrument accuracy.
    • Technical specifications: Comparing device output to known or expected engineering specifications and tolerances.
      The document only broadly refers to "verification and validation activities," implying these types of methods were used to confirm its function, but without specific details.
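The phantom-study approach above is typically quantified as target registration error (TRE): the distance between each position the navigation system reports and the corresponding known position on the phantom. The sketch below is a generic illustration of that metric; the function names and the 2.0 mm tolerance are assumptions, not values from the submission.

```python
import numpy as np

def target_registration_error(measured, known):
    """Per-target Euclidean distance (same units as the inputs, e.g. mm)
    between navigated positions reported by the system and the known
    ground-truth positions machined into a phantom."""
    measured = np.asarray(measured, dtype=float)
    known = np.asarray(known, dtype=float)
    return np.linalg.norm(measured - known, axis=1)

def passes_accuracy_spec(measured, known, tol_mm=2.0):
    """Hypothetical acceptance check: mean TRE within a chosen tolerance."""
    return float(np.mean(target_registration_error(measured, known))) <= tol_mm
```

A verification protocol would run such a check across many phantom targets and tracker poses, but the 510(k) summary reports no comparable numbers.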

8. Sample Size for the Training Set:

  • Sample Size for Training Set: Not applicable in the context of this document. This is not an AI/machine learning model where a separate training set is typically discussed in this manner. The software is a navigation aid that processes imaging data (CT, MR, fluoroscopy) and tracks instruments. Its functionality is based on established geometric algorithms and image processing, not on learning from a "training set" of cases in the AI sense.

9. How the Ground Truth for the Training Set Was Established:

  • Ground Truth for Training Set Establishment: Not applicable for the reasons stated above.

In summary: The provided document is a 510(k) summary for a software module, primarily focusing on demonstrating substantial equivalence to predicate devices. It does not contain the detailed performance study results, acceptance criteria, sample sizes, or expert validation methods that would be expected for a comprehensive clinical effectiveness or AI algorithm validation study. The statement "all verification and validation activities were performed by designated individual(s) and the results demonstrated substantial equivalence" serves as the overarching conclusion regarding its safety and effectiveness.

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).