K Number: K233903
Date Cleared: 2024-01-10 (30 days)
Regulation Number: 882.4560
Panel: NE
Predicate Device: K230661
Intended Use

The Zeta Cranial Navigation System is a stereotaxic image guidance system intended for the spatial positioning and orientation of neurosurgical instruments used by surgeons. The device is indicated only for cranial surgery where reference to a rigid anatomical structure can be identified, does not require rigid fixation of the patient, and does not require fixation of a navigated instrument guide to the patient. The system is intended to be used in operating rooms and in less acute surgical settings such as interventional procedure suites.

Device Description

The Zeta Cranial Navigation System is a stereotaxic, image guided planning and intraoperative guidance system enabling computer-assisted cranial interventional procedures. The system assists surgeons with the precise positioning of surgical instruments relative to patient anatomy by displaying the position of navigated surgical instruments relative to 3D preoperative medical scans.
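The core idea behind this kind of image guidance is a coordinate-frame mapping: the system tracks an instrument in the tracker's (camera's) coordinate frame and applies a rigid registration transform to display it in the coordinate frame of the preoperative 3D scan. The sketch below is purely illustrative; the Zeta system's actual registration pipeline is not described in the summary, and the transform and point values are made up.

```python
import numpy as np

def rigid_transform(rotation_deg_z: float, translation) -> np.ndarray:
    """Build a 4x4 homogeneous rigid transform (rotation about z, then translation).

    Illustrative only: a real registration would be estimated from patient data,
    not constructed from a single axis rotation.
    """
    theta = np.deg2rad(rotation_deg_z)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = translation
    return T

def map_to_image_space(T_image_from_tracker: np.ndarray, tip_tracker_mm) -> np.ndarray:
    """Map a tracked instrument tip (mm, tracker frame) into image (scan) coordinates."""
    p = np.append(np.asarray(tip_tracker_mm, dtype=float), 1.0)  # homogeneous point
    return (T_image_from_tracker @ p)[:3]

# Hypothetical registration: 90 degree rotation about z plus a (10, 0, 5) mm offset.
T = rigid_transform(90.0, [10.0, 0.0, 5.0])
tip_image = map_to_image_space(T, [1.0, 0.0, 0.0])
```

Displaying `tip_image` against the scan volume is what lets the surgeon see the navigated instrument relative to patient anatomy.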

AI/ML Overview

The provided FDA 510(k) summary for the Zeta Cranial Navigation System (K233903) describes its performance data and the studies conducted to demonstrate substantial equivalence to its predicate device (K230661). However, it does not contain specific statistical acceptance criteria, detailed results of a comparative effectiveness study for human readers (such as an MRMC study), or a standalone AI performance study.

The document primarily focuses on technical performance testing (accuracy, electrical safety, EMC, software V&V, and human factors) rather than clinical performance or AI algorithm-specific metrics.

Based on the provided text, here's what can be extracted and what information is not available:

Acceptance Criteria and Reported Device Performance

The document states that "The device passed all tests" for electrical safety, EMC, and software verification and validation. For bench testing, it indicates "Accuracy testing under different conditions," but does not specify the quantitative acceptance criteria for "accuracy" or the achieved performance values.

Table 1: Acceptance Criteria and Reported Device Performance (as inferred and stated)

| Performance Metric | Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
| --- | --- | --- |
| Accuracy testing | Sufficient accuracy for neurosurgical instrument spatial positioning and orientation (implied by "demonstrate substantial equivalence") | Not explicitly quantified in this document; described as "passed" |
| Electrical safety (IEC 60601-1) | Compliance with IEC 60601-1:2005 (3rd ed.) + A1:2012 | Device passed all tests |
| Software life cycle (IEC 62304) | Compliance with IEC 62304:2006 + Amd 1:2015 | Device passed all tests |
| EMC (IEC 60601-1-2) | Compliance with IEC 60601-1-2:2014 + A1:2021 | Device passed all tests |
| Usability (IEC 60601-1-6) | Compliance with IEC 60601-1-6, Edition 3.2 (2020-07) | Device passed all tests |
| Software concern level (V&V) | Software considered "Major" level of concern, requiring specific V&V documentation | Documentation provided as recommended by FDA guidance |
| Cybersecurity | Documentation provided as recommended by FDA guidance | Documentation provided as recommended by FDA guidance |
| Shelf life | Low likelihood of time-dependent product degradation | Not applicable; no shelf life specified, as degradation is considered low |
| Navigation frame rate (fps) | Likely linked to a sufficient update rate for real-time guidance | Uncapped (mean 21 fps) |

Study Details

The document details various types of testing, but does not describe a study involving human readers or an AI algorithm in the way typically associated with diagnostic AI tools (e.g., for image interpretation). The "Zeta Cranial Navigation System" is a stereotaxic image guidance system, where the "AI" or "machine vision" component likely refers to the system's ability to perform automatic, pinless, and markerless patient registration and instrument tracking, rather than an AI that interprets medical images for diagnostic purposes.

Therefore, many of the questions below related to AI study specifics (training sets, ground truth methodology for AI, MRMC studies) are not applicable (N/A) to the information provided in this 510(k) summary, as it does not describe such a study.

  1. Sample size used for the test set and the data provenance:

    • N/A (for clinical AI performance evaluation): The document mentions "accuracy testing under different conditions" including "simulated clinical procedures using virtual targets" and "dynamic patient motion," but it does not specify a "test set" in the context of patient data or clinical images for an AI diagnostic algorithm. The testing described is bench testing.
    • Data Provenance: The nature of the "virtual targets" and "simulated clinical procedures" means the data is synthetically generated or simulated in a lab setting, not from clinical patients.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • N/A: As this is bench testing of a navigation system's mechanical and software accuracy (e.g., tracking instruments relative to a known position), the "ground truth" is established through engineering and metrology standards rather than expert medical interpretations.
  3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • N/A: Not relevant for the type of bench testing described.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:

    • No: The document explicitly states "Clinical Data: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This confirms that an MRMC study comparing human performance with and without AI assistance was not performed or deemed necessary for this 510(k) submission.
  5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes (for specific functions): The document describes "Accuracy testing" and "Software Verification and Validation." The system's intrinsic functions like "automatic, pinless, and markerless" registration and "optical tracking" of instruments would have been tested in a standalone manner to demonstrate their performance against defined metrics. However, "standalone" in this context refers to the system's technical performance characteristics, not an AI interpreting medical images.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Engineering/Metrology Standards: For accuracy testing of a navigation system, ground truth is typically established through precise measurement devices (e.g., coordinate measuring machines, high-precision optical tracking systems) and simulated conditions. It's not based on medical expert consensus or pathological findings.
  7. The sample size for the training set:

    • N/A: The document does not describe an AI model that requires a "training set" in the context of machine learning for diagnostic image analysis. The "machine vision" and "structured light" technologies employed are likely more akin to traditional computer vision algorithms for feature recognition and localization, which might be "calibrated" or "tuned" but not typically "trained" on large datasets in the way a deep learning model would be.
  8. How the ground truth for the training set was established:

    • N/A: See point 7.
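The accuracy testing described above can be made concrete with a common navigation-accuracy metric. The summary does not state which metric was used; bench accuracy for navigation systems is often reported as target registration error (TRE), the Euclidean distance between the position the system reports for a target and its ground-truth position from a metrology reference (e.g., a coordinate measuring machine). The sketch below uses entirely hypothetical numbers.

```python
import numpy as np

def target_registration_error(reported_mm, ground_truth_mm) -> np.ndarray:
    """Per-target Euclidean error (mm) between navigated and reference positions."""
    reported = np.asarray(reported_mm, dtype=float)
    truth = np.asarray(ground_truth_mm, dtype=float)
    return np.linalg.norm(reported - truth, axis=1)

# Hypothetical bench data for three virtual targets (mm).
reported = [[10.1, 20.0, 30.2],
            [15.0, 25.3, 35.0],
            [20.2, 30.0, 40.1]]
truth = [[10.0, 20.0, 30.0],
         [15.0, 25.0, 35.0],
         [20.0, 30.0, 40.0]]

tre = target_registration_error(reported, truth)
mean_tre = tre.mean()  # mean TRE in mm across targets
```

A real protocol would report the mean and worst-case TRE under each condition (e.g., static versus simulated patient motion) against a predefined acceptance threshold.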

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).