Search Results

Found 28 results

510(k) Data Aggregation

    K Number
    K252562
    Manufacturer
    Date Cleared
    2025-09-12

    (29 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Spine & Trauma Navigation is intended as an intraoperative image-guided localization system to enable open and minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system, to virtual computer image space on a patient's preoperative or intraoperative 2D or 3D image data.

    Spine & Trauma Navigation enables computer assisted navigation of medical image data, which can either be acquired preoperatively or intraoperatively by an appropriate image acquisition system. The software offers screw and interbody device planning and navigation with surgical instruments.

    The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, the pelvis, a long bone or vertebra can be identified relative to the acquired image (CT, MR, 3D fluoroscopic image reconstruction or 2D fluoroscopic image) and/or an image data based model of the anatomy.

    Device Description

    The Spine & Trauma Navigation is an image-guided surgery system for navigated treatments in the fields of spine and trauma surgery, in which the user may use image data based on CT, MR, 3D fluoroscopic image reconstruction (cone beam CT) or 2D fluoroscopic images. It offers different patient image registration methods and instrument calibrations to allow surgical navigation using optical tracking technology. To fulfill this purpose, it consists of software, Image Guided Surgery platforms, and surgical instruments.
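
    In commercial systems of this class, the registration step described above is typically a paired-point rigid registration: corresponding landmarks are identified in image space and touched with the tracked probe in patient space, and a least-squares rotation and translation between the two point sets is computed. Below is a minimal sketch of that computation (the Kabsch/SVD method); the landmark values are illustrative and this is a generic formulation, not Brainlab's actual implementation.

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping patient-space points
    onto their image-space counterparts (Kabsch/SVD method)."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (patient_pts - cp).T @ (image_pts - ci)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ci - R @ cp
    return R, t

# Illustrative check: recover a known pose from noiseless landmarks (mm).
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, (4, 3))                     # e.g., four bony landmarks
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_true = np.array([10.0, -5.0, 2.0])
R, t = rigid_register(pts @ R_true.T + t_true, pts)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```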

    Brainlab's spinal instrument portfolio includes common types of instruments used in image-guided surgery procedures, including patient referencing instruments, instruments for image registration, drill guides, drill bits, tracking arrays, awls and probes for open surgery, Starlink instrument adapters and arrays that allow the connection of suitable third-party instruments, an Instrument Calibration Matrix for calibration of instruments, sets of instruments to be used with the accessories Microscope Navigation and Cirq Arm System, the sterile instruments Disposable Trocar Insert Pedicle Access Needle and Disposable Clip-On Remote Control, and Sterilization Trays.

    Modified spine reference clamps and Sterilization Trays have been introduced as part of the Subject Device. The new Patient Reference Rod-Clamp Spine & Trauma belongs to the Patient Reference Clamp Spine & Trauma group, which consists of three clamps: two intended to achieve rigid fixation to bone and a third intended to achieve rigid fixation to a rod. This provides a solid interface for stiff reference array fixation, enabling navigated surgery with the Spine & Trauma Navigation software. The novel feature is the ability to attach the clamp to an existing rod, which is only possible with the Patient Reference Rod-Clamp Spine & Trauma. All clamps are delivered unsterile and require end-user sterilization.

    The Sterilization Trays serve as a rigid containment device intended for repeated use before, during, and after sterilization in healthcare facilities to store, organize, identify, and transport a set of specified instruments that require sterilization prior to use. The Sterilization Tray requires an additional sterile barrier system to maintain sterility. There are multiple variants of the Sterilization Trays, each offering designated slots and outlines for a defined set of compatible instruments.

    AI/ML Overview

    N/A

    K Number
    K233725
    Manufacturer
    Date Cleared
    2024-07-26

    (248 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Spine Navigation system is intended as an aid for precisely locating anatomical structures in either open or percutaneous spine procedures. Its use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to CT imagery of the anatomy.

    This can include spinal implant procedures, such as Posterior Pedicle Screw Placement in the sacro-lumbar region.

    The headset of the Spine Navigation system displays 2D stereotaxic screens and a virtual anatomy screen. The stereotaxic screen is indicated for correlating the tracked instrument location to the registered patient imagery. The virtual screen is indicated for displaying the virtual instrument location in relation to the virtual anatomy to assist in percutaneous visualization and trajectory planning.

    The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed stereotaxic information.

    Device Description

    The Medivis Spine Navigation System is an image-guided navigation system designed to assist surgeons in placing pedicle screws accurately during spinal surgery. The Medivis Spine Navigation System is comprised of software, a head-mounted display (HMD), passive reflective markers and reusable components. The device provides various image display options and manipulation features, primarily controlled through the touchscreen monitor. An HMD serves as an adjunct display and functions as an optical 3D tracking component for patient and surgical tool localization. The device registers patient data to the surgical environment through IR tracking.

    AI/ML Overview

    This FDA 510(k) clearance letter for the Medivis Spine Navigation System does not contain the detailed study information required to fully answer your request.

    The document primarily focuses on establishing substantial equivalence to a predicate device (Augmedics Ltd. xvision spine system (XVS) K190929) based on similar intended use and technological features. While it states that the device is intended for precisely locating anatomical structures, it does not provide any performance metrics, acceptance criteria, or details of a specific study proving the device meets these criteria.

    Here's a breakdown of what can and cannot be extracted from the provided text:

    Information that CANNOT be extracted from the document:

    • 1. A table of acceptance criteria and the reported device performance: This document does not list any specific performance criteria (e.g., accuracy in mm) or the reported performance outcomes of the Medivis Spine Navigation System.
    • 2. Sample size used for the test set and the data provenance: No information on any test set, its size, or where the data came from (country, retrospective/prospective) is provided.
    • 3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As no test set study is detailed, this information is absent.
    • 4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: No adjudication method is mentioned.
    • 5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: There is no mention of an MRMC study or any comparison of human performance with and without AI assistance.
    • 6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: No standalone performance study is detailed.
    • 7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): With no study described, the type of ground truth is not specified.
    • 8. The sample size for the training set: There's no mention of a training set or its size.
    • 9. How the ground truth for the training set was established: No information provided.

    What the document DOES provide, relevant to your questions:

    • Device Name: Medivis Spine Navigation System
    • Intended Use: The Spine Navigation system is intended as an aid for precisely locating anatomical structures in either open or percutaneous spine procedures, including Posterior Pedicle Screw Placement in the sacro-lumbar region. It aims to correlate tracked instrument location to registered patient imagery and display virtual instrument location relative to virtual anatomy for percutaneous visualization and trajectory planning.
    • Key Technology: Uses a head-mounted display (HMD) (specifically Microsoft HoloLens 2), passive reflective markers, IR tracking for patient and surgical tool localization, and augments reality with overlaid navigation information (2D and stereoscopic 3D images from DICOM data).
    • Predicate Device: Augmedics Ltd. xvision spine system (XVS) K190929. The substantial equivalence is based on similar intended use and technological features.

    Conclusion:

    The provided FDA 510(k) clearance letter indicates that the Medivis Spine Navigation System has been found substantially equivalent to a predicate device. However, it does not include the specific performance data, acceptance criteria, study methodologies (like sample size, ground truth establishment, expert qualifications, or adjudication methods), or details of clinical validation studies usually found in a comprehensive technical report or clinical study summary. These details would typically be part of the larger 510(k) submission, not necessarily summarized in the public clearance letter. You would need to access the full 510(k) summary or a separate clinical study report for such information.

    K Number
    K233228
    Manufacturer
    Date Cleared
    2024-06-14

    (260 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Spine Navigation and Robotic-Assistance Device is indicated for the spatial positioning and orientation of compatible surgical instruments, used by surgeons in Freehand Navigation Mode and/or in Robotic-Assisted Guidance Mode. Spine Navigation and Robotic-Assistance Device is used for spine surgeries, in open or percutaneous procedures, with the patient in a prone or lateral position. Freehand Navigation Mode and Robotic-Assisted Guidance Mode are based on a three-dimensional image volume (3D CT or CBCT image), on which the surgeon may perform the Intraoperative Surgical Planning.

    Spine Navigation and Robotic-Assistance Device is indicated to provide guidance for the placement of spinal bone screws relative to bony structures of the spine, in Freehand Navigation Mode and/or in Robotic-Assisted Guidance Mode, and for intervertebral disc access and preparation, including discectomy and bony resection, in Freehand Navigation Mode.

    Device Description

    The Spine Navigation and Robotic-Assistance Device is a medical device that provides stereotaxic guidance for spinal surgery. The Subject Device assists the surgeon in guiding and positioning surgical instruments through two modes: Freehand Navigation and Robotic-Assisted Guidance.

    The Subject Device includes:

    • A Navigation Station

    • A Robotic-Assisted Station

    • A Camera Station

    • Embedded Software

    • Integrated Guiding System components, composed of single use items, such as navigation Arrays, and reusable items, such as Patient Array fixation solutions.

    The navigation of the instruments is based on the standard and established techniques of navigation systems utilizing optical position determination technology. As with currently marketed optical tracking navigation systems, the operating principle of the navigation feature is based on an infrared camera that determines the 3D positions of markers, either passive reflective markers or active LED markers. This allows real-time tracking of the navigated instruments.
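
    A schematic sketch of the optical position determination described above: each camera view supplies a 2D projection of a marker, and the marker's 3D position follows from linear (DLT) triangulation across the two views. The camera matrices and point values below are illustrative, not taken from the submission.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two camera views.
    P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                        # homogeneous -> Euclidean

def project(P, X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative check: two cameras with a 200 mm baseline see one marker.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 512], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-200.0], [0.0], [0.0]])])
X_true = np.array([30.0, -40.0, 1500.0])       # marker position (mm)
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
assert np.allclose(X_hat, X_true)
```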

    The Subject Device is compatible with:

    • Specific instruments of DePuy Synthes, a Johnson & Johnson company. These instruments are intended to be guided in Robotic-Assisted Guidance Mode and/or Freehand Navigation Mode.

    • Several third-party 3D imaging systems.

    AI/ML Overview

    Based on the provided FDA 510(k) summary, the device is a Spine Navigation and Robotic-Assistance Device. The document focuses on demonstrating substantial equivalence to predicate devices rather than proving the device meets specific acceptance criteria through a standalone study with defined performance metrics for AI algorithms.

    The document does not describe an AI/ML-driven diagnostic or assistive device that requires a study to prove its performance against acceptance criteria typically associated with such technologies (e.g., sensitivity, specificity, AUC). Instead, it describes a robotic surgical assistance system.

    Therefore, the requested information regarding acceptance criteria, specific study designs (like MRMC), expert ground truth establishment, and training/test set details for proving AI/ML model performance is not present in this document.

    The document primarily focuses on:

    • Substantial Equivalence: Demonstrating that the new device is as safe and effective as a legally marketed predicate device.
    • Performance and Safety Testing: This section primarily lists the types of tests and the standards to which they were conducted (e.g., electrical safety, usability, mechanical testing, software verification, sterilization, biocompatibility, packaging validation). It mentions that "All test requirements were met, no new issues of safety or effectiveness were raised, and substantial equivalence was demonstrated." However, it does not provide specific quantitative acceptance criteria or results for the device's accuracy or performance in a clinical scenario (like an AI algorithm's diagnostic performance).

    Therefore, I cannot populate the table or answer the specific questions about AI/ML performance study details based on the provided text. The document does not describe a study that would generate such data.

    If the request was based on a misunderstanding and assumed this was an AI/ML diagnostic device, please clarify.

    However, I can extract information relevant to the types of tests performed and general statements about the device's performance from a regulatory perspective:

    Types of "Performance Data" and "Acceptance" (as detailed in the document):

    The document defines "Performance Data" not as clinical accuracy metrics in the AI sense, but rather as data demonstrating compliance with various engineering, safety, and quality standards. The "acceptance criteria" are implied by the successful completion of these tests in accordance with the listed international standards (e.g., IEC 60601-1, ASTM F2554-22).

    | Acceptance Criteria (Implied by Standards) | Reported Device Performance |
    |---|---|
    | Design Verification: conducted in accordance with ISO 14971 (risk management) and product requirements. | All design verification tests "were performed based on risk management activities... and product requirements." "All test requirements were met." |
    | Electrical Safety and EMC: compliance with IEC 60601-1 Ed 3.2 and IEC 60601-1-2 Ed 4.1. | Testing was "conducted in accordance with" the specified IEC standards. "All test requirements were met." |
    | Usability: compliance with IEC 60601-1-6 Ed 3.2 and IEC 62366-1 Ed 1.1. | "Usability was evaluated in accordance with" the specified IEC standards and FDA guidance. "All test requirements were met." |
    | Safety and Performance: compliance with IEC 80601-2-77 Ed 1.0 (robotic surgical equipment). | "Safety and performance tests were performed in accordance with" the specified IEC standard. "All test requirements were met." |
    | Software Verification and Validation: compliance with IEC 62304 Ed 1.1 and FDA guidances. | "Software development and testing activities were conducted in accordance with" the specified IEC standard and FDA guidances. "All test requirements were met." |
    | Mechanical Testing: compliance with ASTM F2554-22 (Positional Accuracy of Computer Assisted Surgical Systems). | "Bench testings were conducted in accordance with" the specified ASTM standard. "All test requirements were met." (Specific quantitative accuracy results are not provided in this summary, only that the test was done to the standard.) |
    | Sterilization & Shelf-life: compliance with ISO 11137-1, ISO 11137-2, ISO 17665-1, ANSI AAMI ST98. | "Sterilization testing activities were conducted in accordance with" the specified ISO standards; "reprocessing validation activities were evaluated and testing in accordance with" ANSI AAMI ST98. "All test requirements were met." |
    | Biocompatibility: compliance with ISO 10993-1, -5, -10, -11, -17, -18, -23. | "Biological evaluation was evaluated and testing in accordance with" the specified ISO standards. "All test requirements were met." |
    | Packaging Validation: compliance with ISO 11607-1 and ISO 11607-2. | "Packaging validation tests were performed in accordance with" the specified ISO standards. "All test requirements were met." |
    | Clinical Performance: not required for safety/effectiveness. | "Clinical testing was not required to demonstrate the safety and effectiveness of the Device." |
    | Animal Study: not required for safety/effectiveness. | "Animal performance testing was not required to demonstrate safety and effectiveness of the Device." |
    | Overall: device should not raise new issues of safety or effectiveness. | "Performance and Safety Testing activities have demonstrated that the Spine Navigation and Robotic-Assistance Device does not raise any concern of safety or effectiveness." "All test requirements were met, no new issues of safety or effectiveness were raised, and substantial equivalence was demonstrated." |

    Regarding the specific questions about an AI/ML performance study:

    1. A table of acceptance criteria and the reported device performance: As explained above, the "acceptance criteria" are compliance with various engineering and safety standards, and the "reported performance" is that these tests were conducted and "all test requirements were met." No specific quantifiable performance metrics for an AI algorithm (e.g., sensitivity, specificity, accuracy) are provided or indicated as part of the evaluation.
    2. Sample sizes used for the test set and the data provenance: Not applicable. The document describes compliance with engineering and safety standards, not a data-driven AI performance study.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth establishment is relevant for AI/ML performance studies, which this document does not describe.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
    • 5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This type of study is for evaluating the impact of AI on human reader performance, which is not described for this robotic guidance device.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable.
    • 7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable.
    8. The sample size for the training set: Not applicable.
    9. How the ground truth for the training set was established: Not applicable.

    In summary, the provided text describes a regulatory submission for a robotic surgical assistance system, focusing on its functional, safety, and quality characteristics, and its substantial equivalence to predicate devices, rather than a data-driven performance evaluation of a diagnostic AI/ML algorithm.

    Intended Use

    The Q Guidance System, when used with the Spine Guidance Software, is intended as a planning and intraoperative guidance system to enable open or percutaneous computer assisted surgery in adult and pediatric (adolescent) patients.

    The system is indicated for any surgical procedure on the spine in which the use of computer assisted planning and surgery may be appropriate. The system can be used for intraoperative guidance where a reference to a rigid anatomical structure such as the skull, pelvis or spine can be identified.

    The system assists in the positioning of instruments for procedures on the pelvis and spine, including:
    • Screw and Needle placement in the spine or pelvis.

    Device Description

    The Spine Guidance 5.0 system is an image guided stereotaxic, planning, and intraoperative guidance system intended to enable open or percutaneous computer-assisted surgery. It assists the surgeon in precisely positioning manual and powered instruments and locating patient anatomy during spinal surgery.

    The system is comprised of the Spine Guidance 5.0 Software, the Q Guidance System (computer platform), navigated accessories/instruments (e.g., powered drills, pointers), and various system components (e.g., calibration devices, navigation adaptors, patient/instrument trackers, etc.). The system provides intraoperative guidance to the surgeon using passive and active wireless optical tracking technologies. The computer platform consists of a computer, a camera, a large touchscreen monitor, and a small touchscreen monitor. The Spine Guidance 5.0 software functionality is described in terms of its capabilities for planning, registration, and navigation of medical devices.

    AI/ML Overview

    The provided document is an FDA 510(k) summary for the Stryker Spine Guidance Software (version 5.0) and associated instruments. It details the device's characteristics, intended use, and comparison to predicate devices, as well as the non-clinical testing performed to establish substantial equivalence.

    Here's a breakdown of the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Accuracy | System demonstrated 3D positional accuracy with a mean error ≤ 2.0 mm and trajectory angle accuracy with a mean error < 2.0 degrees. This performance was determined using anatomically representative phantoms and a subset of system components and features representing the worst-case combinations of all potential system components. |
    | Software | Software verification and validation testing was conducted as required by IEC 62304 and the FDA Guidance on General Principles of Software Validation (January 11, 2002). All requirements were met, and no new issues of safety or effectiveness were raised. |
    | Biocompatibility | The biocompatibility of all patient-contact materials was verified according to ISO 10993-1:2018 and the FDA guidance Use of International Standard ISO 10993-1, "Biological evaluation of medical devices - Part 1: Evaluation and testing within a risk management process" (September 2023). No new issues of safety or effectiveness were raised. |
    | Electrical Safety and EMC | Verified conformance to IEC 60601-1:2005, IEC 60601-1:2005/AMD1:2012, and IEC 60601-1:2005/AMD2:2020; to IEC 60601-1-2:2014, IEC 60601-1-2:2014/AMD1:2020, and CISPR 11 Group 1, Class A requirements; plus additional testing to verify compatibility with RFID devices operating in the 125-134 kHz and 13.56 MHz frequency bands. |
    | Sterilization | The reusable subject devices underwent steam sterilization validation to demonstrate that they can be expected to be sterile, with a sterility assurance level (SAL) of 10⁻⁶ after processing. The Tracking Instrument Battery underwent gamma sterilization validation per ISO 11137-2:2013/(R)2019 to demonstrate an SAL of 10⁻⁶. |
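
    For context, the two accuracy quantities above reduce to simple geometric computations: positional error is the Euclidean distance between measured and planned target points, and trajectory-angle error is the angle between measured and planned direction vectors. A sketch with made-up phantom numbers (not data from the submission):

```python
import numpy as np

def positional_error(p_measured, p_planned):
    """3D positional error in mm (Euclidean distance)."""
    return np.linalg.norm(np.asarray(p_measured) - np.asarray(p_planned))

def trajectory_angle_error(d_measured, d_planned):
    """Angle in degrees between measured and planned trajectory directions."""
    a = np.asarray(d_measured, float); a /= np.linalg.norm(a)
    b = np.asarray(d_planned, float); b /= np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))

# Hypothetical phantom measurement: both values fall within the cited criteria.
print(positional_error([10.4, -2.1, 33.0], [10.0, -2.0, 32.0]))    # ~1.08 mm <= 2.0
print(trajectory_angle_error([0.02, 0.01, 1.0], [0.0, 0.0, 1.0]))  # ~1.28 deg < 2.0
```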

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify the exact sample size for the test set used in the accuracy testing. It mentions "anatomically representative phantoms" and "a subset of system components and features that represent the worst-case combinations of all potential system components." The data provenance (country of origin, retrospective/prospective) is not mentioned.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not mention the use of experts to establish a ground truth for the test set. The accuracy testing was performed using "anatomically representative phantoms," implying a technical or engineering validation rather than clinical or expert-based ground truth.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No adjudication method for a test set is described, as the testing focused on technical accuracy using phantoms, not on human interpretation or clinical outcomes adjudicated by experts.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    The document explicitly states: "No clinical testing was required to support this submission." Therefore, no MRMC comparative effectiveness study was performed or reported. The device is a guidance system, not an AI diagnostic tool requiring human reader improvement comparison.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    The document describes non-clinical testing for system accuracy and software validation. The reported accuracy of "mean error ≤ 2.0 mm for positional displacement and < 2.0 degrees for trajectory angle displacement" refers to the standalone performance of the system as measured against known phantom parameters. This suggests that the algorithm's performance in guiding instruments was assessed directly against a predefined ground truth in a controlled environment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the accuracy testing, the ground truth was established by the precise measurements of "anatomically representative phantoms." This implies a technical or engineered ground truth rather than expert consensus, pathology, or outcomes data.

    8. The sample size for the training set

    The document does not provide information about a training set. This is a 510(k) summary focusing on a device that is substantially equivalent to a predicate, not necessarily a de novo submission for a novel AI algorithm requiring a specific training set size to establish performance. The software verification and validation would have involved testing against requirements, but not necessarily a "training set" in the machine learning sense.

    9. How the ground truth for the training set was established

    Since no training set is mentioned, the method for establishing its ground truth is not provided.

    K Number
    K223553
    Manufacturer
    Date Cleared
    2023-08-02

    (250 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Spine Planning is intended for pre- and intraoperative planning of open and minimally invasive spinal procedures. It displays digital patient images (CT, Cone Beam CT, MR, X-ray) and allows measurement and planning of spinal implants like screws and rods.

    Device Description

    The Spine Planning software allows the user to plan spinal surgery pre-operatively or intra-operatively. The software is able to display 2D X-ray images and 3D datasets (e.g., CT or MR scans). The software consists of features for automated labeling of vertebrae, proposals for screw and rod implants, and proposals for measurement of spinal parameters.

    The device can be used in combination with spinal navigation software during surgery, where preplanned or intra-operatively created information can be displayed, or solely as a pre-operative tool to prepare the surgery.

    AI/ML algorithms are used in Spine Planning for

    • Detection of landmarks on 2D images for vertebrae labeling and measurement, and
    • Vertebra detection on Digitally Reconstructed Radiograph (DRR) images of 3D datasets for atlas registration (labeling of the vertebrae).

    The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach. The algorithm was developed using a controlled internal process that spans from inspection of the input data to training and verification of the algorithm.
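
    The summary states only that a CNN was trained with supervised learning; it does not disclose the architecture. A common formulation for landmark detection of this kind is heatmap regression, sketched below in PyTorch; the network shape, landmark count, and training details are assumptions for illustration, not Brainlab's actual design.

```python
import torch
import torch.nn as nn

class LandmarkNet(nn.Module):
    """Toy encoder-decoder that regresses one heatmap per landmark;
    the peak of each output channel is the predicted landmark location."""
    def __init__(self, n_landmarks=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, n_landmarks, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One supervised training step against target heatmaps (shapes illustrative;
# real targets would be Gaussians centered on annotated landmarks).
model = LandmarkNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xray = torch.randn(2, 1, 256, 256)      # batch of single-channel X-ray crops
target = torch.rand(2, 12, 256, 256)    # stand-in for annotated heatmaps
loss = nn.functional.mse_loss(model(xray), target)
opt.zero_grad(); loss.backward(); opt.step()
```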

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Spine Planning 2.0 device, based on the provided document:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of acceptance criteria with corresponding performance metrics in a pass/fail format. However, based on the Performance Data section, we can infer the areas of assessment and general performance claims. The "Reported Device Performance" column will reflect the general findings described in the text.

    | Acceptance Criteria (Inferred from Performance Data) | Reported Device Performance |
    |---|---|
    | Software Verification | Requirements met through integration and unit tests, including SOUP items and cybersecurity. Newly added components underwent integration tests. |
    | AI/ML Detected X-Ray Landmarks Assessment | Quantified object detection; quantified quality of vertebra level assignment; quantified quality of landmark predictions; quantified performance of observer view direction for 2D X-rays. |
    | Screw Proposal Algorithm Evaluation (Comparison to Predicate) | Thoracic and lumbar pedicle screw proposals generated by the new algorithm were found to be similar to those generated by the predicate algorithm. |
    | Usability Evaluation | No critical use-related problems identified. |

    Study Details

    The provided text describes several evaluations rather than a single, unified study with a comprehensive design. Information for some of the requested points is not explicitly stated in the document.

    2. Sample size used for the test set and the data provenance:

    • AI/ML Detected X-Ray Landmarks Assessment:
      • Sample Size: Not explicitly stated.
      • Data Provenance: "2D X-rays from the Universal Atlas Transfer Performer 6.0." This suggests either a curated dataset or potentially synthetic data within a software environment, but specific origin (e.g., country, hospital) or whether it was retrospective/prospective is not provided.
    • Screw Proposal Algorithm Evaluation:
      • Sample Size: Not explicitly stated.
      • Data Provenance: Not explicitly stated, but implies the use of test cases to generate screw proposals for comparison.
    • Usability Evaluation:
      • Sample Size: Not explicitly stated.
      • Data Provenance: Not explicitly stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • AI/ML Detected X-Ray Landmarks Assessment: Not explicitly stated in the provided text. The document mentions "quantifying" various quality aspects, which implies a comparison to a known standard or expert annotation, but details are missing.
    • Screw Proposal Algorithm Evaluation: Not explicitly stated. The comparison is "to the predicate and back-up algorithms," suggesting an algorithmic ground truth or comparison standard rather than human expert ground truth for individual screw proposals.
    • Usability Evaluation: Not applicable, as usability testing focuses on user interaction rather than ground truth for clinical accuracy.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not explicitly stated for any of the described evaluations.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in the provided text. The evaluations focus on the algorithm's performance or similarity to a predicate, and usability for the intended user group.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, the AI/ML Detected X-Ray Landmarks Assessment and the Screw Proposal Algorithm Evaluation appear to be standalone algorithm performance assessments.
      • The "AI/ML Detected X-Ray Landmarks Assessment" explicitly evaluates the AI/ML detected landmarks.
      • The "Screw Proposal Algorithm Evaluation" compares the new algorithm's proposals to existing algorithms, indicating a standalone algorithmic assessment.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • AI/ML Detected X-Ray Landmarks Assessment: Inferred to be a form of expert-defined or algorithm-defined gold standard given the quantification of object detection, quality of labeling, and landmark predictions. The source "Universal Atlas Transfer Performer 6.0" might imply a reference standard built within that system.
    • Screw Proposal Algorithm Evaluation: The ground truth used for comparison was the output of the "predicate and back-up algorithms," implying an algorithmic gold standard.
    • Usability Evaluation: Ground truth is not applicable in the sense of clinical accuracy; rather, the measure is the identification of "critical use-related problems" by users during testing.

    8. The sample size for the training set:

    • Not explicitly stated for the AI/ML algorithms mentioned. The document only states that the "AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach" and was developed using a controlled internal process spanning from inspection of the input data to training and verification of the algorithm.

    9. How the ground truth for the training set was established:

    • Not explicitly stated. Given it's a "Supervised Learning approach," it would imply that the training data was meticulously labeled, likely by experts (e.g., radiologists, orthopedic surgeons) or through a highly curated process, but the document does not elaborate on this.

    K Number
    K223424
    Device Name
    Spine Auto Views
    Date Cleared
    2023-07-13

    (241 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Spine Auto Views is a non-invasive image analysis software package which may be used in conjunction with CT images to aid in the automatic generation of anatomically focused multi-planar reformats and automatically export results to predetermined DICOM destinations.

    Spine Auto Views assists clinicians by providing anatomically focused reformats of the spine, with the ability to apply anatomical labels of the vertebral bodies and intervertebral disc spaces.

    Spine Auto Views may be used for multiple care areas and is not specific to any disease state. It can be utilized for the review of various types of exams including trauma, oncology, and routine body.

    Device Description

    Spine Auto Views is a software analysis package designed to generate batch reformats and apply labels to the spine. It is intended to streamline the process of generating clinically relevant batch reformat outputs that are requested for many CT exam types.

    Spine Auto Views can automatically generate patient-specific, anatomically focused spine reformats. Spine Auto Views brings a state-of-the-art deep learning algorithm that generates oblique axial reformats, appropriately angled through each disc space, without the need for a user interface or human interaction. 3D curved coronal and curved sagittal views of the spine, as well as traditional reformat planes, can all be generated with Spine Auto Views, with no manual interaction required. Vertebral bodies and disc spaces can be labeled, and all series networked to the desired DICOM destination(s), ready to read. The automated reformats may help in providing a consistent output of anatomically oriented images, labeled and presented to the interpreting physician ready to read.
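
    Once a disc center and orientation have been detected, producing the oblique axial reformat itself is standard multi-planar resampling. The sketch below shows only that resampling step (the deep-learning disc detection is the proprietary part); function names, coordinate conventions, and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, normal, size=128, spacing=1.0):
    """Resample a square oblique plane through `center` (voxel coords),
    oriented perpendicular to `normal`, from a 3D CT volume."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])        # first in-plane axis
    if np.linalg.norm(u) < 1e-6:            # normal parallel to z
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                      # second in-plane axis
    r = (np.arange(size) - size / 2) * spacing
    uu, vv = np.meshgrid(r, r)
    pts = center + uu[..., None] * u + vv[..., None] * v   # (size, size, 3)
    return map_coordinates(volume, pts.transpose(2, 0, 1), order=1)

# Illustrative use: a plane tilted 10 degrees about x through a synthetic volume.
vol = np.random.rand(64, 64, 64)
tilt = np.deg2rad(10)
sl = oblique_slice(vol, center=np.array([32.0, 32.0, 32.0]),
                   normal=np.array([0.0, np.sin(tilt), np.cos(tilt)]))
print(sl.shape)    # (128, 128)
```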

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study proving the device meets them, based on the provided text:


    Acceptance Criteria and Device Performance Study for Spine Auto Views

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Definition | Reported Device Performance (Spine Auto Views) |
    |---|---|
    | Algorithm capability to automatically detect intervertebral discs: successful detection of position and orientation of intervertebral discs. | Passed. The algorithm successfully passed the defined acceptance criteria for automatically detecting the position and orientation of intervertebral discs using a database of retrospective CT exams. |
    | User acceptance of oblique axial reformats: reader acceptance of automatically generated oblique axial reformats. | Accepted greater than 95% of the time for all readers. The reader evaluation concluded that Spine Auto Views oblique axial reformats generated user-acceptable results over 95% of the time for all readers. |
    | User acceptance of curved coronal and curved sagittal reformats: reader assessment of automatically generated curved coronal and curved sagittal batch reformats compared to standard views. | Assessed by readers, who compared them with corresponding standard coronal and sagittal views. (A specific success rate is not quantified in the provided text, but is implied to be satisfactory given the overall conclusion.) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: A database of retrospective CT exams for algorithm validation and a sample of clinical CT images for the reader study. The exact number of exams/images for each is not specified in the document.
    • Data Provenance: The document states that the database of exams for algorithm validation was "representative of the clinical scenarios where Spine Auto Views is intended to be used, with consideration of acquisition parameters and patient characteristics." No specific country of origin is mentioned, but the mention of "retrospective CT exams" confirms the nature of the data.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (e.g., 2+1, 3+1). For the reader study, it indicates that readers assessed the reformats, but it doesn't detail how discrepancies or consensus were handled if multiple readers were involved in rating the same case.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? A reader study was performed to assess user acceptance, which is a form of MRMC study in that multiple readers evaluated cases. However, it was not explicitly a "comparative effectiveness study" with and without AI assistance as described in the prompt.
    • Effect Size (AI vs. No AI): The study focused on the acceptance of the AI-generated reformats, implying a comparison against traditional manual generation (which would be "without AI assistance" for reformats). The text states that "Spine Auto Views oblique axial reformats generates user acceptable results greater than 95% of the time for all readers." This indicates a high level of acceptance for the AI-generated images, suggesting a positive effect compared to the traditional manual process, which the AI aims to streamline and automate. However, a quantitative effect size in terms of clinical improvement or efficiency gain compared to a "without AI" baseline is not provided. The device's primary benefit is automation ("no manual interaction required"), implying an improvement in workflow efficiency and consistency over manual methods.

    6. Standalone (Algorithm Only) Performance

    • Was standalone performance done? Yes, an engineering validation of the Spine Auto Views algorithm's capability to automatically detect the position and orientation of intervertebral discs was performed as a standalone assessment. This evaluated the algorithm's performance independent of human interaction or interpretation in a clinical setting.

    7. Type of Ground Truth Used

    • Algorithm Validation: The ground truth for the algorithm validation (disc detection) seems to have been established through a reference standard derived from the "database of retrospective CT exams." While the exact method (e.g., manual expert annotation, pathology correlation) is not explicitly stated, it implies a reliable, established truth against which the algorithm's detections were compared.
    • Reader Study: For the reader study, the "user acceptable results" and comparison with "corresponding standard coronal and standard sagittal views" suggest a ground truth based on expert consensus/clinical acceptability by the evaluating readers.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set used for the deep learning algorithm. It only mentions a "database of retrospective CT exams" for validation.

    9. How Ground Truth for Training Set Was Established

    The document does not provide details on how the ground truth for the training set was established. It only refers to the validation set and its ground truth.

    K Number
    K231668
    Date Cleared
    2023-07-07

    (30 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Spine CAMP™ is a fully-automated software that analyzes X-ray images of the spine to produce reports that contain static and/or motion metrics. Spine CAMP™ can be used to obtain metrics from sagittal plane radiographs of the lumbar and/or cervical spine, and it can be used to visualize intervertebral motion via an image registration method referred to as "stabilization." The radiographic metrics can be used to characterize and assess spinal health in accordance with established guidance. For example, common clinical uses include assessing spinal stability, alignment, degeneration, fusion, motion preservation, and implant performance. The metrics produced by Spine CAMP™ are intended to be used to support qualified and licensed professional healthcare practitioners in clinical decision making for skeletally mature patients of age 18 and above.

    Device Description

    Spine CAMP™ is a fully-automated image processing software device. It is designed to be used with X-ray images and is intended to aid medical professionals in the measurement and assessment of spinal parameters. Spine CAMP™ is capable of calculating distances, angles, linear displacements, angular displacements, and mathematical combinations of these metrics to characterize the morphology, alignment, and motion of the spine. These analysis results are presented in the form of reports, annotated images, and visualizations of intervertebral motion to support their interpretation.
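
    As an illustration of the kind of metrics described, the sketch below computes a disc-height distance, an intervertebral angle from endplate landmarks on a lateral radiograph, and an angular displacement across a flexion-extension pair. The landmark layout and values are assumptions for illustration, not Spine CAMP™'s actual definitions.

```python
import numpy as np

def disc_height(inf_endplate_mid, sup_endplate_mid, mm_per_px):
    """Distance between adjacent endplate midpoints, converted to mm."""
    return np.linalg.norm(np.asarray(inf_endplate_mid, float)
                          - np.asarray(sup_endplate_mid, float)) * mm_per_px

def intervertebral_angle(upper_endplate, lower_endplate):
    """Signed angle (degrees) between two endplate lines, each given as
    (anterior_xy, posterior_xy) landmark pairs."""
    def slope_angle(pts):
        (ax, ay), (px, py) = pts
        return np.arctan2(py - ay, px - ax)
    return np.degrees(slope_angle(upper_endplate) - slope_angle(lower_endplate))

print(disc_height((35, 30), (35, 42), mm_per_px=0.8))   # 9.6 mm
flexion = intervertebral_angle(((10, 20), (60, 24)), ((10, 44), (60, 40)))
extension = intervertebral_angle(((10, 20), (60, 18)), ((10, 44), (60, 46)))
print(flexion - extension)   # angular displacement across the motion pair
```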

    AI/ML Overview

    The provided text describes the Spine CAMP™ (1.1) device, an automated software for analyzing X-ray images of the spine, and refers to performance data used to demonstrate its substantial equivalence to a predicate device. However, the text contains neither a detailed table of acceptance criteria nor a comprehensive study report with specific performance metrics (e.g., accuracy, sensitivity, specificity) compared against those criteria. It primarily focuses on the comparison to a predicate device and general claims of equivalence.

    Based on the information provided, here's what can be extracted and inferred regarding the acceptance criteria and study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The text does not explicitly provide a table of acceptance criteria with specific quantitative thresholds (e.g., "accuracy > X%, sensitivity > Y%") nor detailed reported device performance against such criteria. Instead, it states that "Statistical correlations and equivalence tests were performed by directly comparing vertebral landmark coordinates, image calibration, and intervertebral measurements between Spine CAMP™ v1.1 and the predicate device as well as spinopelvic measurements between Spine CAMP™ v1.1 and the reference device. This analysis demonstrated correlation and statistical equivalence for all variables evaluated."

    This implies that the acceptance criteria were based on demonstrating statistical equivalence or strong correlation to the predicate device (Spine CAMP™ v1.0) and a "reference device" (QMA for spinopelvic measurements) for the identified variables. The exact metrics and their thresholds for establishing "statistical equivalence" are not detailed.
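
    For readers unfamiliar with equivalence testing: a common procedure for paired measurement comparisons is two one-sided tests (TOST), sketched below. The ±0.5° margin and the data are invented for illustration; the document does not name the procedure or the margins actually used.

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, margin):
    """Two one-sided tests on paired differences: equivalence is claimed
    when the mean difference lies within +/-margin (both p-values small)."""
    d = np.asarray(a) - np.asarray(b)
    p_lower = stats.ttest_1samp(d, -margin, alternative='greater').pvalue
    p_upper = stats.ttest_1samp(d, margin, alternative='less').pvalue
    return max(p_lower, p_upper)

# Hypothetical v1.1 vs. predicate angle measurements, +/-0.5 degree margin.
rng = np.random.default_rng(1)
predicate = rng.normal(12.0, 3.0, 200)
v11 = predicate + rng.normal(0.05, 0.2, 200)   # small, tightly bounded offset
print(tost_paired(v11, predicate, margin=0.5)) # small p-value => equivalent
```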

    What is present regarding "performance":

    • Performance Goal: "demonstrated correlation and statistical equivalence for all variables evaluated."
    • Variables Evaluated: Vertebral landmark coordinates, image calibration, intervertebral measurements, spinopelvic measurements.
    • Result: "This analysis demonstrated correlation and statistical equivalence for all variables evaluated."

    Without an explicit table, we cannot populate one.

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Test Set:
      • 215 lateral cervical spine radiographs
      • 232 lateral lumbar spine radiographs
    • Data Provenance: The text does not explicitly state the country of origin. It indicates that the dataset "had not been used to train any of the AI models," implying it was a test set. Whether the data was retrospective or prospective is not specified, but the use of an existing dataset (previously analyzed by Spine CAMP™ v1.0) suggests it was retrospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of those Experts

    • Number of Experts: Five experienced operators.
    • Qualifications of Experts: Described as "experienced operators." No specific qualifications like "radiologist with 10 years of experience" are provided. It's implied they were trained professionals capable of using the "reference device, QMA, for spinopelvic measurements."

    4. Adjudication Method for the Test Set

    The text states: "this dataset was analyzed by five experienced operators using the reference device, QMA, for spinopelvic measurements." This implies that the measurements from these five operators using the QMA device were used to establish the reference standard (ground truth) for spinopelvic measurements. It does not specify an adjudication method like 2+1 or 3+1 if there were discrepancies among the operators, or if their measurements were averaged/concatenated to form the ground truth. It simply states they "analyzed" the data.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • MRMC Study: Yes, an indirect form of comparative effectiveness was conducted by having "five experienced operators" analyze the dataset using a "reference device" (QMA) to provide a ground truth for comparison with the AI's spinopelvic measurements. The primary comparison in the study, however, was between Spine CAMP™ v1.1 and the predicate device (Spine CAMP™ v1.0), and between Spine CAMP™ v1.1 and the human-generated "reference device" (QMA) data.
    • Effect Size of Human Readers Improving with AI vs. Without AI Assistance: This specific metric is not provided in the text. The study did not appear to be designed as an MRMC study comparing human reader performance with and without AI assistance; rather, it compared the AI's performance to established methods (predicate device and human-operated reference device).

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the testing described appears to be a standalone evaluation of the Spine CAMP™ v1.1 algorithm. It mentions "evaluating Spine CAMP™ v1.1 performance on a large dataset" and comparing its outputs to those of the predicate device and the reference device (QMA operated by humans). This focuses on the algorithm's output directly, rather than how it changes human workflow or decision-making.

    7. The Type of Ground Truth Used

    • For Intervertebral & Image Calibration Measurements: The ground truth appears to be implicitly established by the predicate device's (Spine CAMP™ v1.0) outputs. The study performed "statistical correlations and equivalence tests... between Spine CAMP™ v1.1 and the predicate device."
    • For Spinopelvic Measurements: The ground truth was established by the measurements from five experienced operators using a "reference device" (QMA). This can be considered a form of "expert consensus" or "expert measurement" acting as the reference standard.
    • No mention of pathology or outcomes data as ground truth.

    8. The Sample Size for the Training Set

    The text states: "Spine CAMP's primary component, the AI Engine, was updated by retraining its AI models with more imaging for improved generalization and performance." However, the specific sample size for the training set is not provided.

    9. How the Ground Truth for the Training Set Was Established

    The text mentions that the AI models were "retrained." It does not describe how the ground truth for this training data was established. It only states that the test dataset "had not been used to train any of the AI models," implying separate data for training and testing.

    K Number
    K221632
    Device Name
    Spine CAMP™
    Date Cleared
    2022-10-18

    (134 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Spine CAMP™ is a fully-automated software that analyzes X-ray images of the spine to produce reports that contain static and/or motion metrics. Spine CAMP™ can be used to obtain metrics from sagittal plane radiographs of the lumbar and/or cervical spine and it can be used to visualize intervertebral motion via an image registration method referred to as "stabilization". The radiographic metrics can be used to characterize and assess spinal health in accordance with established guidance. For example, common clinical uses include assessing spinal stability, alignment, degeneration, fusion, motion preservation, and implant performance. The metrics produced by Spine CAMP™ are intended to be used to support qualified and licensed professional healthcare practitioners in clinical decision-making for skeletally mature patients of age 18 and above.

    Device Description

    Spine CAMP™ is a fully-automated image processing software device. It is designed to be used with X-ray images and is intended to aid medical professionals in the measurement and assessment of spinal parameters. Spine CAMP™ is capable of calculating distances, angles, linear displacements, angular displacements, and mathematical combinations of these metrics to characterize the morphology, alignment, and motion of the spine. These analysis results are presented in the form of reports, annotated images, and visualizations of intervertebral motion to support their interpretation.

    AI/ML Overview

    The Spine CAMP™ device uses automated software to analyze X-ray images of the spine and produce reports containing static and/or motion metrics. It can be used to obtain metrics from sagittal plane radiographs of the lumbar and/or cervical spine and visualize intervertebral motion. The metrics characterize and assess spinal health, including stability, alignment, degeneration, fusion, motion preservation, and implant performance. These metrics support clinical decision-making for skeletally mature patients aged 18 and above.

    Here's an analysis of the acceptance criteria and study proving its efficacy:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria in terms of thresholds (e.g., "accuracy > X%"). Instead, it focuses on demonstrating statistical correlation and equivalence with the predicate device, KIMAX QMA®. The performance is evaluated by comparing the outputs of Spine CAMPTM with those of the predicate device.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Functional equivalence: device functions as intended. | "The software functioned as intended and all results observed were as expected." |
    | Correlation and statistical equivalence with predicate device. | "Statistical correlations and equivalence tests were performed by directly comparing vertebral landmark coordinates, image calibration, and radiographic metrics between Spine CAMP™ and the predicate device. This analysis demonstrated correlation and statistical equivalence for all variables evaluated." This indicates that Spine CAMP™'s automated measurements (vertebral landmark coordinates, image calibration, and radiographic metrics) are highly consistent with, and statistically indistinguishable from, those produced manually by experienced operators using the predicate device. The "acceptance" is implicitly that these correlations and equivalences meet appropriate statistical thresholds for clinical interchangeability. |
    | No new or different safety/effectiveness questions. | "The minor differences between the subject and predicate devices (i.e., methods by which the inputs to the results calculator are produced) do not raise new or different questions regarding safety and effectiveness when used as labeled." |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 215 lateral cervical spine radiographs and 232 lateral lumbar spine radiographs.
    • Data Provenance: The document does not explicitly state the country of origin. It indicates that the data was previously analyzed by experienced operators using the predicate device, suggesting it is retrospective data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Five experienced operators.
    • Qualifications: The document states "experienced operators" without further specific qualifications (e.g., radiologist, years of experience). However, their role was to generate ground truth using the predicate device.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method like 2+1 or 3+1. The ground truth for the test set was established by having "five experienced operators using the predicate device." It's implied that these outputs from the predicate device (manual analysis) were directly used as the ground truth. There's no mention of a consensus process among the five operators to define a single ground truth per case if their results differed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a traditional MRMC comparative effectiveness study assessing whether human readers improve with AI assistance versus without it was not done. The study focused on demonstrating that the device itself performs comparably to the predicate device, which is operated manually by humans. The comparison is between Spine CAMP™ (AI-driven automated measurements) and the predicate device (human-driven manual measurements). Therefore, an effect size for how much human readers improve with AI versus without AI assistance is not applicable to this study's design.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the primary study described is a standalone performance evaluation. Spine CAMP™ is fully-automated software, and the study directly compared its automated outputs to the manually generated ground truth from the predicate device. While the device's output is intended to "support qualified and licensed professional healthcare practitioners in clinical decision-making," the performance assessment itself is of the algorithm's standalone capabilities.

    7. The Type of Ground Truth Used

    The ground truth used was expert assessment using a predicate device. Specifically, it was derived from "experienced operators using the predicate device" (KIMAX QMA®) who manually obtained measurements.

    8. The Sample Size for the Training Set

    The document does not specify the sample size for the training set. It mentions that "The data labels used to train Spine CAMP™'s AI models were derived directly from the KIMAX QMA® technology," but no numbers are provided for this training data.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established by manual analysis using the predicate device, KIMAX QMA®. "The data labels used to train Spine CAMP™'s AI models were derived directly from the KIMAX QMA® technology." This implies that experienced users of the predicate device generated the ground-truth labels that were then used to train the AI models within Spine CAMP™.

    Intended Use

    Spine & Trauma Navigation is intended as an intraoperative image-guided localization system to enable open and minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system, to virtual computer image space on a patient's preoperative or intraoperative 2D or 3D image data.

    Spine & Trauma Navigation enables computer-assisted navigation of medical image data, which can either be acquired preoperatively or intraoperatively by an appropriate image acquisition system. The software offers screw and interbody device planning and navigation with surgical instruments.

    The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, the pelvis, a long bone or vertebra, can be identified relative to the acquired image (CT, MR, 3D fluoroscopic image reconstruction or 2D fluoroscopic image) and/or an image data-based model of the anatomy.

    As an accessory to the Spine & Trauma Navigation, the Alignment System Spine is intended to support the surgeon in achieving a pre-defined screw trajectory with surgical instruments during the surgical procedure. It is used for spinal screw placement procedures.

    Device Description

    The Spine & Trauma Navigation is an image guided surgery system for navigated treatments in the fields of spine and trauma surgery; the user may work with image data based on CT, MR, 3D fluoroscopic image reconstruction (cone beam CT) or 2D fluoroscopic images from the compatible imaging device LoopX. It offers different patient image registration methods as well as instrument selection and calibration to enable surgical navigation using optical tracking technology.

    The software is installed on a mobile or fixed Image Guided Surgery (IGS) platform to support the surgeon in clinical procedures by displaying tracked instruments in the patient's image data. The IGS platforms comprise a mobile Monitor Cart or a fixed ceiling-mounted display, together with an infrared camera for image guided surgery purposes.

    The Spine & Trauma Navigation consists of the following components:

    • Software enabling instrument selection, different registration methods (e.g. surface matching; a generic registration sketch follows this list) as well as navigation in different types of images
    • IGS platforms
    • Surgical instruments for navigation, patient referencing and registration
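
    As a generic illustration of the mathematics behind paired-point rigid registration (not Brainlab's proprietary method; surface matching additionally solves for point correspondence, which is omitted here), the classic Kabsch/Procrustes solution can be sketched as follows:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (R, t) mapping src points onto dst."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Illustrative use: map image-space landmarks onto tracked patient space
    src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    theta = np.radians(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])
    R, t = rigid_register(src, dst)
    print(np.allclose(src @ R.T + t, dst))           # True
    ```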

    The Alignment System Spine is an accessory to the Spine & Trauma Navigation. It serves as a holding and positioning system to support the surgeon in reaching a pre-defined screw trajectory with surgical instruments. The device must first be manually pre-aligned to the region of interest by opening the brakes of the Cirq Arm System. Tracking information provided by the optical camera is then used by the Alignment Software Spine 2.0 to control the movement of the Cirq Robotic Motor Unit, which performs the final automatic fine alignment (a purely schematic sketch of this closed loop follows the component list below). Once alignment to the planned screw is complete, the Alignment System Spine maintains its position for the rest of the procedure and Spine & Trauma 3D Navigation takes over navigation of the instruments. The Alignment System Spine consists of the following components:

    • Alignment Software Spine 2.0
    • Cirq Arm System with multiple structural hardware components
    • Cirq Robotic Motor Unit
    • Cirq robotic instruments
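
    To make the closed-loop fine alignment described above concrete, here is a purely schematic sketch. Every name in it (read_tracked_pose, command_motor_step, the tolerance values) is invented for illustration and is not part of any Brainlab API:

    ```python
    import numpy as np

    POS_TOL_MM = 0.5    # assumed convergence tolerances, illustrative only
    ANG_TOL_DEG = 0.5

    def fine_align(read_tracked_pose, command_motor_step,
                   planned_pos, planned_axis, max_iters=200):
        """Drive the motor unit until the tracked guide matches the planned
        screw trajectory within tolerance, then hold position."""
        for _ in range(max_iters):
            pos, axis = read_tracked_pose()          # optical-camera feedback
            d_pos = planned_pos - pos                # residual tip offset (mm)
            cos = np.clip(np.dot(axis, planned_axis), -1.0, 1.0)
            d_ang = np.degrees(np.arccos(cos))       # residual axis error (deg)
            if np.linalg.norm(d_pos) < POS_TOL_MM and d_ang < ANG_TOL_DEG:
                return True                          # aligned: navigation takes over
            command_motor_step(d_pos, planned_axis)  # small corrective motion
        return False                                 # failed to converge

    # Toy usage with a simulated guide that steps toward the target:
    state = {"pos": np.array([5.0, 5.0, 5.0]), "axis": np.array([0.1, 0.0, 1.0])}
    state["axis"] /= np.linalg.norm(state["axis"])
    target_pos, target_axis = np.zeros(3), np.array([0.0, 0.0, 1.0])

    def read_tracked_pose():
        return state["pos"], state["axis"]

    def command_motor_step(d_pos, planned_axis):
        state["pos"] += 0.2 * d_pos                  # move 20% toward target
        state["axis"] += 0.2 * (planned_axis - state["axis"])
        state["axis"] /= np.linalg.norm(state["axis"])

    print(fine_align(read_tracked_pose, command_motor_step, target_pos, target_axis))
    ```
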
    AI/ML Overview

    The provided document, K221618, describes the clearance of "Spine & Trauma Navigation" and "Alignment System Spine" devices. The performance data section focuses on system accuracy testing, which serves as the primary study proving the device meets acceptance criteria.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance criteria and reported device performance:
    • Mean Positional Error (instrument's tip): ≤ 2 mm (criterion fulfilled)
    • Mean Angular Error (instrument's axis): ≤ 2° (criterion fulfilled)

    The document states: "The results show the following acceptance criteria are fulfilled:

    • Mean Positional Error of the placed instrument's tip ≤ 2 mm
    • Mean Angular Error of the placed instrument's axis ≤ 2°"
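
    A minimal sketch of how such positional and angular errors might be computed against known phantom ground truth follows; the 2 mm / 2° thresholds come from the summary, while the array names and sample placements are invented:

    ```python
    import numpy as np

    def positional_error(tip_measured, tip_truth):
        """Euclidean distance (mm) between measured and true tip positions."""
        return np.linalg.norm(tip_measured - tip_truth, axis=1)

    def angular_error(axis_measured, axis_truth):
        """Angle (degrees) between measured and true instrument axes."""
        a = axis_measured / np.linalg.norm(axis_measured, axis=1, keepdims=True)
        b = axis_truth / np.linalg.norm(axis_truth, axis=1, keepdims=True)
        cosines = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
        return np.degrees(np.arccos(cosines))

    # Invented placements (N trials x 3 coordinates / direction components)
    tips_m = np.array([[10.2, 5.1, 3.0], [9.8, 4.9, 3.1]])
    tips_t = np.array([[10.0, 5.0, 3.0], [10.0, 5.0, 3.0]])
    ax_m = np.array([[0.02, 0.01, 1.0], [0.0, 0.03, 1.0]])
    ax_t = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])

    print("mean positional error <= 2 mm:", positional_error(tips_m, tips_t).mean() <= 2.0)
    print("mean angular error <= 2 deg:", angular_error(ax_m, ax_t).mean() <= 2.0)
    ```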

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the specific number of cases or sample size used for the "System accuracy testing." It mentions, "The 2D and 3D positional and angular navigation accuracy of the Spine & Trauma Navigation including the software, the platforms and the instruments was evaluated considering a realistic clinical setup and representative worst-case scenarios."

    Data Provenance: The document does not specify the country of origin for the data. The testing appears to be pre-clinical/laboratory-based performance testing rather than a study involving patient data. It is explicitly stated: "No clinical testing was needed for the Subject Device." Therefore, it's not a retrospective or prospective clinical study.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    Not applicable. This was a system accuracy test, not an evaluation of diagnostic performance requiring expert ground truth establishment for a test set. The ground truth for positional and angular accuracy would be established by the precise measurements of the testing apparatus itself.

    4. Adjudication Method for the Test Set

    Not applicable. As this was a system accuracy test, there was no human interpretation or subjective assessment requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document explicitly states: "No clinical testing was needed for the Subject Device." Therefore, an MRMC comparative effectiveness study involving human readers with and without AI assistance was not performed.

    6. Standalone (Algorithm Only) Performance

    Yes, in essence. The "System accuracy testing" evaluates the performance of the navigation system (including its software, platforms, and instruments) in a simulated environment, independent of a human surgeon's real-time decision-making. While this is not an AI algorithm in the sense of image interpretation, what is being evaluated is the standalone performance of the automated/assisted navigation system.

    7. Type of Ground Truth Used

    The ground truth used for the system accuracy testing would be based on physical measurement standards. This involves precisely measured and known positions and angles of the instruments relative to the anatomical models or phantoms used in the "realistic clinical setup and representative worst-case scenarios." The positional and angular errors are calculated against these known, precise ground truth values.

    8. Sample Size for the Training Set

    The document does not mention the existence of a machine learning or AI algorithm that would require a "training set" in the traditional sense. The device is described as an "image-guided localization system" and "computer-assisted navigation," suggesting deterministic algorithms rather than trainable machine learning models. Therefore, the concept of a training set and its size as typically understood for AI/ML devices is not applicable or provided in this document.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as a training set for an AI/ML algorithm is not described or implied for this device.
