510(k) Data Aggregation

    K Number: K211254
    Date Cleared: 2022-01-14 (263 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Device Name: ARAI Surgical Navigation System

    Intended Use

    The ARAI™ System is intended as an aid for precisely locating anatomical structures in either open or percutaneous orthopedic procedures in the lumbosacral spine region. Its use is indicated for any medical condition of the lumbosacral spine in which the use of stereotactic surgery may be appropriate, and where a reference to a rigid anatomical structure, such as the iliac crest, can be identified relative to intraoperative CT images of the anatomy.

    The ARAI System simultaneously displays 2D stereotaxic data and a 3D virtual anatomy model over the patient during surgery. The stereotaxic display is indicated for continuously tracking instrument position and orientation relative to the registered patient anatomy, while the 3D display is indicated for localizing the virtual instrument within the virtual anatomy model over the patient. The 3D display should not be relied upon as the sole source of absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.

    Device Description

    The ARAI™ System is a combination of hardware and software that provides visualization of the patient's internal bony anatomy and surgical guidance to the surgeon based on registered, patient-specific digital imaging.

    ARAI™ is a navigation system for surgical planning and/or intraoperative guidance during stereotactic surgical procedures. The ARAI™ system consists of two mobile devices: 1) the surgeon workstation, which includes the display unit and the optional augmented reality visor, and 2) the control workstation, which houses the optical navigation tracker and the computer. The optical navigation tracker uses infrared cameras and active infrared lights to triangulate the 3D locations of passive markers attached to each system component, determining their positions and orientations in real time. The scanned 3D data is displayed as both 2D images and 3D virtual models, together with tracking information, on computers mounted on the workstations near the patient bed and on a dedicated projection display mounted over the patient. Augmented reality is accomplished by viewing the 3D virtual models through dedicated headset(s).

    Software algorithms combine tracking information and high-resolution 3D anatomical models to display representations of patient anatomy.
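
    The 510(k) summary does not detail the tracker's triangulation math. Purely as an illustration of the general technique named above, the sketch below recovers a marker's 3D position from two calibrated camera views using linear (DLT-style) triangulation; the projection matrices and marker coordinates are hypothetical and not taken from the submission.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 : (3, 4) camera projection matrices (intrinsics @ [R | t]).
    x1, x2 : (2,) pixel coordinates of the same marker in each image.
    Returns the estimated 3D point in the shared world frame.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 in the least-squares sense via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Project a 3D point into pixel coordinates with camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical calibrated stereo pair observing one reflective marker (metres).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # offset camera
marker = np.array([0.05, 0.02, 1.0])

estimate = triangulate_point(P1, P2, project(P1, marker), project(P2, marker))
print(estimate)  # recovers approximately [0.05, 0.02, 1.0]
```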

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the ARAI™ Surgical Navigation System based on the provided FDA 510(k) summary:

    The document does not explicitly present a table of acceptance criteria. Instead, it presents the results of performance validation for positional and angular errors. Therefore, the reported device performance is used directly to infer the implied acceptance criteria.


    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Validation Metric | Implied Acceptance Criteria (Upper Bound) | Reported Device Performance |
    | --- | --- | --- |
    | Positional Error [mm] | $\leq$ 2.49 mm (99% CI upper bound); $\leq$ 2.41 mm (95% CI upper bound) | Mean: 2.16 mm; standard deviation: 1.00 mm |
    | Angular Error [degrees] | $\leq$ 1.74 degrees (99% CI upper bound); $\leq$ 1.68 degrees (95% CI upper bound) | Mean: 1.49 degrees; standard deviation: 0.73 degrees |
    | Display Luminance | Met requirements | Demonstrated via testing |
    | Image Contrast | Met requirements | Demonstrated via testing |
    | Latency and Framerate | Met requirements | Demonstrated via testing |
    | Stereoscopic Crosstalk and Contrast | Met requirements | Demonstrated via testing |
    | AR Shutter Frequency | Met requirements | Demonstrated via testing |
    | Spatial Accuracy (AR) | Met requirements | Demonstrated via testing |
    | User Interface and System Display Usability | Met requirements | Evaluated via Human Factors and Usability Testing |
    | Software Segmentation Quality | Compared favorably to manual segmentation | Determined by comparison with manual segmentations (mean Sørensen-Dice coefficient, DSC) |
    | Biocompatibility | Met requirements | Evaluation confirms compliance |
    | Electrical Safety | Compliant with IEC 60601-1:2012 | Testing assures compliance |
    | Electromagnetic Compatibility | Compliant with IEC 60601-1-2:2014 | Testing assures compliance |
    | Software Verification and Validation | Compliant with FDA Guidance | Performed |
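
    The summary reports only means, standard deviations, and 95%/99% confidence-interval upper bounds for the accuracy metrics; the underlying sample size is not given (see item 2 below). Purely as a hedged illustration of how such upper bounds relate to a mean and standard deviation, the sketch below applies a normal-approximation confidence bound with an assumed, placeholder sample size `n`; the actual statistical method and sample size used in the submission are not stated.

```python
import math

# Reported summary statistics for positional error (from the table above).
mean_mm = 2.16
sd_mm = 1.00
n = 60  # ASSUMPTION: the sample size is not reported in the 510(k) summary.

# Two-sided normal critical values for the 95% and 99% confidence levels.
z_values = {0.95: 1.960, 0.99: 2.576}

for level, z in z_values.items():
    upper_bound = mean_mm + z * sd_mm / math.sqrt(n)
    print(f"{level:.0%} CI upper bound on the mean = {upper_bound:.2f} mm")

# With n = 60 this lands near the reported 2.41 mm (95%) and 2.49 mm (99%),
# but the calculation actually used in the submission may differ.
```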

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Positional and Angular Error Validation (Surgical Simulations):
      • Sample Size: Not explicitly stated in the provided text. The terms "overall 3D positional error" and "overall 3D angular error" are used, but the summary does not reveal the number of screws measured or the number of cadavers (a sketch of how such errors are commonly computed follows this list).
      • Data Provenance: Prospective, real-world simulation using cadavers ("Surgical simulations conducted on cadavers were performed for system validation."). The country of origin is not specified.
    • Software Segmentation Quality:
      • Sample Size: A "set of test samples presenting lumbosacral spine, extracted from stationary and intraoperative Computed Tomography scans" was used. The exact number of samples is not provided.
      • Data Provenance: CT scans (both stationary and intraoperative) of the lumbosacral spine. It is unclear if these were retrospective or prospective, or their country of origin.
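
    The summary does not define how the overall 3D positional and angular errors were computed. A common convention, sketched below under that assumption, is to compare the navigated (virtual) screw trajectory with the measured axis of the real implant: positional error as the Euclidean distance between corresponding points (for example, entry points) and angular error as the angle between the two axis directions. All coordinates here are hypothetical.

```python
import numpy as np

def screw_errors(virtual_entry, virtual_dir, real_entry, real_dir):
    """Positional and angular error between a navigated (virtual) screw
    trajectory and the measured axis of the real implant.

    Entry points are 3D coordinates in mm; directions are 3D axis vectors.
    """
    positional_error = np.linalg.norm(np.asarray(virtual_entry, dtype=float)
                                      - np.asarray(real_entry, dtype=float))
    u = np.asarray(virtual_dir, dtype=float)
    v = np.asarray(real_dir, dtype=float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angular_error = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return positional_error, angular_error

# Hypothetical measurements for a single pedicle screw (mm / unit vectors).
pos_err, ang_err = screw_errors(
    virtual_entry=[10.0, 25.0, 40.0], virtual_dir=[0.0, 0.10, 0.99],
    real_entry=[11.5, 24.2, 41.0],    real_dir=[0.02, 0.13, 0.99],
)
print(f"positional error = {pos_err:.2f} mm, angular error = {ang_err:.2f} degrees")
```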

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Positional and Angular Error Validation: The document describes the ground truth as the "real implants." It does not mention experts establishing the ground truth for this measurement, as it is a direct comparison between the virtually displayed and the physically placed implants.
    • Software Segmentation Quality: The ground truth was established by "manual segmentations prepared by trained analysts." The number of analysts and their specific qualifications (e.g., years of experience, specific medical specialty) are not provided.

    4. Adjudication Method for the Test Set

    • Positional and Angular Error Validation: Not applicable, as the ground truth derivation is not a subjective consensus process. It's a measurement against a physical reference.
    • Software Segmentation Quality: The ground truth was established by "manual segmentations prepared by trained analysts." The document does not specify an adjudication method (like 2+1 or 3+1) if multiple analysts were involved or if a single analyst's segmentation was considered the ground truth.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    • The provided document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study and therefore does not report an effect size for human readers improving with AI vs. without AI assistance. The performance testing focuses on the device's accuracy in tracking and displaying anatomical structures and instruments.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done

    • Yes, a standalone performance assessment of the algorithm appears to have been conducted, particularly for:
      • Positional and Angular Error Validation: This directly quantifies the system's accuracy in representing physical instrument and screw positions relative to the anatomical model, which is an algorithm-driven output.
      • Software Segmentation Quality: The "autonomous spine segmentation process" was compared against manual segmentations, indicating a standalone evaluation of the algorithm's performance in this task (see the Dice-coefficient sketch below).
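
    The segmentation comparison above is reported in terms of a mean Sørensen-Dice coefficient (DSC). As a minimal, self-contained sketch, the function below computes the DSC between an automatic and a manual binary segmentation mask; the toy arrays stand in for CT-derived masks and are not data from the submission.

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Sørensen-Dice coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|), from 0 (no overlap) to 1 (identical).
    """
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy 2D example standing in for a CT slice (1 = vertebra, 0 = background).
auto = np.zeros((8, 8), dtype=int)
manual = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1      # automatic segmentation
manual[3:7, 2:6] = 1    # manual "ground truth" segmentation
print(f"DSC = {dice_coefficient(auto, manual):.2f}")  # -> 0.75
```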

    7. The Type of Ground Truth Used

    • Positional and Angular Error Validation: The ground truth was the "real implants" positioned in cadavers. This is a form of direct physical measurement/outcome data.
    • Software Segmentation Quality: The ground truth was expert manual segmentation ("manual segmentations prepared by trained analysts").

    8. The Sample Size for the Training Set

    • The document does not specify the sample size used for the training set for any of the algorithms (e.g., for spine segmentation or tracking). It only mentions test samples.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide information on how the ground truth for the training set was established, as it does not describe the training process or the dataset used for training. It only details the establishment of ground truth for certain test sets.