
510(k) Data Aggregation

    K Number: K232189
    Device Name: OrionXR
    Date Cleared: 2023-09-14 (52 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices

    Intended Use

    OrionXR is a software device for display, manipulation, and evaluation of externally-generated 3D models of patient anatomy through an Augmented Reality Head Mounted Display (HMD) to assist in visualization, planning and communication of treatment options.

    OrionXR is indicated for use by qualified healthcare professionals including but not restricted to radiologists, nonradiology specialists, physicians, and technologists.

    Digital models viewed through the HMD are for informational purposes only and not intended for diagnostic use. OrionXR is not intended to guide surgical instrumentation and it is not to be used for stereotactic procedures or surgical navigation.

    OrionXR software is designed for use with performance-tested hardware specified in the User Manual.

    Device Description

    OrionXR consists of a server for uploading pre-acquired 3D annotations of patient anatomy and the Microsoft HoloLens 2 head mounted display for visualizing the models via a mixed reality platform. The components of the device include:

      1. Web Server: Users can upload 3D annotations of anatomy to the OrionXR web server, which can then be accessed on the head mounted display.
      2. Head Mounted Display: OrionXR is compatible with the Microsoft HoloLens 2. Users can access 3D digital models on the headset and manipulate a model in three dimensions of translational and rotational space.
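The translation-and-rotation manipulation described above amounts to applying a rigid-body transform to the model's vertices. The sketch below is a hypothetical illustration of that idea, not code from the submission; the vertex values, angle, and function name are invented:

```python
import numpy as np

def rigid_transform(vertices, rotation, translation):
    """Apply a rigid-body transform: rotate each vertex, then translate it."""
    return vertices @ rotation.T + translation

# Rotation of 90 degrees about the z-axis.
theta = np.pi / 2
Rz = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# A single example vertex of a 3D model (arbitrary values, in mm).
verts = np.array([[1.0, 0.0, 0.0]])
moved = rigid_transform(verts, Rz, np.array([0.0, 0.0, 5.0]))
# (1, 0, 0) rotates to (0, 1, 0), then translates to (0, 1, 5).
```

In an actual AR viewer the same transform would be driven by the user's gestures and applied by the rendering engine each frame rather than computed by hand.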
    AI/ML Overview

    The OrionXR device is a software device intended for display, manipulation, and evaluation of externally-generated 3D models of patient anatomy through an Augmented Reality Head Mounted Display (HMD) to assist in visualization, planning, and communication of treatment options. It is indicated for use by qualified healthcare professionals.

    Here's a breakdown of the acceptance criteria and supporting study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text doesn't explicitly state quantitative acceptance criteria with pass/fail thresholds for the performance tests. Instead, it describes general design verification and validation activities conducted. The "Reported Device Performance" column reflects the successful execution and meeting of design input requirements.

    | Acceptance Criteria Category | Reported Device Performance |
    | --- | --- |
    | Dimensional Accuracy of 3D Models | Demonstrated successful performance to ensure output specifications meet design input requirements. |
    | Optical Performance of Headset Display: contrast ratio, resolution, field of view, luminance uniformity, eyebox, distortion, frame rate | Demonstrated successful performance across these optical parameters to ensure output specifications meet design input requirements. The display frame rate is specified as 60 fps, matching the predicate. |
    | Qualitative Assessment of 3D Anatomic Models | Successfully conducted to ensure output specifications meet design input requirements. |
    | Human Factors and Usability Engineering | Human factors and usability engineering testing was performed, including simulated use replicative of both the intended use and the intended environment of use. No additional use-related risks to the safety or effectiveness of the device were identified. |
    | Overall Safety and Effectiveness | Performance data demonstrate that OrionXR is as safe and effective as the predicate device (IntraOpVSP, K213128) and does not raise new issues of safety or effectiveness. The device is capable of accurately uploading and visualizing 3D anatomic models on an HMD. |
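A dimensional-accuracy check of the kind listed above could, in principle, compare landmark-to-landmark distances on the displayed model against the same distances in the source data. The following is a hypothetical sketch only: the tolerance, measurements, and function name are invented and do not come from the submission:

```python
# Hypothetical dimensional-accuracy check: compare displayed dimensions
# against the source model's dimensions and flag deviations.
TOLERANCE_PCT = 2.0  # assumed acceptance threshold, not from the 510(k)

def percent_error(source_mm, displayed_mm):
    """Percent deviation of a displayed dimension from its source value."""
    return abs(displayed_mm - source_mm) / source_mm * 100.0

# Invented example measurements (mm) between pairs of anatomic landmarks:
# (distance in source model, distance measured on displayed model).
pairs = [(50.0, 50.3), (120.0, 119.5), (31.0, 31.2)]
errors = [percent_error(s, d) for s, d in pairs]
passed = all(e <= TOLERANCE_PCT for e in errors)
```

A real verification protocol would specify how the displayed distances are measured and what threshold constitutes a pass; the document reports only that design input requirements were met.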

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" sample size in terms of the number of patient cases or specific instances used for validation, nor does it provide details on data provenance (e.g., country of origin, retrospective/prospective). The performance data section refers to "design verification and validation" generally.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not provide details on the number of experts used or their qualifications for establishing ground truth for any specific test set. The device is a "Medical Image Management And Processing System" that displays externally-generated 3D models for visualization, planning, and communication, rather than performing diagnostic analysis that would typically require expert ground truth labeling.

    4. Adjudication Method for the Test Set

    No adjudication method is described, as the document does not detail specific expert evaluations of a test set in the manner of diagnostic AI devices.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly described or performed. The device's indications for use are for visualization, planning, and communication, not for primary diagnostic interpretation or as an AI aid in a traditional diagnostic workflow.

    6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    The performance data described focuses on the device's technical capabilities, such as dimensional accuracy of 3D models and optical performance of the HMD. This can be considered "standalone" in the sense that these are objective measurements of the device's functionality. However, the device itself is an Augmented Reality Head Mounted Display system, inherently designed for human-in-the-loop interaction by qualified healthcare professionals for visualization and planning, not autonomous decision-making. No isolated "algorithm only" performance for diagnostic or analytical tasks is presented, as the device is not intended for such standalone functions.

    7. The Type of Ground Truth Used

    Given the device's function (display and manipulation of 3D models for visualization and planning), "ground truth" would likely relate to the accuracy of the displayed 3D models against their source data, and the optical performance of the display. The document mentions "Dimensional Accuracy of 3D Models" and "Qualitative Assessment of 3D Anatomic Models" as part of design verification, implying that the accuracy of the digital models and their representation are the primary "ground truths" being assessed against established specifications or source data. It does not mention pathology, outcomes data, or expert consensus in the context of diagnostic "ground truth," as it is not a diagnostic device.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding a "training set" or its sample size. This is consistent with the device's function as a display and manipulation tool for pre-existing 3D models, rather than a machine learning or AI algorithm that is trained on a dataset. The device receives "externally-generated 3D models," suggesting it doesn't perform internal model generation that would require a dedicated training phase.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or applicable given the device's nature, the method for establishing ground truth for a training set is not provided.
