K Number
K172418
Device Name
OpenSight
Date Cleared
2018-09-21

(407 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

OpenSight is intended to enable users to display, manipulate, and evaluate 2D, 3D, and 4D digital images acquired from CR, DX, CT, MR, and PT sources. It is intended to visualize 3D imaging holograms of the patient for preoperative localization and preoperative planning of surgical options. OpenSight is designed for use only with performance-tested hardware specified in the user documentation.

OpenSight is intended to enable users to segment previously acquired 3D datasets, overlay, and register these 3D segmented datasets with the same anatomy of the patient in order to support pre-operative analysis.

OpenSight is not intended for intraoperative use. It is not to be used for stereotactic procedures.

OpenSight is intended for use by trained healthcare professionals, including surgeons, radiologists, chiropractors, physicians, cardiologists, technologists, and medical educators. The device assists doctors in better understanding a patient's anatomy and pathology.

Device Description

OpenSight is the combination of the Microsoft HoloLens and Novarad's medical imaging software, used to create three-dimensional holograms of scanned images from different modalities, including CR, DX, CT, MR, and PT. This combination of augmented reality glasses and imaging software allows the user to see and manipulate hologram images with the swipe of a finger.

OpenSight uses the HoloLens technology to register scanned images over the patient when the user has the OpenSight headset on and in use. This allows the user to see both the patient and through them, with dynamic holograms of the patient's internal anatomy. OpenSight tools/features include window level, segmentation and rendering, registration, motion correction, virtual tools, alignment, and the capability to measure distance and image intensity values, such as standardized uptake value. OpenSight displays measurement lines and regions of interest. 3D images include, but are not limited to, tumors, masses, appendices, heart, kidney, bladder, stomach, blood vessels, arteries, and nerves.
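
Two of the measurement tools mentioned above, distance and standardized uptake value (SUV), follow standard formulas. The sketch below is illustrative only; the function names are hypothetical and do not reflect OpenSight's actual API:

```python
import math

def distance_mm(p1, p2):
    # Euclidean distance between two 3D points given in millimetres
    return math.dist(p1, p2)

def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalised standardized uptake value (SUVbw).

    This is the conventional PET formula, not Novarad's implementation:
    SUV = tissue activity concentration / (injected dose / body weight).
    """
    # Convert dose to kBq and weight to grams so the units cancel
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return activity_kbq_per_ml / (dose_kbq / weight_g)

print(distance_mm((0, 0, 0), (3, 4, 0)))       # → 5.0
print(suv(5.0, 350.0, 70.0))                   # → 1.0
```

An SUV of 1.0 corresponds to tracer uptake equal to a uniform distribution of the injected dose over the body, which is why it is a common reference point when reading PET intensity values.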

The OpenSight Augmented Reality system uses the Microsoft HoloLens hardware and the Microsoft Windows 10 operating system as the platform on which the system runs. The OpenSight technology is written specifically for this hardware. NovaPACS contributes to the process by creating annotations and providing the preoperative analysis of images that are fed to the OpenSight device.

The 3D holograms are created by a refractory system in the OpenSight device, using a combination of the Microsoft HoloLens hardware and the OpenSight technology for 3D image display and rendering. Images are actual visible renderings of the object in the OpenSight device. Images are streamed in 2D format from the Novarad server via wireless communication. The communication is protected with 256-bit encryption.

Registration of the patient (reality) to another image data set, such as MRI or CT (augmented reality), is performed by the OpenSight device, which contains infrared ranging cameras that map the surface geometry of an object, creating a mesh of triangles conforming to whatever the object is. This can include the patient, the surrounding room, the table, etc. The resolution of the mesh is controlled by the device: for mapping a large object such as a room, a larger mesh is utilized, while surface geometry mapping of a patient's anatomy utilizes the maximum resolution of the device. The user may walk around the object in a 360° circle, mapping it from many views in order to obtain the best localization in space.

The camera device in the OpenSight headset has ranging and localizing technology, which maps the surrounding environment, including the patient. It knows where objects are, and mesh surface maps of these objects are created to determine their 3D positioning. The 3D radiologic images are then rendered, and surface shells of the patient's skin are matched to the augmented reality device when the user has the OpenSight headset on and in use. The advantage of this is that if the patient moves, the movement can be compensated for. The registration does not require expensive infrared tracking devices or other fiducials. The anatomy and the correct patient will only register if the data match, thus diminishing the potential for use on the wrong patient with the wrong images.
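
The registration described above amounts to rigidly aligning a surface mesh captured by the headset with a surface extracted from the CT/MR data. One standard family of techniques for this is iterative closest point (ICP) with a Kabsch best-fit rotation. The sketch below illustrates that general approach under stated assumptions; it is not Novarad's implementation:

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch algorithm: rigid rotation R and translation t mapping
    point set A onto point set B (each an N x 3 array of rows)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # correct an improper reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=20):
    """Iteratively align src vertices to dst vertices.

    Uses brute-force nearest-neighbour matching, which is fine for a
    sketch; real systems use spatial indices (k-d trees, voxel grids).
    """
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]     # closest dst point per src point
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

ICP converges to the correct pose only when the initial misalignment is small, which is consistent with the description of the user walking around the patient to give the headset many views before the surfaces lock together.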

The patient's anatomy can be displayed in 2D, 3D, or 4D mode. The OpenSight technology allows for virtual screens in space, which are manipulated by finger movement or from voice commands. These images are superimposed on the patient's anatomy and one can either scroll through the images or rotate three dimensionally. Because the holographic system has mapped the space of the room and patient, it "knows" where this is and therefore as one rotates around the patient or the anatomy in question, the images are automatically rotated with the device.

The actual visible rendering of the object in the OpenSight device (i.e., how fast the hologram can be updated as one's position relative to the patient changes) has no discernible time lag, with object rendering in excess of 30 frames per second for standard image rendering. If one turns on advanced lighting and shadowing, cubic spline interpolation of the image, and utilizes a large image dataset (in excess of 200 images), then there is a visible time lag between the holographic rendering and the projection onto the patient. See the attached video (Motion.MOV) that demonstrates this. It is still less than a second under the worst-case scenario.
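
The frame-rate claims above reduce to timing how long each render pass takes. A minimal sketch of such a measurement, where `render_frame` is a hypothetical stand-in for one holographic render pass (OpenSight's actual pipeline is not public):

```python
import time

def measure_fps(render_frame, n_frames=120):
    """Estimate average frames per second for a render callable by
    timing n_frames consecutive invocations."""
    start = time.perf_counter()
    for _ in range(n_frames):
        render_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Trivial stand-in workload just to exercise the timer
fps = measure_fps(lambda: sum(i * i for i in range(1000)))
print(f"{fps:.1f} fps")
```

Averaging over many frames, as here, smooths out scheduler jitter; a real evaluation would also report worst-case single-frame latency, since that is what produces the visible lag described above.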

The rendering tools are derived from technology created in the NovaPACS system, allowing 3D tools including simple image manipulation such as window/leveling as well as the more advanced technologies of segmentation, rendering, registration, and motion correction. Virtual tools as well as 3D annotations can be created and displayed in the holographic image. These might include lines, distance measurements, etc. They could also be volumetric measurements or outlines of tumors, anatomic structures, etc. The operating principles of these tools are similar to those of other 3D PACS devices, including technology from Novarad Corporation that has already received 510(k) clearance.

The OpenSight Augmented Reality system is a device that allows the user to more quickly and more accurately define both anatomy and pathology by using mixed reality. One can see through this device the actual patient but also superimposed on this are holographic images of the patient's anatomy, which have been previously taken through MRI, CT, or other imaging techniques.

The following is a description of pre-operative use cases for OpenSight:

  • Ability to mark the appropriate entrance point, angle, trajectory, and location for placement of a needle into the body, to extract a foreign body such as a piece of glass, to place a pedicle screw, etc. Being able to preoperatively identify the anatomy and expected trajectory for device insertion could greatly aid in facilitating the speed and safety of procedures. Provided are images from three different preoperative interventions: a percutaneous discectomy, a facet injection, and a sacroiliac joint injection. In each case, OpenSight facilitates positioning of the best trajectory for entrance into one of these structures.

  • Ability to aid the operating physician in localizing anatomy prior to intervention. This can be used as an aid to augment, and correlate with, the location of a patient's injury. For example, rib fractures can be difficult to localize in the operating room, and frequently incisions will be larger than needed in order to plate a displaced rib fracture. Virtually all patients with acute appendicitis in the United States receive a CT scan prior to operative intervention for diagnostic purposes. With this technology, the location of the appendix could be identified and the surgeon would be able to see variations in the anatomy prior to making an incision in an area that may or may not contain the appendix. Another example would be the location of masses, lymph nodes, or tumors that may be difficult to find due to body habitus or location. For example, the ability to localize a disc or vertebral body prior to operative intervention would save valuable surgical time and fluoroscopy.

  • Ability to superimpose an anatomic atlas upon the patient's anatomy, allowing one to more readily identify structures that would either need to be treated or need to be avoided in a surgical procedure. This could be invaluable, for example, for a neurosurgeon in understanding, preoperatively, the best approach for cranial surgery. It could allow a head and neck surgeon to have a better understanding of the skull base in three-dimensional detail. This internal visualization can be achieved without the surgeon ever making an incision on the patient. He or she can of course be guided by their best judgment, experience, and training as to the ultimate approach and performance of any given procedure. OpenSight is intended simply as a guide.

  • Ability for surgical trainees to visualize the internal anatomy from cross-sectional imaging such as CT, MRI, or PET scanning superimposed on a patient prior to the actual operation, providing invaluable 3-D understanding of a surgical approach. Such rendering can be performed just prior to the surgery, allowing them to see the anatomy and orientation that would be encountered during the surgery. It is much less expensive and complicated than trying to print a 3-D model, which often is not available onsite and can take days to achieve. It also allows the trainee to interrogate, in a virtual manner, the anatomy of a given area and understand the structural relationships, the critical structures that may complicate or interfere with surgery, and the unique size/position/orientation of a given patient's anatomy.

  • Some operations are exceedingly complex and require a much greater depth of understanding in order to execute. Such is the case with congenital heart malformations, where complex three-dimensional vascular anatomy makes surgical treatment difficult at best. Users are able to visualize this anatomy preoperatively in OpenSight before surgically opening the patient's chest, which could potentially speed the operation and allow the surgeons to be better equipped to perform the procedure. Currently, these types of procedures are performed after a surgeon has done complex and time-consuming 3-D printing of models in order to better understand the anatomy. OpenSight allows one to render this in 2-D, 3-D, and 4-D. In this use case, the images do not need to be on the patient. The doctor can rotate and magnify the anatomy free of the patient to get a better visual picture.

  • As part of the preoperative experience, the target organs can be colored, outlined, or annotated in the medical images using the Novarad 3-D viewer. The annotated holographic images can be shown to the patient or family superimposed on the patient. This would make the interpretation of the images much clearer and will improve a patient's understanding of the risks and complexities of a surgical procedure.

  • Surgeons, in general, do not have the same degree of training in imaging and image processing as radiologists, and it is often difficult for them to take 2-dimensional anatomy and apply it to their 3-dimensional world. OpenSight will allow surgeons to better understand complex anatomy and disease processes by taking the data-rich information they already have and providing it in a more accessible format through holographic imaging. The value of OpenSight is that it not only allows one to see the 3-dimensional data sets but also co-localizes them to the patient, giving the surgeon a 3-dimensional understanding of what he or she is attempting to do. Holographic augmented reality allows one to see with better understanding because the images are co-localized to the patient. The system, with its mapping cameras, maps both the patient and the surrounding environment: from above, to the side, behind, or even underneath the patient.

One possible example scenario of using OpenSight for preoperative planning is described in Appendix D.

OpenSight is not designed as a primary tool for disease detection or diagnosis.

OpenSight integrates with NovaPACS software.

OpenSight contains wireless technology using the Wi-Fi 802.11ac networking standard. The wireless technology is used to stream images in 2D format from a Novarad server to the OpenSight headset. Images are actual visible renderings of the object in the OpenSight device, with reliable and accurate information. The wireless information transfer is protected with 256-bit encryption for data security.

AI/ML Overview

The following is an analysis of the acceptance criteria and the study described in the submission:

Acceptance Criteria and Device Performance

Imaging software requirements (functional)
  • All 42 test cases for imaging software requirements passed (case ID 55391).

Pre-operative localization accuracy (Sphere Test)
  • The difference between the physical diameter (~329.769 mm) and the virtual diameter (328.78 mm) was ~0.989 mm.

Pre-operative localization accuracy (Box and BB testing)
  • Average offset between physical BB and hologram BB: highest average 1.67 mm; lowest average 0 mm.
  • Mean offsets (by angle and distance):
      • 0 degrees, 6 inches: 0.8596 mm (SD 1.7189 mm)
      • 0 degrees, 1 foot: 0.9861 mm (SD 1.1305 mm)
      • 90 degrees, 6 inches: 1.5213 mm (SD 2.0604 mm)
      • 90 degrees, 1 foot: 0.8050 mm (SD 1.1602 mm)
      • 135 degrees, 6 inches: 2.0825 mm (SD 1.8636 mm)
      • 135 degrees, 1 foot: 1.6913 mm (SD 2.3016 mm)
  • The condition with the lowest mean offset was 90 degrees and one foot away (0.8050 mm). The least deviation was at 0 degrees and one foot away (SD 1.1305 mm). The confidence interval for 0 degrees and 6 inches away contained zero, suggesting no significant offset.

Graphic Rendering Frame Rate
  • Standard image rendering: in excess of 30 frames per second (fps).
  • Advanced lighting/shadowing, cubic spline interpolation, and a large dataset: visible time lag, less than one second.
  • Images displayed: 60 fps (normal mode 30 fps).
  • Volume mode geometry recomputing: 6 fps.
  • Alignment mode geometry recomputing: 13-15 fps.
  • Slice mode geometry recomputing: 40-50 fps.

Surface Geometry Mapping (registration)
  • The device uses infrared ranging cameras to map surface geometry, creating a mesh of triangles. It can compensate for patient movement and does not require fiducials. Registration occurs only if the data match, preventing use of the wrong images on the wrong patient.

Environmental Robustness
  • Ambient light levels (3743, 407.8, and 157.9 LUX) in various room settings (including OR and office) did not affect the field of view or object appearance.
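
The offset summary above rests on standard statistics: a per-condition mean, a standard deviation, and a normal-approximation confidence interval checked for whether it contains zero. A minimal sketch of that computation, using hypothetical offset values (the submission's raw per-trial measurements are not reproduced here):

```python
import statistics as st

def offset_ci(offsets, z=1.96):
    """Mean, sample SD, and approximate 95% confidence interval for a
    list of offset measurements (mm). Illustrative only."""
    n = len(offsets)
    mean = st.mean(offsets)
    sd = st.stdev(offsets)                 # sample standard deviation
    half = z * sd / n ** 0.5               # half-width of the CI
    return mean, sd, (mean - half, mean + half)

# Hypothetical offsets (mm) for a single angle/distance condition
mean, sd, (lo, hi) = offset_ci([0.0, 1.2, 0.5, 2.1, 0.0, 1.4, 0.3, 1.6])
print(f"mean={mean:.4f} mm, SD={sd:.4f} mm, 95% CI=({lo:.4f}, {hi:.4f})")
print("CI contains zero" if lo <= 0 <= hi else "CI excludes zero")
```

A confidence interval containing zero, as reported for the 0-degree/6-inch condition, means the measured offset is statistically indistinguishable from no offset at that confidence level.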

Study Details

  1. Sample sizes used for the test set and data provenance:

    • Sphere Test: One MRI calibration sphere. Scanned by CT modality.
    • Box and BB testing: One box with copper BBs. Scanned by CT modality.
    • Data Provenance: Not explicitly stated, but the objects ("Riverwood's imaging") seem to be test phantoms, suggesting internally generated data rather than patient data. The study describes physical objects and their manipulation, implying prospective testing in a controlled environment directly with the device.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Sphere Test: The "ground truth" (physical circumference/diameter) was established by direct physical measurement using a sewing measuring tape. No human experts were involved in establishing this physical ground truth beyond the person performing the measurement.
    • Box and BB testing: The "ground truth" (physical BB locations) was established by direct physical measurement using an H&H 6" dial caliper. No human experts were involved in establishing this physical ground truth.
    • Qualifications: Not applicable for establishing the physical ground truth measurements.
  3. Adjudication method for the test set:

    • No adjudication method was described as the ground truth was based on direct physical measurements of the test objects.
  4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No MRMC comparative effectiveness study was conducted or described in this submission. The testing focused on the device's accuracy in rendering and registration, not on its impact on human reader performance.
  5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

    • Yes, performance tests such as the "Sphere Test," "Box and BB testing," "Frame Rates," and "Surface Geometry Mapping" primarily evaluated the OpenSight software and hardware in a standalone manner, assessing its ability to render, register, and display images accurately against physical ground truth. While a user operates the device, the metrics measured are characteristics of the device's output rather than the user's interpretive performance.
  6. The type of ground truth used:

    • Physical Measurements/Phantom Ground Truth: For the "Sphere Test" and "Box and BB testing," the ground truth was established by direct physical measurements of the test objects (sphere and box with BBs).
    • Device Specifications: For frame rates and image quality, the ground truth is implicitly defined by expected or acceptable performance specifications for the device's rendering capabilities.
  7. The sample size for the training set:

    • The document does not explicitly mention a "training set" size for the OpenSight device. This submission primarily focuses on verification and validation testing of the commercialized product, not on the development or training of AI/machine learning models. The device's operation involves rendering existing medical images, not learning from a new dataset in the typical ML sense.
  8. How the ground truth for the training set was established:

    • As no "training set" in the context of AI/ML was discussed, this question is not applicable. The device processes pre-acquired medical images (CR, DX, CT, MR, PT) and renders them as holograms based on its programmed algorithms, not through a learning phase in the context of this submission.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).