510(k) Data Aggregation (283 days)
The XR90 (XR90-SYS) is a medical display workstation intended for 3D image visualization and image interaction in conjunction with traditional imaging and monitors. The virtual images are generated from tracked ultrasound, a tracked interventional device, and 3D volumetric data acquired from CT sources, and are stereoscopically projected such that the proximity of the virtual interventional device is displayed relative to live ultrasound and 3D models from previously acquired CT. The device is intended to provide visual information and reference to be used by health care professionals for analysis of surgical options during pre-operative planning, and for the heads-up, intra-operative display of the images during ultrasound-guided needle-based procedures. Virtual images on the heads-up display should always be used in conjunction with traditional monitors.
The XR90 (XR90-SYS) system is intended to be used as an adjunct to the interpretation of images performed using diagnostic imaging systems and is not intended for primary diagnosis.
The XR90 (XR90-SYS) system is intended to be used as a reference display for consultation and guidance to assist the clinician who is responsible for making all final patient management decisions.
During system use, the position and orientation tracking of the interventional instruments should always be available to the clinician on traditional imaging and monitors.
The MediView™ XR90 (XR90-SYS) system is an augmented reality-based medical device to be used adjunctively to clinical ultrasound (US) systems, with the ability to stereoscopically project and fuse standard-of-care US with digital anatomical models based on pre-procedural computed tomography (CT) imaging in biopsies and percutaneous ablations to overcome the limitations of two-dimensional image fusion. The XR90 (XR90-SYS) system provides visual information and remote collaboration features.
XR90 (XR90-SYS) and cleared image fusion devices spatially register and project virtual representations of a) tracked interventional instruments and b) imaged patient anatomy in a common coordinate system. Accordingly, use of XR90 (XR90-SYS) involves the co-registration of virtual objects (tracked device, US, and CT) for visual information and does not involve stereoscopic projection onto physical (i.e., real-world) anatomy for navigation, consistent with predicate devices. XR90 (XR90-SYS) spatially registers and stereoscopically co-projects three types of virtual objects: (1) the Holographic Light Ray (HLR), (2) CT-based virtual anatomy, and (3) the live ultrasound b-sector (Flashlight) with the HUD ultrasound display/augmented reality user interface, while maintaining the same principle of operation as the predicate devices. Accordingly, the paired registrations of holographic entities are:
- (1) HLR and virtual US-sector (Flashlight),
- (2) CT-based virtual anatomy and virtual US-sector (Flashlight), and
- (3) HLR and CT-based virtual anatomy.
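The summary does not disclose how XR90 computes these pairwise registrations, but point-based rigid registration of corresponding fiducials is the standard building block for aligning coordinate frames like these. A minimal sketch using the Kabsch/Procrustes algorithm (all names and data here are illustrative, not from the submission):

```python
import numpy as np

def rigid_register(source, target):
    """Point-based rigid registration (Kabsch/Procrustes).

    Finds rotation R and translation t minimizing the least-squares
    misalignment R @ source_i + t ~ target_i. Illustrative only: the
    510(k) summary does not disclose XR90's actual registration method.
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Recover a known pose from six hypothetical fiducial pairs
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = rigid_register(pts, pts @ R_true.T + t_true)
```

With noiseless correspondences the estimated pose matches the true pose exactly; with real tracking data, the residual misalignment at target points is what a metric like TRE quantifies.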
The system comprises a commercial, off-the-shelf augmented reality head-mounted display, wirelessly connected to a streamer that interfaces with a GE Vivid iq ultrasound system and an electromagnetic (EM) field generator. The US signal is transmitted from the streamer to the head-mounted display, where a virtual display of the US image is stereoscopically projected into the user's field of view in conjunction with pre-acquired CT-based images and tracked instrumentation.
The XR90 (XR90-SYS) system is capable of teleprocedural collaboration through the head-mounted display using Microsoft Dynamics 365 Remote Assist, allowing other healthcare professionals to connect securely to the head-mounted display, view the US signal, and communicate in real time (through both voice and needle annotation on the screen) with the local proceduralist. The remote collaborator may interact with the proceduralist via mobile device, laptop, desktop, or head-mounted display, but participates only as an observer and should not make care decisions. Together, the teleprocedural communication and Holographic Needle Guide features support the user's workflow and ergonomics during pre-operative planning and intra-operative display of virtual images. XR90 (XR90-SYS) is intended to be used adjunctively to standard-of-care imaging and provides guidance to the user. Proceduralists must refer to standard-of-care (conventional) monitors and prioritize clinical experience and/or judgment when using the XR90 (XR90-SYS) system.
Here's a breakdown of the acceptance criteria and study details for the MediView XR90 (XR90-SYS), based on the provided FDA 510(k) summary:
Overview
The MediView XR90 (XR90-SYS) is an augmented reality-based medical visualization system intended to be used as an adjunct to clinical ultrasound systems. It projects and fuses live ultrasound data with 3D volumetric data from CT scans and tracked interventional devices into a stereoscopic heads-up display. It is for visual information and reference during pre-operative planning and intra-operative guidance for ultrasound-guided needle-based procedures. The system is not for primary diagnosis.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document primarily focuses on the safety and performance aspects through non-clinical testing rather than specific "acceptance criteria" against which a statistical hypothesis test was performed in a clinical study. However, we can infer performance metrics that served as benchmarks. The key performance indicators for accuracy are Mean Target Registration Error (TRE) and 95% Upper Bound for TRE, along with Mean Angular Error and 95% Upper Bound for Angular Error. There are no explicit pass/fail thresholds stated, but the results demonstrate the device performs within clinically acceptable ranges for image-guided procedures.
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| **Target Registration Error (TRE)** | | |
| Phantom Study | Within acceptable limits for registration/fusion | Mean TRE: 2.543 mm; 95% Upper Bound: 2.726 mm (at 7.1 cm needle depth) |
| Cadaver Study | Within acceptable limits for registration/fusion | Mean TRE: 2.293 mm; 95% Upper Bound: 2.825 mm (at 8.5 cm needle depth) |
| Animal Study | Within acceptable limits for registration/fusion | Mean TRE: 2.9 mm; 95% Upper Bound: 3.4 mm (at 7.6 cm needle depth) |
| **Angular Errors (Animal Study)** | | |
| In-plane angular errors | Within acceptable limits | Mean: 7.08°; 95% Upper Bound: 8.77° |
| Out-of-plane angular errors | Within acceptable limits | Mean: 4.79°; 95% Upper Bound: 6.50° |
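The summary reports each accuracy result as a mean plus a "95% Upper Bound" without defining the statistical construction. One plausible reading is a one-sided 95% confidence bound on the mean error; a stdlib-only sketch under that assumption (the error samples below are hypothetical, not the study data):

```python
import math
from statistics import NormalDist, fmean, stdev

def tre_summary(errors_mm, confidence=0.95):
    """Mean error and a one-sided upper confidence bound on the mean.

    The 510(k) summary reports "95% Upper Bound" values without stating
    the method; a normal-approximation confidence bound on the mean is
    one plausible construction, shown here for illustration only.
    """
    n = len(errors_mm)
    mean = fmean(errors_mm)
    z = NormalDist().inv_cdf(confidence)          # ~1.645, one-sided
    upper = mean + z * stdev(errors_mm) / math.sqrt(n)
    return mean, upper

# Hypothetical TRE samples in mm -- NOT the actual study data
errors = [2.1, 2.8, 2.4, 3.0, 2.6, 2.2, 2.7, 2.5]
mean_tre, upper_tre = tre_summary(errors)
```

For small samples a t-distribution critical value (or a tolerance bound on individual errors rather than on the mean) would widen the bound; without the raw data and the stated method, the reported table values cannot be reproduced exactly.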
2. Sample Size Used for the Test Set and Data Provenance
The document describes non-clinical performance testing in different models:
- Phantom Study: Sample size not specified, but such studies typically involve a series of measurements on a physical phantom.
- Cadaver Study: Sample size not specified.
- Animal Study: Sample size not specified, though a GLP porcine study is mentioned. The data are prospective, as they were collected specifically for this performance study. The animal study took place under GLP (Good Laboratory Practice) conditions, implying a controlled setting, but the specific country of origin is not stated.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number or qualifications of experts involved in establishing ground truth for the non-clinical performance tests. For phantom studies, ground truth is typically established by the known geometric properties of the phantom and precise measurement tools. For cadaver and animal studies, ground truth for accuracy metrics (like TRE) would likely be established by precise physical measurements by trained personnel.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for the non-clinical test sets. For accuracy measurements on phantoms and animal models, the ground truth is established objectively through physical and imaging measurements, so a multi-reader adjudication process as seen in clinical image interpretation studies is typically not applicable.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported in this 510(k) summary. The performance studies focused on the device's accuracy in a standalone or augmented setting, not on comparing human reader performance with and without AI assistance. The device is referred to as an "adjunct," implying it assists the clinician, but no quantitative measure of this assistance's effect on human reader performance is provided.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The performance testing (e.g., TRE and angular error measurements) implicitly evaluates the algorithm's accuracy in fusing and displaying images and tracked instruments. While a human operator uses the device, the reported metrics (TRE, angular error) reflect the system's inherent accuracy in aligning virtual objects with the physical world, which is a form of standalone performance for the core algorithmic functionality related to image registration and tracking. The "system accuracy verification" indicates an evaluation of the device's output accuracy independent of user interpretation skills.
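The summary likewise does not define how needle angular error is split into the in-plane and out-of-plane components reported in the table. One common geometric convention decomposes the direction error relative to the ultrasound image plane; a sketch under that assumption (the plane normal and directions below are illustrative):

```python
import numpy as np

def angular_errors_deg(d_est, d_true, plane_normal):
    """Split the needle-direction error into in-plane and out-of-plane
    parts relative to the ultrasound image plane.

    One common geometric convention, shown for illustration; the 510(k)
    summary does not define its decomposition. Inputs are 3-vectors.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    e = d_est / np.linalg.norm(d_est)
    t = d_true / np.linalg.norm(d_true)
    # Out-of-plane: difference in elevation angle out of the plane
    oop = np.degrees(np.arcsin(np.clip(e @ n, -1.0, 1.0))
                     - np.arcsin(np.clip(t @ n, -1.0, 1.0)))
    # In-plane: angle between the two directions projected into the plane
    pe, pt = e - (e @ n) * n, t - (t @ n) * n
    c = pe @ pt / (np.linalg.norm(pe) * np.linalg.norm(pt))
    inp = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
    return inp, abs(oop)

# Illustrative check: a 10-degree in-plane rotation of the true direction
d_true = np.array([1.0, 0.0, 0.0])
d_est = np.array([np.cos(np.radians(10)), np.sin(np.radians(10)), 0.0])
inp, oop = angular_errors_deg(d_est, d_true,
                              plane_normal=np.array([0.0, 0.0, 1.0]))
```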
7. The Type of Ground Truth Used
The ground truth for the non-clinical performance studies was established through:
- Known physical properties/measurements: For the phantom study.
- Precise physical measurements: For cadaver and animal models, likely using known target points or tracked instruments on the anatomy to verify the system's displayed position against the true physical position. The term "measurement validation of distances measured in the system against a ground truth" explicitly points to this.
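As a concrete illustration of validating system measurements against a ground truth, the per-target position error (the raw quantity that TRE statistics summarize) can be computed directly from paired 3D points; all coordinates below are hypothetical:

```python
import math

def distance_errors_mm(system_points, truth_points):
    """Per-target Euclidean error between system-reported and
    ground-truth 3D positions (mm). Illustrative sketch; the actual
    validation procedure is not detailed in the 510(k) summary."""
    return [math.dist(s, t) for s, t in zip(system_points, truth_points)]

system = [(10.0, 5.0, 2.0), (4.0, 4.0, 1.0)]   # system-reported targets
truth = [(10.5, 5.0, 2.0), (4.0, 3.0, 1.0)]    # ground-truth positions
errs = distance_errors_mm(system, truth)        # [0.5, 1.0]
```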
8. The Sample Size for the Training Set
The document does not provide information about the training set size or methodology for any machine learning components. It's possible that the "system" itself relies on traditional image processing and tracking algorithms rather than deep learning that would require a distinct "training set." If there are machine learning components, details are not disclosed in this summary.
9. How the Ground Truth for the Training Set Was Established
As no training set details are provided, the method for establishing its ground truth is also not described.