K Number
K210831
Device Name
OnQ Neuro
Manufacturer
Date Cleared
2021-11-19

(245 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

OnQ Neuro is a fully automated post-processing medical device software intended for analyzing and evaluating neurological MR image data.

  • OnQ Neuro is intended to provide automatic segmentation, quantification, and reporting of derived image metrics.
  • OnQ Neuro is additionally intended to provide automatic fusion of derived parametric maps with anatomical MRI data.
  • OnQ Neuro is intended for use on brain tumors, which are known/confirmed to be pathologically diagnosed cancer.
  • OnQ Neuro is intended for comparison of derived image metrics from multiple time-points.
  • The physician retains the ultimate responsibility for making the final diagnosis and treatment decision.
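The multi-time-point comparison in the intended use amounts to reporting the change in a derived metric between studies. A minimal illustrative sketch — the values, metric, and report wording here are hypothetical, not taken from the submission:

```python
# Hypothetical sketch of comparing a derived image metric (segmented tumor
# volume) across two time-points. Values are illustrative only.

def volume_change(baseline_ml: float, followup_ml: float) -> float:
    """Percent change in segmented tumor volume between two studies."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml

baseline = 12.4   # mL, segmented volume at time-point 1 (illustrative)
followup = 9.3    # mL, segmented volume at time-point 2 (illustrative)
print(f"Volume change: {volume_change(baseline, followup):+.1f}%")
```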
Device Description

OnQ Neuro is a fully automated post-processing medical device software that is used by radiologists, oncologists, and other clinicians to assist with analysis and interpretation of neurological MR images. It accepts DICOM images using supported protocols and performs 1) automatic segmentation and volumetric quantification of brain tumors, which are known/confirmed to be pathologically diagnosed cancer, 2) automatic post-acquisition analysis of diffusion-weighted magnetic resonance imaging (DWI) data and optional automated fusion of derived image data with anatomical MR images, and 3) comparison of derived image metrics from multiple time-points.
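The DWI post-processing mentioned above commonly derives an apparent diffusion coefficient (ADC) map from signal intensities acquired at two b-values. The summary does not disclose OnQ Neuro's internal method, so the sketch below uses only the generic mono-exponential diffusion model S(b) = S(0) * exp(-b * ADC):

```python
import numpy as np

def adc_map(s_b0: np.ndarray, s_b1000: np.ndarray, b: float = 1000.0) -> np.ndarray:
    """Mono-exponential ADC estimate (mm^2/s) from two b-value images.

    Standard DWI model: S(b) = S(0) * exp(-b * ADC), hence
    ADC = ln(S(0) / S(b)) / b. This is the textbook formula,
    not OnQ Neuro's proprietary pipeline.
    """
    eps = 1e-6  # guard against log(0) / division by zero in background voxels
    return np.log(np.maximum(s_b0, eps) / np.maximum(s_b1000, eps)) / b

# Illustrative per-voxel signal values (arbitrary units)
s0 = np.array([1000.0, 800.0])
s1000 = np.array([450.0, 300.0])
print(adc_map(s0, s1000))  # per-voxel ADC in mm^2/s
```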
The software outputs values as numerical volumes, and derived image data as grayscale intensity maps and as graphical color overlays on top of the anatomical image. OnQ Neuro output is provided in standard DICOM format as image series and reports that can be displayed on most third-party commercial DICOM workstations.
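A color overlay of a segmentation on grayscale anatomy is typically produced by alpha blending. This is a generic sketch; the blend function, overlay color, and alpha value are assumptions, not OnQ Neuro's actual rendering:

```python
import numpy as np

def color_overlay(anatomy: np.ndarray, mask: np.ndarray,
                  color=(255, 0, 0), alpha: float = 0.4) -> np.ndarray:
    """Blend a binary segmentation mask onto a grayscale slice as RGB.

    `anatomy` is a 2D uint8 image; `mask` is a boolean array of the same
    shape. Generic alpha blending -- the 510(k) summary only states that
    the output is an RGB overlay, not how it is rendered.
    """
    rgb = np.stack([anatomy] * 3, axis=-1).astype(np.float32)
    for c in range(3):
        channel = rgb[..., c]  # view into rgb, so assignment modifies rgb
        channel[mask] = (1 - alpha) * channel[mask] + alpha * color[c]
    return rgb.astype(np.uint8)

slice_ = np.full((4, 4), 100, dtype=np.uint8)   # uniform gray anatomy slice
tumor = np.zeros((4, 4), dtype=bool)
tumor[1:3, 1:3] = True                          # 2x2 "tumor" region
out = color_overlay(slice_, tumor)
```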
OnQ Neuro is a stand-alone medical device software package designed to be installed in the cloud or within a hospital's IT infrastructure on a server or PC-based workstation. Once installed and configured, the software automatically processes images sent from the originating system (MRI scanner or PACS). It is configured at installation to receive input DICOM files from a network location and to send output DICOM to a network destination.
The software is designed without the need for a user interface after installation. Any processing errors are reported either in the output series error report, or system log files.
OnQ Neuro software is intended to be used by trained personnel only and is to be installed by trained technical personnel.
Quantitative reports and derived image data sets are intended to be used as complementary information in the review of a case.
The OnQ Neuro software does not have any accessories or patient contacting components.

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for the OnQ Neuro device, based on the provided text:

Device: OnQ Neuro
Indications for Use: Fully automated post-processing medical device software for analyzing and evaluating neurological MR image data, providing automatic segmentation, quantification, and reporting of derived image metrics, automatic fusion of parametric maps with anatomical MRI data, and comparison of derived image metrics from multiple time-points. Intended for use on brain tumors, which are known/confirmed to be pathologically diagnosed cancer.


1. Acceptance Criteria and Reported Device Performance

  • Criterion: OnQ Neuro v1.1 model performance is consistent (95% performance) with expert rater manual segmentation performance.
    Result: Passed. OnQ Neuro v1.1.0 segments brain tumor ROIs with an accuracy that passed the product's acceptance criteria.
  • Criterion: OnQ Neuro v1.1 model meets minimum clinically acceptable levels.
    Result: Passed. Segmentation performance is consistent across scanner manufacturers, field strengths, tumor types, and patient sexes.
  • Criterion: Accuracy of automated segmentation compared to manual radiologist segmentations, quantified using the Dice similarity coefficient (extent of overlap between software-derived and ground-truth segmentations).
    Result: Not reported as a specific numeric value; stated only that it "passed the product's acceptance criteria."
  • Criterion: Accuracy of automated segmentation compared to manual radiologist segmentations, quantified using the squared correlation coefficient (R²) of segmented region-of-interest volumes.
    Result: Not reported as a specific numeric value; stated only that it "passed the product's acceptance criteria."
  • Criterion: Clinical validation testing demonstrates that the Tumor Segmentation RGB Overlay and Tumor Segmentation Report are correct, meet clinical expectations, and are safe and effective.
    Result: Passed. No specific metrics reported; stated as a successful outcome of clinical validation testing.
  • Criterion: Clinical validation testing demonstrates that the Restricted Signal Map and ADC map are correct, meet clinical expectations, and are safe and effective.
    Result: Passed. No specific metrics reported; stated as a successful outcome of clinical validation testing.
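Both accuracy metrics named above are standard. A minimal sketch of how the Dice similarity coefficient and the squared correlation of paired volumes are computed — generic definitions, not the manufacturer's validation code; the example masks and volumes are made up:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Squared Pearson correlation (R^2) of paired ROI volumes."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# Toy 1D "masks": software-derived vs. expert ground truth
pred = np.array([0, 1, 1, 1, 0], dtype=bool)
truth = np.array([0, 1, 1, 0, 0], dtype=bool)
print(dice(pred, truth))  # 2*2 / (3+2) = 0.8

# Toy paired volumes (mL): automated vs. manual segmentation
vols_auto = np.array([10.2, 5.1, 22.7])
vols_manual = np.array([9.8, 5.5, 21.9])
print(r_squared(vols_auto, vols_manual))
```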

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: Not explicitly stated. The text mentions "an independent test dataset" for segmentation performance testing.
  • Data Provenance: Not explicitly stated. It is not specified if the data was retrospective or prospective or the country of origin.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

  • Number of Experts: Not explicitly stated. The text refers to "expert-labeled segmentations" and "expert rater manual segmentation performance," implying multiple experts, but the exact number isn't quantified.
  • Qualifications of Experts: Not explicitly stated beyond "expert" and "radiologist" (in the context of manual segmentations). Specific details like years of experience or board certification are not provided.

4. Adjudication Method for the Test Set

The adjudication method is not explicitly stated. The text mentions "expert-labeled segmentations" as the ground truth, but does not detail how disagreements between experts were resolved (e.g., 2+1, 3+1).


5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study measuring whether human readers improve with AI assistance versus without it. The performance testing instead evaluates the accuracy of the automated segmentation against expert-labeled ground truth — a standalone evaluation in which expert performance serves as the reference, rather than a study of human readers aided by AI.

  • Effect Size: Not applicable, as a comparative effectiveness study with human readers was not described.

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

Yes, a standalone (algorithm only) performance assessment was done. The "Performance Testing Summary" directly addresses the device's automatic segmentation accuracy ("OnQ Neuro automatic segmentation performance is evaluated by comparing the software-derived segmentations to expert-labeled segmentations"). The device is described as "fully automated" and not having a user interface for manual manipulation after installation. The primary comparison is the AI's output against human expert ground truth.


7. The Type of Ground Truth Used

The type of ground truth used is primarily expert consensus/manual segmentations. The text specifies "expert-labeled segmentations of brain tumors" and "expert rater manual segmentation performance" as the basis for comparison for the segmentation accuracy.


8. The Sample Size for the Training Set

The sample size for the training set is not explicitly stated. The document focuses on the validation of the device, not its training process.


9. How the Ground Truth for the Training Set Was Established

How the ground truth for the training set was established is not explicitly stated. The document describes how the ground truth for the test set was established (expert-labeled segmentations), but not for the data used to train the algorithm.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).