K Number
K243350
Device Name
Rapid Neuro3D
Manufacturer
Date Cleared
2025-01-22 (86 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Rapid Neuro3D (RN3D) is an image analysis software for imaging datasets acquired with conventional CT Angiography (CTA) from the aortic arch to the vertex of the head. The module removes bone, tissue, and venous vessels, providing a 3D and 2D visualization of the neurovasculature supplying arterial blood to the brain.

Outputs of the device include 3D rotational maximum intensity projections (MIPs) and volume renders (VR), along with curved planar reformations (CPR) of the isolated left and right internal carotid and vertebral arteries.
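
As a concrete illustration of what a rotational MIP output is (not a description of RN3D's own rendering pipeline), the following minimal NumPy/SciPy sketch projects a CT volume at a series of viewing angles. The volume orientation, the 10-degree angle step, and the use of scipy.ndimage.rotate are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import rotate

def rotational_mip(volume: np.ndarray, angles_deg) -> np.ndarray:
    """Build a rotational MIP series from a CT volume.

    volume: 3D array ordered (z, y, x); angles_deg: viewing angles in degrees.
    Returns an array of shape (len(angles_deg), z, y), one projection per angle.
    """
    projections = []
    for angle in angles_deg:
        # Spin the volume about the z (cranio-caudal) axis, i.e. in the (y, x) plane.
        spun = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        # A MIP keeps the maximum intensity along the viewing direction (here, x).
        projections.append(spun.max(axis=2))
    return np.stack(projections)

# Example: 36 projections at 10-degree increments from a dummy volume.
mips = rotational_mip(np.random.rand(64, 128, 128), range(0, 360, 10))
```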

Rapid Neuro3D is designed to support the physician in confirming the presence or absence of physician-identified lesions and in the evaluation, documentation, follow-up, and treatment planning of any such lesion.

Its results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment.

RN3D is indicated for adults.

Precautions/Exclusions:

o Series containing excessive patient motion or metal implants may impact module output quality.

o The RN3D module will not process series that meet the following module exclusion criteria:

• Series containing inadequate contrast agent.
- 1) X/Y FOV (transverse anatomical coverage) > 400 mm.
- 2) Z FOV (cranio-caudal anatomical coverage) below the minimum required coverage.
- 3) X/Y pixel spacing > 1.0 mm.
- 4) Z slice spacing > 1.25 mm.
- 5) Slice thickness > 1.5 mm.
- 6) Data acquired at x-ray tube voltage above 150 kVp.
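
Most of the exclusion criteria above are checks against standard DICOM acquisition attributes. As a minimal sketch only (using pydicom, a hypothetical series_excluded helper, and illustrative threshold constants mirroring the list above rather than the vendor's actual limits), such a pre-processing gate could look like this:

```python
from pydicom import dcmread

# Illustrative thresholds only, mirroring the exclusion criteria listed above.
MAX_XY_FOV_MM = 400.0
MAX_PIXEL_SPACING_MM = 1.0
MAX_SLICE_SPACING_MM = 1.25
MAX_SLICE_THICKNESS_MM = 1.5
MAX_KVP = 150.0

def series_excluded(dicom_paths) -> bool:
    """Return True if any slice in the series trips a header-level exclusion criterion.

    Contrast adequacy is not checked here: it depends on image content, not headers.
    """
    for path in dicom_paths:
        ds = dcmread(path, stop_before_pixels=True)
        rows, cols = int(ds.Rows), int(ds.Columns)
        row_sp, col_sp = (float(v) for v in ds.PixelSpacing)
        if max(rows * row_sp, cols * col_sp) > MAX_XY_FOV_MM:
            return True                      # in-plane FOV too large
        if max(row_sp, col_sp) > MAX_PIXEL_SPACING_MM:
            return True                      # pixel spacing too coarse
        if float(getattr(ds, "SpacingBetweenSlices", 0)) > MAX_SLICE_SPACING_MM:
            return True                      # slice spacing too coarse
        if float(ds.SliceThickness) > MAX_SLICE_THICKNESS_MM:
            return True                      # slices too thick
        if float(ds.KVP) > MAX_KVP:
            return True                      # tube voltage out of range
    return False
```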

Device Description

Rapid Neuro3D (RN3D) is a Software as a Medical Device (SaMD) image processing module and is part of the Rapid Platform. It allows for visualization of the arterial vessels of the head and neck and identifies and segments arteries of interest in patient CTA exams.

The Rapid Platform provides common functions and services to support image processing modules, such as DICOM filtering and job and interface management. The Rapid Platform can be installed on-premises within the customer's infrastructure behind their firewall, or in a hybrid on-premises/cloud configuration. The software can be installed on dedicated hardware or a virtual machine. The Rapid Platform accepts DICOM images and, upon processing, returns the processed DICOM images to the source imaging modality or PACS.
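
To make the last step above concrete (sending a processed DICOM result back to a PACS), here is a minimal C-STORE sketch using pynetdicom. It is not the Rapid Platform's actual interface; the host, port, AE titles, file name, and the choice of the Secondary Capture SOP class are placeholders for the example.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import SecondaryCaptureImageStorage

# Placeholder AE titles and destination address.
ae = AE(ae_title="RAPID_MODULE")
ae.add_requested_context(SecondaryCaptureImageStorage)

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS_AE")
if assoc.is_established:
    ds = dcmread("processed_output.dcm")   # one processed result object
    status = assoc.send_c_store(ds)        # push it to the PACS
    if status:
        print(f"C-STORE completed, status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association with the PACS could not be established")
```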

The RN3D image processing module is based on pre-trained artificial intelligence/machine learning (AI/ML) models and facilitates a 3D visualization of the neurovasculature supplying arterial blood to the brain. The module analyzes input CTA images in DICOM format and provides a corresponding DICOM series output that can be used by DICOM viewers, clinical workstations, and PACS systems.
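
On the input side, a minimal sketch (pydicom and NumPy, not RN3D's internal pipeline) of assembling a CTA series into a Hounsfield-unit volume for downstream segmentation might look like the following; the file paths, axial-slice ordering, and uniform rescale tags are assumptions of the example.

```python
import numpy as np
from pydicom import dcmread

def load_cta_volume(dicom_paths) -> np.ndarray:
    """Read a CTA series into a (z, y, x) volume in Hounsfield units."""
    slices = [dcmread(p) for p in dicom_paths]
    # Sort slices along the patient z-axis using Image Position (Patient).
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored pixel values to Hounsfield units via the rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept

# The resulting array is what a segmentation model would consume; the output
# mask would then be written back out as a DICOM series for the viewer/PACS.
```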

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for the Rapid Neuro3D device, extracted from the provided text:


1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are implicitly defined by the primary endpoints of the studies.

| Metric / Endpoint | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Segmentation Quality Study | | |
| Clinical Accuracy (MIP images) | Passed | 99.8% agreement with expert consensus for MIP images |
| Clinical Accuracy (VR images) | Passed | 98.6% agreement with expert consensus for VR images |
| Clinical Accuracy (SSE images) | Passed | 100.0% agreement with expert consensus for SSE images |
| Clinical Accuracy (CPR images) | Passed | 100.0% agreement with expert consensus for CPR images |
| Labeling Accuracy | 100% of anatomical labels applied found to be accurate | 100% of the anatomical labels applied found to be accurate for the vessels visualized |
| Segmentation Accuracy Study | | |
| Average Dice Coefficient (Extracranial) | Met | 0.89 |
| Average Hausdorff Distance (Extracranial) | Met | 0.44 mm |
| Average Dice Coefficient (Intracranial) | Substantial equivalence (presumably to predicate) | 0.97 (between the module and the predicate device) |
| Average Hausdorff Distance (Intracranial) | Substantial equivalence (presumably to predicate) | 0.44 mm (between the module and the predicate device) |
| Average Hausdorff Distance (CPR centerline) | Met | 0.31 mm (between the module and ground truth) |
| Ground Truth Reproducibility | Within-case variance of expert segmentations (for the segmentation accuracy study) demonstrating strong reproducibility of ground truth segmentations | 1% within-case variance, demonstrating strong reproducibility of ground truth segmentations (this is not a direct device performance metric but confirms the reliability of the ground truth used for evaluation) |
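
For readers unfamiliar with the two segmentation metrics in the table above, the following is a minimal sketch of how a Dice coefficient and a symmetric Hausdorff distance can be computed for a pair of binary masks (NumPy/SciPy, brute force over all foreground voxels). It is not the submission's evaluation code, and the voxel spacing argument and random example masks are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks: 1.0 means perfect agreement."""
    a, b = a.astype(bool), b.astype(bool)
    overlap = np.logical_and(a, b).sum()
    return 2.0 * overlap / (a.sum() + b.sum())

def hausdorff_mm(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (in mm) between the foreground voxel sets."""
    pts_a = np.argwhere(a) * np.asarray(spacing)   # voxel indices -> millimetres
    pts_b = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Example with two random masks; real use would pass the module's output mask
# and the expert ground-truth mask defined on the same voxel grid.
m1 = np.random.rand(32, 64, 64) > 0.5
m2 = np.random.rand(32, 64, 64) > 0.5
print(dice_coefficient(m1, m2), hausdorff_mm(m1, m2, spacing=(1.0, 0.5, 0.5)))
```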

2. Sample Sizes and Data Provenance for the Test Set

  • Segmentation Quality Study:

    • Sample Size: 120 CTA cases from 115 patients (65 female; 50 male; aged from 27 to 90+).
    • Data Provenance: 104 US, 16 OUS (Outside US).
    • Retrospective/Prospective: Not explicitly stated, but the statement that the "test dataset was independent from the data used during model training" suggests retrospective data collection.
  • Segmentation Accuracy Study:

    • Sample Size: 50 CTA cases from 48 patients (24 female; 24 male; aged from 27 to 90+).
    • Data Provenance: 43 US, 7 OUS.
    • Retrospective/Prospective: Not explicitly stated, but the statement that the "test dataset was independent from the data used during model training" suggests retrospective data collection.

3. Number and Qualifications of Experts for Ground Truth

  • Number of Experts: Up to three clinical experts (for the segmentation quality study). The document does not specify if the same number of experts were used for the segmentation accuracy study's ground truth.
  • Qualifications: "Clinical experts." No further specific qualifications (e.g., years of experience, subspecialty) are provided in the text.

4. Adjudication Method for the Test Set

  • Method: A "consensus of up to three clinical experts" was used to determine clinical accuracy in the segmentation quality study. For the segmentation accuracy study, ground truth segmentations were established and their reproducibility was assessed, but a specific adjudication method (e.g., 2+1 or 3+1) is not described.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Was it done? No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly described or reported. The studies described focus on the standalone performance of the AI device against expert consensus or defined ground truth.

6. Standalone (Algorithm Only) Performance Study

  • Was it done? Yes. Both the "Segmentation Quality Study" and the "Segmentation Accuracy Study" evaluated the standalone performance of the Rapid Neuro3D algorithm. The outputs were compared against source DICOM images and established ground truth, respectively, without mentioning human-in-the-loop performance improvement.

7. Type of Ground Truth Used

  • Segmentation Quality Study: Expert consensus against source DICOM images.
  • Segmentation Accuracy Study: For the extracranial region and the CPR centerlines, the module's output was compared against "ground truth" (presumably expert-annotated segmentations). For the intracranial region, the output was compared to the predicate device's performance, implying the predicate served as the reference for substantial equivalence in that specific context. The document also mentions "reproducibility (of ground truths)," indicating expert delineations.

8. Sample Size for the Training Set

  • The document states, "The test dataset was independent from the data used during model training," but it does not provide the specific sample size for the training set.

9. How Ground Truth for the Training Set Was Established

  • The document does not provide details on how ground truth was established for the training set. It only mentions that the test set data was independent from the training data.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).