K Number: K243933
Device Name: Ceevra Reveal 3+
Manufacturer:
Date Cleared: 2025-03-04 (74 days)
Product Code:
Regulation Number: 892.2050
Panel: RA
Reference & Predicate Devices:
Intended Use

Ceevra Reveal 3+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

The machine learning algorithms in use by Ceevra Reveal 3+ are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.

Device Description

Ceevra Reveal 3+, as modified ("Modified Reveal 3+"), manufactured by Ceevra, Inc. (the "Company"), is a software-as-a-medical-device (SaMD) product with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.

Clinicians view 3D images via the Mobile Image Viewer software application which runs on compatible mobile devices, and the Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.

Modified Reveal 3+ includes features that enable clinicians to interact with the 3D images including rotating, zooming, panning, selectively showing or hiding individual anatomical structures, and viewing measurements of or between anatomical structures.

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3+ device, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are implied by the reported performance metrics. The study evaluated the accuracy of segmentations generated by the machine learning models. The performance metrics reported are the Sørensen-Dice coefficient (DSC) for volume-based segmentation accuracy and the Hausdorff distance metric at the 95th percentile (HD-95) for surface distance accuracy.

| Anatomical Structure | Imaging Modality | Metric | Reported Device Performance |
|---|---|---|---|
| Prostate | MR prostate imaging | DSC | 0.90 |
| Bladder | MR prostate imaging | DSC | 0.93 |
| Neurovascular bundles | MR prostate imaging | HD-95 | 6.6 mm |
| Kidney | CT abdomen imaging | DSC | 0.92 |
| Kidney | MR abdomen imaging | DSC | 0.89 |
| Artery | CT abdomen imaging | DSC | 0.90 |
| Artery | MR abdomen imaging | DSC | 0.87 |
| Vein | CT abdomen imaging | DSC | 0.88 |
| Vein | MR abdomen imaging | DSC | 0.82 |
| Pulmonary artery | CT chest imaging | DSC | 0.82 |
| Pulmonary vein | CT chest imaging | DSC | 0.83 |
| Airways | CT chest imaging | DSC | 0.82 |
| Bronchopulmonary segments | CT chest imaging | DSC | 0.86 |
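The two metrics in the table are standard segmentation-accuracy measures. As a minimal sketch of how they are typically computed (a brute-force NumPy illustration; the function names and implementation are not from the submission):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient (DSC) between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def hd95(pred_pts: np.ndarray, truth_pts: np.ndarray) -> float:
    """95th-percentile Hausdorff distance (HD-95) between two surface
    point sets (N x 3 and M x 3 coordinate arrays), brute force."""
    # pairwise distances between every pred point and every truth point
    d = np.linalg.norm(pred_pts[:, None, :] - truth_pts[None, :, :], axis=-1)
    forward = d.min(axis=1)   # each pred point to its nearest truth point
    backward = d.min(axis=0)  # each truth point to its nearest pred point
    return float(np.percentile(np.concatenate([forward, backward]), 95))
```

DSC rewards volume overlap, while HD-95 penalizes boundary outliers, which is why a boundary-like structure such as the neurovascular bundles is reported in millimeters rather than as an overlap fraction.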

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size: A total of 133 imaging studies were used to evaluate the device.
  • Data Provenance: The text does not explicitly state the country of origin. However, it indicates that the device's machine learning algorithms are for use with adults (22 and over) and that "Ethnicity of patients in the datasets was reasonably correlated to the overall US population," implying the data is likely from the United States or at least representative of the US population. It was retrospective data, sourced from various scanning institutions. Independence of training and testing data was enforced at the institution level.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

The text states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
The specific number of experts is not mentioned.
Their qualifications are broadly described as "medical professionals," without further detail on their experience level or subspecialty (e.g., radiologist with X years of experience).

4. Adjudication Method for the Test Set

The text implies a direct comparison between the AI's segmentation and the "medical professionals'" segmentation. It does not specify an adjudication method (e.g., 2+1, 3+1 consensus with multiple readers) for establishing the ground truth if there were discrepancies among medical professionals. It simply states "segmentations generated by medical professionals." This might imply a single expert's ground truth, or a pre-established consensus for each case, but no specific method is detailed.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No, an MRMC comparative effectiveness study was not stated or described. The study focused on the standalone performance of the AI model compared to human-generated ground truth. There is no mention of comparing human readers with AI assistance versus human readers without AI assistance, so no effect size for reader improvement with AI assistance is provided.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

Yes, a standalone performance evaluation was done. The study specifically verified the performance of the "machine learning models" by comparing their generated segmentations directly against ground truth established by medical professionals.

7. The Type of Ground Truth Used

The ground truth was expert-generated segmentations: the text states it was established by "segmentations generated by medical professionals." Whether each case reflects a single expert or a consensus is not specified.

8. The Sample Size for the Training Set

The document does not provide the exact sample size for the training set. It only states that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both." It also mentions that "The data used in the device validation ensured diversity in patient population and scanner manufacturers."
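The institution-level independence described above amounts to a grouped train/test split: every study from a given scanning institution lands in exactly one partition. A minimal sketch of that logic (the function name and arguments are illustrative, not the Company's pipeline):

```python
def split_by_institution(study_ids, institutions, test_institutions):
    """Partition studies so that no scanning institution contributes
    imaging studies to both the training and the testing set.

    study_ids: list of study identifiers
    institutions: parallel list giving each study's source institution
    test_institutions: set of institutions reserved for testing
    """
    train, test = [], []
    for sid, inst in zip(study_ids, institutions):
        (test if inst in test_institutions else train).append(sid)
    return train, test
```

Splitting by institution rather than by individual study guards against leakage from site-specific factors (scanner models, acquisition protocols) that could otherwise inflate test performance.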

9. How the Ground Truth for the Training Set Was Established

The document does not explicitly state how the ground truth for the training set was established. However, given that the evaluation for the test set used segmentations generated by "medical professionals," it is highly probable that the ground truth for the training set was established in a similar manner, likely through manual segmentation by medical experts.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).