K Number
K222676
Device Name
Ceevra Reveal 3
Manufacturer
Date Cleared
2023-04-25 (231 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Ceevra Reveal 3 is intended as a medical imaging system that allows the processing, review, and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices and that such processing may include the generation of preliminary segmentations of normal anatomy using software that employs machine learning and other computer vision algorithms. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 3 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

The machine learning algorithms in use by Ceevra Reveal 3 are for use only for adult patients (22 and over). Three-dimensional images for patients under the age of 22 or of unknown age will be generated without the use of any machine learning algorithms.
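The age-based routing described above (machine learning only for patients known to be 22 or older) can be sketched as a small gating helper. This is a hypothetical illustration of the stated rule, not Ceevra's code; the function name and signature are assumptions.

```python
def use_ml_segmentation(age_years):
    """Gate ML-based segmentation per the stated indication:
    ML is used only for adult patients (22 and over); patients
    under 22 or of unknown age (None) are processed without ML.
    """
    return age_years is not None and age_years >= 22
```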

Device Description

Ceevra Reveal 3 ("Reveal 3"), manufactured by Ceevra, Inc. (the "Company"), is a software as a medical device with two main functions: (1) it is used by Company personnel to generate three-dimensional (3D) images from existing patient CT and MR imaging, and (2) it is used by clinicians to view and interact with the 3D images during preoperative planning and intraoperatively.

Clinicians view 3D images via the Reveal 3 Mobile Image Viewer software application which runs on compatible mobile devices, and the Reveal 3 Desktop Image Viewer software application which runs on compatible computers. The 3D images may also be displayed on compatible external displays, or in virtual reality (VR) format with a compatible off-the-shelf VR headset.

Reveal 3 includes additional features that enable clinicians to interact with the 3D images including rotating, zooming, panning, and selectively showing or hiding individual anatomical structures.

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for the Ceevra Reveal 3, based on the provided FDA 510(k) summary:

Acceptance Criteria and Device Performance

| Acceptance Criteria (Metric) | Reported Device Performance |
| --- | --- |
| Prostate (from MR prostate imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Bladder (from MR prostate imaging) | 0.90 Sørensen-Dice coefficient (DSC) |
| Neurovascular bundles (from MR prostate imaging) | 7.8 mm Hausdorff distance at 95th percentile (HD-95) |
| Kidney (from CT abdomen imaging) | 0.89 Sørensen-Dice coefficient (DSC) |
| Kidney (from MR abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (from CT abdomen imaging) | 0.87 Sørensen-Dice coefficient (DSC) |
| Artery (from MR abdomen imaging) | 0.83 Sørensen-Dice coefficient (DSC) |
| Vein (from CT abdomen imaging) | 0.86 Sørensen-Dice coefficient (DSC) |
| Vein (from MR abdomen imaging) | 0.81 Sørensen-Dice coefficient (DSC) |
| Artery (from CT chest imaging) | 0.85 Sørensen-Dice coefficient (DSC) |
| Vein (from CT chest imaging) | 0.81 Sørensen-Dice coefficient (DSC) |

Note: The document explicitly states "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study." However, the summary does not state explicit numerical acceptance thresholds (e.g., "must be ≥ 0.85 DSC"); only the reported performance values are given. The table above therefore presents the reported values themselves as the basis on which compliance was demonstrated.
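The two metrics in the table, the Sørensen-Dice coefficient and the 95th-percentile Hausdorff distance, can both be computed from a pair of binary segmentation masks. The sketch below shows one common NumPy/SciPy formulation; it is an illustration of the metrics, not Ceevra's verification code, and the surface-extraction details (mask XOR its erosion) are an assumption.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Sørensen-Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=1.0):
    """Symmetric 95th-percentile Hausdorff distance between mask surfaces.

    `spacing` is the voxel size (scalar or per-axis), so the result is in
    physical units (e.g., mm) when spacing is given in mm.
    """
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: the mask minus its binary erosion.
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    # Pool both directed surface-distance sets, then take the 95th percentile.
    dists = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(dists, 95)
```

For identical masks both metrics are trivially perfect (DSC 1.0, HD-95 0.0); the DSC values in the table (0.81-0.90) are typical of well-performing organ segmentation models.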

Study Details:

  1. Sample Size used for the test set and the data provenance:

    • Sample Size: 141 imaging studies.
    • Data Provenance: Actual CT or MR imaging studies of patients.
      • No dataset contained more than one imaging study from any particular patient.
      • Independence of training and testing data was enforced at the level of the scanning institution (studies from a specific institution were used for either training or testing but not both).
      • Diversity in patient population was ensured across patient age, patient sex, and scanner manufacturers.
      • Subgroup analysis was performed for patient age, patient sex, and scanner manufacturers.
        • Non-prostate related datasets: 40% female, 60% male.
        • Across all datasets by age: 32% under 60, 32% 60-70, 30% over 70, 6% unknown age.
        • Scanner manufacturers included GE Medical Systems, Siemens, Toshiba, and Philips Medical Systems.
        • Ethnicity of patients was generally correlated to the overall US population.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document states "segmentations generated by medical professionals." It does not specify the number of medical professionals or their specific qualifications (e.g., radiologist with X years of experience).
  3. Adjudication method for the test set:

    • The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for resolving disagreements among medical professionals if multiple experts were used to create the ground truth. It simply states "segmentations generated by medical professionals."
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not mentioned or described. The study focused on the performance of the machine learning models in comparison to ground truth established by medical professionals.
  5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, performance was verified by comparing the segmentations generated by the machine learning models against the ground truth. This indicates a standalone performance evaluation of the algorithm.
  6. The type of ground truth used:

    • Expert consensus/manual segmentation by medical professionals. The document states: "Performance was verified by comparing segmentations generated by the machine learning models against segmentations generated by medical professionals from the same imaging study."
  7. The sample size for the training set:

    • The exact sample size for the training set is not explicitly stated. It only mentions that "No imaging study used to verify performance was used for training; independence of training and testing data were enforced at the level of the scanning institution, namely, studies sourced from a specific institution were used for either training or testing but could not be used for both."
  8. How the ground truth for the training set was established:

    • The document does not explicitly detail how the ground truth for the training set was established. However, given that the ground truth for the test set was established by "medical professionals," it is highly probable that the training set also used ground truth established by medical professionals or similar expert annotations.
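The institution-level independence described in items 1 and 7, where studies from a given scanning institution go to either training or testing but never both, is a grouped train/test split. The sketch below shows one way to implement it; the `studies` record layout and function name are assumptions for illustration, not Ceevra's pipeline.

```python
import random
from collections import defaultdict

def split_by_institution(studies, test_fraction=0.3, seed=0):
    """Split imaging studies into train/test sets such that no
    institution contributes studies to both sets (group-level
    independence, as described in the 510(k) summary).

    Each study is a dict with at least an "institution" key.
    """
    by_inst = defaultdict(list)
    for study in studies:
        by_inst[study["institution"]].append(study)
    institutions = sorted(by_inst)
    random.Random(seed).shuffle(institutions)
    # Assign whole institutions (not individual studies) to the test set.
    n_test = max(1, round(test_fraction * len(institutions)))
    test_insts = set(institutions[:n_test])
    train = [s for i in institutions[n_test:] for s in by_inst[i]]
    test = [s for i in test_insts for s in by_inst[i]]
    return train, test
```

Splitting at the institution level guards against leakage from site-specific factors (scanner model, acquisition protocol) inflating test performance, which per-study random splits would not prevent.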

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).