K Number
K213140
Device Name
Claritas iPET
Date Cleared
2021-12-22

(86 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
Intended Use

Claritas iPET is an image processing software intended for use by radiologists and nuclear medicine physicians for noise reduction, sharpening, and resolution improvement of PET images (including PET/CT and PET/MRI) obtained with any kind of radionuclides, e.g. fluorodeoxyglucose (FDG). Enhanced images will be saved in DICOM files and exist in conjunction with original images.

Device Description

Claritas iPET v1.0 Image Enhancement System is a medical image enhancement software, i.e., a Software as a Medical Device (SaMD), that increases image quality through an image processing and image fusion algorithm.

Claritas iPET can be used to enhance Positron Emission Tomography (PET) images with optional simultaneous Magnetic Resonance Imaging (MRI) or Computerized Tomography (CT) scans of the same subject. Claritas iPET takes DICOM (Digital Imaging and Communications in Medicine) files of PET, MRI, and CT images as input and produces an enhanced image in the same file format. The objective is to make DICOM images that are obscured and not clearly visible more visible, sharper, and clearer through the Claritas iPET image enhancement process. Claritas iPET is intended to be used by radiologists and nuclear medicine physicians in hospitals, radiology centres, and clinics, as well as by medical universities and research institutions.

The image improvement includes noise reduction, sharpening of organ boundaries, and super-resolution. With the help of the Claritas iPET software, high-quality PET scans can be produced. The Claritas iPET algorithm computes the fusion of functional (from PET) and anatomic (from MR or CT) information and is based on Non-Local Means filtering. The goal of the software is to process and visualize the content of DICOM files storing 3D voxel arrays, i.e. a uniformly spaced sequence of slices of a PET scan. The processing algorithm may also take as input another 3D voxel array storing the density values obtained by a CT or MRI scan. The PET and CT/MR volumes should at least partially overlap to exploit the additional anatomic information, and the CT or MR volume is expected to have a higher resolution than the PET volume for effective improvement. The sharpness, style, and detail of the visualization can be controlled by the user and can be compared to the visualization of the raw image data. During this process, no new feature is introduced that did not exist in the PET data: existing features are emphasized if they are also supported by the anatomy, or suppressed if they lie in the noise region and are not supported by the anatomy.
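The anatomy-guided Non-Local Means idea described above can be sketched as follows. This is an illustrative toy implementation on a single 2-D slice, not the vendor's actual algorithm; the window sizes, the filtering parameter `h`, and the choice to compute patch weights purely from the anatomic image are assumptions made for demonstration.

```python
import numpy as np

def guided_nlm(pet, anatomy, search=5, patch=3, h=0.1):
    """Denoise a 2-D PET slice with a non-local-means-style filter whose
    patch weights are computed on a co-registered anatomic image (CT/MR),
    so smoothing follows anatomic boundaries.  Illustrative only."""
    pad, pp = search // 2, patch // 2
    # pad both images so every pixel has a full search window and patch
    pet_p = np.pad(pet, pad + pp, mode="reflect")
    ana_p = np.pad(anatomy, pad + pp, mode="reflect")
    out = np.zeros_like(pet, dtype=float)
    for i in range(pet.shape[0]):
        for j in range(pet.shape[1]):
            ci, cj = i + pad + pp, j + pad + pp
            ref = ana_p[ci - pp:ci + pp + 1, cj - pp:cj + pp + 1]
            acc = wsum = 0.0
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = ana_p[ni - pp:ni + pp + 1, nj - pp:nj + pp + 1]
                    # weight comes from anatomic patch similarity,
                    # not from the noisy PET values themselves
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    acc += w * pet_p[ni, nj]
                    wsum += w
            out[i, j] = acc / wsum
    return out
```

Because the weights are driven by the anatomy, pixels on opposite sides of an organ boundary contribute little to each other, which matches the summary's claim that features are emphasized only where the anatomy supports them.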

AI/ML Overview

This 510(k) summary does not contain the level of detail necessary to fully address every question below, particularly regarding the specific acceptance criteria for regulatory clearance and the detailed methodology of human reader (MRMC) studies. Based on the provided text, here is a breakdown of the available information:


1. A table of acceptance criteria and the reported device performance

The document describes the device's performance in terms of improving image quality metrics (RMSE and SNR), but it does not explicitly state pre-defined acceptance criteria in a table format that would have been submitted to the FDA for regulatory clearance. Instead, it describes observed improvements.

Implied Acceptance Criteria (Performance Metric Improvements, based on the text):

Performance Metric | Acceptance Criteria (Implied)             | Reported Device Performance
RMSE Reduction     | At least 10% (high dosage / longer scans) | Decreased by at least 10% (high dosage / longer scans)
                   | 50% (low dosage / short scans)            | Decreased by 50% (low dosage / short scans)
SNR Increase       | At least 20% (high dosage / longer scans) | Increased by at least 20% (high dosage / longer scans)
                   | 4-5 times (low dosage / short scans)      | Increased by 4-5 times (low dosage / short scans)

Note: The document states "All tests have passed," indicating that these performance levels met internal acceptance thresholds.
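The summary reports RMSE and SNR improvements but does not define either metric. A common formulation is shown below; this is an assumption, since the submission may use different conventions (for example, SNR as a linear ratio rather than in decibels).

```python
import numpy as np

def rmse(img, ref):
    """Root-mean-square error between an image and its reference."""
    return float(np.sqrt(np.mean((img - ref) ** 2)))

def snr_db(img, ref):
    """Signal-to-noise ratio in dB: reference signal power over the
    power of the residual (img - ref), treated as noise."""
    noise = img - ref
    return float(10 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2)))
```

Under these definitions, "RMSE decreased by 10%" means `rmse(enhanced, ground_truth)` is at most 0.9 times `rmse(original, ground_truth)` for the same ground-truth volume.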


2. Sample size used for the test set and the data provenance

  • Test Set Sample Size: The document does not specify the exact number of cases/patients used for the test set. It mentions "real full body human PET scans" in the first option for ground truth, and the "Zubal mathematical phantom" in the second option. The number of scans for each of these is not quantified.
  • Data Provenance:
    • Real Human Data: "real full body human PET scans" (retrospective, as they were "enhanced" after acquisition). The country of origin is not specified.
    • Synthetic Data: "Zubal mathematical phantom" (synthetic/simulated data).

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

The document does not mention the use of experts to establish ground truth for performance testing. Instead, it defines ground truth through:

  1. Long scan / high dosage PET: "We executed a long scan and accepted the reconstructed results as ground truth."
  2. Mathematical phantom: "we took the Zubal mathematical phantom, and considered it as the ground truth."

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

No adjudication method involving experts is mentioned for the test set. Ground truth was established via the long scan/high dosage PET or the mathematical phantom.


5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

The document does not indicate that an MRMC comparative effectiveness study was performed or submitted as part of this 510(k) summary. The performance testing described focuses solely on the algorithm's effect on image metrics (RMSE, SNR) compared to a defined ground truth, not on human reader performance.


6. If standalone (i.e. algorithm-only, without human-in-the-loop) performance testing was done

Yes, the performance testing described is precisely a standalone (algorithm only) performance evaluation. The improvements in RMSE and SNR are calculated based on the algorithm's output compared to the ground truth, without human intervention or interpretation as part of the study design.


7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

Two types of ground truth were used:

  1. "Golden Standard" Imaging: In the first option, a "long scan" and "reconstructed results" of real full-body human PET scans were accepted as ground truth. This is akin to using a high-fidelity scan as the reference.
  2. Mathematical Phantom: In the second option, the "Zubal mathematical phantom" was used as the ground truth for simulated data.
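The phantom-based option can be illustrated with a minimal simulation: treat a digital phantom's activity map as the ground truth and generate a noisy "low-dose" acquisition via Poisson counting statistics, against which metrics like RMSE can then be computed. The `counts_per_unit` scale factor standing in for dose/scan time is a hypothetical parameter, not something described in the submission.

```python
import numpy as np

def simulate_low_dose(activity, counts_per_unit=50, rng=None):
    """Simulate a noisy low-dose PET image from a ground-truth activity
    map (e.g. a digital phantom).  Expected counts are Poisson-sampled
    and scaled back to activity units; smaller `counts_per_unit` plays
    the role of a lower dose or shorter scan, giving a noisier image.
    Illustrative assumption, not the submission's simulation pipeline."""
    if rng is None:
        rng = np.random.default_rng()
    expected = activity * counts_per_unit
    return rng.poisson(expected) / counts_per_unit
```

The same pattern mirrors the first ground-truth option as well: a long, high-count acquisition corresponds to a large `counts_per_unit` and is therefore close enough to the truth to serve as the reference.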

8. The sample size for the training set

The document does not specify the sample size for the training set. It describes the algorithm as a "modification of the non-local means algorithm" and states it "finds the weights using the statistical analysis of the PET data and the data of additional modalities (MRI / CT)." This implies a more traditional image processing approach rather than a purely deep learning model requiring a distinct, large training dataset. The predicate device does use a CNN, implying a training set, but the subject device (Claritas iPET) is explicitly described as different in its core technology.


9. How the ground truth for the training set was established

Since the document does not specify a distinct "training set" in the context of a machine learning model, it also does not describe how ground truth was established for a training set. The algorithm's mechanism (non-local means guided by other modalities and statistical analysis of PET data) suggests a design that may not require a labeled training set in the same way a deep learning model would.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).