K Number
K251839
Date Cleared
2025-07-17

(31 days)

Product Code
Regulation Number
892.1200
Panel
RA
Reference & Predicate Devices
Intended Use

The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, cardiology, neurology, infection and inflammation. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.

The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.

Device Description

The proposed device, uMI Panvivo, combines a 295/235 mm axial field-of-view (FOV) PET with a 160-slice CT system to provide high-quality functional and anatomical images, fast PET/CT imaging, and a better patient experience. The system includes the PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, reconstruction computer, system software, and reconstruction software), vital signal module, and other accessories.

The uMI Panvivo was previously cleared by FDA via K243538. The main modifications to the uMI Panvivo (K243538) in this submission are the addition of Deep MAC (also named AI MAC), Digital Gating (also named Self-gating), OncoFocus (also named uExcel Focus and RMC), NeuroFocus (also named HMC), DeepRecon.PET (also named HYPER DLR or DLR), uExcel DPR (also named HYPER DPR or HYPER AiR), and uKinetics. Details about the modifications are listed below:

  • Deep MAC, Deep Learning-based Metal Artifact Correction (also named AI MAC), is an image reconstruction algorithm that combines physical beam-hardening correction with deep learning technology. It is intended to correct artifacts caused by metal implants and external metal objects.

  • Digital Gating (also named Self-gating, cleared via K232712) can automatically extract a respiratory motion signal from the list-mode data during acquisition, an approach called the data-driven (DD) method. The respiratory motion signal is calculated by tracking the location of the center of distribution (COD) within a body-cavity mask. Using this respiratory motion signal, the system can perform gated reconstruction without a respiratory capture device.

  • OncoFocus (also named uExcel Focus and RMC, cleared via K232712) is an AI-based algorithm that reduces respiratory motion artifacts in PET/CT images while also reducing PET/CT misalignment.

  • NeuroFocus (also named HMC) is a head motion correction solution; it employs a statistics-based method that corrects motion artifacts automatically using the centroid of distribution (COD), without manual parameter tuning, to generate motion-free images.

  • DeepRecon.PET (also named HYPER DLR or DLR, cleared via K193210) uses a deep learning technique to produce images with better SNR (signal-to-noise ratio) in a post-processing procedure.

  • uExcel DPR (also named HYPER DPR or HYPER AiR, cleared via K232712) is a deep learning-based PET reconstruction algorithm designed to enhance the SNR of reconstructed images. High-SNR images improve clinical diagnostic efficacy, particularly under low-count acquisition conditions (e.g., low-dose radiotracer administration or fast scanning protocols).

  • uKinetics (cleared via K232712) is a kinetic modeling toolkit for indirect dynamic image parametric analysis and direct parametric analysis of multi-pass dynamic data. An image-derived input function (IDIF) can be extracted from anatomical CT images and dynamic PET images. Both the IDIF and a population-based input function (PBIF) can be used as the input function of the Patlak model to generate kinetic images that reveal the biodistribution of the metabolized molecule, using indirect or direct methods.
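The COD tracking behind Digital Gating can be illustrated with a small, self-contained sketch: for each short time frame, the count-weighted axial centroid inside a body-cavity mask is computed, and its oscillation serves as the respiratory trace. Everything here (the simulation, array shapes, and the function name `cod_respiratory_signal`) is hypothetical; the actual algorithm operates on list-mode PET data with an automatically segmented mask.

```python
import numpy as np

def cod_respiratory_signal(frames, mask):
    """Toy center-of-distribution (COD) trace: for each time frame,
    compute the count-weighted axial centroid inside a body-cavity mask.
    frames: (T, Z, Y, X) count histograms; mask: (Z, Y, X) boolean."""
    z = np.arange(mask.shape[0], dtype=float)
    signal = []
    for f in frames:
        counts = np.where(mask, f, 0.0)
        per_slice = counts.sum(axis=(1, 2))      # counts per axial slice
        total = per_slice.sum()
        signal.append((z * per_slice).sum() / total if total > 0 else 0.0)
    return np.asarray(signal)

# Simulate a phantom whose activity centroid oscillates axially (breathing).
rng = np.random.default_rng(0)
T, Z, Y, X = 200, 40, 8, 8
mask = np.ones((Z, Y, X), dtype=bool)
t = np.arange(T)
true_phase = 20 + 3 * np.sin(2 * np.pi * t / 40)  # simulated breathing cycle
frames = np.zeros((T, Z, Y, X))
for i in range(T):
    z_prof = np.exp(-0.5 * ((np.arange(Z) - true_phase[i]) / 2.0) ** 2)
    lam = z_prof[:, None, None] * np.ones((Z, Y, X)) * 5
    frames[i] = rng.poisson(lam)                  # Poisson counting noise

sig = cod_respiratory_signal(frames, mask)
# The COD trace should track the simulated breathing phase closely.
r = np.corrcoef(sig, true_phase)[0, 1]
print(f"correlation with true phase: {r:.2f}")
```

Peaks of such a trace can then be used to bin events into respiratory gates, which is what enables gated reconstruction without an external respiratory capture device.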
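The indirect Patlak analysis that uKinetics performs reduces to a linear fit once an input function is available: plotting C_T(t)/C_p(t) against the "normalized time" integral of C_p over C_p yields a line whose slope is the net influx rate Ki. The sketch below uses synthetic data; the function name `patlak_fit` and all values are illustrative, not the vendor's implementation.

```python
import numpy as np

def patlak_fit(t, cp, ct, t_star=10.0):
    """Indirect Patlak analysis: linear fit of C_T/C_p against
    cumulative-integral(C_p)/C_p for frames with t > t_star.
    Returns (Ki, V0): net influx rate (slope) and intercept."""
    # Trapezoidal cumulative integral of the plasma input function.
    cum_cp = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    sel = t > t_star                      # use only post-equilibration frames
    x = cum_cp[sel] / cp[sel]
    y = ct[sel] / cp[sel]
    Ki, V0 = np.polyfit(x, y, 1)
    return Ki, V0

# Synthetic example: exponential plasma input, irreversible tracer uptake.
t = np.linspace(0.1, 60.0, 120)           # minutes
cp = 10.0 * np.exp(-0.1 * t)              # plasma input function (kBq/mL)
Ki_true, V0_true = 0.05, 0.3
cum_cp = np.concatenate(
    ([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
ct = Ki_true * cum_cp + V0_true * cp      # tissue TAC per the Patlak model
Ki_est, V0_est = patlak_fit(t, cp, ct)
print(f"Ki = {Ki_est:.3f} /min, V0 = {V0_est:.2f}")
```

Applied voxelwise, the fitted slope image is the parametric Ki map; with real data the input function would come from the IDIF or PBIF described above.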

AI/ML Overview

The provided FDA 510(k) clearance letter describes the uMI Panvivo PET/CT System and mentions several new software functionalities (Deep MAC, Digital Gating, OncoFocus, NeuroFocus, DeepRecon.PET, uExcel DPR, and uKinetics). The document includes performance data for four of these functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC.

The following analysis focuses on the acceptance criteria and study details for these four AI-based image processing/reconstruction algorithms as detailed in the document. The document presents these as "performance verification" studies.


Overview of Acceptance Criteria and Device Performance (for DeepRecon.PET, uExcel DPR, OncoFocus, DeepMAC)

Each of these four software functionalities has its own set of acceptance criteria and reported performance results, detailed below.

1. Table of Acceptance Criteria and Reported Device Performance

| Software Functionality | Evaluation Item | Evaluation Method | Acceptance Criteria | Reported Performance |
| --- | --- | --- | --- | --- |
| DeepRecon.PET | Image consistency | Measure the mean SUV of phantom background and liver ROIs (regions of interest) and calculate the bias; used to evaluate image bias. | The bias is less than 5%. | Pass |
| DeepRecon.PET | Image background noise | (a) Background variation (BV) in the IQ phantom; (b) liver and white-matter signal-to-noise ratio (SNR) in the patient cases. Used to evaluate noise-reduction performance. | DeepRecon.PET has lower BV and higher SNR than OSEM with Gaussian filtering. | Pass |
| DeepRecon.PET | Image contrast-to-noise ratio | (a) Contrast-to-noise ratio (CNR) of the hot spheres in the IQ phantom; (b) CNR of lesions. CNR measures signal level in the presence of noise; used to evaluate lesion detectability. | DeepRecon.PET has higher CNR than OSEM with Gaussian filtering. | Pass |
| uExcel DPR | Quantitative evaluation | Contrast recovery (CR), background variability (BV), and CNR calculated from NEMA IQ phantom data reconstructed with uExcel DPR and OSEM under acquisition conditions of 1 to 5 minutes per bed; coefficient of variation (COV) calculated from uniform cylindrical phantom images reconstructed with both methods. | The averaged CR, BV, and CNR of the uExcel DPR images should be superior to those of the OSEM images; uExcel DPR requires fewer counts to achieve a matched COV compared to OSEM. | Pass. NEMA IQ phantom analysis: an average noise reduction of 81% and an average SNR enhancement of 391% were observed. Uniform cylindrical phantom analysis: 1/10 of the counts obtains a matching noise level. |
| uExcel DPR | Qualitative evaluation | uExcel DPR images reconstructed at lower counts qualitatively compared with full-count OSEM images. | uExcel DPR reconstructions with reduced count levels demonstrate comparable or superior image quality relative to higher-count OSEM reconstructions. | Pass. 1.7~2.5 MBq/kg injected activity combined with 2~3 minute whole-body scans (4~6 bed positions) achieves comparable diagnostic image quality; clinical evaluation by radiologists showed images sufficient for clinical diagnosis, with uExcel DPR exhibiting lower noise, better contrast, and superior sharpness compared to OSEM. |
| OncoFocus | Volume relative to no motion correction (∆Volume) | Calculate the lesion volume relative to the no-motion-correction images. | The ∆Volume value is less than 0%. | Pass |
| OncoFocus | Maximal standardized uptake value relative to no motion correction (∆SUVmax) | Calculate the SUVmax relative to the no-motion-correction images. | The ∆SUVmax value is larger than 0%. | Pass |
| DeepMAC | Quantitative evaluation | For PMMA phantom data, compare the average CT value in the area affected by the metal substance with the same area of the control image, before and after DeepMAC. | After using DeepMAC, the difference between the average CT value in the affected area and the same area of the control image does not exceed 10 HU. | Pass |
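The quantitative acceptance metrics in the table are simple ROI statistics. The sketch below shows how such values might be computed; the formulas follow common NEMA-style definitions, but the submission's exact ROI handling is not disclosed, and all function names and example values are hypothetical.

```python
import numpy as np

def suv_bias(measured_mean, reference_mean):
    """Percent bias of a mean-SUV measurement against its reference
    (DeepRecon.PET criterion: |bias| < 5%)."""
    return 100.0 * (measured_mean - reference_mean) / reference_mean

def background_variability(bg_roi_means):
    """BV (%): std/mean over replicate background ROI means."""
    m = np.asarray(bg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

def contrast_to_noise(hot_mean, bg_mean, bg_std):
    """CNR: lesion/background contrast divided by background noise."""
    return (hot_mean - bg_mean) / bg_std

def cov(uniform_roi):
    """COV (%): voxelwise std/mean within a uniform-phantom ROI;
    used for the matched-noise-level comparison."""
    v = np.asarray(uniform_roi, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def pct_change(corrected, uncorrected):
    """∆ metric used for OncoFocus: percent change relative to the
    no-motion-correction value (pass: ∆Volume < 0%, ∆SUVmax > 0%)."""
    return 100.0 * (corrected - uncorrected) / uncorrected

# Example (hypothetical values) checked against the table's bias criterion.
bias = suv_bias(measured_mean=1.03, reference_mean=1.00)
print(f"bias = {bias:.1f}%  (criterion: < 5%)")
```

In the submission these statistics are compared between the AI reconstruction and its baseline (OSEM with Gaussian filtering, no motion correction, or no MAC), which is what makes them standalone algorithm metrics rather than reader-performance endpoints.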

2. Sample Sizes Used for the Test Set and Data Provenance

  • DeepRecon.PET:

    • Phantoms: NEMA IQ phantoms.
    • Clinical Patients: 20 volunteers.
    • Data Provenance: "collected from various clinical sites" and explicitly stated to be "different from the training data." The document does not specify country of origin or if it's retrospective/prospective, but "volunteers were enrolled" suggests prospective collection for the test set.
  • uExcel DPR:

    • Phantoms: Two NEMA IQ phantom datasets, two uniform cylindrical phantom datasets.
    • Clinical Patients: 19 human subjects.
    • Data Provenance: "derived from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites and during separated time periods," and "different from the training data." "Study cohort" and "human subjects" imply prospective collection for the test set.
  • OncoFocus:

    • Clinical Patients: 50 volunteers.
    • Data Provenance: "collected from general clinical scenarios" and explicitly stated to be "on cases different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
  • DeepMAC:

    • Phantoms: PMMA phantom datasets.
    • Clinical Patients: 20 human subjects.
    • Data Provenance: "from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites" and explicitly stated to be "different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

The document does not explicitly state that experts established "ground truth" for the quantitative metrics (e.g., SUV, CNR, BV, CR, ∆Volume, ∆SUVmax, HU differences) for the test sets. These seem to be derived from physical measurements on phantoms or calculations from patient image data using established methods.

  • For qualitative evaluation/clinical diagnosis assessment:

    • DeepRecon.PET: Two American Board of Radiology-certified physicians.
    • uExcel DPR: Two American board-certified nuclear medicine physicians.
    • OncoFocus: Two American Board of Radiology-certified physicians.
    • DeepMAC: Two American Board of Radiology-certified physicians.

    The exact years of experience for these experts are not provided; only their board-certification status is given.

4. Adjudication Method for the Test Set

The document states that the radiologists/physicians evaluated images "independently" (uExcel DPR) or simply "were evaluated by" (DeepRecon.PET, OncoFocus, DeepMAC). There is no mention of an adjudication method (such as 2+1 or 3+1 consensus) for discrepancies between reader evaluations for any of the functionalities. The evaluations appear to be separate assessments, with no stated consensus mechanism.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

  • The document describes qualitative evaluations by radiologists/physicians comparing the AI-processed images to conventionally processed images (OSEM/no motion correction/no MAC). These are MRMC comparative studies in the sense that multiple readers evaluated multiple cases.
  • However, these studies were designed to evaluate the image quality (e.g., diagnostic sufficiency, noise, contrast, sharpness, lesion detectability, artifact reduction) of the AI-processed images compared to baseline images, rather than to measure an improvement in human reader performance (e.g., diagnostic accuracy, sensitivity, specificity, reading time) when assisted by AI vs. without AI.
  • Therefore, the studies were not designed as comparative effectiveness studies measuring the effect size of human reader improvement with AI assistance. They focus on the perceived quality of the AI-processed images themselves.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

  • Yes, for DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC, quantitative (phantom and numerical) evaluations were conducted that represent the standalone performance of the algorithms in terms of image metrics (e.g., SUV bias, BV, SNR, CNR, CR, COV, ∆Volume, ∆SUVmax, HU differences). These quantitative results are directly attributed to the algorithm's output without human intervention for the measurement/calculation.
  • The qualitative evaluations by the physicians (described in point 3 above) also assess the output of the algorithm, but with human interpretation.

7. The Type of Ground Truth Used

  • For Quantitative Evaluations:

    • Phantoms: The "ground truth" for phantom studies is implicitly the known physical properties and geometry of the NEMA IQ and PMMA phantoms, allowing for quantitative measurements (e.g., true SUV, true CR, true signal-to-noise).
    • Clinical Data (DeepRecon.PET, uExcel DPR): For these reconstruction algorithms, "ground-truth images were reconstructed from fully-sampled raw data" for the training set. For the test set, comparisons seem to be made against OSEM with Gaussian filtering or full-count OSEM images as reference/comparison points, rather than an independent "ground truth" established by an external standard.
    • Clinical Data (OncoFocus): Comparisons are made relative to "no motion correction images" (∆Volume and ∆SUVmax), implying these are the baseline for comparison, not necessarily an absolute ground truth.
    • Clinical Data (DeepMAC): Comparisons are made to a "control image" without metal artifacts for quantitative assessment of HU differences.
  • For Qualitative Evaluations:

    • The "ground truth" is based on the expert consensus / qualitative assessment by the American Board-certified radiologists/nuclear medicine physicians, who compared images for attributes like noise, contrast, sharpness, motion artifact reduction, and diagnostic sufficiency. This suggests a form of expert consensus, although no specific adjudication is described. There's no mention of pathology or outcomes data as ground truth.

8. The Sample Size for the Training Set

The document provides the following for the training sets:

  • DeepRecon.PET: "image samples with different tracers, covering a wide and diverse range of clinical scenarios." No specific number provided.
  • uExcel DPR: "High statistical properties of the PET data acquired by the Long Axial Field-of-View (LAFOV) PET/CT system enable the model to better learn image features. Therefore, the training dataset for the AI module in the uExcel DPR system is derived from the uEXPLORER and uMI Panorama GS PET/CT systems." No specific number provided.
  • OncoFocus: "The training dataset of the segmentation network (CNN-BC) and the mumap synthesis network (CNN-AC) in OncoFocus was collected from general clinical scenarios. Each subject was scanned by UIH PET/CT systems for clinical protocols. All the acquisitions ensure whole-body coverage." No specific number provided.
  • DeepMAC: Not explicitly stated for the training set. Only validation dataset details are given.

9. How the Ground Truth for the Training Set Was Established

  • DeepRecon.PET: "Ground-truth images were reconstructed from fully-sampled raw data. Training inputs were generated by reconstructing subsampled data at multiple down-sampling factors." This implies that the "ground truth" for training was derived from high-quality, fully-sampled (and likely high-dose) PET data.
  • uExcel DPR: "Full-sampled data is used as the ground truth, while corresponding down-sampled data with varying down-sampling factors serves as the training input." Similar to DeepRecon.PET, high-quality, full-sampled data served as the ground truth.
  • OncoFocus:
    • For CNN-BC (body cavity segmentation network): "The input data of CNN-BC are CT-derived attenuation coefficient maps, and the target data of the network are body cavity region images." This suggests the target (ground truth) was pre-defined body cavity regions.
    • For CNN-AC (attenuation map (umap) synthesis network): "The input data are non-attenuation-corrected (NAC) PET reconstruction images, and the target data of the network are the reference CT attenuation coefficient maps." The ground truth was "reference CT attenuation coefficient maps," likely derived from actual CT scans.
  • DeepMAC: Not explicitly stated for the training set. The mention of pre-trained neural networks suggests an established training methodology, but the specific ground truth establishment is not detailed.
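The full-count/low-count training pairing described for DeepRecon.PET and uExcel DPR is commonly implemented by statistically thinning the acquired counts, so that each fully sampled dataset yields inputs at several down-sampling factors while remaining its own ground truth. The sketch below shows binomial thinning of a count histogram; this is an illustrative assumption, since the vendor's actual down-sampling pipeline is not described in the document.

```python
import numpy as np

def thin_counts(counts, keep_fraction, rng):
    """Binomial thinning: each recorded count is kept independently with
    probability keep_fraction, simulating a shorter or lower-dose scan."""
    return rng.binomial(counts.astype(np.int64), keep_fraction)

rng = np.random.default_rng(42)
full = rng.poisson(100.0, size=(64, 64))     # stand-in for full-count data
pairs = []
for factor in (2, 4, 10):                    # down-sampling factors
    low = thin_counts(full, 1.0 / factor, rng)
    pairs.append((low, full))                # (training input, ground truth)
    print(f"1/{factor}: kept {low.sum() / full.sum():.2%} of counts")
```

Under this scheme the reconstruction of the full-count data serves as the training target, and the thinned reconstructions serve as network inputs, which matches the "full-sampled ground truth / down-sampled input" description quoted above.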

§ 892.1200 Emission computed tomography system.

(a) Identification. An emission computed tomography system is a device intended to detect the location and distribution of gamma ray- and positron-emitting radionuclides in the body and produce cross-sectional images through computer reconstruction of the data. This generic type of device may include signal analysis and display equipment, patient and equipment supports, radionuclide anatomical markers, component parts, and accessories.

(b) Classification. Class II.