510(k) Data Aggregation
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, cardiology, neurology, infection and inflammation. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
The proposed device uMI Panvivo combines a 295/235 mm axial field of view (FOV) PET and a 160-slice CT system to provide high-quality functional and anatomical images, fast PET/CT imaging, and an improved patient experience. The system includes a PET system, a CT system, a patient table, a power distribution unit, a control and reconstruction system (host, monitor, reconstruction computer, system software, and reconstruction software), a vital signal module, and other accessories.
The uMI Panvivo was previously cleared by FDA via K243538. The main modifications to the uMI Panvivo (K243538) in this submission are the addition of Deep MAC (also named AI MAC), Digital Gating (also named Self-gating), OncoFocus (also named uExcel Focus and RMC), NeuroFocus (also named HMC), DeepRecon.PET (also named HYPER DLR or DLR), uExcel DPR (also named HYPER DPR or HYPER AiR), and uKinetics. Details about the modifications are listed below:
- Deep MAC (Deep Learning-based Metal Artifact Correction, also named AI MAC) is an image reconstruction algorithm that combines physical beam-hardening correction and deep learning technology. It is intended to correct artifacts caused by metal implants and external metal objects.
- Digital Gating (also named Self-gating, cleared via K232712) automatically extracts a respiratory motion signal from the list-mode data during acquisition, a data-driven (DD) method. The respiratory motion signal is calculated by tracking the location of the center of distribution (COD) within a body cavity mask. Using this signal, the system can perform gated reconstruction without a respiratory capture device.
- OncoFocus (also named uExcel Focus and RMC, cleared via K232712) is an AI-based algorithm that reduces respiratory motion artifacts in PET/CT images while also reducing PET/CT misalignment.
- NeuroFocus (also named HMC) is a head motion correction solution that employs a statistics-based method to correct motion artifacts automatically using the centroid of distribution (COD), without manual parameter tuning, and generate motion-free images.
- DeepRecon.PET (also named HYPER DLR or DLR, cleared via K193210) uses a deep learning technique to produce higher-SNR (signal-to-noise ratio) images in a post-processing procedure.
- uExcel DPR (also named HYPER DPR or HYPER AiR, cleared via K232712) is a deep learning-based PET reconstruction algorithm designed to enhance the SNR of reconstructed images. High-SNR images improve clinical diagnostic efficacy, particularly under low-count acquisition conditions (e.g., low-dose radiotracer administration or fast scanning protocols).
- uKinetics (cleared via K232712) is a kinetic modeling toolkit for indirect dynamic image parametric analysis and direct parametric analysis of multipass dynamic data. An image-derived input function (IDIF) can be extracted from anatomical CT images and dynamic PET images. Both the IDIF and a population-based input function (PBIF) can be used as the input function of the Patlak model to generate kinetic images that reveal a biodistribution map of the metabolized molecule, using indirect and direct methods (see the sketch after this list).
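As context for the uKinetics description above: indirect Patlak analysis fits a straight line to the late portion of a tissue time-activity curve normalized by the input function (IDIF or PBIF), and the slope of that line is the net influx rate Ki. The sketch below is a minimal, generic illustration of such a fit with synthetic data; the function names, the t* cutoff, and the toy curves are assumptions for illustration only, not details of the uKinetics implementation.

```python
import numpy as np

def cumtrapz0(y, x):
    """Cumulative trapezoidal integral of y over x, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(np.diff(x) * 0.5 * (y[1:] + y[:-1]))))

def patlak_fit(tissue_tac, plasma_tac, times, t_star=20.0):
    """Indirect Patlak fit for one tissue time-activity curve (TAC).

    tissue_tac : activity concentration in a tissue ROI/voxel at each frame
    plasma_tac : input function (e.g., IDIF or PBIF) sampled at the same frame times
    times      : frame mid-times in minutes
    t_star     : time after which the Patlak plot is treated as linear
    Returns (Ki, V): net influx rate constant (slope) and intercept.
    """
    x = cumtrapz0(plasma_tac, times) / plasma_tac   # "Patlak time"
    y = tissue_tac / plasma_tac                     # normalized tissue activity
    late = times >= t_star                          # fit only the linear tail
    ki, v = np.polyfit(x[late], y[late], 1)
    return ki, v

# Toy example: an irreversible-uptake tissue curve built from a synthetic input function.
t = np.linspace(1.0, 60.0, 30)                      # frame mid-times (min)
cp = 100.0 * np.exp(-0.1 * t)                       # illustrative plasma input function
ct = 0.02 * cumtrapz0(cp, t) + 0.3 * cp             # Ki = 0.02 /min, blood-volume term 0.3
print(patlak_fit(ct, cp, t))                        # -> approximately (0.02, 0.3)
```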
The provided FDA 510(k) clearance letter describes the uMI Panvivo PET/CT System and mentions several new software functionalities (Deep MAC, Digital Gating, OncoFocus, NeuroFocus, DeepRecon.PET, uExcel DPR, and uKinetics). The document includes performance data for four of these functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC.
The following analysis focuses on the acceptance criteria and study details for these four AI-based image processing/reconstruction algorithms as detailed in the document. The document presents these as "performance verification" studies.
Overview of Acceptance Criteria and Device Performance (for DeepRecon.PET, uExcel DPR, OncoFocus, DeepMAC)
The document details the evaluation of four specific software functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC. Each of these has its own set of acceptance criteria and reported performance results, detailed below.
1. Table of Acceptance Criteria and Reported Device Performance
| Software Functionality | Evaluation Item | Evaluation Method | Acceptance Criteria | Reported Performance |
|---|---|---|---|---|
| DeepRecon.PET | Image consistency | Measure the mean SUV of phantom background and liver ROIs (regions of interest) and calculate the bias; used to evaluate image bias. | The bias is less than 5%. | Pass |
| DeepRecon.PET | Image background noise | a) Background variation (BV) in the IQ phantom; b) liver and white matter signal-to-noise ratio (SNR) in the patient cases; used to evaluate noise-reduction performance. | DeepRecon.PET has lower BV and higher SNR than OSEM with Gaussian filtering. | Pass |
| DeepRecon.PET | Image contrast-to-noise ratio | a) Contrast-to-noise ratio (CNR) of the hot spheres in the IQ phantom; b) CNR of lesions. CNR measures signal level in the presence of noise and is used to evaluate lesion detectability. | DeepRecon.PET has higher CNR than OSEM with Gaussian filtering. | Pass |
| uExcel DPR | Quantitative evaluation | Contrast recovery (CR), background variability (BV), and contrast-to-noise ratio (CNR) calculated from NEMA IQ phantom data reconstructed with uExcel DPR and OSEM under acquisition conditions of 1 to 5 minutes per bed; coefficient of variation (COV) calculated from uniform cylindrical phantom data reconstructed with both methods. | The averaged CR, BV, and CNR of the uExcel DPR images should be superior to those of the OSEM images; uExcel DPR requires fewer counts than OSEM to achieve a matched COV. | Pass. NEMA IQ phantom analysis: an average noise reduction of 81% and an average SNR enhancement of 391%. Uniform cylindrical phantom analysis: 1/10 of the counts achieves a matching noise level. |
| uExcel DPR | Qualitative evaluation | uExcel DPR images reconstructed at lower counts qualitatively compared with full-count OSEM images. | uExcel DPR reconstructions at reduced count levels demonstrate comparable or superior image quality relative to higher-count OSEM reconstructions. | Pass. A 1.7~2.5 MBq/kg radiopharmaceutical injection combined with 2~3 minutes of whole-body scanning (4~6 bed positions) achieves comparable diagnostic image quality. Clinical evaluation by radiologists showed images sufficient for clinical diagnosis, with uExcel DPR exhibiting lower noise, better contrast, and superior sharpness compared to OSEM. |
| OncoFocus | Volume relative to no motion correction (∆Volume) | Calculate the volume relative to the no-motion-correction images. | The ∆Volume value is less than 0%. | Pass |
| OncoFocus | Maximal standardized uptake value relative to no motion correction (∆SUVmax) | Calculate the SUVmax relative to the no-motion-correction images. | The ∆SUVmax value is larger than 0%. | Pass |
| DeepMAC | Quantitative evaluation | For PMMA phantom data, compare the average CT value in the area affected by the metal object with the same area of the control image, before and after DeepMAC. | After using DeepMAC, the difference between the average CT value in the affected area and the same area of the control image does not exceed 10 HU. | Pass |
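The 510(k) summary reports these ROI-based metrics without giving explicit formulas. The sketch below shows how such metrics are conventionally computed; NEMA-style definitions are assumed, and the function names and numbers are illustrative rather than values or code from the submission.

```python
import numpy as np

def suv_bias(mean_suv_test, mean_suv_reference):
    """Relative bias of the mean SUV in an ROI, e.g. DeepRecon.PET vs. reference (acceptance: < 5%)."""
    return abs(mean_suv_test - mean_suv_reference) / mean_suv_reference * 100.0

def background_variability(bg_roi_means):
    """NEMA-style background variability: std/mean of the background ROI means, in percent."""
    return np.std(bg_roi_means, ddof=1) / np.mean(bg_roi_means) * 100.0

def contrast_to_noise_ratio(lesion_mean, bg_mean, bg_std):
    """CNR of a hot sphere or lesion against the local background."""
    return (lesion_mean - bg_mean) / bg_std

def delta_metric(corrected, uncorrected):
    """Relative change vs. the no-motion-correction baseline, e.g. dSUVmax > 0%, dVolume < 0%."""
    return (corrected - uncorrected) / uncorrected * 100.0

# Illustrative numbers only (not taken from the submission):
bg = np.array([1.02, 0.98, 1.01, 0.99, 1.00])
print(suv_bias(1.03, 1.00))                          # -> 3.0 (% bias, would pass the < 5% criterion)
print(background_variability(bg))                    # -> ~1.6 (%)
print(contrast_to_noise_ratio(4.0, 1.0, 0.1))        # -> 30.0
print(delta_metric(corrected=6.2, uncorrected=5.0))  # dSUVmax -> +24%
```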
2. Sample Sizes Used for the Test Set and Data Provenance
DeepRecon.PET:
- Phantoms: NEMA IQ phantoms.
- Clinical Patients: 20 volunteers.
- Data Provenance: "collected from various clinical sites" and explicitly stated to be "different from the training data." The document does not specify country of origin or if it's retrospective/prospective, but "volunteers were enrolled" suggests prospective collection for the test set.
uExcel DPR:
- Phantoms: Two NEMA IQ phantom datasets, two uniform cylindrical phantom datasets.
- Clinical Patients: 19 human subjects.
- Data Provenance: "derived from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites and during separated time periods," and "different from the training data." "Study cohort" and "human subjects" imply prospective collection for the test set.
OncoFocus:
- Clinical Patients: 50 volunteers.
- Data Provenance: "collected from general clinical scenarios" and explicitly stated to be "on cases different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
DeepMAC:
- Phantoms: PMMA phantom datasets.
- Clinical Patients: 20 human subjects.
- Data Provenance: "from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites" and explicitly stated to be "different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state that experts established "ground truth" for the quantitative metrics (e.g., SUV, CNR, BV, CR, ∆Volume, ∆SUVmax, HU differences) for the test sets. These seem to be derived from physical measurements on phantoms or calculations from patient image data using established methods.
For qualitative evaluation/clinical diagnosis assessment:
- DeepRecon.PET: Two American Board of Radiology-certified physicians.
- uExcel DPR: Two American board-certified nuclear medicine physicians.
- OncoFocus: Two American Board of Radiology-certified physicians.
- DeepMAC: Two American Board of Radiology-certified physicians.
The exact years of experience for these experts are not provided, only their board certification status.
4. Adjudication Method for the Test Set
The document states that the radiologists/physicians evaluated images "independently" (uExcel DPR) or simply that images "were evaluated by" the physicians (DeepRecon.PET, OncoFocus, DeepMAC). There is no mention of an adjudication method (such as 2+1 or 3+1 consensus) for resolving discrepancies between reader evaluations for any of the functionalities. The evaluations appear to be separate assessments with no stated consensus mechanism.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- The document describes qualitative evaluations by radiologists/physicians comparing the AI-processed images to conventionally processed images (OSEM/no motion correction/no MAC). These are MRMC comparative studies in the sense that multiple readers evaluated multiple cases.
- However, these studies were designed to evaluate the image quality (e.g., diagnostic sufficiency, noise, contrast, sharpness, lesion detectability, artifact reduction) of the AI-processed images compared to baseline images, rather than to measure an improvement in human reader performance (e.g., diagnostic accuracy, sensitivity, specificity, reading time) when assisted by AI vs. without AI.
- Therefore, the studies were not designed as comparative effectiveness studies measuring the effect size of human reader improvement with AI assistance. They focus on the perceived quality of the AI-processed images themselves.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, for DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC, quantitative (phantom and numerical) evaluations were conducted that represent the standalone performance of the algorithms in terms of image metrics (e.g., SUV bias, BV, SNR, CNR, CR, COV, ∆Volume, ∆SUVmax, HU differences). These quantitative results are directly attributed to the algorithm's output without human intervention for the measurement/calculation.
- The qualitative evaluations by the physicians (described in point 3 above) also assess the output of the algorithm, but with human interpretation.
7. The Type of Ground Truth Used
For Quantitative Evaluations:
- Phantoms: The "ground truth" for phantom studies is implicitly the known physical properties and geometry of the NEMA IQ and PMMA phantoms, allowing for quantitative measurements (e.g., true SUV, true CR, true signal-to-noise).
- Clinical Data (DeepRecon.PET, uExcel DPR): For these reconstruction algorithms, "ground-truth images were reconstructed from fully-sampled raw data" for the training set. For the test set, comparisons seem to be made against OSEM with Gaussian filtering or full-count OSEM images as reference/comparison points, rather than an independent "ground truth" established by an external standard.
- Clinical Data (OncoFocus): Comparisons are made relative to "no motion correction images" (∆Volume and ∆SUVmax), implying these are the baseline for comparison, not necessarily an absolute ground truth.
- Clinical Data (DeepMAC): Comparisons are made to a "control image" without metal artifacts for quantitative assessment of HU differences.
For Qualitative Evaluations:
- The "ground truth" is based on the expert consensus / qualitative assessment by the American Board-certified radiologists/nuclear medicine physicians, who compared images for attributes like noise, contrast, sharpness, motion artifact reduction, and diagnostic sufficiency. This suggests a form of expert consensus, although no specific adjudication is described. There's no mention of pathology or outcomes data as ground truth.
8. The Sample Size for the Training Set
The document provides the following for the training sets:
- DeepRecon.PET: "image samples with different tracers, covering a wide and diverse range of clinical scenarios." No specific number provided.
- uExcel DPR: "High statistical properties of the PET data acquired by the Long Axial Field-of-View (LAFOV) PET/CT system enable the model to better learn image features. Therefore, the training dataset for the AI module in the uExcel DPR system is derived from the uEXPLORER and uMI Panorama GS PET/CT systems." No specific number provided.
- OncoFocus: "The training dataset of the segmentation network (CNN-BC) and the mumap synthesis network (CNN-AC) in OncoFocus was collected from general clinical scenarios. Each subject was scanned by UIH PET/CT systems for clinical protocols. All the acquisitions ensure whole-body coverage." No specific number provided.
- DeepMAC: Not explicitly stated for the training set. Only validation dataset details are given.
9. How the Ground Truth for the Training Set Was Established
- DeepRecon.PET: "Ground-truth images were reconstructed from fully-sampled raw data. Training inputs were generated by reconstructing subsampled data at multiple down-sampling factors." This implies that the "ground truth" for training was derived from high-quality, fully-sampled (and likely high-dose) PET data.
- uExcel DPR: "Full-sampled data is used as the ground truth, while corresponding down-sampled data with varying down-sampling factors serves as the training input." Similar to DeepRecon.PET, high-quality, full-sampled data served as the ground truth.
- OncoFocus:
- For CNN-BC (body cavity segmentation network): "The input data of CNN-BC are CT-derived attenuation coefficient maps, and the target data of the network are body cavity region images." This suggests the targets (ground truth) were pre-defined body cavity region images.
- For CNN-AC (attenuation map (umap) synthesis network): "The input data are non-attenuation-corrected (NAC) PET reconstruction images, and the target data of the network are the reference CT attenuation coefficient maps." The ground truth was "reference CT attenuation coefficient maps," likely derived from actual CT scans.
- DeepMAC: Not explicitly stated for the training set. The mention of pre-trained neural networks suggests an established training methodology, but the specific ground truth establishment is not detailed.
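For the two reconstruction algorithms, the training pairs are thus described as full-count ("fully-sampled") reconstructions serving as targets and reconstructions of the same data down-sampled by several factors serving as inputs. A rough sketch of that pairing scheme is shown below; the `reconstruct` callable, the down-sampling factors, and the event-level subsampling are placeholders for illustration, not the vendor's actual pipeline.

```python
import numpy as np

def subsample_events(listmode_events, keep_fraction, rng):
    """Randomly keep a fraction of list-mode events to emulate a low-count acquisition."""
    n_keep = int(len(listmode_events) * keep_fraction)
    idx = rng.choice(len(listmode_events), size=n_keep, replace=False)
    return listmode_events[np.sort(idx)]

def build_training_pairs(listmode_events, reconstruct, factors=(2, 4, 10), seed=0):
    """Pair full-count reconstructions (targets) with reduced-count reconstructions (inputs).

    `reconstruct` is a placeholder for an image reconstruction routine; the factors
    correspond to keeping 1/2, 1/4, and 1/10 of the acquired counts.
    """
    rng = np.random.default_rng(seed)
    target = reconstruct(listmode_events)        # "ground truth": full-count image
    pairs = []
    for factor in factors:
        low_count = subsample_events(listmode_events, 1.0 / factor, rng)
        pairs.append((reconstruct(low_count), target))
    return pairs
```

The point of the sketch is only the pairing logic: the network never sees a separately acquired reference scan; its target is the higher-count reconstruction of the same acquisition.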
The uMI Panorama is a diagnostic imaging system that combines two existing imaging modalities PET and CT. The quantitative distribution information of PET radiopharmaceuticals within the patient body measured by PET can assist healthcare providers in assessing metabolic and physiological functions. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. The accurate registration and fusion of PET and CT images provides anatomical reference for the findings in the PET images.
This system is intended to be operated by qualified healthcare professionals to assist in the detection, localization, diagnosis, staging, restaging, treatment planning and treatment response evaluation for diseases, inflammation, infection and disorders in, but not limited to oncology, cardiology and neurology. The system maintains independent functionality of the CT device, allowing for single modality CT diagnostic imaging.
This CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
The proposed device uMI Panorama GS combines a 148 cm axial field of view (FOV) PET and a multi-slice CT system to provide high-quality functional and anatomical images, fast PET/CT imaging, and an improved patient experience. The system includes a PET gantry, a CT gantry, a patient table, a power supply cabinet, a console and reconstruction system, a chiller, and a vital signal module.
The uMI Panorama GS was previously cleared by FDA via K231572. The main modifications to the uMI Panorama GS (K231572) in this submission are the algorithm update of AIIR and the addition of HYPER Iterative, uExcel DPR, RMC, AIEFOV, Motion Management, CT-less AC, uKinetics, Retrospective Respiratory-gated Scan, uExcel Unity, and uExcel iQC.
The provided text describes the performance data for the uMI Panorama device, focusing on the AIEFOV algorithm. Here's a breakdown based on your request:
Acceptance Criteria and Reported Device Performance for AIEFOV Algorithm
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Bench tests: 1. AIEFOV shall improve the accuracy of the CT value, and improve the accuracy and uniformity of PET image SUV, by performing attenuation correction with CT generated by the AIEFOV algorithm when the scanned object exceeds the CT field of view. | Bench tests showed that performing attenuation correction with AIEFOV can improve the CT number and the accuracy of SUV in cases where the scanned object exceeds the CT scan-FOV. |
| Bench tests: 2. AIEFOV shall yield consistent CT values and PET image SUV when performing attenuation correction with CT generated by the AIEFOV algorithm when the scanned object does not exceed the CT field of view. | When the scanned object did not exceed the CT scan-FOV, either AIEFOV or EFOV results in consistent SUV and CT numbers. |
| Clinical evaluation: The image quality of PET images attenuation-corrected with AIEFOV should provide sufficient diagnostic confidence, assessed by a blinded comparison of image artifacts and homogeneity of the same tissue by qualified clinical experts. | The clinical evaluation concluded that the image quality of PET attenuation-corrected with AIEFOV provides sufficient diagnostic confidence (implying that artifacts and homogeneity were acceptable, as confidence was sufficient). |
| Overall summary: Performing attenuation correction with AIEFOV-generated CT can improve the accuracy of image SUV in cases where the scanned object exceeds the CT field of view. | Based on the bench tests and the clinical evaluation, performing attenuation correction with AIEFOV-generated CT can improve the accuracy of image SUV in cases where the scanned object exceeds the CT field of view. |
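The bench tests compare CT numbers and SUV accuracy with and without AIEFOV-based attenuation correction when the patient extends beyond the CT scan-FOV. The summary does not give formulas, so the sketch below only illustrates a generic ROI-based percent-error comparison against a non-truncated reference; the SUV definition shown is the standard body-weight SUV, and the arrays and bias factors are synthetic, not data from the submission.

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: activity concentration normalized by injected dose per gram of body weight."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def roi_error_vs_reference(test_image, reference_image, roi_mask):
    """Percent error of the mean ROI value (SUV or HU) against a non-truncated reference image."""
    test_mean = test_image[roi_mask].mean()
    ref_mean = reference_image[roi_mask].mean()
    return (test_mean - ref_mean) / ref_mean * 100.0

# Illustrative check of the two bench-test conditions (synthetic arrays only):
rng = np.random.default_rng(1)
reference = rng.normal(2.0, 0.05, size=(64, 64))   # SUV image with complete attenuation information
truncated_ac = reference * 0.85                    # bias when arms lie outside the CT scan-FOV
aiefov_ac = reference * 0.98                       # bias largely recovered with extended-FOV CT
roi = np.zeros_like(reference, dtype=bool)
roi[20:40, 20:40] = True
print(roi_error_vs_reference(truncated_ac, reference, roi))  # ~ -15% SUV error without AIEFOV
print(roi_error_vs_reference(aiefov_ac, reference, roi))     # ~  -2% SUV error with AIEFOV
```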
Study Details Proving Device Meets Acceptance Criteria:
Sample Size and Data Provenance for Test Set:
- Test Set Sample Size: 9303 images from 13 patients.
- Data Provenance: Not explicitly stated regarding country of origin, but described as "clinical images" scanned on the uMI Panorama GS. The study appears to be either a retrospective study or a controlled prospective study for validation.
- Patient Characteristics (N=13):
- Age: 62 ± 14 years (range: 35-79)
- Sex: 7 male, 6 female
- BMI: 25.0 ± 3.5 kg/m² (range: 21.2-31.4)
- Injected activity: 0.10 ± 0.01 mCi/kg (range: 0.04-0.11)
Number of Experts and Qualifications for Ground Truth for Test Set:
- Number of Experts: Two (2)
- Qualifications: "American Board qualified clinical experts"
Adjudication Method for Test Set:
- The experts performed a "blind comparison" of image artifacts, homogeneity of the same tissue, and diagnostic confidence in the PET images. Details of how disagreements were resolved (e.g., 2+1, 3+1, or whether consensus was required) are not specified.
Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Not explicitly stated as a formal MRMC study comparing human readers with AI vs. without AI assistance. The clinical evaluation involved two experts reviewing images generated with AIEFOV for diagnostic confidence, rather than a comparative trial measuring improvement in human reader performance aided by AI. Therefore, an effect size of human reader improvement with AI vs. without AI assistance is not provided.
Standalone (Algorithm Only) Performance:
- Yes, the "Bench tests" portion of the performance evaluation appears to assess the algorithm's performance directly on quantitative metrics (CT value, SUV accuracy and uniformity) using phantoms and patient studies in different truncation situations. The clinical evaluation also assessed the quality of images produced by the algorithm, implying a standalone assessment of its output for diagnostic confidence.
Type of Ground Truth Used:
- For bench tests: Quantitative measurements from phantom scans and potentially patient studies where the "true" CT values and SUV could be established or inferred relative to known conditions (e.g., non-truncated scans serving as reference).
- For clinical evaluation: Expert consensus/assessment by "American Board qualified clinical experts" regarding subjective image quality metrics (artifacts, homogeneity, diagnostic confidence).
Sample Size for Training Set:
- The training data for the AIEFOV algorithm contained 506,476 images.
How Ground Truth for Training Set was Established:
- "All data were manually quality controlled before included for training." This suggests a process of human review and verification to ensure the accuracy and suitability of the training images. Further details on the specific criteria or expert involvement for this manual QC are not provided.
- It is explicitly stated that "The training dataset used for the training of AIEFOV algorithm was independent of the dataset used to test the algorithm."
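The summary states only that the training dataset was independent of the test dataset and that the training data were manually quality controlled. A patient-level disjointness check of the kind that could enforce such independence is sketched below; the record structure and the `patient_id` key are hypothetical and are not described in the submission.

```python
def assert_patient_level_split(train_records, test_records, id_key="patient_id"):
    """Fail loudly if any patient contributes images to both the training and test datasets."""
    train_ids = {record[id_key] for record in train_records}
    test_ids = {record[id_key] for record in test_records}
    overlap = train_ids & test_ids
    if overlap:
        raise ValueError(f"Train/test leakage for patients: {sorted(overlap)}")

# Example with hypothetical metadata records:
train = [{"patient_id": "P001"}, {"patient_id": "P002"}]
test = [{"patient_id": "P003"}]
assert_patient_level_split(train, test)   # passes: the patient sets are disjoint
```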