Search Results
Found 2 results
510(k) Data Aggregation
(31 days)
uMI Panvivo (uMI Panvivo); uMI Panvivo (uMI Panvivo S)
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, cardiology, neurology, infection and inflammation. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
The proposed device uMI Panvivo combines a 295/235 mm axial field of view (FOV) PET and a 160-slice CT system to provide high-quality functional and anatomical images, fast PET/CT imaging, and a better patient experience. The system includes the PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, reconstruction computer, system software, reconstruction software), vital signal module, and other accessories.
The uMI Panvivo was previously cleared by FDA via K243538. The main modifications to the uMI Panvivo (K243538) in this submission are the addition of Deep MAC (also named AI MAC), Digital Gating (also named Self-gating), OncoFocus (also named uExcel Focus and RMC), NeuroFocus (also named HMC), DeepRecon.PET (also named HYPER DLR or DLR), uExcel DPR (also named HYPER DPR or HYPER AiR), and uKinetics. Details about the modifications are listed below:
- Deep MAC, Deep Learning-based Metal Artifact Correction (also named AI MAC), is an image reconstruction algorithm that combines physical beam-hardening correction with deep learning. It is intended to correct artifacts caused by metal implants and external metal objects; a generic sketch of this class of correction appears after this list.
- Digital Gating (also named Self-gating, cleared via K232712) automatically extracts a respiratory motion signal from the list-mode data during acquisition, which is called the data-driven (DD) method. The respiratory motion signal is calculated by tracking the location of the center of distribution (COD) within a body-cavity mask (see the sketch after this list). Using this signal, the system can perform gated reconstruction without a respiratory capture device.
- OncoFocus (also named uExcel Focus and RMC, cleared via K232712) is an AI-based algorithm that reduces respiratory motion artifacts in PET/CT images while also reducing PET/CT misalignment.
- NeuroFocus (also named HMC) is a head motion correction solution. It employs a statistics-based method that corrects motion artifacts automatically using the centroid of distribution (COD), without manual parameter tuning, to generate motion-free images.
- DeepRecon.PET (also named HYPER DLR or DLR, cleared via K193210) uses a deep learning technique to produce images with a better SNR (signal-to-noise ratio) in a post-processing procedure.
- uExcel DPR (also named HYPER DPR or HYPER AiR, cleared via K232712) is a deep learning-based PET reconstruction algorithm designed to enhance the SNR of reconstructed images. High-SNR images improve clinical diagnostic efficacy, particularly under low-count acquisition conditions (e.g., low-dose radiotracer administration or fast scanning protocols).
- uKinetics (cleared via K232712) is a kinetic modeling toolkit for indirect parametric analysis of dynamic images and direct parametric analysis of multipass dynamic data. An image-derived input function (IDIF) can be extracted from anatomical CT images and dynamic PET images. Both the IDIF and a population-based input function (PBIF) can be used as the input function of the Patlak model to generate kinetic images that reveal the biodistribution of the metabolized molecule, using indirect and direct methods; a worked Patlak sketch follows below.
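Deep MAC's internals are not disclosed in the document. For orientation only, here is the classical sinogram-inpainting skeleton that metal artifact correction methods build on, with simple 1-D interpolation standing in for whatever the learned component does; all names are hypothetical and this is not UIH's implementation:

```python
import numpy as np

def classic_mar(sinogram: np.ndarray, ct_image: np.ndarray,
                forward_project, metal_hu: float = 3000.0) -> np.ndarray:
    """Generic sinogram-inpainting metal artifact correction (sketch).

    forward_project: caller-supplied Radon transform matching the sinogram
    geometry. A deep-learning MAC presumably replaces the interpolation
    below with a trained network and adds beam-hardening correction.
    """
    metal_mask = ct_image >= metal_hu               # segment metal by HU threshold
    metal_trace = forward_project(metal_mask) > 0   # sinogram bins crossing metal
    corrected = sinogram.copy()
    for view in range(sinogram.shape[0]):           # inpaint each projection view
        bad = metal_trace[view]
        if bad.any() and not bad.all():
            idx = np.arange(sinogram.shape[1])
            corrected[view, bad] = np.interp(idx[bad], idx[~bad],
                                             sinogram[view, ~bad])
    return corrected                                # reconstruct from this sinogram
```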
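For Digital Gating, the COD-tracking idea can be illustrated with a short sketch. This is a minimal illustration with hypothetical names, not the cleared algorithm; real list-mode processing operates on event streams at much finer time resolution:

```python
import numpy as np

def respiratory_trace(frames: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Estimate a respiratory signal from time-binned PET count data.

    frames: (T, Z, Y, X) array of short time-binned count images.
    mask:   (Z, Y, X) boolean body-cavity mask.
    Returns one axial center-of-distribution (COD) value per time bin.
    """
    z = np.arange(frames.shape[1], dtype=float)      # axial slice coordinates
    cod = np.empty(frames.shape[0])
    for t, frame in enumerate(frames):
        counts = np.where(mask, frame, 0.0)
        slice_sums = counts.sum(axis=(1, 2))         # counts per axial slice
        cod[t] = (z * slice_sums).sum() / (slice_sums.sum() + 1e-12)
    return cod - cod.mean()                          # zero-mean trace for gating

def amplitude_gates(trace: np.ndarray, n_gates: int = 4) -> np.ndarray:
    """Assign each time bin to one of n_gates amplitude-based gates."""
    edges = np.quantile(trace, np.linspace(0, 1, n_gates + 1))
    return np.clip(np.digitize(trace, edges[1:-1]), 0, n_gates - 1)
```

Gated reconstruction then uses only the events whose time bins fall in a given gate, which is what lets the system dispense with an external respiratory capture device.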
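For the uKinetics Patlak analysis, the indirect method amounts to a linear fit of the transformed tissue and input-function time-activity curves: C_T(t)/C_p(t) = Ki * (integral of C_p from 0 to t)/C_p(t) + V_b. A minimal sketch under assumed inputs (function and variable names are hypothetical, not UIH's API):

```python
import numpy as np

def patlak_fit(t, ct, cp, t_star=20.0):
    """Indirect Patlak analysis for an irreversible tracer (sketch).

    t:  frame mid-times in minutes; ct: tissue time-activity curve;
    cp: input function (IDIF or PBIF), assumed positive over the fit range.
    Returns (Ki, Vb) from a linear fit over frames with t >= t_star;
    t_star is tracer-dependent and chosen after equilibration.
    """
    # Patlak x-axis: cumulative integral of the input function, normalized by cp.
    x = np.array([np.trapz(cp[: i + 1], t[: i + 1]) for i in range(len(t))]) / cp
    y = ct / cp                              # Patlak y-axis
    late = t >= t_star                       # model is linear only at late times
    ki, vb = np.polyfit(x[late], y[late], 1) # slope = Ki, intercept = Vb
    return ki, vb
```

Applied voxel by voxel, the fitted slope yields a Ki parametric image, which is the "kinetic image" the toolkit description refers to.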
The provided FDA 510(k) clearance letter describes the uMI Panvivo PET/CT System and mentions several new software functionalities (Deep MAC, Digital Gating, OncoFocus, NeuroFocus, DeepRecon.PET, uExcel DPR, and uKinetics). The document includes performance data for four of these functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC.
The following analysis focuses on the acceptance criteria and study details for these four AI-based image processing/reconstruction algorithms as detailed in the document. The document presents these as "performance verification" studies.
Overview of Acceptance Criteria and Device Performance (for DeepRecon.PET, uExcel DPR, OncoFocus, DeepMAC)
The document details the evaluation of four specific software functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC. Each of these has its own set of acceptance criteria and reported performance results, detailed below.
1. Table of Acceptance Criteria and Reported Device Performance
| Software Functionality | Evaluation Item | Evaluation Method | Acceptance Criteria | Reported Performance |
|---|---|---|---|---|
| DeepRecon.PET | Image consistency | Measure the mean SUV of phantom background and liver ROIs (regions of interest) and calculate the bias; used to evaluate image bias. | The bias is less than 5%. | Pass |
| DeepRecon.PET | Image background noise | a) Background variation (BV) in the IQ phantom; b) liver and white matter signal-to-noise ratio (SNR) in the patient cases. Used to evaluate noise reduction performance. | DeepRecon.PET has lower BV and higher SNR than OSEM with Gaussian filtering. | Pass |
| DeepRecon.PET | Image contrast-to-noise ratio | a) Contrast-to-noise ratio (CNR) of the hot spheres in the IQ phantom; b) CNR of lesions. CNR is a measure of the signal level in the presence of noise; used to evaluate lesion detectability. | DeepRecon.PET has higher CNR than OSEM with Gaussian filtering. | Pass |
| uExcel DPR | Quantitative evaluation | Contrast recovery (CR), background variability (BV), and CNR calculated from NEMA IQ phantom data reconstructed with uExcel DPR and OSEM under acquisition conditions of 1 to 5 minutes per bed; coefficient of variation (COV) calculated from uniform cylindrical phantom images reconstructed with both methods. | The averaged CR, BV, and CNR of the uExcel DPR images should be superior to those of the OSEM images; uExcel DPR requires fewer counts to achieve a matched COV compared with OSEM. | Pass. NEMA IQ phantom analysis: an average noise reduction of 81% and an average SNR enhancement of 391%. Uniform cylindrical phantom analysis: 1/10 of the counts achieves a matching noise level. |
| uExcel DPR | Qualitative evaluation | uExcel DPR images reconstructed at lower counts qualitatively compared with full-count OSEM images. | uExcel DPR reconstructions at reduced count levels demonstrate comparable or superior image quality relative to higher-count OSEM reconstructions. | Pass. Injections of 1.7 to 2.5 MBq/kg combined with 2 to 3 minutes of whole-body scanning (4 to 6 bed positions) achieved comparable diagnostic image quality; clinical evaluation by radiologists showed images sufficient for clinical diagnosis, with uExcel DPR exhibiting lower noise, better contrast, and superior sharpness compared with OSEM. |
| OncoFocus | Volume relative to no motion correction (∆Volume) | Calculate the volume relative to the no-motion-correction images. | The ∆Volume value is less than 0%. | Pass |
| OncoFocus | Maximal standardized uptake value relative to no motion correction (∆SUVmax) | Calculate the SUVmax relative to the no-motion-correction images. | The ∆SUVmax value is larger than 0%. | Pass |
| DeepMAC | Quantitative evaluation | For PMMA phantom data, compare the average CT value in the metal-affected area with the same area of the control image, before and after DeepMAC. | After DeepMAC, the difference between the average CT value in the affected area and the same area of the control image does not exceed 10 HU. | Pass |
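The quantitative metrics in the table reduce to simple ROI statistics. As a reference, here is a minimal sketch of how such metrics are commonly computed (simplified relative to the full NEMA NU 2 definitions; the helper names are hypothetical, not from the submission):

```python
import numpy as np

def background_variability(bkg_rois: list) -> float:
    """Simplified NEMA-style BV: std of background ROI means over their mean."""
    means = np.array([roi.mean() for roi in bkg_rois])
    return means.std(ddof=1) / means.mean()

def contrast_to_noise(hot_roi: np.ndarray, bkg_rois: list) -> float:
    """CNR: hot-sphere contrast over background noise."""
    bkg_means = np.array([roi.mean() for roi in bkg_rois])
    return (hot_roi.mean() - bkg_means.mean()) / bkg_means.std(ddof=1)

def delta_suvmax(suvmax_corrected: float, suvmax_uncorrected: float) -> float:
    """OncoFocus criterion: ∆SUVmax > 0% (correction restores peak uptake)."""
    return 100.0 * (suvmax_corrected - suvmax_uncorrected) / suvmax_uncorrected

def hu_difference(corrected_roi: np.ndarray, control_roi: np.ndarray) -> float:
    """DeepMAC criterion: |mean HU (corrected) - mean HU (control)| <= 10 HU."""
    return abs(corrected_roi.mean() - control_roi.mean())
```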
2. Sample Sizes Used for the Test Set and Data Provenance
- DeepRecon.PET:
  - Phantoms: NEMA IQ phantoms.
  - Clinical Patients: 20 volunteers.
  - Data Provenance: "collected from various clinical sites" and explicitly stated to be "different from the training data." The document does not specify country of origin or whether collection was retrospective or prospective, but "volunteers were enrolled" suggests prospective collection for the test set.
- uExcel DPR:
  - Phantoms: Two NEMA IQ phantom datasets and two uniform cylindrical phantom datasets.
  - Clinical Patients: 19 human subjects.
  - Data Provenance: "derived from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites and during separated time periods," and "different from the training data." "Study cohort" and "human subjects" imply prospective collection for the test set.
- OncoFocus:
  - Clinical Patients: 50 volunteers.
  - Data Provenance: "collected from general clinical scenarios" and explicitly stated to be "on cases different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
- DeepMAC:
  - Phantoms: PMMA phantom datasets.
  - Clinical Patients: 20 human subjects.
  - Data Provenance: "from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites," and explicitly stated to be "different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state that experts established "ground truth" for the quantitative metrics (e.g., SUV, CNR, BV, CR, ∆Volume, ∆SUVmax, HU differences) for the test sets. These seem to be derived from physical measurements on phantoms or calculations from patient image data using established methods.
For the qualitative evaluation/clinical diagnosis assessment:
- DeepRecon.PET: Two American Board of Radiology-certified physicians.
- uExcel DPR: Two American board-certified nuclear medicine physicians.
- OncoFocus: Two American Board of Radiology-certified physicians.
- DeepMAC: Two American Board of Radiology-certified physicians.
The exact years of experience for these experts are not provided, only their board certification status.
4. Adjudication Method for the Test Set
The document states that the radiologists/physicians evaluated images "independently" (uExcel DPR) or simply "were evaluated by" (DeepRecon.PET, OncoFocus, DeepMAC). There is no mention of an adjudication method (such as 2+1 or 3+1 consensus) for discrepancies between reader evaluations for any of the functionalities. The evaluations appear to be separate assessments, with no stated consensus mechanism.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- The document describes qualitative evaluations by radiologists/physicians comparing the AI-processed images to conventionally processed images (OSEM/no motion correction/no MAC). These are MRMC comparative studies in the sense that multiple readers evaluated multiple cases.
- However, these studies were designed to evaluate the image quality (e.g., diagnostic sufficiency, noise, contrast, sharpness, lesion detectability, artifact reduction) of the AI-processed images compared to baseline images, rather than to measure an improvement in human reader performance (e.g., diagnostic accuracy, sensitivity, specificity, reading time) when assisted by AI vs. without AI.
- Therefore, the studies were not designed as comparative effectiveness studies measuring the effect size of human reader improvement with AI assistance. They focus on the perceived quality of the AI-processed images themselves.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, for DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC, quantitative (phantom and numerical) evaluations were conducted that represent the standalone performance of the algorithms in terms of image metrics (e.g., SUV bias, BV, SNR, CNR, CR, COV, ∆Volume, ∆SUVmax, HU differences). These quantitative results are directly attributed to the algorithm's output without human intervention for the measurement/calculation.
- The qualitative evaluations by the physicians (described in point 3 above) also assess the output of the algorithm, but with human interpretation.
7. The Type of Ground Truth Used
- For Quantitative Evaluations:
  - Phantoms: The "ground truth" for phantom studies is implicitly the known physical properties and geometry of the NEMA IQ and PMMA phantoms, which allow quantitative measurements (e.g., true SUV, true CR, true signal-to-noise).
  - Clinical Data (DeepRecon.PET, uExcel DPR): For these reconstruction algorithms, "ground-truth images were reconstructed from fully-sampled raw data" for the training set. For the test set, comparisons appear to be made against OSEM with Gaussian filtering or full-count OSEM images as reference points, rather than against an independent "ground truth" established by an external standard.
  - Clinical Data (OncoFocus): Comparisons are made relative to "no motion correction images" (∆Volume and ∆SUVmax), implying these are the baseline for comparison, not necessarily an absolute ground truth.
  - Clinical Data (DeepMAC): Comparisons are made to a "control image" without metal artifacts for quantitative assessment of HU differences.
- For Qualitative Evaluations:
  - The "ground truth" is based on the qualitative assessment of the American board-certified radiologists/nuclear medicine physicians, who compared images for attributes such as noise, contrast, sharpness, motion artifact reduction, and diagnostic sufficiency. This amounts to a form of expert consensus, although no specific adjudication is described. There is no mention of pathology or outcomes data as ground truth.
8. The Sample Size for the Training Set
The document provides the following for the training sets:
- DeepRecon.PET: "image samples with different tracers, covering a wide and diverse range of clinical scenarios." No specific number provided.
- uExcel DPR: "High statistical properties of the PET data acquired by the Long Axial Field-of-View (LAFOV) PET/CT system enable the model to better learn image features. Therefore, the training dataset for the AI module in the uExcel DPR system is derived from the uEXPLORER and uMI Panorama GS PET/CT systems." No specific number provided.
- OncoFocus: "The training dataset of the segmentation network (CNN-BC) and the mumap synthesis network (CNN-AC) in OncoFocus was collected from general clinical scenarios. Each subject was scanned by UIH PET/CT systems for clinical protocols. All the acquisitions ensure whole-body coverage." No specific number provided.
- DeepMAC: Not explicitly stated for the training set. Only validation dataset details are given.
9. How the Ground Truth for the Training Set Was Established
- DeepRecon.PET: "Ground-truth images were reconstructed from fully-sampled raw data. Training inputs were generated by reconstructing subsampled data at multiple down-sampling factors." This implies that the "ground truth" for training was derived from high-quality, fully-sampled (and likely high-dose) PET data.
- uExcel DPR: "Full-sampled data is used as the ground truth, while corresponding down-sampled data with varying down-sampling factors serves as the training input." Similar to DeepRecon.PET, high-quality, full-sampled data served as the ground truth.
- OncoFocus:
- For CNN-BC (body cavity segmentation network): "The input data of CNN-BC are CT-derived attenuation coefficient maps, and the target data of the network are body cavity region images." This suggests the target (ground truth) was pre-defined body cavity regions.
- For CNN-AC (attenuation map (umap) synthesis network): "The input data are non-attenuation-corrected (NAC) PET reconstruction images, and the target data of the network are the reference CT attenuation coefficient maps." The ground truth was "reference CT attenuation coefficient maps," likely derived from actual CT scans.
- DeepMAC: Not explicitly stated for the training set. The mention of pre-trained neural networks suggests an established training methodology, but the specific ground truth establishment is not detailed.
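To illustrate the training-pair construction quoted above for DeepRecon.PET and uExcel DPR, here is a minimal sketch. It assumes roughly Poisson-distributed counts, under which binomial thinning of full-count data yields statistically consistent low-count inputs; all names are hypothetical, and the actual pipeline reconstructs images from the down-sampled raw data rather than using sinograms directly:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def make_training_pair(full_sinogram: np.ndarray, down_factor: float):
    """Create one (input, target) training pair.

    full_sinogram: full-count prompts whose reconstruction serves as the
    ground-truth image. down_factor in (0, 1], e.g. 0.1 keeps ~10% of
    counts; binomial thinning of Poisson counts preserves Poisson noise
    statistics at the reduced count level.
    """
    thinned = rng.binomial(full_sinogram.astype(np.int64), down_factor)
    return thinned, full_sinogram

# Example: generate inputs at several down-sampling factors from one scan.
full = rng.poisson(lam=8.0, size=(64, 96)).astype(np.int64)   # toy sinogram
pairs = [make_training_pair(full, f) for f in (0.5, 0.25, 0.1)]
```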
(27 days)
uMI Panvivo (uMI Panvivo); uMI Panvivo (uMI Panvivo S)
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, infection and inflammation, neurology. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.*
* Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed device uMI Panvivo combines a 235/295 mm axial field of view (FOV) PET and a 160-slice CT system to provide high-quality functional and anatomical images, fast PET/CT imaging, and a better patient experience. The system includes the PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, reconstruction computer, system software, reconstruction software), vital signal module, and other accessories.
The uMI Panvivo was previously cleared by the FDA via K241596. The modification in this submission is the addition of a new model: the previous uMI Panvivo (K241596) is designed with scalable PET rings, and the uMI Panvivo S scales down to 80 PET rings compared with the uMI Panvivo's 100 PET rings.
This document does not contain the detailed acceptance criteria and study information that would typically be found in an FDA Summary of Safety and Effectiveness Data (SSED) report or a more comprehensive clinical study report. The provided text is a 510(k) Summary, which primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing extensive details on novel performance studies for acceptance criteria.
However, based on the limited information available in the "Performance Verification" section on page 8, I can infer some points related to image quality and the type of evaluation performed.
Here's a breakdown of what can and cannot be extracted from the provided text according to your requested categories:
1. A table of acceptance criteria and the reported device performance
The document states:
"A Sample clinical images were reviewed by U.S. board-certified radiologist. It was shown that the proposed device can generate images as intended and the image quality is sufficient for diagnostic use."
This implies that the acceptance criteria for image quality were met, as determined by a qualified professional. However, the specific quantitative acceptance criteria (e.g., minimum spatial resolution, signal-to-noise ratio, contrast-to-noise ratio, lesion detection sensitivity/specificity targets) and the reported device performance against these specific criteria are not detailed in this summary.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "Sample clinical images." The exact sample size of images or cases used in this review is not specified. The data provenance (country of origin, retrospective/prospective nature) is also not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document states that images were "reviewed by U.S. board-certified radiologist."
- Number of experts: Singular ("radiologist") suggests one radiologist, but it could also implicitly mean "radiologists" as a group of experts. The exact number is unclear.
- Qualifications: "U.S. board-certified radiologist" is a qualification. Specific experience (e.g., "10 years of experience") is not provided.
- Role in ground truth: Based on the text, the radiologist(s) reviewed images to confirm "image quality is sufficient for diagnostic use." This implies they evaluated the image quality itself, rather than strictly establishing a ground truth for a diagnostic task (e.g., confirming presence/absence of a lesion against a gold standard).
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Since the number of experts is unclear or potentially singular, and the nature of the review was for "image quality is sufficient for diagnostic use," an explicit adjudication method like 2+1 or 3+1 is not mentioned and likely not applied in the traditional sense for a diagnostic ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC comparative effectiveness study, nor does it mention AI assistance. The device is described as a PET/CT system, and the performance verification mentions evaluation of image quality by a radiologist. This is not an MRMC study comparing human readers with and without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This section describes a PET/CT imaging system, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance does not directly apply to the described device in this context. The study described is a human evaluation of the device's output (images).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The verification states that images were reviewed by a radiologist to determine if "image quality is sufficient for diagnostic use." This implies a subjective expert evaluation of image quality rather than a definitive "ground truth" established by pathology, clinical outcomes, or expert consensus for a diagnostic task. The ground truth here is essentially the radiologist's assessment of image diagnostic sufficiency.
8. The sample size for the training set
The document does not describe any machine learning or AI components that would require a "training set." It focuses on verification of a hardware imaging system. Therefore, a sample size for a training set is not applicable and not mentioned.
9. How the ground truth for the training set was established
As there is no mention of a training set, this information is not applicable and not provided.
Summary of what is available and what is missing:
The provided 510(k) Summary focuses on demonstrating substantial equivalence of the uMI Panvivo with a new model (uMI Panvivo S) to its predicate device (uMI Panvivo K241596). The "Performance Verification" section mentions a review of sample clinical images by a U.S. board-certified radiologist to confirm that the device generates images as intended and that the image quality is sufficient for diagnostic use. This is a very high-level statement and lacks the quantitative details typically associated with detailed acceptance criteria and study results. The document does not provide specifics on:
- Quantitative acceptance criteria for image quality or diagnostic performance.
- Specific device performance metrics against these criteria.
- The exact sample size of images/cases.
- The data provenance (country, retrospective/prospective).
- The precise number of experts or their detailed experience.
- Any formal adjudication method for ground truth.
- MRMC studies for AI assistance or standalone algorithm performance.
- Details on how "ground truth" was established beyond general expert review of image sufficiency.
- Training set information, as it's not an AI/ML device per se in this context.