510(k) Data Aggregation (221 days)
iPETcertum is image processing software intended for use by radiologists and nuclear medicine physicians for noise reduction, sharpening, resolution improvement, and optional segmentation based on uptake value in PET images (including PET/CT and PET/MRI) obtained with any kind of radionuclide, e.g. fluorodeoxyglucose (FDG). Enhanced images can be saved as DICOM, NIfTI, or ECAT files and coexist with the original images.
iPETcertum is a Software as a Medical Device (SaMD) that implements image enhancement and processing algorithms to increase the quality of Positron Emission Tomography (PET) images and enable visualization of regions of interest based on quantification of uptake values. The iPETcertum enhancement and processing provide an improved image with optional segmentation based on standard uptake value (SUV) and volume thresholds per clinician-defined parameters. The original image/data is not altered and remains available for comparison with the processed image.
iPETcertum can be used to enhance PET images with optional simultaneous Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans of the same subject. It takes as input DICOM (Digital Imaging and Communications in Medicine), NIfTI (Neuroimaging Informatics Technology Initiative), or ECAT 7.x files of PET, MRI, and CT volumes, interactively visualizes the content, and produces an enhanced output of the same PET volume in DICOM, NIfTI, or ECAT 7.x format. The objective is to make input data that are obscured and not clearly visible more visible, sharper, and clearer through the image enhancement process. If a CT or MR guide is available, iPETcertum computes the fusion of functional (PET) and anatomic (MR or CT) information. During this process no new feature is introduced that did not exist in the PET data: existing features are emphasized if they are supported by the anatomy, and suppressed if they belong to the noise and are not. Noisy scans can thus be enhanced, reducing noise and improving clarity.
The iPETcertum software can be used to visualize regions of interest based on standard uptake value (SUV) and volume per clinician-defined parameters, providing additional visual information to the clinician. High-uptake voxels are identified and grouped into connected regions, a process referred to as segmentation. The SUV is computed, and connected regions falling within the specified range are segmented and quantified.
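The clearance summary does not describe the vendor's implementation, but the SUV-and-volume segmentation it names is a standard technique. As an illustrative sketch only: body-weight SUV is voxel activity divided by (injected dose / body weight), in-range voxels are grouped with connected-component labeling, and regions below a volume threshold are discarded. The function names, parameters, and connectivity choice below are assumptions, not iPETcertum's actual interface.

```python
import numpy as np
from scipy import ndimage


def suv_image(activity_bqml, injected_dose_bq, body_weight_g):
    """Convert a decay-corrected PET activity map (Bq/mL) to body-weight SUV.

    SUV = tissue activity concentration / (injected dose / body weight),
    with the usual approximation that 1 g of tissue occupies ~1 mL.
    """
    return activity_bqml / (injected_dose_bq / body_weight_g)


def segment_by_suv(suv, suv_min, suv_max, min_volume_ml, voxel_volume_ml):
    """Group voxels with SUV in [suv_min, suv_max] into connected regions,
    keeping only regions at least min_volume_ml large.

    Returns a label volume and a list of per-region
    (SUV mean, SUV max, volume in mL) statistics.
    """
    mask = (suv >= suv_min) & (suv <= suv_max)
    labels, n_regions = ndimage.label(mask)  # default 6-connectivity in 3D
    stats = []
    for region in range(1, n_regions + 1):
        voxels = suv[labels == region]
        volume_ml = voxels.size * voxel_volume_ml
        if volume_ml < min_volume_ml:
            labels[labels == region] = 0  # drop regions below the volume threshold
            continue
        stats.append((float(voxels.mean()), float(voxels.max()), volume_ml))
    return labels, stats
```

A clinician-style query such as "regions with SUV between 2.5 and 10 of at least 2 mL" maps directly onto the `suv_min`, `suv_max`, and `min_volume_ml` parameters.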
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for iPETcertum (v1.0):
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Acceptance Criterion | iPETcertum Reported Performance | Notes |
|---|---|---|---|
| Noise Reduction/Image Quality (Mathematical Phantom) | For iPETcertum compared to Claritas iPET (predicate): RMSE smaller or equal, SNR greater or equal, PSNR greater or equal, SSIM greater or equal, both with and without variance stabilization. | With variance stabilization disabled: SNR, PSNR, and RMSE values are the same as Claritas iPET. With variance stabilization enabled: SNR and PSNR increased, RMSE decreased; improvement especially high for noisy datasets. | Met. The new feature (variance stabilization) improved performance compared to the predicate device. |
| Noise Reduction/Image Quality (Clinical Data Simulation) | For iPETcertum enhanced scans compared to original unprocessed scans: RMSE smaller or equal, SNR greater or equal, PSNR greater or equal, SSIM greater or equal. For iPETcertum enhanced scans compared to iPET enhanced scans: RMSE must be smaller or equal and SNR greater or equal. | iPETcertum enhanced SNR, PSNR, and SSIM increased and RMSE decreased compared to original, unprocessed scans. iPETcertum quality measures are equal to or slightly better than Claritas iPET. | Met. Confirms general image quality improvement and non-inferiority/slight superiority to the predicate on simulated clinical data. |
| Lesion Segmentation (SUV and Volume Comparison) | Manual segmentation with Claritas iPET and automatic segmentation with iPETcertum should produce similar SUV average, SUV maximum, and lesion volume values; the difference from ground truth should be smaller for iPETcertum. | All test cases passed successfully, indicating similar SUV average, SUV maximum, and lesion volume values, with iPETcertum showing a smaller difference from ground truth. | Met. Demonstrates accuracy of automated segmentation compared to manual methods and ground truth. |
| Lesion Segmentation (DICE Index) | iPETcertum contoured lesions and manually segmented lesions must overlap with at least 50% DICE index. | The study concluded that iPETcertum can identify lesions and provide estimates of their SUV average, SUV maximum, and volume values meeting and exceeding the acceptance criteria (implying the 50% DICE index was met). | Met. Demonstrates good spatial agreement between automated and manually segmented lesions. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the exact numerical sample size (number of patients/scans) used for the test sets in each study. However, it describes the types of data used:
- Mathematical Phantom: This is a simulated dataset where noise is controlled and ground truth is known precisely. Data provenance is synthetic (not from human patients).
- Long and High Dose PET Reconstructed Results: This likely refers to retrospective clinical data from human patients, where a high-quality scan serves as a "ground truth" to simulate lower-dose/shorter-time scans. The country of origin is not specified but is implied to be clinical data.
- Database of Manually Annotated Scans: This also refers to retrospective clinical data from human patients, where lesions were manually annotated by experts. The country of origin is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts or their qualifications (e.g., radiologist with X years of experience) for establishing the ground truth for any of the test sets.
- In the case of the mathematical phantom, the "ground truth" is inherently defined by the phantom's design.
- For the "long and high dose PET reconstructed results," the high-quality scan itself is considered the ground truth.
- For the "database of manually annotated scans," ground truth was established by "manual annotation," implying human experts, but details are missing.
4. Adjudication Method for the Test Set
The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). For the "database of manually annotated scans," the ground truth was established by "manual annotation," which often implies a single annotator or a consensus process, but this is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance versus without AI assistance was not reported in this document. The studies focused on the intrinsic performance of the algorithm and its comparison to a predicate device.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the studies described are primarily standalone performance evaluations of the iPETcertum algorithm.
- The first study used a mathematical phantom to assess noise reduction on its own.
- The second study compared iPETcertum's enhanced output to original and predicate-enhanced outputs, again in a standalone manner based on quantitative metrics.
- The lesion segmentation studies evaluated the algorithm's ability to segment lesions and quantify SUV values against a ground truth, without human interaction in the segmentation process itself.
7. The Type of Ground Truth Used
The types of ground truth used include:
- Mathematical Phantom: An artificially generated dataset with known characteristics, providing a perfect ground truth for noise reduction evaluation.
- Long and High Dose PET Reconstructed Results: High-quality, long-duration, or high-dose PET scans of human patients, serving as a proxy for ground truth to evaluate noise reduction and image quality improvement in simulated lower-quality scans. This is a form of clinical surrogate ground truth.
- Manually Annotated Scans: Expert consensus or individual expert "manual segmentation" of lesions on clinical PET scans, serving as the ground truth for evaluating the accuracy of the automated lesion segmentation. This is a form of expert consensus ground truth.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set. It focuses solely on the validation and performance testing.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about the training set, there is also no information on how its ground truth was established.