Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K244016
    Date Cleared
    2025-08-05

    (221 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    iPETcertum is an image processing software intended for use by radiologists and nuclear medicine physicians for noise reduction, sharpening, resolution improvement, and optional segmentation based on uptake value in PET images (including PET/CT and PET/MRI) obtained with any kind of radionuclides, e.g. fluorodeoxyglucose (FDG). Enhanced images can be saved in DICOM, NIfTI or ECAT files and exist in conjunction with original images.

    Device Description

    iPETcertum is a Software as a Medical Device (SaMD) that implements image enhancement and processing algorithms to increase the image quality of Positron Emission Tomography (PET) images and to enable visualization of regions of interest based on quantification of uptake values. The iPETcertum enhancement and processing provide an improved and enhanced image with optional segmentation based on standard uptake value (SUV) and volume thresholds as per clinician-defined parameters. The original image/data is not altered and remains available for comparison with the processed image.

    iPETcertum can be used to enhance PET images with optional simultaneous Magnetic Resonance Imaging (MRI) or Computerized Tomography (CT) scans of the same subject. iPETcertum takes as input DICOM [Digital Imaging and Communications in Medicine], NIfTI [Neuroimaging Informatics Technology Initiative], or ECAT 7.x files of PET, MRI, and CT volumes, interactively visualizes the content, and produces an enhanced output of the same PET volume in DICOM, NIfTI, or ECAT 7.x formats. The objective is to make input data that are obscured and not clearly visible become more visible, sharper, and clearer through the image enhancement process. If a CT or MR guide is available, iPETcertum computes the fusion of functional (from PET) and anatomic (from MR or CT) information. During this process, no new feature is introduced that did not exist in the PET data: existing features are emphasized if they are also supported by the anatomy, or suppressed if they belong to the noise and are not supported by the anatomy. Noisy scans can thus be enhanced, reducing noise and improving clarity.

    The iPETcertum software can be used to visualize regions of interest based on standard uptake values (SUV) and volume as per clinician defined parameters to provide additional visual information to the clinician. High uptake voxels can be identified and grouped together into connected regions, referred to as segmentation. The Standard Uptake Value (SUV) is computed and connected regions belonging to the specified range are segmented and quantified.
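The SUV-based segmentation described above (threshold by SUV range, group voxels into connected regions, keep regions above a volume threshold, report SUV mean/max and volume) can be sketched as follows. The body-weight SUV formula and all function names here are common conventions and illustrative assumptions, not details taken from the submission:

```python
import numpy as np
from scipy import ndimage

def suv_bw(activity_bq_ml: np.ndarray, weight_g: float, injected_dose_bq: float) -> np.ndarray:
    """Body-weight-normalized SUV: activity concentration scaled by dose per gram."""
    return activity_bq_ml * weight_g / injected_dose_bq

def segment_by_suv(suv: np.ndarray, suv_min: float, min_volume_ml: float, voxel_ml: float):
    """Threshold voxels at the clinician-defined SUV level, group them into
    connected regions, and keep only regions above the volume threshold."""
    mask = suv >= suv_min
    labels, n = ndimage.label(mask)  # face-connected components by default
    regions = []
    for i in range(1, n + 1):
        region = labels == i
        vol_ml = region.sum() * voxel_ml
        if vol_ml >= min_volume_ml:
            regions.append({
                "suv_mean": float(suv[region].mean()),
                "suv_max": float(suv[region].max()),
                "volume_ml": vol_ml,
            })
    return regions
```

A region smaller than the volume threshold (for example, a single hot voxel) is discarded, which matches the description of combined SUV and volume criteria.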

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for iPETcertum (v1.0):

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category: Noise Reduction/Image Quality (Mathematical Phantom)
    Specific Acceptance Criterion: For iPETcertum compared to Claritas iPET (predicate): RMSE smaller or equal, SNR greater or equal, PSNR greater or equal, SSIM greater or equal. This applies both with and without variance stabilization.
    Reported Performance: With variance stabilization disabled, SNR, PSNR, and RMSE values are the same as Claritas iPET. With variance stabilization enabled, SNR and PSNR increased and RMSE decreased; the improvement was especially large for noisy datasets.
    Notes: Met. The new feature (variance stabilization) improved performance compared to the predicate device.

    Acceptance Criteria Category: Noise Reduction/Image Quality (Clinical Data Simulation)
    Specific Acceptance Criterion: For iPETcertum enhanced scans compared to original unprocessed scans: RMSE smaller or equal, SNR greater or equal, PSNR greater or equal, SSIM greater or equal. For iPETcertum enhanced scans compared to iPET enhanced scans: iPETcertum RMSE must be smaller or equal, and SNR must be greater or equal.
    Reported Performance: iPETcertum enhanced SNR, PSNR, and SSIM increased and RMSE decreased compared to original, unprocessed scans. iPETcertum quality measures are equal to or slightly better than Claritas iPET.
    Notes: Met. Confirms general image quality improvement and non-inferiority/slight superiority to the predicate on simulated clinical data.

    Acceptance Criteria Category: Lesion Segmentation (SUV and Volume Comparison)
    Specific Acceptance Criterion: Manual segmentation of Claritas iPET and automatic segmentation of iPETcertum should produce similar SUV average, SUV maximum, and lesion volume values. The difference from the ground truth should be smaller for iPETcertum.
    Reported Performance: All test cases passed successfully, indicating similar SUV average, SUV maximum, and lesion volume values, with iPETcertum providing a smaller difference from ground truth.
    Notes: Met. Demonstrates accuracy of automated segmentation compared to manual methods and ground truth.

    Acceptance Criteria Category: Lesion Segmentation (DICE Index)
    Specific Acceptance Criterion: iPETcertum contoured lesions and manually segmented lesions must overlap with at least 50% DICE index.
    Reported Performance: The study concluded that iPETcertum can identify lesions and provide estimates of their SUV, SUV maximum, and volume values meeting and exceeding the acceptance criteria (implying the 50% DICE index was met).
    Notes: Met. Demonstrates good spatial agreement between automated and manually segmented lesions.
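The quantitative metrics named above (RMSE, SNR, PSNR, DICE) are standard image-quality and overlap measures. The clearance letter does not spell out the exact formulas, so the following sketch uses the common definitions (SSIM is omitted as it is substantially more involved):

```python
import numpy as np

def rmse(x: np.ndarray, ref: np.ndarray) -> float:
    """Root mean squared error against a reference image."""
    return float(np.sqrt(np.mean((x - ref) ** 2)))

def snr_db(x: np.ndarray, ref: np.ndarray) -> float:
    """Signal power over error power, in decibels."""
    return float(10 * np.log10(np.sum(ref ** 2) / np.sum((x - ref) ** 2)))

def psnr_db(x: np.ndarray, ref: np.ndarray) -> float:
    """Peak signal relative to RMSE, in decibels."""
    return float(20 * np.log10(ref.max() / rmse(x, ref)))

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DICE overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
```

Under these definitions, "RMSE smaller or equal" and "SNR/PSNR greater or equal" both express that the enhanced image is at least as close to the reference as the comparator, while a DICE index of 0.5 or more expresses at least 50% mask overlap.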

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the exact numerical sample size (number of patients/scans) used for the test sets in each study. However, it describes the types of data used:

    • Mathematical Phantom: This is a simulated dataset where noise is controlled and ground truth is known precisely. Data provenance is synthetic (not from human patients).
    • Long and High Dose PET Reconstructed Results: This likely refers to retrospective clinical data from human patients, where a high-quality scan serves as a "ground truth" to simulate lower-dose/shorter-time scans. The country of origin is not specified but is implied to be clinical data.
    • Database of Manually Annotated Scans: This also refers to retrospective clinical data from human patients, where lesions were manually annotated by experts. The country of origin is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number of experts or their qualifications (e.g., radiologist with X years of experience) for establishing the ground truth for any of the test sets.

    • In the case of the mathematical phantom, the "ground truth" is inherently defined by the phantom's design.
    • For the "long and high dose PET reconstructed results," the high-quality scan itself is considered the ground truth.
    • For the "database of manually annotated scans," ground truth was established by "manual annotation," implying human experts, but details are missing.

    4. Adjudication Method for the Test Set

    The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). For the "database of manually annotated scans," the ground truth was established by "manual annotation," which often implies a single annotator or a consensus process, but this is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance versus without AI assistance was not reported in this document. The studies focused on the intrinsic performance of the algorithm and its comparison to a predicate device.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the studies described are primarily standalone performance evaluations of the iPETcertum algorithm.

    • The first study used a mathematical phantom to assess noise reduction on its own.
    • The second study compared iPETcertum's enhanced output to original and predicate-enhanced outputs, again in a standalone manner based on quantitative metrics.
    • The lesion segmentation studies evaluated the algorithm's ability to segment lesions and quantify SUV values against a ground truth, without human interaction in the segmentation process itself.

    7. The Type of Ground Truth Used

    The types of ground truth used include:

    • Mathematical Phantom: An artificially generated dataset with known characteristics, providing a perfect ground truth for noise reduction evaluation.
    • Long and High Dose PET Reconstructed Results: High-quality, long-duration, or high-dose PET scans of human patients, serving as a proxy for ground truth to evaluate noise reduction and image quality improvement in simulated lower-quality scans. This is a form of clinical surrogate ground truth.
    • Manually Annotated Scans: Expert consensus or individual expert "manual segmentation" of lesions on clinical PET scans, serving as the ground truth for evaluating the accuracy of the automated lesion segmentation. This is a form of expert consensus ground truth.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set. It focuses solely on the validation and performance testing.

    9. How the Ground Truth for the Training Set Was Established

    Since no information is provided about the training set, there is also no information on how its ground truth was established.


    K Number
    K213140
    Device Name
    Claritas iPET
    Date Cleared
    2021-12-22

    (86 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Claritas iPET is an image processing software intended for use by radiologists and nuclear medicine physicians for noise reduction, sharpening, and resolution improvement of PET images (including PET/CT and PET/MRI) obtained with any kind of radionuclides, e.g. fluorodeoxyglucose (FDG). Enhanced images will be saved in DICOM files and exist in conjunction with original images.

    Device Description

    Claritas iPET v1.0 Image Enhancement System is a medical image enhancement software, i.e., a Software as a Medical Device (SaMD), that can be used to increase image quality by implementation of an image processing and image fusion algorithm.

    Claritas iPET can be used to enhance Positron Emission Tomography (PET) images with optional simultaneous Magnetic Resonance Imaging (MRI) or Computerized Tomography (CT) scans of the same subject. Claritas iPET takes as input DICOM [Digital Imaging and Communications in Medicine] files of PET, MRI, and CT images, and produces an enhanced image of the same file. The objective is to enhance DICOM files that are obscured and not clearly visible, so they become more visible, sharper, and clearer through the Claritas iPET image enhancement process. Claritas iPET is intended to be used by radiologists and nuclear medicine physicians in hospitals, radiology centres and clinics, as well as by medical universities and research institutions.

    The image improvement includes noise reduction, sharpening of organ boundaries, and achieving super-resolution. With the help of the Claritas iPET software, high quality PET scans can be produced. The Claritas iPET algorithm computes the fusion of functional (from PET) and anatomic (from MR or CT) information, and is based on Non-Local Means filtering. The goal of the software is to process and visualize the content of DICOM files storing 3D voxel arrays, i.e. a uniformly spaced sequence of slices of a PET scan. The processing algorithm may also input another 3D voxel array storing the density values obtained by a CT or MRI scan. The PET and CT/MR volumes should at least partially overlap to exploit the additional anatomic information. The CT or MR volume is expected to have a higher resolution than the PET volume for the improvement to be effective. The sharpness, style, and detail of the visualization can be controlled by the user and can be compared to the visualization of the raw image data. During this process, no new feature is introduced that did not exist in the PET data: existing features are emphasized if they are also supported by the anatomy, or suppressed if they lie in the noise region and are not supported by the anatomy.
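The anatomy-guided Non-Local Means idea can be illustrated with a toy one-dimensional sketch in which the similarity weights come from the co-registered anatomical signal rather than the noisy PET signal itself. This is a generic illustration of the technique, with per-sample (rather than patch) similarity for brevity, and is not Claritas's actual algorithm:

```python
import numpy as np

def guided_nlm_1d(pet: np.ndarray, anat: np.ndarray, search: int = 5, h: float = 0.1) -> np.ndarray:
    """Toy anatomy-guided non-local means on a 1-D signal: each output sample is a
    weighted average of PET samples in a search window, with weights driven by
    similarity in the co-registered anatomical signal. PET features supported by
    the anatomy are preserved; samples across an anatomical boundary get near-zero
    weight, so noise is averaged away without blurring edges."""
    out = np.empty(len(pet), dtype=float)
    n = len(pet)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d = (anat[lo:hi] - anat[i]) ** 2          # anatomical dissimilarity
        w = np.exp(-d / (h ** 2))                 # Gaussian similarity weights
        out[i] = np.sum(w * pet[lo:hi]) / np.sum(w)
    return out
```

With a piecewise-constant anatomical guide, samples on the far side of an edge receive exponentially small weight, so averaging never mixes values across the boundary; this is the sense in which "no new feature is introduced" while noise unsupported by the anatomy is suppressed.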

    AI/ML Overview

    This document does not contain the level of detail necessary to fully answer all aspects of your request, particularly regarding the specific acceptance criteria for regulatory clearance and the detailed methodology of human reader studies (MRMC). However, based on the provided text, here's a breakdown of the available information:


    1. A table of acceptance criteria and the reported device performance

    The document describes the device's performance in terms of improving image quality metrics (RMSE and SNR), but it does not explicitly state pre-defined acceptance criteria in a table format that would have been submitted to the FDA for regulatory clearance. Instead, it describes observed improvements.

    Implied Acceptance Criteria (Performance Metric Improvements, based on the text):

    Performance Metric: RMSE Reduction
    Implied Acceptance Criterion: At least 10% (for high dosage/longer scans); 50% (for low dosage/short scans)
    Reported Performance: Decreased by at least 10% (for high dosage/longer scans); decreased by 50% (for low dosage/short scans)

    Performance Metric: SNR Increase
    Implied Acceptance Criterion: At least 20% (for high dosage/longer scans); 4-5 times (for low dosage/short scans)
    Reported Performance: Increased by at least 20% (for high dosage/longer scans); increased 4-5 times (for low dosage/short scans)

    Note: The document states "All tests have passed," indicating that these performance levels met internal acceptance thresholds.


    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not specify the exact number of cases/patients used for the test set. It mentions "real full body human PET scans" in the first option for ground truth, and the "Zubal mathematical phantom" in the second option. The number of scans for each of these is not quantified.
    • Data Provenance:
      • Real Human Data: "real full body human PET scans" (retrospective, as they were "enhanced" after acquisition). The country of origin is not specified.
      • Synthetic Data: "Zubal mathematical phantom" (synthetic/simulated data).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not mention the use of experts to establish ground truth for performance testing. Instead, it defines ground truth through:

    1. Long scan / high dosage PET: "We executed a long scan and accepted the reconstructed results as ground truth."
    2. Mathematical phantom: "we took the Zubal mathematical phantom, and considered it as the ground truth."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    No adjudication method involving experts is mentioned for the test set. Ground truth was established via the long scan/high dosage PET or the mathematical phantom.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done; If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    The document does not indicate that an MRMC comparative effectiveness study was performed or submitted as part of this 510(k) summary. The performance testing described focuses solely on the algorithm's effect on image metrics (RMSE, SNR) compared to a defined ground truth, not on human reader performance.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, the performance testing described is precisely a standalone (algorithm only) performance evaluation. The improvements in RMSE and SNR are calculated based on the algorithm's output compared to the ground truth, without human intervention or interpretation as part of the study design.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Two types of ground truth were used:

    1. "Golden Standard" Imaging: In the first option, a "long scan" and "reconstructed results" of real full-body human PET scans were accepted as ground truth. This is akin to using a high-fidelity scan as the reference.
    2. Mathematical Phantom: In the second option, the "Zubal mathematical phantom" was used as the ground truth for simulated data.

    8. The sample size for the training set

    The document does not specify the sample size for the training set. It describes the algorithm as a "modification of the non-local means algorithm" and states it "finds the weights using the statistical analysis of the PET data and the data of additional modalities (MRI / CT)." This implies a more traditional image processing approach rather than a purely deep learning model requiring a distinct, large training dataset. The predicate device does use a CNN, implying a training set, but the subject device (Claritas iPET) is explicitly described as different in its core technology.


    9. How the ground truth for the training set was established

    Since the document does not specify a distinct "training set" in the context of a machine learning model, it also does not describe how ground truth was established for a training set. The algorithm's mechanism (non-local means guided by other modalities and statistical analysis of PET data) suggests a design that may not require a labeled training set in the same way a deep learning model would.


    K Number
    K212470
    Device Name
    iRAD
    Date Cleared
    2021-10-20

    (75 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    iRAD Image Enhancement System is an image processing software that can be used for image enhancement of MRI, CT, and X-Ray images. Enhanced images will be sent to the PACS server and exist in conjunction with the original images.

    Device Description

    iRAD v1.0 Image Enhancement System is a medical image enhancement software, i.e., a Software as a Medical Device (SaMD), that can be used to enhance images of MRI, CT and X-Ray. iRAD takes as input DICOM [Digital Imaging and Communications in Medicine] files of MRI, CT, and X-Ray images, and produces an enhanced output of the same file, in DICOM format that can be sent to a PACS server. The objective is to enhance the DICOM files that are obscured and not clearly visible, to be more visible, sharper, and clearer through the iRAD image enhancement process. The iRAD image enhancement is done by the implementation of an image enhancement algorithm.

    iRAD is intended to be used by medical doctors, radiologists and clinicians in hospitals, radiology centers and clinics, as well as by medical universities and research institutions. The system allows selection of input DICOM images from PACS servers. DICOM images are sent to the iRAD image enhancement server, where they are processed and sent back to the PACS server after enhancement. The enhanced and original images exist in conjunction and can be compared. The system provides the user with a set of adjustable parameters through which to control the degree of contrast and strength enhancement and noise suppression.

    iRAD implements a modified contrast limited adaptive histogram equalization algorithm to improve the visibility of the image and it uses the iRAD guided filter to reduce noise. The original image is deconstructed into overlapping rectangular components. The equalization and matching algorithm is executed in overlapping rectangular regions, resulting in several level-of-detail layers. The enhanced and denoised image is reconstructed using the level-of-detail layers based on user-controlled parameters of noise suppression and detail enhancement.
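The contrast-limited step described above (clip each tile's histogram, redistribute the excess, equalize through the clipped CDF) can be sketched for a single tile. Full CLAHE additionally blends neighbouring tile mappings, roughly matching the overlapping-region reconstruction described in the summary; this is a generic single-tile illustration, not iRAD's modified implementation:

```python
import numpy as np

def clipped_hist_equalize(tile: np.ndarray, nbins: int = 256, clip_limit: float = 0.01) -> np.ndarray:
    """Single-tile sketch of contrast-limited histogram equalization on an image
    tile with intensities in [0, 1]: clip the histogram so no bin exceeds
    clip_limit of the pixel count, redistribute the clipped excess uniformly,
    then map intensities through the resulting CDF. Clipping caps the slope of
    the mapping, which limits contrast amplification and noise blow-up."""
    hist, edges = np.histogram(tile, bins=nbins, range=(0.0, 1.0))
    cap = max(1, int(clip_limit * tile.size))
    excess = np.sum(np.maximum(hist - cap, 0))
    hist = np.minimum(hist, cap) + excess // nbins  # clip and redistribute
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                  # normalize mapping to [0, 1]
    idx = np.clip(np.digitize(tile, edges[1:-1]), 0, nbins - 1)
    return cdf[idx]
```

Because the mapping is a normalized cumulative histogram, it is monotone: relative intensity ordering within the tile is preserved while local contrast is stretched, consistent with an enhancement tool that must not introduce features absent from the input.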

    AI/ML Overview

    Here's the breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Key Takeaway Regarding the Study: The provided document describes performance testing for an image enhancement system (iRAD). This system aims to improve the visibility, sharpness, and clarity of medical images (MRI, CT, X-Ray) by increasing contrast and reducing noise. It's crucial to understand that this is not a diagnostic AI system that classifies or detects pathologies. Instead, it's a tool to improve image quality for human interpretation. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are focused on the technical performance of the image enhancement rather than diagnostic accuracy comparisons.

    1. Table of Acceptance Criteria and Reported Device Performance

    Image Enhancement Effectiveness
        Contrast Ratio Improvement: Increased in all test cases.
        Entropy Improvement: Increased in all test cases.
    Noise Reduction
        Signal-to-Noise Ratio (SNR) Improvement: Increased significantly in all test cases (mathematical phantoms); improved by at least 50% without degradation in MRI, CT, and X-Ray scans.

    Note: The document describes these as "test cases" that "passed successfully" or "passed." These are implicit acceptance criteria based on the device's intended function to enhance images. Specific quantitative thresholds for "increase" or "improvement" are not explicitly stated as numerical acceptance limits in this summary, but the successful passing of these tests implies that the observed improvements met the internal criteria.

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Mathematical Derenzo Phantom: Used for controlled testing of contrast ratio, entropy, and SNR improvement. (Quantitative count of phantoms not specified, but referred to as "the mathematical Derenzo Phantom" and "two mathematical phantoms").
    • X-Ray Scans: A collection of 82 lower resolution and 21 high resolution X-Ray scans (total 103) processed.
    • MRI Scans: 100 MRI scans processed.
    • CT Scans: CT scans were used for SNR improvement testing (quantitative count not specified, but implied to be part of the "all images tested").
    • Camera Scans: 38 camera scans processed (though the device is for medical images, these were used in one phase of testing).

    Data Provenance: The document does not specify the country of origin for the X-Ray, MRI, CT, or camera scans. It also does not explicitly state whether the data was retrospective or prospective. Given the nature of performance testing for an image enhancement algorithm, it's highly likely to be retrospective data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The concept of "ground truth" for a diagnostic task (e.g., presence/absence of a disease) is not directly applicable here, as iRAD is an image enhancement system, not a diagnostic AI. The "ground truth" in this context refers to the ideal enhanced image or the known properties of the phantom.

    • For Phantoms: Mathematical phantoms inherently have a known "ground truth" for their properties (e.g., known contrast, known signal, known noise levels). No human experts are needed to establish this.
    • For Clinical Images: The "ground truth" for enhancement is assessed by the measured improvement in objective metrics like contrast ratio, entropy, and SNR, rather than expert labels of image content. The document does not mention human readers or experts for grading image quality or establishing a "ground truth" for the test sets used to measure these quantitative improvements.

    4. Adjudication Method for the Test Set

    Not applicable, as the evaluation relies on quantitative, objective metrics for image enhancement (contrast ratio, entropy, SNR) calculated algorithmically, rather than subjective human assessment where adjudication might be needed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, an MRMC comparative effectiveness study was not described in the provided text. This type of study is typically performed for diagnostic AI systems to show how AI assistance impacts human reader performance (e.g., changes in sensitivity, specificity, reading time). Since iRAD is an image enhancement tool, its primary evaluation is on the objective improvement of image characteristics, not directly on diagnostic accuracy.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, the testing described is primarily standalone algorithm performance. The device's ability to improve contrast ratio, entropy, and SNR values was measured objectively, without direct human interaction or diagnostic performance evaluation.

    7. The Type of Ground Truth Used

    • Mathematical/Computational Ground Truth: For the phantom studies, the ground truth of the image properties (contrast, signal, noise) is known conceptually from the phantom design specification.
    • Objective Metric-Based Ground Truth: For the clinical image sets, the "ground truth" for success is measured by the objective improvement in quantitative metrics (contrast ratio, entropy, SNR) calculated before and after processing by the iRAD algorithm. There is no "pathology" or "outcomes data" ground truth involved, as the device does not perform diagnosis.

    8. The Sample Size for the Training Set

    The document does not specify a sample size for a training set. Given that iRAD implements a "modified contrast limited adaptive histogram equalization algorithm" and "iRAD guided filter" (Section 5.6), which are described as algorithmic processes rather than deep learning models requiring large-scale data training, it's possible that a traditional "training set" in the machine learning sense was not used, or if it was, its size is not disclosed. The description points to a rules-based or filter-based image processing approach.

    9. How the Ground Truth for the Training Set was Established

    As no training set (in the machine learning sense) is explicitly mentioned, the method for establishing its ground truth is also not described. The algorithms appear to be designed based on image processing principles rather than being learned from labeled data.

