510(k) Data Aggregation

    K Number: K240290
    Device Name: AiMIFY (1.x)
    Date Cleared: 2024-08-21 (202-day review)
    Product Code:
    Regulation Number: 892.2050

    Reference & Predicate Devices
    Predicate For: N/A
    Reference Devices: K223623
    Intended Use

    AiMIFY is image processing software that can be used for enhancement of MRI images. It can be used to increase the contrast-to-noise ratio (CNR), contrast enhancement percentage (CEP), and lesion-to-brain ratio (LBR) of enhancing tissue in brain MRI images acquired with a gadolinium-based contrast agent. It is intended to enhance MRI images acquired using the standard approved dosage per the contrast agent's instructions for use.

    Device Description

    AiMIFY is a software-as-a-medical-device product consisting of a machine learning algorithm that enhances images acquired by MRI scanners. Using deep learning, the algorithm improves the contrast-to-noise ratio (CNR), contrast enhancement percentage (CEP), and lesion-to-brain ratio (LBR) of gadolinium-based contrast agent (GBCA) enhanced T1-weighted images while maintaining diagnostic performance. It is post-processing software that does not interact directly with the MR scanner and has no graphical user interface; it is intended to be used by radiologists in an imaging center, clinic, or hospital. The software takes as input the T1 pre- and post-contrast MR images acquired as part of a standard-of-care contrast-enhanced MRI exam, and outputs the corresponding images with enhanced contrast presence. AiMIFY operates on DICOM images.

    The AiMIFY image processing software uses a convolutional-network-based algorithm to generate the enhanced AiMIFY images from pre-contrast and standard-dose post-contrast images. Image processing can be performed on MRI images acquired with the following predefined protocol settings: gradient echo (pre- and post-contrast), 3D BRAVO (pre- and post-contrast), 3D MPRAGE (pre- and post-contrast), 2D T1 spin echo (pre- and post-contrast), and T1 FLAIR/inversion recovery spin echo (pre- and post-contrast).

    The AiMIFY image is created by AiMIFY and sent back to the picture archiving and communication system (PACS) or other DICOM node by the compatible MDDS for clinical review.

    Because the software runs in the background, it has no user interface.

    Note: depending on the functionality of the compatible MDDS, AiMIFY can be used within the facility's network or remotely. The AiMIFY device itself is not networked and therefore does not increase users' cybersecurity risk. Cybersecurity recommendations are provided to users in the labeling.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets those criteria, based on the provided text.


    Device: AiMIFY (1.x)
    Indications for Use: Image processing software for enhancement of MRI images (increase CNR, CEP, LBR of enhancing tissue in brain MRI images acquired with gadolinium-based contrast agent).


    1. Acceptance Criteria and Reported Device Performance

    Table of Acceptance Criteria and Reported Device Performance:

    | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Quantitative Assessment | | |
    | CNR (contrast-to-noise ratio) improvement | On average, improved by >= 50% after AiMIFY enhancement compared to traditionally acquired contrast images | Achieved: 559.94% across all 95 cases; 831.70% for the 57 lesion-only cases; significantly higher than standard post-contrast images (Wilcoxon signed-rank test, p < 0.0001) |
    | LBR (lesion-to-brain ratio) improvement | On average, improved by >= 50% after AiMIFY enhancement (inferred from the primary endpoint definition covering CNR, LBR, and CEP) | Achieved: 62.07% across all 95 cases; 58.80% for the 57 lesion-only cases; significantly better than standard post-contrast images (Wilcoxon signed-rank test, p < 0.0001) |
    | CEP (contrast enhancement percentage) improvement | On average, improved by >= 50% after AiMIFY enhancement (inferred from the primary endpoint definition covering CNR, LBR, and CEP) | Achieved: 133.29% across all 95 cases; 101.80% for the 57 lesion-only cases; significantly better than standard post-contrast images (Wilcoxon signed-rank test, p < 0.0001) |
    | Qualitative Assessment (Reader Study) | | |
    | Perceived visibility of lesion features (lesion contrast enhancement, border delineation, internal morphology) | Statistically significantly better for AiMIFY-processed images (Wilcoxon signed-rank test, p < 0.05) | Achieved: significantly better than standard post-contrast (p < 0.0001) for all three features |
    | Perceived image quality and artifact presence, and impact on clinical diagnosis | Not statistically significantly worse than standard post-contrast images (Wilcoxon signed-rank test, p < 0.05) | Achieved: not worse than standard post-contrast (p < 0.0001); two of three readers rated perceived image quality better than standard post-contrast (p < 0.0001) |
    | Radiomics Analysis | | |
    | CCC for lesion tissue (7 feature classes) | >= 0.65 | Achieved: ranged from 0.68 to 0.89 |
    | CCC for parenchyma tissue (7 feature classes) | >= 0.8 | Achieved: ranged from 0.82 to 0.92 |
    | SubtleMR Denoising Module Performance | | |
    | Visibility of small structures | Average score difference between original and SubtleMR-enhanced images <= 0.5 Likert points | Achieved: average score difference of 0.05 points |
    | Perceived SNR, image quality, artifacts (septum pellucidum, cranial nerves, cerebellar folia) | Average score difference between original and SubtleMR-enhanced images <= 0.5 Likert points | Achieved: SNR differences of 0.05 / 0.08 / 0.07; image-quality and diagnostic-confidence differences of 0.11 / 0.04 / -0.05; imaging-artifact differences of 0.11 / 0.14 / 0.05 (septum pellucidum / cranial nerves / cerebellar folia) |
    | SNR improvement from SubtleMR | >= 5% (criterion established in the SubtleMR validation, K223623) | Achieved: average SNR improvement of 14% |
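    The quantitative endpoints above are standard ROI-based imaging quantities. As a minimal sketch of how such improvements could be computed, assuming NumPy arrays of ROI pixel intensities (the function names and ROI conventions are illustrative, not taken from the submission):

    ```python
    import numpy as np

    def cnr(lesion_roi, parenchyma_roi, noise_std):
        """Contrast-to-noise ratio: signal difference between lesion and
        normal parenchyma, normalized by the noise standard deviation."""
        return (lesion_roi.mean() - parenchyma_roi.mean()) / noise_std

    def lbr(lesion_roi, parenchyma_roi):
        """Lesion-to-brain ratio: mean lesion signal over mean brain signal."""
        return lesion_roi.mean() / parenchyma_roi.mean()

    def cep(pre_roi, post_roi):
        """Contrast enhancement percentage: relative signal increase from
        the pre-contrast to the post-contrast image in the same ROI."""
        return 100.0 * (post_roi.mean() - pre_roi.mean()) / pre_roi.mean()

    def pct_improvement(enhanced_value, standard_value):
        """Percent improvement of a metric on the enhanced image relative to
        the standard post-contrast image (acceptance threshold: >= 50%)."""
        return 100.0 * (enhanced_value - standard_value) / standard_value

    # Illustrative values only, not measurements from the study:
    std_cnr = cnr(np.array([300.0, 320.0]), np.array([200.0, 210.0]), 10.0)
    enh_cnr = cnr(np.array([700.0, 720.0]), np.array([200.0, 210.0]), 10.0)
    print(f"CNR improvement: {pct_improvement(enh_cnr, std_cnr):.1f}%")
    ```

    In the study, each per-case improvement would be computed this way and then averaged across the 95 cases (or the 57 lesion-only cases) for comparison against the 50% threshold.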

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 95 T1 brain cases.
      • Of these, 57 cases had identified lesions and were used for lesion-specific analyses (e.g., LBR, lesion-specific CNR).
    • Data Provenance: Retrospective, acquired from clinical sites or hospitals.
      • Country of Origin: United States (California, New York, and nationwide sites) and China (Beijing).
      • Acquisition details: Variety of T1 input protocols (BRAVO, MPRAGE+, FLAIR, FSE), orientations (axial, sagittal, coronal), acquisition types (2D, 3D), field strengths (0.3T, 1.5T, 3.0T), and MR scanner vendors (GE, Philips, Siemens, Hitachi).
      • Patient Demographics: Age (7 to 86, relatively even distribution), Sex (relatively even distribution of females and males), Pathologies (Cerebritis, Glioma, Meningioma, Metastases, Multiple Sclerosis, Neuritis, Inflammation, Other tumor related, other abnormalities).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Quantitative Assessment (ROI drawing): One board-certified radiologist.
    • Qualitative Assessment (Reader Study): Three board-certified neuro-radiologists.
      • Specific years of experience are not mentioned, but "board-certified" implies a certain level of qualification and experience within their specialty.

    4. Adjudication Method for the Test Set

    • Quantitative Assessment: ROIs were drawn by a single board-certified radiologist. No explicit mention of adjudication or multiple expert consensus for the initial ROI placement. The statistical analysis (Wilcoxon signed-rank test) focuses on the comparison of metrics derived from these ROIs.
    • Qualitative Assessment (Reader Study): The readers individually rated images on Likert scales. The results are presented as aggregated statistics (e.g., "significantly better/not worse by p<0.0001"). There is no mention of an adjudication process (e.g., 2+1, 3+1) to arrive at a single consensus ground truth or final rating for each case from the multiple radiologists.
      • For exploratory endpoints, such as false lesion analysis, it's mentioned that "100% of cases received scores from all readers that the Standard-of-Care image was sufficient to identify the false lesion(s)," indicating agreement, but this is not a formal adjudication process for establishing ground truth from disagreements.
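    The paired comparisons described above all use the Wilcoxon signed-rank test on per-case differences (metric values or reader ratings). A minimal sketch with SciPy, using synthetic scores; only the test procedure mirrors the document, the data here are invented:

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(42)

    # Synthetic paired per-case scores for 57 lesion cases: one rating of the
    # standard post-contrast image and one of the enhanced image of the same
    # case, from the same reader (continuous here to avoid ties).
    standard = rng.normal(3.0, 0.6, size=57)
    enhanced = standard + rng.uniform(0.2, 1.2, size=57)  # assumed improvement

    # One-sided test: are the enhanced-image ratings significantly greater?
    stat, p = wilcoxon(enhanced, standard, alternative="greater")
    print(f"statistic={stat:.1f}, p={p:.3g}")
    ```

    A two-sided or "less" alternative would instead be used for the non-inferiority style endpoints (image quality "not worse" than standard post-contrast).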

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • Yes, a MRMC study was performed. The "Qualitative Assessment (Reader Study)" involved three board-certified neuro-radiologists evaluating cases.
    • Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
      • The study design presented is a comparison of standard post-contrast images vs. AiMIFY-enhanced images, evaluated by human readers. It assesses if AiMIFY improves perceived image quality and lesion features.
      • The results show improvement in features like "Lesion Contrast Enhancement, Border Delineation, and Internal Morphology" (p < 0.0001 compared to standard post-contrast). Perceived Image Quality was "not worse" and even "better" for two of three readers (p < 0.0001).
      • This study directly demonstrates the improvement in image characteristics for human readers when viewing AiMIFY-enhanced images. It does not, however, describe a comparative effectiveness study showing how much human readers' diagnostic accuracy or confidence improves when assisted by AI vs. not assisted. The study focuses on the image enhancement characteristics as perceived by readers rather than a change in diagnostic outcome or reader performance statistics.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone assessment was performed. The "Quantitative Assessment (Bench Test)" evaluated the algorithm's performance directly by comparing calculated metrics (CNR, LBR, CEP) from AiMIFY-processed images against standard post-contrast images. This assessment did not involve human readers' diagnostic interpretation of the images but rather quantifiable improvements generated by the algorithm itself.
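    The radiomics stability criterion (CCC >= 0.65 for lesion tissue, >= 0.8 for parenchyma) is likewise an algorithm-only measurement. A minimal sketch of Lin's concordance correlation coefficient, assuming each input is a vector of one radiomics feature computed per case on the standard and the enhanced image (the function name is illustrative):

    ```python
    import numpy as np

    def lin_ccc(x, y):
        """Lin's concordance correlation coefficient between paired feature
        vectors: 1.0 is perfect agreement, 0.0 is no agreement."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()              # population variances
        cov = ((x - mx) * (y - my)).mean()     # population covariance
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # A feature compared against itself agrees perfectly:
    print(lin_ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0
    ```

    Unlike Pearson correlation, the CCC penalizes both scale and location shifts between the two measurements, which is why it is favored for feature-stability checks.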

    7. The Type of Ground Truth Used

    • Quantitative Assessment: The ground truth for calculating CNR, LBR, and CEP was based on ROIs drawn by a single board-certified radiologist, identifying enhancing lesions and brain parenchyma. This can be considered a form of expert-defined ground truth based on anatomical and radiological characteristics. The lesions themselves were "identified" in the test datasets, suggesting a pre-existing clinical determination of their presence.
    • Qualitative Assessment: The ground truth for "lesion presence" in the Qualitative Assessment was presumably based on cases identified to "have lesions" in the initial test dataset (57 out of 95 cases). The evaluation itself was subjective (Likert scale ratings of perceived visibility, quality, etc.), with readers comparing the standard and AiMIFY images. This relies on the subjective judgment of multiple experts rather than an independent "true" ground truth like pathology.

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size of the training set.
    • It mentions that the training and validation datasets were compared for CNR increase, and that the training data compared low-dose to regular-dose post-contrast images, but provides no numerical size for the training set itself.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established.
    • It implies that the training data involved "low-dose to regular-dose post-contrast images," suggesting that perhaps the ground truth for training the enhancement model was the "regular-dose" image, or that the model was trained to transform low-signal images into higher-signal enhanced images. However, specifics on how the "true" enhanced state or lesion characteristics within the training data were determined are not provided.