
510(k) Data Aggregation

    K Number: K252379
    Device Name: AIR Recon DL
    Date Cleared: 2025-12-23 (146 days)
    Product Code:
    Regulation Number: 892.1000
    Age Range: All

    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    AIR Recon DL is a deep learning based reconstruction technique that is available for use on GE HealthCare 1.5T, 3.0T, and 7.0T MR systems. AIR Recon DL reduces noise and ringing (truncation artifacts) in MR images, which can be used to reduce scan time and improve image quality. AIR Recon DL is intended for use with all anatomies, and for patients of all ages. Depending on the anatomy of interest being imaged, contrast agents may be used.

    Device Description

    AIR Recon DL is a software feature intended for use with GE HealthCare MR systems. It is a deep learning-based reconstruction technique that removes noise and ringing (truncation) artifacts from MR images. AIR Recon DL is an optional feature that is integrated into the MR system software and activated through purchasable software option keys. AIR Recon DL has been previously cleared for use with 2D Cartesian, 3D Cartesian, and PROPELLER imaging sequences.

    The proposed device is a modified version of AIR Recon DL that includes a new deep-learning phase correction algorithm for applications that create multiple intermediate images and combine them, such as Diffusion Weighted Imaging where multiple NEX images are collected and combined. This enhancement is an optional feature that is integrated into the MR system software and activated through an additional purchasable software option key (separate from the software option keys of the predicate device).
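
    The noise-floor and signal-bias claims made for the new phase correction follow from a general property of diffusion imaging: if per-shot phase errors are corrected so that signal lands in the real channel, the intermediate (multi-NEX) images can be averaged in the real domain instead of as magnitudes, avoiding the Rician bias of magnitude averaging. The toy NumPy simulation below illustrates that principle only; it is not GE HealthCare's DLPC algorithm, and all names and values in it are invented.

```python
# Toy illustration (not GE's algorithm): why phase correction before combining
# multi-NEX diffusion images lowers the noise floor. Assumption: each NEX is the
# same true signal with a random shot-to-shot phase plus complex Gaussian noise,
# and "ideal" phase correction removes that phase exactly.

import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0          # arbitrary units; low-SNR regime (e.g., high b-value)
sigma = 1.0                # noise standard deviation per channel
n_nex = 16                 # number of excitations averaged
n_trials = 20000           # Monte Carlo repetitions

phases = rng.uniform(-np.pi, np.pi, (n_trials, n_nex))        # shot-to-shot phase
noise = sigma * (rng.standard_normal((n_trials, n_nex))
                 + 1j * rng.standard_normal((n_trials, n_nex)))
nex_images = true_signal * np.exp(1j * phases) + noise        # one "pixel" per NEX

# 1) Magnitude averaging: robust to phase, but biased upward at low SNR (Rician floor).
mag_avg = np.abs(nex_images).mean(axis=1)

# 2) Phase-corrected real-part averaging: remove the per-shot phase first, then
#    average the real channel; the noise stays zero-mean, so the bias is removed.
corrected = nex_images * np.exp(-1j * phases)                 # ideal correction
real_avg = corrected.real.mean(axis=1)

print(f"true signal:                  {true_signal:.3f}")
print(f"magnitude-average estimate:   {mag_avg.mean():.3f}  (biased high)")
print(f"phase-corrected real average: {real_avg.mean():.3f}  (approximately unbiased)")
```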

    AI/ML Overview

    This document describes the acceptance criteria and the studies conducted to demonstrate the performance of the AIR Recon DL device, as presented in the FDA 510(k) submission.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric/Description | Acceptance Criteria Details | Reported Device Performance |
    |---|---|---|---|
    | Nonclinical Testing | DLPC Model: Accuracy of Phase Correction | Provides more accurate phase correction | Demonstrates more accurate phase correction |
    | Nonclinical Testing | DLPC Model: Impact on Noise Floor | Effectively reduce signal bias | Effectively reduces signal bias and lowers the noise floor |
    | Nonclinical Testing | PC-ARDL Model: SNR | Improve SNR | Improves SNR |
    | Nonclinical Testing | PC-ARDL Model: Image Sharpness | Improve image sharpness | Improves image sharpness |
    | Nonclinical Testing | PC-ARDL Model: Low Contrast Detectability | Improve low contrast detectability | Does not adversely impact retention of low contrast features |
    | Nonclinical Testing | Overall Image Quality/Safety/Performance | No adverse impacts to image quality, safety, or performance | No adverse impacts to image quality, safety, or performance identified |
    | In-Vivo Performance Testing | DLPC & PC-ARDL: ADC Accuracy (Diffusion Imaging) | Accurate and unbiased ADC values, especially at higher b-values | Achieved accurate and unbiased ADC values across all b-values tested (whereas predicate showed significant reductions) |
    | In-Vivo Performance Testing | DLPC & PC-ARDL: Low-Contrast Detectability | Retention of low-contrast features | Significant improvement in contrast-to-noise ratio, "not adversely impacting the retention of low contrast features" |
    | Quantitative Post Processing | ADC Measurement Repeatability | Similar repeatability to conventional methods | Coefficient of variability for ADC values closely matched those generated with product reconstruction |
    | Quantitative Post Processing | Effectiveness of Phase Correction (Real/Imaginary Channels) | Signal primarily in the real channel, noise only in the imaginary channel | For DLPC, all signal was in the real channel; the imaginary channel contained noise only (outperforming conventional methods) |
    | Clinical Image Quality Study | Diagnostic Quality | Excellent diagnostic quality, even in challenging situations | Produces images of excellent diagnostic quality, delivering overall exceptional image quality across all organ systems, even in challenging situations |

    2. Sample Size Used for the Test Set and Data Provenance

    • Nonclinical Testing:
      • Phantom testing was conducted for the DLPC and PC-ARDL models. The number of phantom scans is not specified.
    • In-Vivo Performance Testing:
      • ADC Accuracy: Diffusion-weighted brain images were acquired at 1.5T with b-values of 50, 400, 800, and 1200 s/mm². The number of subjects is not explicitly stated; the data are described only as "diffusion images" and "diffusion-weighted brain images." (A worked ADC fit is sketched after this list.)
      • Low-Contrast Detectability: Raw data from 4 diffusion-weighted brain scans were used.
    • Quantitative Post Processing (Repeatability Study):
      • 6 volunteers were recruited. 2 volunteers scanned on a 1.5T scanner, 4 on a 3T scanner.
      • Scanned anatomical regions included brain, spine, abdomen, pelvis, and breast.
      • Each sequence was repeated 4 times.
      • Data Provenance: The document describes "in-vivo data" and states that "volunteer scanning was performed simulating routine clinical workflows," which suggests prospective scanning of human subjects in a controlled setting. The country of origin is not specified. The "previously acquired de-identified cases" mentioned for the Clinical Image Quality Study are retrospective data for that study only; the volunteer scanning for the repeatability study appears to have been prospective.
    • Clinical Image Quality Study:
      • 34 datasets of previously acquired de-identified cases.
      • Data Provenance: "previously acquired de-identified cases" indicates retrospective data. The country of origin is not specified.
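
    For context on the ADC accuracy bullet above, the standard mono-exponential diffusion model is S(b) = S0 · exp(−b · ADC), so ADC can be estimated by a log-linear fit across the acquired b-values. The sketch below uses the b-values listed in the study but entirely hypothetical ROI signal values (chosen near the free-water ADC expected in the lateral ventricles); it is an illustration, not the study's analysis code.

```python
import numpy as np

b_values = np.array([50.0, 400.0, 800.0, 1200.0])   # s/mm^2, as listed in the study
roi_signal = np.array([860.0, 300.0, 91.0, 27.0])   # hypothetical mean ROI signals

# Log-linear least-squares fit: ln S(b) = ln S0 - b * ADC
slope, intercept = np.polyfit(b_values, np.log(roi_signal), 1)
adc = -slope                                          # mm^2/s
print(f"Estimated ADC ≈ {adc:.2e} mm^2/s (S0 ≈ {np.exp(intercept):.0f})")
```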

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Nonclinical Testing: Ground truth established through phantom measurements and expected physical properties (e.g., signal bias, noise floor). No human experts involved in establishing ground truth here.
    • In-Vivo Performance Testing:
      • ADC Accuracy: "Average ADC values were measured from regions of interest in the lateral ventricles." This implies expert selection of ROIs, but the number of experts is not specified. The ground truth for ADC is the expected isotropic Gaussian diffusion in these regions.
      • Low-Contrast Detectability: "The contrast ratio and contrast-to-noise ratio for each of the inserts were measured." This is a quantitative measure, not explicitly relying on expert consensus for ground truth on detectability, but rather on the known properties of the inserted synthetic objects.
    • Quantitative Post Processing:
      • ADC Repeatability: Ground truth for repeatability is based on quantitative measurements and statistical analysis (coefficient of variability). ROI placement would typically be done by an expert, but the number of readers is not specified. (A coefficient-of-variation sketch follows this list.)
      • Phase Correction Effectiveness: Ground truth is based on the theoretical expectation of signal distribution in real/imaginary channels after ideal phase correction.
    • Clinical Image Quality Study:
      • One (1) U.S. Board Certified Radiologist was used.
      • Qualifications: "U.S. Board Certified Radiologist." No explicit number of years of experience is stated, but Board Certification indicates a high level of expertise.
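
    As a concrete reading of the repeatability metric named above, the coefficient of variation (CoV) is the standard deviation of repeated ADC measurements divided by their mean. The sketch below uses invented repeat values purely to show the computation; comparing such CoV values between the AIR Recon DL and conventional reconstructions is the kind of check the repeatability study describes.

```python
import numpy as np

# Rows: ROIs (e.g., brain, spine, abdomen); columns: the 4 repeated acquisitions.
# All values are hypothetical, in mm^2/s.
adc_repeats = np.array([
    [0.80e-3, 0.82e-3, 0.79e-3, 0.81e-3],
    [1.10e-3, 1.07e-3, 1.12e-3, 1.09e-3],
    [1.45e-3, 1.48e-3, 1.43e-3, 1.46e-3],
])

# Coefficient of variation per ROI: sample std / mean across the repeats.
cov = adc_repeats.std(axis=1, ddof=1) / adc_repeats.mean(axis=1)
for i, c in enumerate(cov):
    print(f"ROI {i + 1}: CoV = {100 * c:.1f}%")
```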

    4. Adjudication Method for the Test Set

    • Nonclinical/Phantom Testing: No explicit adjudication method described beyond passing defined acceptance criteria for quantitative metrics.
    • In-Vivo Performance Testing: Quantitative measurements (ADC values, contrast ratios, CNR) were used. Paired t-tests were conducted; this is a statistical comparison, not an adjudication process as typically defined for expert readings. (A minimal paired t-test sketch follows this list.)
    • Quantitative Post Processing: Quantitative measurements and statistical analysis (coefficient of variability, comparison of real/imaginary channels).
    • Clinical Image Quality Study: A single U.S. Board Certified Radiologist made the assessment. No adjudication method is described, implying a single-reader assessment of clinical image quality.
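
    The paired t-test mentioned in the in-vivo bullet above compares paired measurements (for example, ADC values from two reconstructions of the same raw data) subject by subject. A minimal SciPy sketch with hypothetical paired values is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject ADC values (mm^2/s) from two reconstructions of the
# same acquisitions; the pairing is by subject.
adc_conventional = np.array([2.9e-3, 3.1e-3, 3.0e-3, 2.8e-3, 3.2e-3, 3.0e-3])
adc_air_recon_dl = np.array([3.0e-3, 3.1e-3, 3.1e-3, 2.9e-3, 3.2e-3, 3.0e-3])

t_stat, p_value = stats.ttest_rel(adc_conventional, adc_air_recon_dl)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```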

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • An MRMC comparative effectiveness study was not explicitly described as a formal study design in the provided text.
    • The "Clinical Image Quality Study" involved only one radiologist, so it does not qualify as an MRMC study.
    • There is no reported effect size for how much human readers improve with AI versus without AI assistance. The study instead focused on the standalone diagnostic quality of the AI-reconstructed images.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, performance was evaluated in a standalone manner.
      • Nonclinical Testing: Phantom studies directly evaluate the algorithm's output against known physical properties and defined metrics.
      • In-Vivo Performance Testing: ADC accuracy and low-contrast detectability were measured directly from the reconstructed images, a standalone evaluation of the algorithm's quantitative output. (A rough SNR/CNR measurement sketch follows this list.)
      • Quantitative Post Processing: Repeatability and effectiveness of phase correction in real/imaginary channels are algorithm-centric evaluations.
      • Even the clinical image quality study, while involving a human reader, assessed the standalone output of the algorithm (AIR Recon DL with Phase Correction) for diagnostic quality.
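
    The standalone quantitative evaluations listed above reduce to ROI statistics computed on the reconstructed images. The sketch below shows generic SNR and contrast-to-noise ratio measurements on a synthetic image; the image, ROI locations, and contrast values are placeholders, not the study's phantom or protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(100.0, 5.0, (256, 256))   # synthetic background near 100
image[100:140, 100:140] += 20.0               # a low-contrast "insert"

signal_roi = image[100:140, 100:140]          # ROI over the insert
background_roi = image[10:50, 10:50]          # ROI over uniform background

noise_sd = background_roi.std(ddof=1)
snr = signal_roi.mean() / noise_sd
cnr = (signal_roi.mean() - background_roi.mean()) / noise_sd
print(f"SNR ≈ {snr:.1f}, CNR ≈ {cnr:.1f}")
```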

    7. Type of Ground Truth Used

    • Expert Consensus: Not used for the quantitative metrics. For the clinical image quality study, a single radiologist's assessment (not a consensus) served as the clinical ground truth.
    • Pathology: Not used as ground truth in the provided study descriptions. While some datasets "included pathological features such as prostate cancer... hepatocellular carcinoma," the assessment by the radiologist was on "diagnostic quality" of the images, not a comparison against pathology reports for definitive disease identification.
    • Outcomes Data: Not used as ground truth.
    • Other:
      • Physical Properties/Known Standards: For phantom testing (e.g., signal bias, noise floor, SNR, sharpness), and for theoretical expectations of ADC values in specific regions (lateral ventricles).
      • Known Synthetic Inserts: For low-contrast detectability.
      • Theoretical Expectations: For phase correction effectiveness (signal in real, noise in imaginary).

    8. Sample Size for the Training Set

    • The document does not provide any specific sample size for the training set used for the deep learning models (DLPC and PC-ARDL). It only states that the models are "deep learning-based."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide any information on how the ground truth for the training set was established. It only describes the testing of the final, trained models.