510(k) Data Aggregation

    K Number
    K252379

    Device Name
    AIR Recon DL
    Date Cleared
    2025-12-23

    (146 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    AIR Recon DL is a deep learning based reconstruction technique that is available for use on GE HealthCare 1.5T, 3.0T, and 7.0T MR systems. AIR Recon DL reduces noise and ringing (truncation artifacts) in MR images, which can be used to reduce scan time and improve image quality. AIR Recon DL is intended for use with all anatomies, and for patients of all ages. Depending on the anatomy of interest being imaged, contrast agents may be used.

    Device Description

    AIR Recon DL is a software feature intended for use with GE HealthCare MR systems. It is a deep learning-based reconstruction technique that removes noise and ringing (truncation) artifacts from MR images. AIR Recon DL is an optional feature that is integrated into the MR system software and activated through purchasable software option keys. AIR Recon DL has been previously cleared for use with 2D Cartesian, 3D Cartesian, and PROPELLER imaging sequences.

    The proposed device is a modified version of AIR Recon DL that includes a new deep-learning phase correction algorithm for applications that create multiple intermediate images and combine them, such as Diffusion Weighted Imaging where multiple NEX images are collected and combined. This enhancement is an optional feature that is integrated into the MR system software and activated through an additional purchasable software option key (separate from the software option keys of the predicate device).

    AI/ML Overview

    This document describes the acceptance criteria and the studies conducted to demonstrate the performance of the AIR Recon DL device, as presented in the FDA 510(k) clearance letter.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Metric/Description | Acceptance Criteria Details | Reported Device Performance
    Nonclinical Testing | DLPC Model: Accuracy of Phase Correction | Provides more accurate phase correction | Demonstrates more accurate phase correction
    Nonclinical Testing | DLPC Model: Impact on Noise Floor | Effectively reduce signal bias | Effectively reduces signal bias and lowers the noise floor
    Nonclinical Testing | PC-ARDL Model: SNR | Improve SNR | Improves SNR
    Nonclinical Testing | PC-ARDL Model: Image Sharpness | Improve image sharpness | Improves image sharpness
    Nonclinical Testing | PC-ARDL Model: Low Contrast Detectability | Improve low contrast detectability | Does not adversely impact retention of low contrast features
    Nonclinical Testing | Overall Image Quality/Safety/Performance | No adverse impacts to image quality, safety, or performance | No adverse impacts to image quality, safety, or performance identified
    In-Vivo Performance Testing | DLPC & PC-ARDL: ADC Accuracy (Diffusion Imaging) | Accurate and unbiased ADC values, especially at higher b-values | Achieved accurate and unbiased ADC values across all b-values tested (whereas predicate showed significant reductions)
    In-Vivo Performance Testing | DLPC & PC-ARDL: Low-Contrast Detectability | Retention of low-contrast features | Significant improvement in contrast-to-noise ratio, "not adversely impacting the retention of low contrast features"
    Quantitative Post Processing | ADC Measurement Repeatability | Similar repeatability to conventional methods | Coefficient of variability for ADC values closely matched those generated with product reconstruction
    Quantitative Post Processing | Effectiveness of Phase Correction (Real/Imaginary Channels) | Signal primarily in the real channel, noise only in the imaginary channel | For DLPC, all signal was in the real channel, imaginary channel contained noise only (outperforming conventional methods)
    Clinical Image Quality Study | Diagnostic Quality | Excellent diagnostic quality without loss of diagnostic quality, even in challenging situations | Produces images of excellent diagnostic quality, delivering overall exceptional image quality across all organ systems, even in challenging situations
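
    The phase-correction criterion above rests on a simple property of complex MR data: after an ideal phase correction, the image signal lies in the real channel and the imaginary channel contains only noise. The short numpy sketch below illustrates that property on simulated data using a conventional low-pass phase estimate; it is an illustration of the evaluation concept only, not the DLPC deep-learning model described in the submission, and the image size, phase pattern, and noise level are assumed for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated complex image: a real-valued magnitude pattern with a smooth,
    # spatially varying background phase (as arises in diffusion imaging),
    # plus complex Gaussian noise.
    magnitude = rng.uniform(0.5, 1.0, size=(64, 64))
    phase = 0.8 * np.sin(np.linspace(0.0, np.pi, 64))[:, None]
    noise = 0.05 * (rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))
    img = magnitude * np.exp(1j * phase) + noise

    # Conventional phase correction: estimate the smooth background phase from a
    # low-pass (central k-space) copy of the image and rotate it away.
    kspace = np.fft.fftshift(np.fft.fft2(img))
    lowpass = np.zeros_like(kspace)
    c = kspace.shape[0] // 2
    lowpass[c - 4:c + 4, c - 4:c + 4] = kspace[c - 4:c + 4, c - 4:c + 4]
    phase_est = np.angle(np.fft.ifft2(np.fft.ifftshift(lowpass)))
    corrected = img * np.exp(-1j * phase_est)

    # After a good correction, the signal energy sits in the real channel and the
    # imaginary channel is left with (roughly zero-mean) noise -- the property the
    # submission uses to judge phase-correction effectiveness.
    print("real-channel energy:", float(np.mean(corrected.real ** 2)))
    print("imag-channel energy:", float(np.mean(corrected.imag ** 2)))
    ```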

    2. Sample Size Used for the Test Set and Data Provenance

    • Nonclinical Testing:
      • Phantom testing was conducted for the DLPC and PC-ARDL models. No specific sample size (number of phantom scans) is provided, but the document implies a number sufficient for the evaluation.
    • In-Vivo Performance Testing:
      • ADC Accuracy: Diffusion-weighted brain images were acquired at 1.5T with b-values = 50, 400, 800, 1200 s/mm². The number of subjects is not explicitly stated; the data are referred to only as "diffusion images" and "diffusion-weighted brain images." (A short ADC-fitting sketch follows this list.)
      • Low-Contrast Detectability: Raw data from 4 diffusion-weighted brain scans were used.
    • Quantitative Post Processing (Repeatability Study):
      • 6 volunteers were recruited: 2 were scanned on a 1.5T scanner and 4 on a 3T scanner.
      • Scanned anatomical regions included brain, spine, abdomen, pelvis, and breast.
      • Each sequence was repeated 4 times.
      • Data Provenance: The document states "in-vivo data" and "volunteer scanning was performed simulating routine clinical workflows." This suggests prospective scanning of human subjects, likely in a controlled environment. The country of origin is not specified, but given the FDA submission, it's likely U.S. or international data meeting U.S. standards. The statement "previously acquired de-identified cases" for the Clinical Image Quality Study refers to retrospective data for that specific study, but the volunteer scanning for repeatability appears prospective.
    • Clinical Image Quality Study:
      • 34 datasets of previously acquired de-identified cases.
      • Data Provenance: "previously acquired de-identified cases" indicates retrospective data. The country of origin is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Nonclinical Testing: Ground truth established through phantom measurements and expected physical properties (e.g., signal bias, noise floor). No human experts involved in establishing ground truth here.
    • In-Vivo Performance Testing:
      • ADC Accuracy: "Average ADC values were measured from regions of interest in the lateral ventricles." This implies expert selection of ROIs, but the number of experts is not specified. The ground truth for ADC is the expected isotropic Gaussian diffusion in these regions.
      • Low-Contrast Detectability: "The contrast ratio and contrast-to-noise ratio for each of the inserts were measured." This is a quantitative measure, not explicitly relying on expert consensus for ground truth on detectability, but rather on the known properties of the inserted synthetic objects.
    • Quantitative Post Processing:
      • ADC Repeatability: Ground truth for repeatability is based on quantitative measurements and statistical analysis (coefficient of variability). ROI placement would typically be done by an expert, but the number is not specified.
      • Phase Correction Effectiveness: Ground truth is based on the theoretical expectation of signal distribution in real/imaginary channels after ideal phase correction.
    • Clinical Image Quality Study:
      • One (1) U.S. Board Certified Radiologist was used.
      • Qualifications: "U.S. Board Certified Radiologist." No explicit number of years of experience is stated, but Board Certification indicates a high level of expertise.

    4. Adjudication Method for the Test Set

    • Nonclinical/Phantom Testing: No explicit adjudication method described beyond passing defined acceptance criteria for quantitative metrics.
    • In-Vivo Performance Testing: Quantitative measurements (ADC values, contrast ratios, CNR) were used. Paired t-tests were conducted, which is a statistical comparison method, not an adjudication process as typically defined for expert readings.
    • Quantitative Post Processing: Quantitative measurements and statistical analysis (coefficient of variability, comparison of real/imaginary channels).
    • Clinical Image Quality Study: A single U.S. Board Certified Radiologist made the assessment. No adjudication method is described, implying a single-reader assessment for clinical image quality.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • An MRMC comparative effectiveness study was not explicitly described as a formal study design in the provided text.
    • The "Clinical Image Quality Study" involved only one radiologist, so it does not qualify as an MRMC study.
    • There is no reported effect size for how much human readers improve with AI assistance versus without it; the study focused instead on the standalone diagnostic quality of the AI-reconstructed images.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, performance was evaluated in a standalone manner.
      • Nonclinical Testing: Phantom studies directly evaluate the algorithm's output against known physical properties and defined metrics.
      • In-Vivo Performance Testing: ADC accuracy and low-contrast detectability were measured directly from the reconstructed images, which is a standalone evaluation of the algorithm's quantitative output.
      • Quantitative Post Processing: Repeatability and effectiveness of phase correction in real/imaginary channels are algorithm-centric evaluations.
      • Even the clinical image quality study, while involving a human reader, assessed the standalone output of the algorithm (AIR Recon DL with Phase Correction) for diagnostic quality.

    7. Type of Ground Truth Used

    • Expert Consensus: Not explicitly stated as the ground truth for the quantitative metrics; for the clinical image quality study, a single radiologist's assessment served as the primary clinical ground truth.
    • Pathology: Not used as ground truth in the provided study descriptions. While some datasets "included pathological features such as prostate cancer... hepatocellular carcinoma," the assessment by the radiologist was on "diagnostic quality" of the images, not a comparison against pathology reports for definitive disease identification.
    • Outcomes Data: Not used as ground truth.
    • Other:
      • Physical Properties/Known Standards: For phantom testing (e.g., signal bias, noise floor, SNR, sharpness), and for theoretical expectations of ADC values in specific regions (lateral ventricles).
      • Known Synthetic Inserts: For low-contrast detectability.
      • Theoretical Expectations: For phase correction effectiveness (signal in real, noise in imaginary).

    8. Sample Size for the Training Set

    • The document does not provide any specific sample size for the training set used for the deep learning models (DLPC and PC-ARDL). It only states that the models are "deep learning-based."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide any information on how the ground truth for the training set was established. It only describes the testing of the final, trained models.

    Intended Use / Indications for Use

    Indications for Use for MAGNETOM Vida, MAGNETOM Lumina, MAGNETOM Vida Fit, MAGNETOM Sola, MAGNETOM Altea, MAGNETOM Sola Fit, MAGNETOM Viato.Mobile:

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Indications for Use for MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, MAGNETOM Flow.Rise:

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Vida with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA60A (K231560).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • 3D Whole Heart
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • Complex Averaging
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Open Workflow

    Modified features and applications:

    • GRE_PC
    • myExam RT Assist workflow improvements
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Lumina with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Lumina with syngo MR XA60A (K231560). A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • Complex Averaging
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Compressed Sensing Cardiac Cine
    • Open Workflow

    Modified Features and Applications:

    • GRE_PC
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Vida Fit with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA60A (K231560).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • Beat Sensor
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • GRE_PC
    • Open Recon 2.0
    • 3D Whole Heart
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Deep Resolve for EPI
    • Deep Resolve for HASTE
    • Physiologging
    • Complex Averaging
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • myExam Angio Advanced Assist (Test Bolus)
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • 3D Whole Heart
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Complex Averaging
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • myExam Angio Advanced Assist (Test Bolus)
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • 3D Whole Heart
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • GRE_PC
    • myExam RT Assist workflow improvements
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Altea with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Altea with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Compressed Sensing Cardiac Cine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • GRE_PC
    • myExam RT Assist workflow improvements
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola Fit with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola Fit with syngo MR XA70A (K250443).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • myExam Implant Suite
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola Fit with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola Fit with syngo MR XA70A (K250443).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • myExam Implant Suite
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE

    The subject device, MAGNETOM Viato.Mobile with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Viato.Mobile with syngo MR XA70A (K250443).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE

    With the subject software version, Syngo MR XB10, we are also introducing the following new 1.5T devices, which are part of our MAGNETOM Flow platform:

    • MAGNETOM Flow.Elite
    • MAGNETOM Flow.Neo
    • MAGNETOM Flow.Rise

    The subject devices, MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, and MAGNETOM Flow.Rise with software Syngo MR XB10, consist of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • Magnet
    • MREF (Magnet Refrigerator)
    • Gradient system
    • Gradient Coil
    • RF System
    • System Cover
    • Patient Table
    • MaRS (Measurement and Reconstruction System)
    • Select&GO Display (TPAN_3G) and Control Panel (CPAN_2G)
    • Body Coil
    • Head/Neck Coil
    • BM Head/Neck Coil (with ComfortSound)
    • BM Contour S Coil
    • BM Contour M Coil
    • BM Contour L Coil
    • BM Contour XL Coil
    • Foot/Ankle Coil
    • BM Spine Coil
    • iTx Extremity 18 Flare
    • Multi-Index MR-RT Positioning (a part of "RT Pro Edition" marketing bundle) (not available for MAGNETOM Flow.Rise)

    Modified Hardware:

    • Gradient Power Amplifier (GPA)
    • SAR Monitoring
    • In-Vivo Shim

    Software

    New Features and Applications:

    • CS Vibe
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • Quick Protocols
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • GRE_PC
    • myExam RT Assist workflow improvements (not available for MAGNETOM Flow.Rise)
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    New (general) Software / Platform / Workflow:

    • Select&GO extension (coil-based Iso Centering, Patient Registration at the touch display, Start Scan at the touch display)
    • New Startup-Timer
    • myExam RT Assist (not available for MAGNETOM Flow.Rise)
    • myExam Brain RT-Autopilot (not available for MAGNETOM Flow.Rise)
    • Eco Power Mode Pro

    Modified (general) Software / Platform:

    • Improved Gradient ECO Mode Settings

    Furthermore, the following minor updates and changes were made for the subject devices MAGNETOM Vida, MAGNETOM Lumina, MAGNETOM Vida Fit, MAGNETOM Sola, and MAGNETOM Altea:

    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • Inline Image Filter
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Select&Go Display (Touch Display (TPAN))
    • Marketing bundle "myExam Companion"

    The following minor updates and changes were made for the subject devices MAGNETOM Sola Fit and MAGNETOM Viato.Mobile:

    • Off-Center Planning Support
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Select&Go Display (Touch Display (TPAN))
    • Marketing bundle "myExam Companion"

    The following minor updates and changes were made for the subject devices MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, and MAGNETOM Flow.Rise:

    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • Inline Image Filter
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Marketing bundle "myExam Companion"
    • Marketing Bundle "RT Pro Edition"(not available for MAGNETOM Flow.Rise)
    AI/ML Overview

    This FDA 510(k) clearance letter pertains to several MAGNETOM MRI systems with software Syngo MR XB10. The document primarily focuses on demonstrating substantial equivalence to predicate devices through non-clinical testing of new and modified hardware and software features, particularly those involving Artificial Intelligence (AI) such as "Deep Resolve" functionalities.

    Here's an analysis of the acceptance criteria and the studies demonstrating that the devices meet them, specifically for the AI features:

    1. Table of Acceptance Criteria and Reported Device Performance for AI Features

    The document does not explicitly state "acceptance criteria" for the AI features in a numerical format that would typically be seen for a device's performance metrics (e.g., minimum sensitivity, specificity). Instead, the acceptance criteria are implicitly defined by the evaluation methods and the "Test result summary" for each Deep Resolve feature, which aim to demonstrate equivalent or improved image quality compared to conventional methods.

    For each AI feature, the implied acceptance criteria, the reported device performance, and comments are summarized below.

    Deep Resolve Swift Brain
    • Acceptance criteria (implied): quantitative quality metrics (PSNR, SSIM, NMSE) to demonstrate network impact; visual inspection to ensure no undetected artifacts; evaluation in clinical settings with collaboration partners.
    • Reported device performance: "Impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and normalized mean squared error (NMSE)." "Images were inspected visually to ensure that potential artefacts are detected that are not well captured by the metrics." "Work-in-progress packages of the network were delivered and evaluated in clinical settings with collaboration partners."
    • Comments: The results indicate successful performance in meeting these criteria, suggesting the AI feature performs as intended without negative impact on image quality and with acceptable quantitative metrics.

    Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE
    • Acceptance criteria (implied): quantitative evaluations (SSIM, PSNR, MSE) showing convergence of training and improvements over conventional parallel imaging; visual inspection confirming no negative impact on image quality; the function should allow faster acquisition or improved image quality.
    • Reported device performance: "Quantitative evaluations of structural similarity index (SSIM), peak signal-to-noise ratio (PSNR) and mean squared error (MSE) metrics showed a convergence of the training and improvements compared to conventional parallel imaging." "An inspection of the test images did not reveal any negative impact to the image quality." "The function has been used either to acquire images faster or to improve image quality."
    • Comments: The results indicate successful performance, demonstrating quantitative improvements and confirming user benefit (faster acquisition or improved image quality) without negative visual impact.

    Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE
    • Acceptance criteria (implied): quantitative quality metrics (PSNR, SSIM, perceptual loss); rating and evaluation of image sharpness by intensity profile comparisons; demonstration of increased edge sharpness and reduced Gibbs artifacts.
    • Reported device performance: "The impact of the Deep Resolve Sharp network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss." "The tests include rating and an evaluation of image sharpness by intensity profile comparisons of reconstruction with and without Deep Resolve Sharp. Both tests show increased edge sharpness and reduced Gibbs artifacts."
    • Comments: The results directly confirm improved image sharpness and reduced artifacts, meeting the implied performance criteria.

    Deep Resolve Boost for TSE
    • Acceptance criteria (implied): metrics (PSNR, SSIM, LPIPS) similar to the predicate (cleared) network, with both outperforming conventional GRAPPA; statistically significant reduction of banding artifacts; no significant changes in sharpness and detail visibility; radiologist evaluation confirming no difference in suitability for clinical diagnostics.
    • Reported device performance: "The evaluation on the test dataset confirmed very similar metrics in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and learned perceptual image patch similarity metrics (LPIPS) for the predicate and the modified network with both outperforming conventional GRAPPA as the reference." "Visual evaluations confirmed statistically significant reduction of banding artifacts with no significant changes in sharpness and detail visibility." "In addition, the radiologist evaluation revealed no difference in suitability for clinical diagnostics between updated and cleared predicate network."
    • Comments: This AI feature directly demonstrates equivalent or improved performance compared to the predicate, with a specific "radiologist evaluation" ensuring clinical suitability.
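
    For reference, the reference-based error metrics named throughout these summaries (MSE, NMSE, PSNR) are simple functions of a reconstructed image and its ground-truth target. The sketch below gives commonly used plain-numpy definitions; the exact formulas, normalizations, and data ranges used by Siemens are not stated in the document, so treat these as generic illustrations (SSIM additionally requires a windowed implementation, for example scikit-image's structural_similarity).

    ```python
    import numpy as np

    def mse(pred, target):
        """Mean squared error between a reconstruction and its target."""
        return float(np.mean((pred - target) ** 2))

    def nmse(pred, target):
        """Normalized MSE: error energy relative to the target's energy."""
        return float(np.sum((pred - target) ** 2) / np.sum(target ** 2))

    def psnr(pred, target, data_range=None):
        """Peak signal-to-noise ratio in dB; data_range defaults to the target's range."""
        if data_range is None:
            data_range = float(target.max() - target.min())
        return 10.0 * np.log10(data_range ** 2 / mse(pred, target))

    # Toy usage: a "reconstruction" that is the target plus a little noise.
    rng = np.random.default_rng(0)
    target = rng.uniform(0.0, 1.0, size=(128, 128))
    pred = target + 0.01 * rng.standard_normal(target.shape)
    print("MSE:", mse(pred, target))
    print("NMSE:", nmse(pred, target))
    print("PSNR (dB):", psnr(pred, target))
    ```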

    2. Sample Size Used for the Test Set and Data Provenance

    Since the document distinguishes between training, validation, and testing datasets, the "test set" here refers to the data used for final evaluation of the AI model's performance.

    • Deep Resolve Swift Brain:

      • Test Set Sample Size: The document lists "Validation: 3,616 slices (1.5T validation); 6,048 slices (3T validation)" as part of the split. It also mentions that "work-in-progress packages of the network were delivered and evaluated in clinical settings with collaboration partners," implying additional testing, but a specific numerical sample size for this external evaluation is not provided. However, these validation splits serve as the primary "test set" for the performance metrics reported.
      • Data Provenance: "in-house measurement," implying retrospective data collected at Siemens' facilities. The document notes that "attributes like gender, age and ethnicity are not relevant to the training data" due to network architecture, but no specific country of origin is stated beyond "in-house."
    • Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE:

      • Test Set Sample Size: The document allocates 19% of the 1,265 measurements to validation (approximately 240 measurements). It also explicitly mentions "collaboration partners (testing)", indicating an external test set, but a specific numerical breakdown for this is not provided.
      • Data Provenance: "in-house measurements (training and validation) and collaboration partners (testing)." This suggests a mix of retrospective data potentially from various countries where Siemens has collaboration, though specific locations are not listed.
    • Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE:

      • Test Set Sample Size: 30% of the 500 measurements are listed for validation, which serves as a test set. This equates to 150 measurements.
      • Data Provenance: "in-house measurements," implying retrospective data from Siemens' research facilities. Specific country not mentioned.
    • Deep Resolve Boost for TSE:

      • Test Set Sample Size: "Additional test dataset for banding artifact reduction: more than 2000 slices."
      • Data Provenance: "in-house measurements and collaboration partners" for training/validation. The "additional test dataset for banding artifact reduction" likely follows the same provenance. Retrospective data.

    3. Number of Experts Used and Qualifications for Ground Truth

    The document does not explicitly state the number of experts used to establish ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience") for any of the Deep Resolve features.

    However, for Deep Resolve Boost for TSE, it mentions:

    • "Visual evaluations confirmed statistically significant reduction of banding artifacts... "
    • "In addition, the radiologist evaluation revealed no difference in suitability for clinical diagnostics..."

    This indicates that radiologists were involved in the evaluation of the Deep Resolve Boost for TSE feature, presumably as experts to establish the clinical suitability. The exact number and their detailed qualifications are not provided. For other features, the ground truth is primarily based on the acquired raw data or manipulated versions of it, without explicit mention of expert review in the ground truth establishment process.


    4. Adjudication Method (for the test set)

    The document does not specify an adjudication method like "2+1" or "3+1" for establishing ground truth or evaluating the test set for any of the AI features. The ground truth for training and validation is the acquired datasets themselves; network inputs are derived from these high-quality source images by data manipulation and augmentation. For Deep Resolve Boost for TSE, a "radiologist evaluation" is mentioned, implying expert review without a detailed adjudication protocol.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to measure the improvement of human readers with AI assistance versus without AI assistance. The evaluations focus on the standalone performance of the AI algorithms in improving image quality metrics and, in one instance (Deep Resolve Boost for TSE, radiologist evaluation), the suitability for clinical diagnostics, rather than the impact on human reader performance.


    6. Standalone (Algorithm Only) Performance

    Yes, standalone (algorithm only) performance was done. The descriptions for each Deep Resolve feature focus entirely on the algorithm's performance in terms of quantitative image quality metrics (PSNR, SSIM, NMSE, MSE, LPIPS), visual inspection for artifacts, and improvements over conventional techniques. There is no mention of a "human-in-the-loop" component in the described performance evaluations for these AI features, except for the "radiologist evaluation" for Deep Resolve Boost for TSE which assessed clinical suitability of the output images, not reader performance with the AI.


    7. Type of Ground Truth Used

    • For Deep Resolve Swift Brain, Deep Resolve Boost for FL3D_VIBE & SPACE, and Deep Resolve Sharp for FL3D_VIBE & SPACE:

      • The ground truth used was the acquired datasets (raw MRI data). The input data for the AI models was then "retrospectively created from the ground truth by data manipulation and augmentation" (e.g., undersampling k-space, adding noise, cropping, creating sub-volumes, cropping k-space to simulate low-resolution input from high-resolution output). This means the AI models were trained to learn the mapping from manipulated (e.g., noisy, low-resolution, undersampled) inputs to the original, high-quality acquired image data.
    • For Deep Resolve Boost for TSE:

      • Similar to above, the "acquired training/validation datasets" were considered the ground truth. Input data was generated by "data manipulation and augmentation" (e.g., discarding k-space lines, lowering SNR, mirroring k-space data).

    In essence, the AI models are trained to restore or enhance images to resemble the high-quality, fully acquired MRI data that serves as the reference ground truth.


    8. Sample Size for the Training Set

    • Deep Resolve Swift Brain: 20,076 slices
    • Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE: 81% of 1265 measurements. (This equates to approximately 1024 measurements).
    • Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE: 70% of 500 measurements. (This equates to 350 measurements).
    • Deep Resolve Boost for TSE: More than 23,250 slices (93% of the total dataset).

    9. How the Ground Truth for the Training Set Was Established

    For all Deep Resolve features, the ground truth for the training set was established from acquired MRI datasets (either "in-house measurements" or from "collaboration partners"). These acquired datasets are implicitly considered the "true" or "high-quality" images. The AI models are designed to process inputs that mimic suboptimal acquisition conditions (e.g., undersampled k-space, lower SNR, lower resolution) and generate outputs that match these high-quality acquired images, which serve as the ground truth for learning. The process involved:

    • Retrospective creation: Input data was created retrospectively from the acquired ground truth data.
    • Data manipulation and augmentation: This involved techniques such as:
      • Discarding k-space lines (undersampling).
      • Lowering the SNR level by adding Gaussian noise to k-space data.
      • Uniformly-random cropping of training data.
      • Creating sub-volumes of acquired data.
      • Cropping k-space to generate low-resolution inputs corresponding to high-resolution ground truth.
      • Mirroring of k-space data.

    This is effectively a supervised learning setup that requires no manual annotation: the reference images are derived directly from the complete, high-fidelity raw data, and the AI is trained to reconstruct or enhance images from degraded inputs to match this reference.
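
    The retrospective degradation pipeline described above can be pictured in a few lines of numpy. The sketch below applies two of the listed manipulations, discarding k-space lines (undersampling) and adding Gaussian noise to the k-space data, to turn a fully sampled "ground truth" image into a training input; the acceleration factor, sampling pattern, and noise level are illustrative assumptions, not Siemens' actual recipe.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def degrade(image, acceleration=2, noise_level=0.05):
        """Create a degraded network input from a fully sampled 'ground truth' image."""
        kspace = np.fft.fftshift(np.fft.fft2(image))

        # Discard k-space lines: keep every `acceleration`-th phase-encode line
        # plus a fully sampled central region.
        keep = np.zeros(kspace.shape[0], dtype=bool)
        keep[::acceleration] = True
        c = kspace.shape[0] // 2
        keep[c - 8:c + 8] = True
        kspace = kspace * keep[:, None]

        # Lower the SNR by adding complex Gaussian noise to the k-space data.
        sigma = noise_level * np.abs(kspace).mean()
        kspace = kspace + sigma * (rng.standard_normal(kspace.shape)
                                   + 1j * rng.standard_normal(kspace.shape))

        # Zero-filled reconstruction of the degraded data becomes the network input;
        # the original `image` remains the training target.
        return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

    ground_truth = rng.uniform(0.0, 1.0, size=(128, 128))  # stand-in for an acquired image
    network_input = degrade(ground_truth)
    print(network_input.shape)
    ```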

    K Number
    K253489

    Manufacturer
    Date Cleared
    2025-12-12

    (49 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Swoop Portable MR Imaging System is a portable, ultra-low field magnetic resonance imaging device for producing images that display the internal structure of the head where full diagnostic examination is not clinically practical. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.

    Device Description

    The Swoop System is a portable, ultra-low field MRI device that enables visualization of the internal structures of the head using standard magnetic resonance imaging contrasts. The main interface is a commercial off-the-shelf device that is used for operating the system, providing access to patient data, exam setup, exam execution, viewing MRI image data for quality control purposes, and cloud storage interactions. The system can generate MRI data sets with a broad range of contrasts. The Swoop System user interface includes touch screen menus, controls, indicators, and navigation icons that allow the operator to control the system and to view imagery. The Swoop System image reconstruction algorithm utilizes deep learning to provide improved image quality for T1W, T2W, FLAIR, and DWI sequences.

    The subject Swoop System described in this submission includes software modifications related to the pulse sequences.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Swoop® Portable MR Imaging® System, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    For each acceptance criteria category, the specific acceptance criteria and reported device performance are summarized below.

    Advanced Reconstruction
    • Acceptance criteria: Performance analysis of robustness, stability, and generalizability over a variety of subjects, design parameters, artifacts, and scan conditions using reference-based metrics (NMSE and SSIM). The ability of Advanced Reconstruction to reproduce the ground truth image compared to Linear Reconstruction should be superior or demonstrate expected behavior.
    • Reported device performance: NMSE was reduced and SSIM was improved for Advanced Reconstruction test images compared to Linear Reconstruction test images across all models and test datasets. Reconstruction outputs with motion and zipper artifacts were qualitatively assessed to be acceptable.

    Contrast-to-Noise Ratio (CNR) Validation
    • Acceptance criteria: Mean CNR of Advanced Reconstruction required to be greater than the mean CNR of the baseline Linear Reconstruction at a statistical significance level of 0.05 for each sequence type. This demonstrates that pathology features are preserved.
    • Reported device performance: In all cases, CNR of Advanced Reconstruction was greater than or equal to Linear Reconstruction for both hyper- and hypo-intense pathologies. This demonstrates that Advanced Reconstruction does not unexpectedly modify, remove, or reduce the contrast of pathology features.

    Image Validation (Radiologist Review)
    • Acceptance criteria: Advanced Reconstruction required to perform at least as well as Linear Reconstruction in all categories (median score ≥0 on the Likert scale) and perform better (≥1 on the Likert scale) in at least one of the quality-based categories (noise, sharpness, contrast, geometric fidelity, artifact, and overall image quality).
    • Reported device performance: Advanced Reconstruction achieved a median score of 2 (the most positive rating scale value) in all categories (noise, sharpness, contrast, geometric fidelity, artifact, and overall image quality). This indicates reviewers found Advanced Reconstruction improved image quality while maintaining diagnostic consistency relative to Linear Reconstruction.

    Software Verification
    • Acceptance criteria: Software verification testing in accordance with design requirements.
    • Reported device performance: Passed all testing in accordance with internal requirements and applicable standards (IEC 62304:2016, FDA Guidance, "Content of Premarket Submissions for Device Software Functions").

    Image Performance
    • Acceptance criteria: Testing to verify the subject device meets all image quality criteria.
    • Reported device performance: Passed all testing in accordance with internal requirements and applicable standards (NEMA MS 1-2008 (R2020), NEMA MS 3-2008 (R2020), NEMA MS 9-2008 (R2020), NEMA MS 12-2016, American College of Radiology standards for named sequences).

    Cybersecurity
    • Acceptance criteria: Testing to verify cybersecurity controls and management.
    • Reported device performance: Passed all testing in accordance with internal requirements and applicable standards (FDA Guidance, "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions").

    Software Validation
    • Acceptance criteria: Validation to ensure the subject device meets user needs and performs as intended.
    • Reported device performance: Passed all testing in accordance with internal requirements and applicable standards (FDA Guidance, "Content of Premarket Submissions for Device Software Functions").
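
    The CNR criterion above (mean CNR of Advanced Reconstruction greater than that of the baseline Linear Reconstruction at the 0.05 significance level) amounts to a paired, one-sided comparison over the annotated ROIs. The sketch below shows one common way to compute CNR per ROI and run such a comparison with SciPy; the CNR definition, the toy numbers, and the choice of a paired t-test are illustrative assumptions, since the letter does not spell out the exact statistical method.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def cnr(pathology_roi, background_roi):
        """One common CNR definition: pathology-to-background contrast over background noise."""
        return abs(pathology_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

    # Toy ROIs for a single case: hyper-intense pathology vs. adjacent normal tissue.
    pathology = rng.normal(loc=1.4, scale=0.1, size=200)
    background = rng.normal(loc=1.0, scale=0.1, size=200)
    print("example CNR:", cnr(pathology, background))

    # Hypothetical per-ROI CNR values for the same 145 ROIs reconstructed both ways.
    cnr_linear = rng.normal(loc=2.0, scale=0.4, size=145)
    cnr_advanced = cnr_linear + rng.normal(loc=0.3, scale=0.2, size=145)

    # Paired, one-sided test: is mean CNR higher with Advanced Reconstruction?
    result = stats.ttest_rel(cnr_advanced, cnr_linear, alternative="greater")
    print("mean CNR (linear, advanced):", cnr_linear.mean(), cnr_advanced.mean())
    print("one-sided p-value:", result.pvalue)
    print("criterion met at alpha = 0.05:", bool(result.pvalue < 0.05))
    ```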

    Study Details for Advanced Reconstruction Validation (DWI Sequence - updated in this submission)

    This section focuses specifically on the studies conducted to validate the Advanced Reconstruction models for the updated DWI sequence. Performance analysis and validation for T1/T2/FLAIR models were leveraged from predicate devices, so this analysis covers the new data.

    1. Performance Analysis

    • Sample Size:
      • Test Set (DWI): 8 patients, 31 images.
      • Data Provenance: Not explicitly stated, but includes data from 6 different sites. Countries of origin are not specified. The study is retrospective, utilizing existing MRI data.
    • Ground Truth Establishment (Test Set):
      • Number of Experts: Not applicable for quantitative metrics (NMSE, SSIM). Quantitative metrics were reference-based, comparing reconstructed images to ground truth target images (Swoop data, high field images, and synthetic contrast images).
      • Qualifications of Experts: N/A.
    • Adjudication Method: N/A.
    • MRMC Comparative Effectiveness Study: No, this was a standalone performance analysis comparing Advanced Reconstruction to Linear Reconstruction against a reference standard.
    • Standalone Performance: Yes. The algorithm's output was compared to ground truth images using quantitative metrics.
    • Type of Ground Truth: Reference images, including Swoop data, high field images, and synthetic contrast images. Test input data (synthetic k-space) was generated from these target images.
    • Training Set Sample Size: Not explicitly stated for this particular updated DWI model. The document states "None of these test images were used in model training," implying a separate training set, but its size is not provided.
    • Ground Truth Establishment (Training Set): Not explicitly stated, but generally for deep learning reconstruction, the training data would include raw k-space data paired with corresponding reference images (often higher quality, known good reconstructions, or synthetic data).

    2. Contrast-to-Noise Ratio (CNR) Validation

    • Sample Size:
      • Test Set (DWI): 12 patients, 45 images, 145 Regions of Interest (ROIs).
      • Data Provenance: Not explicitly stated, but includes data from 5 different sites. Countries of origin are not specified. Retrospective.
    • Ground Truth Establishment (Test Set):
      • Number of Experts: At least one.
      • Qualifications of Experts: An American Board of Radiology (ABR) certified radiologist reviewed the annotations for accuracy.
    • Adjudication Method: Not explicitly stated as a formal adjudication method (like 2+1), but radiologists reviewed ROI accuracy.
    • MRMC Comparative Effectiveness Study: No, this was a standalone quantitative comparison of CNR between Advanced Reconstruction and Linear Reconstruction.
    • Standalone Performance: Yes. The algorithm's output was quantitatively measured and compared to the linear reconstruction, using expert-annotated ROIs for pathology.
    • Type of Ground Truth: Expert-annotated regions of interest (ROIs) encompassing pathologies, reviewed for accuracy by an ABR-certified radiologist.
    • Training Set Sample Size: Not explicitly stated.
    • Ground Truth Establishment (Training Set): Not explicitly stated.

    3. Advanced Reconstruction Image Validation (Radiologist Review)

    • Sample Size:
      • Test Set (DWI): 34 patients, 34 sets of DWI images (102 individual images when considering b=0, trace-weighted/single direction, and ADC).
      • Data Provenance: Not explicitly stated, but includes data from 8 different sites. Countries of origin are not specified. Retrospective by nature of rating existing images.
    • Ground Truth Establishment (Test Set): Ground truth for the ratings was established from the clinical reviewers' assessments on a Likert scale; there was no independent "definitive" ground truth for image quality beyond the expert reviews.
    • Number of Experts: Four.
    • Qualifications of Experts: External, ABR-certified radiologists representing clinical users.
    • Adjudication Method: Not explicitly stated whether reviewer disagreements were formally adjudicated; the reviewers rated independently, and median scores were used for evaluation.
    • MRMC Comparative Effectiveness Study: This study had elements of an MRMC study by using multiple readers (4 radiologists) to rate multiple cases (34 image sets) with and without the AI assistance (Advanced vs. Linear Reconstruction, though not exactly "assisted" as in "human + AI" vs. "human only").
      • Effect Size: Advanced Reconstruction achieved a median score of 2 (the most positive rating scale value) in all categories. This indicates a substantial improvement in perceived image quality and diagnostic consistency compared to Linear Reconstruction (analogous to "without AI assistance" in this context), since the acceptance criteria required only a median score ≥1 in one quality category to claim better performance. (A small scoring sketch follows this list.)
    • Standalone Performance: Partially. While radiologists rated the images, their input constituted the performance metric. It's not a purely algorithmic standalone performance against a fixed ground truth.
    • Type of Ground Truth: Expert consensus ratings (Likert scale) on image quality attributes and diagnostic consistency.
    • Training Set Sample Size: Not explicitly stated.
    • Ground Truth Establishment (Training Set): Not explicitly stated.
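    The acceptance logic described above (per-category median Likert score, with ≥1 in at least one category as the threshold) can be expressed compactly. The sketch below assumes a readers × cases × categories rating array on the comparative scale described above; the category names and ratings are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical ratings: 4 readers x 34 cases x rated categories, on a comparative
# Likert scale where positive values favor Advanced Reconstruction and 2 is the
# most positive value (as described in the summary above).
rng = np.random.default_rng(1)
categories = ["SNR", "sharpness", "lesion conspicuity", "overall quality"]  # assumed names
ratings = rng.integers(low=0, high=3, size=(4, 34, len(categories)))

# Median across all readers and cases, per category.
medians = np.median(ratings.reshape(-1, len(categories)), axis=0)
for name, med in zip(categories, medians):
    print(f"{name}: median = {med}")

# Criterion reported in the summary: median score >= 1 in at least one category.
print("criterion met:", bool((medians >= 1).any()))
```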

    In summary, for the updated DWI sequence validation:

    • Test Set Sample Sizes:
      • Performance Analysis: 8 patients, 31 images
      • CNR Validation: 12 patients, 45 images, 145 ROIs
      • Image Validation: 34 patients, 34 image sets (102 images)
    • Data Provenance: Retrospective, multiple sites (6 for performance, 5 for CNR, 8 for image validation via different Swoop System models), countries not specified.
    • Expert Reviewers: An ABR-certified radiologist for ROI accuracy in CNR validation, and four external ABR-certified radiologists for the image quality review.
    • Ground Truth: Varied from reference images, to expert-annotated ROIs, to expert consensus ratings.
    • Training Set Details: Minimal information provided regarding the training set's size or ground truth establishment in this document. The focus here is on the validation of the updated DWI model.

    K Number
    K251822

    Validate with FDA (Live)

    Date Cleared
    2025-11-20

    (160 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    MAGNETOM Free.Max:
    The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images that display, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician or dentist trained in MRI yield information that may assist in diagnosis.

    The MAGNETOM MR system may also be used for imaging during interventional procedures when performed with MR-compatible devices such as MR Safe biopsy needles.

    When operated by dentists and dental assistants trained in MRI, the MAGNETOM MR system must only be used for scanning the dentomaxillofacial region.

    MAGNETOM Free.Star:
    The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images that display, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    The subject devices MAGNETOM Free.Max and MAGNETOM Free.Star with software version syngo MR XA80A consist of new and modified hardware and software features compared to the predicate device MAGNETOM Free.Max and MAGNETOM Free.Star with software version syngo MR XA60A (K231617).

    New hardware features (Only for MAGNETOM Free.Max):

    • Dental coil
    • High-end host
    • syngo Workplace

    Modified hardware features:

    • MaRS
    • Select&GO Display (TPAN_3G)

    New Pulse Sequences/ Software Features / Applications:

    Only for MAGNETOM Free.Max:

    • EP_SEG_FID_PHS
    • EP2D_FID_PHS
    • EP_SEG_PHS
    • GRE_Proj
    • GRE_PHS
    • myExam Dental Assist
    • Select&GO Dental
    • Slice Overlapping

    For both MAGNETOM Free.Max and MAGNETOM Free.Star:

    • Eco Power Mode
    • Extended Gradient Eco Mode
    • System Startup Timer

    Modified Features and Applications:

    • myExam RT Assist (only for MAGNETOM Free.Max)
    • Deep Resolve for HASTE
    • Deep Resolve for EPI Diffusion
    • Select&GO for dental (only for MAGNETOM Free.Max)
    • Select&GO extension: Patient Registration and Start Scan
    • SPACE improvement: MTC prep module

    Other Modifications and Minor Changes:

    • MAGNETOM Free.Max Dental Edition marketing bundle (only for MAGNETOM Free.Max)
    • MAGNETOM Free.Max RT Pro Edition marketing bundle (only for MAGNETOM Free.Max)
    • Off-Center Planning Support
    • ID Gain
    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for MAGNETOM Free.Max and MAGNETOM Free.Star (K251822) offer high-level information regarding the devices and their comparison to predicate devices. However, they do not explicitly detail acceptance criteria (performance metrics with pass/fail thresholds) or a specific study proving the device meets those criteria for the overall device clearance.

    The document primarily focuses on demonstrating substantial equivalence to predicate devices for general MR diagnostic imaging. The most detailed performance evaluation mentioned is for the AI feature "Deep Resolve Boost." Therefore, the response will focus on the information provided regarding Deep Resolve Boost, and address other points based on what is stated and what is not.


    Acceptance Criteria and Device Performance (Focusing on Deep Resolve Boost)

    Table 1. Deep Resolve Boost Performance Summary

    | Metric | Acceptance Criteria (implicit from "significantly better") | Reported Device Performance |
    | --- | --- | --- |
    | Structural Similarity Index (SSIM) | Significantly better structural similarity with the gold standard than conventional reconstruction. | Deep Resolve reconstruction has significantly better structural similarity with the gold standard than the conventional reconstruction. |
    | Peak Signal-to-Noise Ratio (PSNR) / Signal-to-Noise Ratio (SNR) | Significantly better SNR than conventional reconstruction. | Deep Resolve reconstruction has significantly better signal-to-noise ratio (SNR) than the conventional reconstruction, and visual evaluation confirmed higher SNR. |
    | Aliasing artifacts | Not found to have caused artifacts. | Deep Resolve reconstruction was not found to have caused artifacts. |
    | Image sharpness | Superior sharpness compared to conventional reconstruction. | Visual evaluation confirmed superior sharpness. |
    | Denoising levels | Improved denoising levels. | Visual evaluation confirmed improved denoising levels (implicit in higher SNR and image quality). |

    Note: The document does not provide numerical thresholds or specific statistical methods used to define "significantly better" for SSIM and PSNR. The acceptance criteria are implicitly derived from the reported positive performance relative to conventional reconstruction.
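    The summary does not say how SSIM and PSNR were computed. A minimal sketch using scikit-image's standard implementations, with the fully sampled "gold standard" image as the reference, is shown below; the image arrays and noise levels are hypothetical.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def compare_to_gold_standard(gold, recon):
    """Return (SSIM, PSNR) of a reconstruction against the fully sampled
    gold-standard image -- one plausible way to compute the reported metrics."""
    data_range = float(gold.max() - gold.min())
    ssim = structural_similarity(gold, recon, data_range=data_range)
    psnr = peak_signal_noise_ratio(gold, recon, data_range=data_range)
    return ssim, psnr

# Hypothetical comparison: conventional vs. Deep-Resolve-style reconstruction.
rng = np.random.default_rng(2)
gold = rng.random((256, 256))
conventional = gold + rng.normal(0, 0.10, gold.shape)   # more residual noise
deep_resolve = gold + rng.normal(0, 0.03, gold.shape)   # less residual noise

print("conventional :", compare_to_gold_standard(gold, conventional))
print("deep resolve :", compare_to_gold_standard(gold, deep_resolve))
```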

    Study Details for Deep Resolve Boost

    1. Sample Size used for the test set and the data provenance:

      • Test Data: A "set of test data" was used for quantitative metrics (SSIM, PSNR) and visual evaluation. This test data was a "retrospectively undersampled copy of the test data" which was also used for conventional reconstruction.
      • Provenance: "In-house measurements and collaboration partners."
      • Retrospective/Prospective: The process of creating the test data by manipulating (undersampling) retrospectively acquired data indicates a retrospective approach.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: Not specified. The document states, "Visual evaluation was performed by qualified readers."
      • Qualifications of Experts: "Qualified readers." No further specific qualifications (e.g., years of experience, specialty) are provided.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not specified. The document states, "Visual evaluation was performed by qualified readers." It does not mention whether multiple readers were used per case or how discrepancies were resolved.
    4. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study involving human readers with vs. without AI assistance was not explicitly described for the Deep Resolve Boost feature. The visual evaluation was focused on comparing images reconstructed with conventional methods versus Deep Resolve Boost, primarily to assess image quality attributes without explicit human performance metrics (e.g., diagnostic accuracy, reading time).
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance evaluation was done. The quantitative metrics (SSIM, PSNR) and the visual assessment of images reconstructed solely by the algorithm (Deep Resolve Boost) were performed to characterize the network's impact independently.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • The "acquired datasets represent the ground truth for the training and validation." Input data for testing was "retrospectively created from the ground truth by data manipulation and augmentation." This implies that the raw, fully sampled, and high-quality MRI acquisitions are considered the ground truth against which the reconstructed images (conventional and Deep Resolve Boost) are compared. This is a technical ground truth rather than a clinical ground truth like pathology.
    7. The sample size for the training set:

      • TSE: More than 25,000 slices.
      • HASTE: Pretrained on the TSE dataset and refined with more than 10,000 HASTE slices.
      • EPI Diffusion: More than 1,000,000 slices.
    8. How the ground truth for the training set was established:

      • "The acquired datasets represent the ground truth for the training and validation."
      • Input data for training was "retrospectively created from the ground truth by data manipulation and augmentation." This included "further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
      • This indicates that the ground truth for training was derived from high-quality, fully sampled MRI acquisitions, which were then manipulated to simulate lower-quality inputs for the AI to learn from (a minimal sketch of such a manipulation follows this list).
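    The training-input generation described above (discarding k-space lines and adding noise to fully sampled ground-truth acquisitions) can be sketched with plain NumPy. The sketch below assumes 2D Cartesian k-space, uniform line skipping, and image-domain Gaussian noise; it illustrates the general idea, not Siemens' actual pipeline.

```python
import numpy as np

def degrade_fully_sampled_image(image, acceleration=2, noise_std=0.02, seed=0):
    """Create a lower-quality training input from a fully sampled ground-truth
    image by discarding k-space lines and lowering the SNR with added noise
    (a rough sketch of the manipulations described in the summary)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Keep every `acceleration`-th phase-encode line; zero out the rest.
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::acceleration] = True
    undersampled = kspace * mask[:, None]

    # Zero-filled reconstruction of the undersampled data.
    degraded = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

    # Lower the SNR by adding Gaussian noise (done here in the image domain).
    degraded += rng.normal(0, noise_std * degraded.max(), degraded.shape)
    return degraded

ground_truth = np.random.default_rng(3).random((128, 128))
network_input = degrade_fully_sampled_image(ground_truth)
print(network_input.shape)
```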

    K Number
    K252371

    Validate with FDA (Live)

    Device Name
    uMR 680
    Date Cleared
    2025-09-25

    (57 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This special 510(k) is to request modifications for the cleared uMR 680(K243397). The modifications performed on the uMR 680 in this submission are due to the following changes that include:

    (1) Addition of RF coils: Tx/Rx Head Coil.
    (2) Addition of a mobile configuration.

    AI/ML Overview

    N/A


    K Number
    K251386

    Validate with FDA (Live)

    Device Name
    ECHELON Synergy
    Date Cleared
    2025-09-17

    (135 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The ECHELON Synergy System is an imaging device and is intended to provide the physician with physiological and clinical information, obtained non-invasively and without the use of ionizing radiation. The MR system produces transverse, coronal, sagittal, oblique, and curved cross sectional images that display the internal structure of the head, body, or extremities. The images produced by the MR system reflect the spatial distribution of protons (hydrogen nuclei) exhibiting magnetic resonance. The NMR properties that determine the image appearance are proton density, spin-lattice relaxation time (T1), spin-spin relaxation time (T2) and flow. When interpreted by a trained physician, these images provide information that can be useful in diagnosis determination.

    Device Description

    The ECHELON Synergy is a Magnetic Resonance Imaging System that utilizes a 1.5 Tesla superconducting magnet in a gantry design. The control and image processing hardware and the base elements of the system software are identical to the predicate device.

    AI/ML Overview

    This document describes the ECHELON Synergy MRI system's acceptance criteria and the studies conducted to demonstrate its performance. The submission for FDA 510(k) clearance (K251386) references a predicate device, the ECHELON Synergy MRI System (K241429), and outlines modifications to hardware and software.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" against which a numeric performance metric is listed for each new feature. Instead, it details that certain functionalities (DLR Symmetry and AutoPose) underwent performance evaluations. The "performance" reported is described qualitatively or comparatively to conventional methods.

    | Feature/Metric | Acceptance Criteria (Implicit/Derived) | Reported Device Performance |
    | --- | --- | --- |
    | DLR Symmetry - Artifact Reduction | Reduction of artifacts should be demonstrated. | Phantom testing demonstrated that DLR Symmetry could reduce artifacts in the image, quantified by Normalized Root Mean Square Error (NRMSE). Clinical image review by radiologists indicated superior artifact reduction (p<0.05) compared to conventional images. |
    | DLR Symmetry - Image Quality (SNR, Sharpness, Contrast, Lesion Conspicuity, Overall) | Should not degrade image quality compared to conventional methods. Images should be clinically acceptable. | Phantom testing: did not degrade image quality based on SNR, Relative Edge Sharpness, and Contrast Change Rate. Clinical image review: radiologists reported superior SNR, image sharpness, lesion conspicuity, and overall image quality (p<0.05) in DLR Symmetry images. All DLR Symmetry images were evaluated as clinically acceptable. |
    | AutoPose (Shoulder, Knee, HipJoint, Abdomen, Pelvis (male/female), Cardiac) - Automatic Slice Positioning | Should be able to set slice positions for a scan without manual adjustment in most cases. For the remaining cases, user operation steps should be equivalent to manual positioning. | Evaluation by certified radiological technologists showed that almost all cases could be positioned without manual adjustment. The remaining cases required user operation steps equivalent to manual slice positioning. |
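    The phantom criterion above relies on NRMSE, which the summary does not define explicitly. A minimal sketch, assuming one common normalization (RMSE divided by the reference intensity range) and simulated phantom data, is shown below.

```python
import numpy as np

def nrmse(reference, test):
    """Normalized root-mean-square error: RMSE divided by the reference
    intensity range (one common convention; the summary does not state which
    normalization was used)."""
    diff = reference.astype(float) - test.astype(float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return rmse / float(reference.max() - reference.min())

# Hypothetical phantom comparison: a lower NRMSE against the artifact-free
# reference would indicate artifact reduction by DLR Symmetry.
rng = np.random.default_rng(4)
phantom = rng.random((64, 64))
ghosting = np.sin(np.linspace(0, 40, 64))[None, :]
conventional = phantom + 0.10 * ghosting
dlr_symmetry = phantom + 0.02 * ghosting
print(nrmse(phantom, conventional), nrmse(phantom, dlr_symmetry))
```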

    2. Sample Sizes Used for the Test Set and Data Provenance

    • DLR Symmetry:
      • Clinical Image Test Set: 89 unique subjects (patients and healthy subjects).
      • Data Provenance: From U.S. and Japan.
      • Data Type: Retrospective (clinical images collected).
    • AutoPose:
      • Shoulder: 60 cases
      • Knee: 60 cases
      • HipJoint: 65 cases
      • Abdomen: 115 cases
      • Pelvis for male: 60 cases
      • Pelvis for female: 68 cases
      • Cardiac: 126 cases
      • Data Provenance: FUJIFILM Corp., FUJIFILM Healthcare Americas Corp., and clinical sites.
      • Data Type: Subject type includes healthy volunteers and patients, implying a mix of prospective data collection for testing new features and potentially retrospective for some patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • DLR Symmetry:
      • Number of Experts: Three US board certified radiologists.
      • Qualifications: "US board certified radiologists." Specific years of experience are not mentioned.
    • AutoPose:
      • Number of Experts/Evaluators: Three certified radiological technologists.
      • Qualifications: "Certified radiological technologists." Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set

    • DLR Symmetry: The document states that comparisons were made by "the reviewers" (plural) in terms of image quality metrics using a 3-point scale. It doesn't explicitly state an adjudication method like 2+1 or 3+1 if there were disagreements among the three radiologists. It implies a consensus or majority rule might have been used for the reported "superior" findings, but this isn't detailed.
    • AutoPose: The evaluation results are simply described as "evaluation results showed," implying a summary of the technologists' findings. No specific adjudication method is described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • DLR Symmetry: A form of MRMC study was conducted where three US board-certified radiologists reviewed images reconstructed with DLR Symmetry versus conventional methods.
    • Effect Size of Human Readers with AI vs. Without AI Assistance: The document indicates that images with DLR Symmetry (AI-assisted reconstruction) were "superior to those in the conventional images with statistically significant difference (p<0.05)" across various image quality metrics. This shows an improvement in the perceived image quality for human readers when using DLR Symmetry, but it does not quantify the "effect size of how much human readers improve with AI vs. without AI assistance" in terms of diagnostic performance (e.g., improved sensitivity/specificity for a given task). Instead, it focuses on the quality of the image presented to the reader.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • DLR Symmetry: Yes, in part. Phantom testing was conducted to evaluate artifact reduction (NRMSE), SNR, Relative Edge Sharpness, and Contrast Change Rate. This is an algorithmic performance evaluation independent of human interpretation of clinical images, although it assesses image characteristics rather than diagnostic output.
    • AutoPose: The evaluation by certified radiological technologists focuses on the algorithm's ability to set slice positions automatically, which is a standalone performance metric for the automation function.

    7. The Type of Ground Truth Used

    • DLR Symmetry:
      • For phantom testing: "Ground truth" refers to the known characteristics of the phantom and metrics like NRMSE, SNR, etc.
      • For clinical image review: The ground truth was established by expert consensus or individual assessment of the "clinical acceptability" of the images and comparative judgment (superiority) of image quality metrics by three US board-certified radiologists. This isn't pathology or outcomes data, but rather expert radiological opinion on image quality and clinical utility.
    • AutoPose: The "ground truth" was whether the automated positioning successfully set the slice positions without manual adjustment, as evaluated by certified radiological technologists.

    8. The Sample Size for the Training Set

    The document explicitly states regarding DLR Symmetry: "The test dataset was independent of the training and validation datasets." However, it does not provide the sample size or details for the training set (or validation set) for DLR Symmetry or AutoPose.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide details on how the ground truth for the training set was established for either DLR Symmetry or AutoPose, as the training set details are not included in the provided text.


    K Number
    K251399

    Validate with FDA (Live)

    Device Name
    SIGNA™ Sprint
    Date Cleared
    2025-09-11

    (128 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The SIGNA™ Sprint is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, TMJ, spine, breast, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body. Depending on the region of interest being imaged, contrast agents may be used.

    The images produced by SIGNA™ Sprint reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    SIGNA™ Sprint is a whole-body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan time. The system uses a combination of time-varying magnet fields (Gradients) and RF transmissions to obtain information regarding the density and position of elements exhibiting magnetic resonance. The system can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. The system features a 1.5T superconducting magnet with 70cm bore size. The system is designed to conform to NEMA DICOM standards (Digital Imaging and Communications in Medicine).

    Key aspects of the system design:

    • Uses the same magnet as a conventional whole-body 1.5T system, with integral active shielding and a zero boil-off cryostat.
    • A gradient coil that achieves up to 65 mT/m peak gradient amplitude and 200 T/m/s peak slew rate.
    • An embedded body coil designed to reduce thermal effects and enhance intra-bore visibility.
    • A newly designed 1.5T AIR Posterior Array.
    • A detachable patient table.
    • A software platform with various PSDs and applications, including previously cleared AI features (AIRx™, AIR™ Recon DL, Sonic DL™; see below).
    AI/ML Overview

    The provided text is a 510(k) clearance letter and summary for a new MRI device, SIGNA™ Sprint. It states explicitly that no clinical studies were required to support substantial equivalence. Therefore, the information requested regarding acceptance criteria, study details, sample sizes, ground truth definitions, expert qualifications, and MRMC studies is not available in this document.

    The document highlights the device's technical equivalence to a predicate device (SIGNA™ Premier) and reference devices (SIGNA™ Artist, SIGNA™ Champion) and relies on non-clinical tests and sample clinical images to demonstrate acceptable diagnostic performance.

    Here's a breakdown of what can be extracted from the document regarding testing, and why other requested information is absent:


    1. A table of acceptance criteria and the reported device performance

    • Acceptance Criteria (Implicit): The document states that the device's performance is demonstrated through "bench testing and clinical testing that show the image quality performance of SIGNA™ Sprint compared to the predicate device." It also mentions "acceptable diagnostic image performance... in accordance with the FDA Guidance 'Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices' issued on October 10, 2023."
      • Specific quantitative acceptance criteria (e.g., minimum SNR, CNR, spatial resolution thresholds) are not explicitly stated in this document.
    • Reported Device Performance: "The images produced by SIGNA™ Sprint reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis."
      • No specific quantitative performance metrics (e.g., sensitivity, specificity, accuracy, or detailed image quality scores) are provided in this regulatory summary. The statement "The image quality of the SIGNA™ Sprint is substantially equivalent to that of the predicate device" is the primary performance claim.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not applicable/Not provided. The document explicitly states: "The subject of this premarket submission, the SIGNA™ Sprint, did not require clinical studies to support substantial equivalence."
    • Data Provenance: Not applicable/Not provided for a formal clinical test set. The document only mentions "Sample clinical images have been included in this submission," but does not specify their origin or nature beyond being "sample."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable. Since no formal clinical study was conducted for substantial equivalence, there was no "test set" requiring ground truth established by experts in the context of an effectiveness study. The "interpretation by a trained physician" is mentioned in the Indications for Use, which is general to MR diagnostics, not specific to a study.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. No clinical test set requiring adjudication was conducted for substantial equivalence.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No. The document explicitly states: "The subject of this premarket submission, the SIGNA™ Sprint, did not require clinical studies to support substantial equivalence." While the device incorporates AI features cleared in other submissions (AIRx™, AIR™ Recon DL, Sonic DL™), this specific 510(k) for the SIGNA™ Sprint system itself does not include an MRMC study or an assessment of human reader improvement with these integrated AI features. The focus is on the substantial equivalence of the overall MR system.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No, not for the SIGNA™ Sprint as a whole system. This 510(k) is for the MR scanner itself, not for a standalone algorithm. Any standalone performance for the integrated AI features (AIRx™, AIR™ Recon DL, Sonic DL™) would have been part of their respective clearance submissions (K183231, K202238, K223523), not this one.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Not applicable. No formal clinical study requiring ground truth was conducted for this submission.

    8. The sample size for the training set

    • Not applicable/Not provided. This submission is for the SIGNA™ Sprint MR system itself, not a new AI algorithm requiring a training set developed for this specific submission. The AI features mentioned (AIRx™, AIR™ Recon DL, Sonic DL™) were cleared in previous 510(k)s and would have had their own training and validation processes.

    9. How the ground truth for the training set was established

    • Not applicable/Not provided. As explained in point 8, this submission does not detail the training of new AI algorithms.

    K Number
    K251682

    Validate with FDA (Live)

    Device Name
    MuscleView 2.0
    Date Cleared
    2025-09-09

    (102 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    MuscleView 2.0 is a magnetic resonance diagnostic software device used in adults and pediatrics aged 18 and older which automatically segments muscle, bone, fat and other anatomical structures from magnetic resonance imaging. After segmentation, it enables the generation, display and review of magnetic resonance imaging data. The segmentation results need to be reviewed and edited using appropriate software. Other physical parameters derived from the images may also be produced. This device is not intended for use with patients who have tumors in the trunk, arms and/or lower limb(s). When interpreted by a trained clinician, these images and physical parameters may yield information that may assist in diagnosis.

    Device Description

    MuscleView 2.0 is a software-only medical device which performs automatic segmentation of musculoskeletal structures. The software utilizes a locked artificial intelligence/machine learning (AI/ML) algorithm to identify and segment anatomical structures for quantitative analysis. The input to the software is DICOM data from magnetic resonance imaging (MRI), but the subject device does not directly interface with any devices. The output includes volumetric and dimensional metrics of individual and grouped regions of interest (ROIs) (such as muscles, bones and adipose tissue) and comparative analysis against a Virtual Control Group (VCG) derived from reference population data.

    MuscleView 2.0 builds upon the predicate device, MuscleView 1.0 (K241331, cleared 10/01/2024), which was cleared for the segmentation and analysis of lower extremity structures (hips to ankles). The subject device extends functionality to include:

    • Upper body regions (neck to hips)
    • Adipose tissue segmentation (subcutaneous, visceral, intramuscular, and hepatic fat)
    • Quantitative comparison with a Virtual Control Group
    • Additional derived metrics, including Z-scores and composite scores (e.g., muscle-bone score); a minimal Z-score sketch follows this list.
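    The summary does not describe how the Z-scores against the Virtual Control Group are computed; a minimal sketch of a standard Z-score against a matched reference sample is shown below, with the ROI, units, and reference values chosen purely for illustration.

```python
import numpy as np

def z_score(measured_volume_ml, reference_volumes_ml):
    """Standard Z-score of a patient's ROI volume against a matched reference
    population (Virtual Control Group) -- an assumed, generic formulation."""
    ref = np.asarray(reference_volumes_ml, dtype=float)
    return (measured_volume_ml - ref.mean()) / ref.std(ddof=1)

# Hypothetical example: a muscle volume compared with an age/sex-matched VCG sample.
vcg_volumes_ml = [1480.0, 1525.0, 1390.0, 1610.0, 1450.0, 1555.0, 1502.0]
print(round(z_score(1320.0, vcg_volumes_ml), 2))
```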

    The submission includes a Predetermined Change Control Plan which details the procedure for retraining AI/ML algorithms or adding data to the Virtual Control Groups in order to improve performance without negatively impacting the safety or efficacy of the device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for MuscleView 2.0:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for MuscleView 2.0 were based on the device's segmentation accuracy, measured by Dice Similarity Coefficient (DSC) and absolute Volume Difference (VDt), remaining within the interobserver variability observed among human experts. The study demonstrated the device met these criteria. Since the text explicitly states the AI model's performance was "consistently within these predefined interobserver ranges," and "passed validation," the reported performance for all ROIs was successful in meeting the acceptance criteria.

    | Metric | Acceptance Criteria | Comment on Reported Performance |
    | --- | --- | --- |
    | Dice Similarity Coefficient (DSC) | DSC values where the 95% confidence interval for each ROI (across all subgroup analyses) indicates performance at or below interobserver variability (meaning higher DSC, closer to 1.0, is better). Specifically, the desired outcome was "a mean better than or equal to the acceptance criteria." | Consistently within predefined interobserver ranges; passed validation for all evaluated ROIs and subgroups. (See Table 1 for 95% CIs of individual ROIs across subgroups.) |
    | Absolute Volume Difference (VDt) | VDt values where the 95% confidence interval for each ROI (across all subgroup analyses) indicates performance at or below interobserver variability (meaning lower VDt, closer to 0, is better). Specifically, the desired outcome was "a mean better than or equal to the acceptance criteria." | Consistently within predefined interobserver ranges; passed validation for all evaluated ROIs and subgroups. (See Table 2 for 95% CIs of individual ROIs across subgroups.) |
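    The two acceptance metrics above are standard segmentation measures. A minimal sketch of both, computed on binary masks with an assumed voxel size and an assumed interobserver benchmark value, is shown below; it illustrates the form of the check, not the submitter's actual thresholds.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def absolute_volume_difference(mask_a, mask_b, voxel_volume_ml=0.001):
    """Absolute difference in segmented volume (here in mL, assuming 1 mm^3
    voxels; the units used in the submission are not stated)."""
    return abs(int(mask_a.sum()) - int(mask_b.sum())) * voxel_volume_ml

# Hypothetical check of one ROI against an expert reference mask.
rng = np.random.default_rng(5)
expert_mask = rng.random((64, 64, 64)) > 0.7
ai_mask = expert_mask.copy()
ai_mask[:2] = rng.random((2, 64, 64)) > 0.7         # simulate a small disagreement

dsc = dice_coefficient(ai_mask, expert_mask)
vdt = absolute_volume_difference(ai_mask, expert_mask)
interobserver_dsc_floor = 0.85                      # assumed benchmark, for illustration
print(dsc, vdt, dsc >= interobserver_dsc_floor)
```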

    2. Sample Sizes Used for the Test Set and Data Provenance

    | AI Setting | Number of Unique Scans | Number of Unique Subjects | Data Provenance |
    | --- | --- | --- | --- |
    | AI Setting 1 (Lower Extremity) | 148 | 148 | Retrospective, "diverse population," multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Toshiba/Other). Countries of origin not explicitly stated, but "regional demographics" are provided, implying a mix of populations. |
    | AI Setting 2 & 3 (Upper Extremity and Adipose Tissue) | 171 | 171 | Retrospective, "diverse population," multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Other/Unknown). Countries of origin not explicitly stated, but "regional demographics" are provided, implying a mix of populations. |
    • Overall Test Set: 148 unique subjects (for AI Setting 1) + 171 unique subjects (for AI Settings 2 & 3) = 319 unique subjects.
    • Data Provenance: Retrospective, curated collection of MRI datasets from a diverse patient population (age, BMI, biological sex, ethnicity) from multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Toshiba/Other/Unknown). Independent from training datasets. De-identified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated, but referred to as "expert segmentation analysts" and "expert human annotation." The study mentions "consensus process by expert segmentation analysts" for training data, and for testing, "manual segmentation performed by experts" and that the "interobserver variability range observed among experts" was used as a benchmark. The document does not specify the exact number of experts or their specific qualifications (e.g., years of experience or board certification).

    4. Adjudication Method for the Test Set

    • Adjudication Method: The ground truth for both training and testing datasets was established through a "consensus process by expert segmentation analysts" for training data and "manual segmentation performed by experts" for the test set. It does not explicitly state a 2+1 or 3+1 method; rather, it implies a consensus was reached among the experts. The key here is the measurement of "interobserver variability," suggesting that multiple experts initially segmented the data, and their agreement (or discordance) defined the benchmark, from which a final consensus might have been derived.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study was performed. The performance testing was a standalone study comparing the AI segmentation to expert manual segmentation (ground truth) rather than comparing human readers with and without AI assistance. The text states: "Performance results demonstrated segmentation accuracy within the interobserver variability range observed among experts." This indicates a comparison of the AI's output against what multiple human experts would agree upon, not an evaluation of human performance improvement with AI.

    6. Standalone Performance Study

    • Yes, a standalone study was done. The document states: "To evaluate the performance of the MuscleView AI segmentation algorithm, a comprehensive test was conducted using a test set that was fully independent from the training set. The AI was blinded to the ground truth segmentation labels during inference, ensuring an unbiased comparison." This clearly describes a standalone performance evaluation of the algorithm.

    7. Type of Ground Truth Used

    • Expert Consensus / Expert Manual Segmentation: The ground truth was established by "manual segmentation performed by experts" and through a "consensus process by expert segmentation analysts." This is a form of expert consensus derived from detailed manual annotation. The benchmark for acceptance was the "interobserver variability range observed among experts."

    8. Sample Size for the Training Set

    | AI Setting | Number of Unique Scans | Number of Unique Subjects |
    | --- | --- | --- |
    | AI Setting 1 (Lower Extremity) | 1658 | 1294 |
    | AI Setting 2 & 3 (Upper Extremity and Adipose Tissue) | 392 | 209 |

    Total unique subjects for training: 1294 + 209 = 1503. (Note: some subjects might be present in both sets if they had both lower- and upper-extremity scans, but the table reports "unique subjects" per AI setting.)
    • Total Training Set: 1658 (scans for AI Setting 1) + 392 (scans for AI Settings 2 & 3) = 2050 unique scans.
    • Total Unique Subjects: 1294 (for AI Setting 1) + 209 (for AI Settings 2 & 3) = 1503 unique subjects.

    9. How Ground Truth for the Training Set Was Established

    • The ground truth for the training set was established through a "consensus process by expert segmentation analysts" on a "curated collection of retrospective MRI datasets."

    K Number
    K251029

    Validate with FDA (Live)

    Manufacturer
    Date Cleared
    2025-08-21

    (141 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Vista OS is an accessory to 1.5T and 3.0T whole-body magnetic resonance diagnostic devices (MRDD). It is intended to operate alongside, and in parallel with, the existing MR console to acquire traditional, real-time and accelerated images.

    Vista OS software controls the MR scanner to acquire, reconstruct and display static and dynamic transverse, coronal, sagittal, and oblique cross-sectional images that display the internal structures and/or functions of the entire body. The images produced reflect the spatial distribution of nuclei exhibiting magnetic resonance. The magnetic resonance properties that determine image appearance are proton density, spin-lattice relaxation time (T1), spin-spin relaxation time (T2) and flow. When interpreted by a trained physician, these images provide information that may assist in the determination of a diagnosis.

    Vista OS is intended for use as an accessory to the following MRI systems:

    Manufacturers: GE Healthcare (GEHC), Siemens Healthineers
    Field Strength: 1.5T and 3.0T
    GE Software Versions: 12, 15, 16, 23, 24, 25, 26, 30
    Siemens Software Versions: N4/VE; NX/VA

    Device Description

    The Vista AI "Vista OS" product provides a seamless user experience for performing MRI studies on GE and Siemens scanners. The underlying software platform that we use to accomplish this task is called "RTHawk".

    RTHawk is a software platform designed from the ground up to provide efficient MRI data acquisition, data transfer, image reconstruction, and interactive scan control and display of static and dynamic MR imaging data. It can control MR pulse sequences provided by Vista AI and, on scanners that support it, it can equally control MR pulse sequences provided by the scanner vendor. Scan protocols can be created by the user that mix and match among all available sequences.

    RTHawk is an accessory to clinical 1.5T and 3.0T MR systems, operating alongside, and in parallel with, the MR scanner console with no permanent physical modifications to the MRI system required.

    The software runs on a stand-alone Linux-based computer workstation with color monitor, keyboard and mouse. It is designed to operate alongside, and in parallel with, the existing MR console with no hardware modifications required to be made to the MR system or console. This workstation (the "Vista Workstation") is sourced by the Customer in conformance with specifications provided by Vista AI, and is verified prior to installation.

    A private Ethernet network connects the Vista Workstation to the MR scanner computer. When not in use, the Vista Workstation may be detached from the MR scanner with no detrimental, residual impact upon MR scanner function, operation, or throughput.

    RTHawk is an easy-to-use, yet fully functional, MR Operating System environment. RTHawk has been designed to provide a platform for the efficient acquisition, control, reconstruction, display, and storage of high-quality static and dynamic MRI images and data.

    Data is continuously acquired and displayed. By user interaction or data feedback, fundamental scan parameters can be modified. Real-time and high-resolution image acquisition methods are used throughout RTHawk for scan plane localization, for tracking of patient motion, for detection of transient events, for on-the-fly, sub-second latency adjustment of image acquisition parameters (e.g., scan plane, flip angle, field-of-view, etc.) and for image visualization.

    RTHawk implements the conventional MRI concept of anatomy- and indication-specific Protocols (e.g., ischemia evaluation, valvular evaluation, routine brain, etc.). Protocols are pre-set by Vista AI, but new protocols can be created and modified by the end user.

    RTHawk Apps (Applications) are composed of a pulse sequence, predefined fixed and adjustable parameters, reconstruction pipeline(s), and a tailored graphical user interface containing image visualization and scan control tools. RTHawk Apps may provide real-time interactive scanning, conventional (traditional) batch-mode scanning, accelerated scanning, or calibration functions, in which data acquired may be used to tune or optimize other Apps.

    When vendor-supplied pulse sequences are used in Vista OS, parameters and scan planes are prescribed in the Vista interface and images reconstructed by the scanner appear on the Vista Workstation. RTHawk Apps and vendor-supplied sequences can be mixed within a single protocol with a unified user experience for both.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for Vista OS, Vista AI Scan, and RTHawk, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes several clinical verification studies for new AI-powered features. Each feature has specific acceptance criteria.

    | Feature Tested | Acceptance Criterion | Reported Performance (meets criteria?) |
    | --- | --- | --- |
    | Automatic Detection of Motion Artifacts in Cine Cartesian SSFP | 80% agreement between the neural-network assessment at its default sensitivity level and the cardiologist reader | Meets or exceeds |
    | Automatic Detection of Ungateable Cardiac Waveforms | 80% agreement between the neural-network assessment at its default sensitivity level and the cardiologist reader | Meets or exceeds |
    | Automatic Cardiac Image Denoising | 1. Denoising should not detract from diagnostic accuracy in any case. 2. Diagnostic quality of denoised data judged superior to the paired non-denoised series in > 80% of test cases. | Meets or exceeds |
    | Automatic Brain Localizer Prescriptions | Mean plane angulation error < 3 degrees with standard deviation < 5 degrees, AND mean plane position error < 5 mm with standard deviation < 15 mm | Meets or exceeds |
    | Automatic Prostate Localizer Prescriptions | Mean 3D Intersection-over-Union (IoU) of at least 0.65 for each volumetric scan prescription | Meets or exceeds |
    | Automatic Prediction of Velocity-Encoding (VENC) for Cine Flow Studies | Average velocity error < 10% individually for all vessels and views | Meets or exceeds |
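    Two of the quantities in the table, percent agreement with a reader and mean 3D IoU, are simple to state precisely. The sketch below shows both under assumptions: binary labels for the agreement check and axis-aligned boxes as a stand-in for volumetric scan prescriptions (the submission does not describe the actual geometry).

```python
import numpy as np

def percent_agreement(model_labels, reader_labels):
    """Fraction of cases where the network's binary call matches the reader's."""
    model_labels = np.asarray(model_labels)
    reader_labels = np.asarray(reader_labels)
    return float((model_labels == reader_labels).mean())

def iou_3d(box_a, box_b):
    """3D intersection-over-union of two axis-aligned boxes given as
    (x0, y0, z0, x1, y1, z1) -- an assumed representation of a prescription."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    lo, hi = np.maximum(a[:3], b[:3]), np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

# Hypothetical checks against the stated thresholds.
print(percent_agreement([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]) >= 0.80)      # 80% agreement
print(iou_3d((0, 0, 0, 100, 100, 40), (5, 5, 0, 105, 105, 40)) >= 0.65)  # IoU >= 0.65
```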

    2. Sample Sizes and Data Provenance for Test Sets

    The document provides sample sizes for each clinical verification study test set:

    • Automatic Detection of Motion Artifacts: 120 sample images.
    • Automatic Detection of Ungateable Cardiac Waveforms: 100 sample ECGs.
    • Automatic Cardiac Image Denoising: 209 sample image series (paired with non-denoised).
    • Automatic Brain Localizer Prescriptions: 323 sample image localizations.
    • Automatic Prostate Localizer Prescriptions: 329 sample image localizations.
    • Automatic Prediction of Velocity-Encoding VENC: 42 sample VENC peak estimates.

    Data Provenance:

    • Data was "collected from prior versions of Vista OS."
    • "Data used in clinical verification were obtained from multiple clinical sites representing diverse ethnic groups, genders, and ages."
    • The document implies the data is retrospective as it was "collected from prior versions of Vista OS" and used for verification after model training.
    • Specific countries of origin are not mentioned, but the mention of "multiple clinical sites" and "diverse ethnic groups" suggests a broad geographic scope.

    3. Number of Experts and Qualifications for Ground Truth - Test Set

    The document states:

    • "Clinical assessments were performed by independent board-certified radiologists or cardiologists."
    • The number of experts is not explicitly stated (e.g., "three experts"), but it says "cardiologist reader" for cardiac studies and "trained physician" for other interpretations, implying at least one expert per study type.
    • Qualifications: "board-certified radiologists or cardiologists." Specific experience (e.g., "10 years of experience") is not provided.

    4. Adjudication Method for Test Set

    The adjudication method is not explicitly stated. It refers to "agreement between neural-network assessment... and the cardiologist reader" for cardiac studies, and "judged superior" for denoising, which suggests a single expert's assessment was used as ground truth for comparison. It does not mention methods like 2+1 or 3+1 consensus.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study is explicitly mentioned. The studies focus on the performance of the AI models against expert assessment, not on comparing human readers with AI assistance versus without. The statement "all automations are provided as an additional aid to the trained operator who has the final decision power to accept or reject any suggestion or image enhancement that is provided" implies human-in-the-loop, but a specific MRMC study to quantify improvement is not described.

    6. Standalone Performance Study

    Yes, standalone (algorithm only without human-in-the-loop) performance studies were done for each of the new AI features. The acceptance criteria and reported performance directly measure the accuracy and agreement of the AI algorithm outputs against expert-established ground truth. The technologist retains the ability to reject or modify, but the initial validation is on the AI's standalone output.

    7. Type of Ground Truth Used

    The ground truth used for the clinical verification test sets was expert consensus / expert opinion.

    • For artifact detection and ungateable waveforms: "cardiologist reader" assessment.
    • For denoising: "diagnostic quality... judged superior" by "independent board-certified radiologists or cardiologists."
    • For localizer prescriptions and VENC prediction: Implicitly, metrics such as angular error, positional error, IoU, and velocity error are measured against a "correct" or "optimal" ground truth, typically established by expert manual prescription or known physical values (a minimal sketch of the plane-error computations follows this list).
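    For the localizer prescriptions, the angulation and position errors referenced above can be computed once a plane representation is fixed. The sketch below assumes each scan plane is described by a unit normal vector and a center point in millimeters; both definitions are assumptions for illustration, not the submitter's documented method.

```python
import numpy as np

def plane_angulation_error_deg(normal_pred, normal_ref):
    """Angle in degrees between predicted and reference plane normals
    (sign-invariant, since a plane's normal direction is ambiguous)."""
    a = np.asarray(normal_pred, float); a = a / np.linalg.norm(a)
    b = np.asarray(normal_ref, float); b = b / np.linalg.norm(b)
    return float(np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), -1.0, 1.0))))

def plane_position_error_mm(center_pred, center_ref, normal_ref):
    """Distance (mm) from the predicted plane center to the reference plane,
    measured along the reference normal -- one plausible definition."""
    n = np.asarray(normal_ref, float) / np.linalg.norm(normal_ref)
    delta = np.asarray(center_pred, float) - np.asarray(center_ref, float)
    return float(abs(np.dot(delta, n)))

# Hypothetical check against the stated acceptance thresholds.
ang = plane_angulation_error_deg([0.02, 0.0, 1.0], [0.0, 0.0, 1.0])
pos = plane_position_error_mm([1.0, 2.0, 52.0], [0.0, 0.0, 50.0], [0.0, 0.0, 1.0])
print(ang < 3.0, pos < 5.0)
```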

    8. Sample Size for the Training Set

    The document explicitly states that the test data was "segregated from training and tuning data." However, the exact sample size for the training set is not provided in the given text.

    9. How Ground Truth for the Training Set Was Established

    The document states:

    • "Neural-network models were developed and trained using industry-standard methods for partitioning and isolating training, tuning, and internal testing datasets."
    • "Model development data was partitioned by unique anonymous patient identifiers to prevent overlap across training, internal testing, and clinical verification datasets."
    • "Clinical assessments were performed by independent board-certified radiologists or cardiologists who were not involved in any aspect of model development (including providing labels for training, tuning or internal testing)."

    This implies that ground truth for the training set was established by expert labeling or consensus, but these experts were different from those who performed the final clinical verification. The exact number of experts involved in training data labeling and their qualifications are not specified.


    K Number
    K252239

    Validate with FDA (Live)

    Date Cleared
    2025-08-06

    (20 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The InVision™ 3T Recharge Operating Suite is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The InVision™ 3T Recharge Operating Suite may also be used for imaging during intraoperative and interventional procedures when performed with MR safe devices or MR conditional devices approved for use with the MR scanner.

    The InVision™ 3T Recharge Operating Suite may also be used for imaging in a multi-room suite.

    Device Description

    The proposed InVision™ 3T Recharge Operating Suite featuring the Siemens MAGNETOM Skyra Fit upgrade from Verio is a traditional Magnetic Resonance Imaging (MRI) scanner that is suspended on an overhead rail system. It is designed to operate inside a Radio Frequency (RF) shielded room to facilitate intraoperative and multi-room use. The InVision™ 3T Recharge Operating Suite uses a scanner, the Siemens MAGNETOM Skyra Fit (K220589, reference device), to produce images of the internal structures of the head as well as the whole body. The Siemens 3T MAGNETOM Skyra Fit MRI scanner is an actively shielded magnet with a magnetic field strength of 3 Tesla.

    The InVision™ 3T Recharge Operating Suite provides surgeons with access to magnetic resonance (MR) images while in the surgical field without changing the surgical/clinical workflow. When images are requested in the operating room (OR), the magnet is moved from the diagnostic room (DR) to the OR on a pair of overhead rails while the patient remains stationary during the procedure. Imaging is performed and once complete the magnet is moved out of the OR to the DR. The magnet can be moved in and out of the surgical field multiple times, as required, throughout the course of the surgical procedure. When the Siemens MAGNETOM Skyra Fit MRI scanner is in the DR, the OR may be used as a standard OR, utilizing standard surgical instruments and equipment during surgery. When not required in the OR, the scanner is available for use in the DR as a standard diagnostic MRI.

    AI/ML Overview

    The provided FDA 510(k) clearance letter states that the "InVision™ 3T Recharge Operating Suite" has been tested and determined to be substantially equivalent to the predicate device "IMRIS iMRI 3T V" (K212367). However, the document does not contain specific acceptance criteria, reported device performance metrics (e.g., sensitivity, specificity, accuracy), or details of a dedicated study demonstrating that the device meets such criteria.

    Instead, the document focuses on demonstrating substantial equivalence through:

    • Comparison of indications for use, principles of operation, and technological characteristics.
    • Conformity to FDA recognized consensus standards.
    • Successful completion of non-clinical performance, electrical, mechanical, structural, electromagnetic compatibility, and software testing.
    • Successful completion of standard Siemens QA tests and expert review of sample clinical images to assess clinically acceptable MR imaging performance.

    Therefore, many of the requested details about acceptance criteria and a specific study proving those criteria are not present in this document. The device is a "Magnetic resonance diagnostic device" (MRDD), implying its performance is related to image quality and ability to assist in diagnosis, but quantitative metrics are not provided.

    Here is a summary of the information that can be extracted or inferred from the provided text, with blank or "N/A" for information not present:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted, the document does not specify quantitative acceptance criteria or reported device performance metrics in terms of clinical diagnostic efficacy (e.g., sensitivity, specificity, accuracy). The acceptance is based on demonstrating substantial equivalence to a predicate device and successful completion of various engineering and functional tests.

    | Acceptance Criteria (not explicitly stated in quantitative terms; inferred from substantial equivalence) | Reported Device Performance (not explicitly stated in quantitative terms) |
    | --- | --- |
    | Substantially equivalent Indications for Use | Same as predicate device (stated in the "Equivalence Comparison" columns) |
    | Substantially equivalent Principles of Operation | Same as predicate device |
    | Substantially equivalent Technological Characteristics (with differences validated) | Differences in the Siemens MRI scanner, magnet mover, and software; validation data support an equivalent safety and performance profile |
    | Conformity to FDA recognized consensus standards (e.g., for safety, EMC, software) | Conforms to the listed standards (Table 2) |
    | Performance equivalent to the predicate device for intraoperative features | Equivalence demonstrated through testing; no new safety/effectiveness issues |
    | Clinically acceptable MR imaging performance in DR and OR | Demonstrated through standard Siemens QA tests and expert review of sample clinical images |
    | Passed non-clinical testing (functional, imaging, integration, software, acoustic, heating) | Successful completion of all listed non-clinical tests (Table 3) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not specified. The document mentions "Sample Clinical Images in DR and OR" were assessed, but the number of images or cases is not given.
    • Data Provenance: Not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. The document mentions "expert review of sample clinical images."
    • Qualifications of Experts: It states "interpreted by a trained physician" in the Indications for Use, and mentions "expert review" for image assessment, but specific qualifications (e.g., years of experience, subspecialty) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. The document states "expert review of sample clinical images," but does not detail how consensus or adjudication was reached if multiple experts were involved.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The submission focuses on demonstrating substantial equivalence via engineering/functional performance and image quality, not directly on reader performance with/without AI assistance.
    • Effect Size of Human Readers Improve with AI vs. without AI Assistance: Not applicable, as no MRMC study involving AI assistance was described. The device itself is an MRI system, not an AI-assisted diagnostic tool.

    6. Standalone (Algorithm Only Without Human-in-the-loop Performance) Study

    • Standalone Study: Yes, in a way. The "system imaging performance testing" and successful completion of "standard Siemens QA tests" would represent standalone performance assessments of the MRI hardware and integrated software components responsible for image acquisition, without human interpretation in the loop for the performance assessment itself. However, the primary purpose of the device is to produce images "interpreted by a trained physician," meaning human-in-the-loop is part of its intended use. The document does not describe an "algorithm only" study in the context of an AI-driven diagnostic algorithm.

    7. Type of Ground Truth Used

    • Ground Truth Type: For the "Sample Clinical Images," the ground truth establishment method is not detailed beyond "expert review." For the general device functionality and image quality, the "ground truth" would be established by engineering specifications, recognized standards, and the performance of the predicate device.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. This document describes the clearance of an MRI system, not an AI/ML algorithm that typically requires a discrete "training set." The software components mentioned (Magnet Mover Software and Application Platform Software) likely underwent standard software verification and validation, but not in the context of a "training set" for machine learning.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: Not applicable (see point 8).