
510(k) Data Aggregation

    K Number
    K243397
    Device Name
    uMR 680
    Date Cleared
    2025-07-16

    (258 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) requests modifications to the previously cleared uMR 680 (K240744). The modifications performed on the uMR 680 in this submission comprise the following changes:
    (1) Addition of RF coils and corresponding accessories: Breast Coil -12, Biopsy Configuration, Head Coil-16, Positioning Couch-top, Coil Support.
    (2) Deletion of VSM (Wireless UIH Gating Unit REF 453564324621, ECG module Ref 989803163121, SpO2 module Ref 989803163111).
    (3) Modification of the dimensions of the detachable table: from width 826 mm, height 880 mm, length 2578 mm to width 810 mm, height 880 mm, length 2505 mm.
    (4) Addition and modification of pulse sequences
    a) New sequences: gre_snap, gre_quick_4dncemra, gre_pass, gre_mtp, gre_trass, epi_dwi_msh, epi_dti_msh, svs_hise.
    b) Added associated options for certain sequences: fse (add Silicone-Only Imaging, MicroView, MTC, MultiBand), fse_arms (add Silicone-Only Imaging), fse_ssh (add Silicone-Only Imaging), fse_mx (add CEST, T1rho, MicroView, MTC), fse_arms_dwi (add MultiBand), asl_3d (add multi-PLD), gre (add T1rho, MTC, output phase image), gre_fsp (add FSP+), gre_bssfp (add CASS, TI Scout), gre_fsp_c (add 3D LGE, DB/GB PSIR), gre_bssfp_ucs (add real time cine), gre_fq (add 4D Flow), epi_dwi (add IVIM), epi_dti (add DKI, DSI).
    c) Added additional accessory equipment required for certain sequences: gre_bssfp(add Virtual ECG Trigger).
    d) Name change of certain sequences: gre_fine(old name: gre_bssfp_fi).
    e) Added applicable body parts: gre_ute, gre_fine, fse_mx.
    (5) Addition of imaging reconstruction methods: AI-assisted Compressed Sensing (ACS), Spark artifact Correction (SparkCo).
    (6) Addition of imaging processing methods: Inline Cardiac Function, Inline ECV, Inline MRS, Inline MOCO, 4D Flow, SNAP, CEST, T1rho, FSP+, CASS, PASS, MTP.
    (7) Addition of workflow features: TI Scout, EasyCrop, ImageGuard, Mocap, EasyFACT, Auto Bolus tracker, Breast Biopsy and uVision.
    (8) Modification of workflow features: EasyScan (add applicable body parts).

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The FDA 510(k) clearance letter and summary provided for the uMR 680 Magnetic Resonance Imaging System outline performance data for several new features and algorithms.

    Here's an analysis of the acceptance criteria and the studies that prove the device meets them for the AI-assisted Compressed Sensing (ACS), SparkCo, Inline ED/ES Phases Recognition, and Inline MOCO algorithms.


    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Algorithm | Evaluation Item | Acceptance Criteria | Reported Performance |
    |---|---|---|---|
    | AI-assisted Compressed Sensing (ACS) | AI Module Verification Test | The ratio of error, NRMSE(output)/NRMSE(input), is always less than 1. | Pass |
    | | Image SNR | ACS has higher SNR than CS. | Pass (ACS shown to perform better than CS in SNR) |
    | | Image Resolution | ACS has higher (standard deviation (SD) / mean value (S)) values than CS. | Pass (ACS shown to perform better than CS in resolution) |
    | | Image Contrast | Bland-Altman analysis of image intensities acquired using fully sampled and ACS acquisitions shows less than 1% bias, with all sample points falling within the 95% confidence interval. | Pass (less than 1% bias, all sample points within the 95% confidence interval) |
    | | Image Uniformity | ACS achieves essentially the same image uniformity as the fully sampled image. | Pass |
    | | Structure Measurement | Differences in measurements of the same structures on ACS and fully sampled images are under 5%. | Pass |
    | | Clinical Evaluation | All ACS images are rated with equivalent or higher scores in terms of diagnostic quality. | Pass ("All ACS images were rated with equivalent or higher scores in terms of diagnosis quality") |
    | SparkCo | Spark Detection Accuracy | The average detection accuracy must be larger than 90%. | The average detection accuracy is 94%. |
    | | Spark Correction Performance (Simulated) | The average PSNR of spark-corrected images must be higher than that of the spark images; spark artifacts must be reduced or corrected. | The average PSNR of spark-corrected images is 1.6 dB higher than that of the spark images; images with spark artifacts were successfully corrected after enabling SparkCo. |
    | | Spark Correction Performance (Real-world) | Spark artifacts must be reduced or corrected (evaluated by one experienced evaluator assessing image quality improvement). | Images with spark artifacts were successfully corrected after enabling SparkCo. |
    | Inline ED/ES Phases Recognition | Error between algorithm and gold standard | The average error does not exceed 1 frame. | The average error between the ED and ES frame indices calculated by the algorithm and the gold-standard frame indices across all test data is 0.13 frames, which does not exceed 1 frame. |
    | Inline MOCO | Dice coefficient of the left ventricular myocardium after motion correction (cardiac perfusion images) | The average Dice coefficient of the left ventricular myocardium after motion correction is greater than 0.87. | The average Dice coefficient is 0.92 (greater than 0.87). Subgroup analysis also showed good generalization: age 0.92-0.93; gender 0.92; ethnicity 0.91-0.92; BMI 0.91-0.95; magnetic field strength 0.92-0.93; disease conditions 0.91-0.93. |
    | | Dice coefficient of the left ventricular myocardium after motion correction (cardiac dark blood images) | The average Dice coefficient of the left ventricular myocardium after motion correction is greater than 0.87. | The average Dice coefficient is 0.96 (greater than 0.87). Subgroup analysis also showed good generalization: age 0.95-0.96; gender 0.96; ethnicity 0.95-0.96; BMI 0.96-0.98; magnetic field strength 0.96; disease conditions 0.96-0.97. |
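
    The quantitative criteria above reduce to standard image-quality metrics. As a minimal sketch (not the manufacturer's verification code; array names, shapes, and the exact NRMSE normalization are assumptions), the NRMSE ratio used in the ACS AI-module check and the Dice coefficient used in the Inline MOCO checks could be computed along these lines:

```python
import numpy as np

def nrmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference image energy."""
    err = np.sqrt(np.mean(np.abs(estimate - reference) ** 2))
    return err / np.sqrt(np.mean(np.abs(reference) ** 2))

def acs_error_ratio(undersampled_input: np.ndarray,
                    acs_output: np.ndarray,
                    fully_sampled: np.ndarray) -> float:
    """ACS AI-module check: NRMSE(output)/NRMSE(input) must stay below 1,
    i.e. the network output must be closer to the fully sampled reference
    than the undersampled input was."""
    return nrmse(acs_output, fully_sampled) / nrmse(undersampled_input, fully_sampled)

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary myocardium masks (Inline MOCO check)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Pass/fail decisions mirroring the table (hypothetical variables):
# assert acs_error_ratio(zero_filled_img, acs_img, fully_sampled_img) < 1.0
# assert mean_dice_after_moco > 0.87
```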

    2. Sample Size Used for the Test Set and Data Provenance

    • AI-assisted Compressed Sensing (ACS):
      • Sample Size: 1724 samples from 35 volunteers.
      • Data Provenance: Diverse demographic distributions (gender, age groups, ethnicity, BMI) covering various clinical sites and separated time periods. Implied to be prospective or a carefully curated retrospective set, collected specifically for validation on the uMR 680 system, and independent of training data.
    • SparkCo:
      • Simulated Spark Testing Dataset: 159 spark slices (generated from spark-free raw data).
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
      • Data Provenance: Real-world data acquired from uMR 1.5T and uMR 3T scanners, covering representative clinical protocols. The report specifies "Asian" for 100% of the real-world dataset's ethnicity, noting that performance is "irrelevant with human ethnicity" due to the nature of spark signal detection. This is retrospective data.
    • Inline ED/ES Phases Recognition:
      • Sample Size: 95 cases from 56 volunteers.
      • Data Provenance: Includes various ages, genders, field strengths (1.5T, 3.0T), disease conditions (NOR, MINF, DCM, HCM, ARV), and ethnicities (Asian, White, Black). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.
    • Inline MOCO:
      • Sample Size: 287 cases in total (105 cardiac perfusion images from 60 patients, 182 cardiac dark blood images from 33 patients).
      • Data Provenance: Acquired from 1.5T and 3T magnetic resonance imaging equipment from UIH. Covers various ages, genders, ethnicities (Asian, White, Black, Hispanic), BMI, field strengths (1.5T, 3.0T), and disease conditions (Positive, Negative, Unknown). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • AI-assisted Compressed Sensing (ACS):
      • Number of Experts: More than one (plural "radiologists" used).
      • Qualifications: American Board of Radiologists certificated physicians.
    • SparkCo:
      • Number of Experts: One expert for real-world SparkCo evaluation.
      • Qualifications: "one experienced evaluator." (Specific qualifications like board certification or years of experience are not provided for this specific evaluator).
    • Inline ED/ES Phases Recognition:
      • Number of Experts: Not explicitly stated for ground truth establishment ("gold standard phase indices"). It implies a single, established method or perhaps a consensus by a team, but details are missing.
    • Inline MOCO:
      • Number of Experts: Three licensed physicians.
      • Qualifications: U.S. credentials.

    4. Adjudication Method for the Test Set

    • AI-assisted Compressed Sensing (ACS): Not explicitly stated, but implies individual review by "radiologists" to rate diagnostic quality.
    • SparkCo: For the real-world dataset, evaluation by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: Not explicitly stated; "gold standard phase indices" are referenced, implying a pre-defined or established method without detailing a multi-reader adjudication process.
    • Inline MOCO: "Finally, all ground truth was evaluated by three licensed physicians with U.S. credentials." This suggests an adjudication or confirmation process, but the specific method (e.g., 2+1, consensus) is not detailed beyond "evaluated by."

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance

    • No MRMC comparative effectiveness study was explicitly described to evaluate human reader improvement with AI assistance. The described studies focus on the standalone performance of the algorithms or a qualitative assessment of images by radiologists for diagnostic quality.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    • Yes, standalone performance testing was done for all listed algorithms.
      • ACS: Evaluated quantitatively (SNR, Resolution, Contrast, Uniformity, Structure Measurement) and then qualitatively by radiologists. The quantitative metrics are standalone.
      • SparkCo: Quantitative metrics (Detection Accuracy, PSNR) and qualitative assessment by an experienced evaluator. The quantitative metrics are standalone.
      • Inline ED/ES Phases Recognition: Evaluated quantitatively as the error between algorithmic output and gold standard. This is a standalone performance metric.
      • Inline MOCO: Evaluated using the Dice coefficient, which is a standalone quantitative metric comparing algorithm output to ground truth.

    7. The Type of Ground Truth Used

    • AI-assisted Compressed Sensing (ACS):
      • Quantitative: Fully-sampled k-space data transformed to image space (see the retrospective-undersampling sketch after this list).
      • Clinical: Radiologist evaluation ("American Board of Radiologists certificated physicians").
    • SparkCo:
      • Spark Detection Module: Location of spark points (ground truth for simulated data).
      • Spark Correction Module: Visual assessment by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: "Gold standard phase indices" (method for establishing this gold standard is not detailed, but implies expert-derived or a highly accurate reference).
    • Inline MOCO: Left ventricular myocardium segmentation annotated by a "well-trained annotator" and "evaluated by three licensed physicians with U.S. credentials." This is an expert-annotated ground truth confirmed by expert review.
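
    For context, using fully sampled k-space as the ground truth typically means reconstructing the reference image from the complete k-space and producing the algorithm input by retrospectively undersampling the same acquisition. The following sketch illustrates that general idea only; the sampling pattern, acceleration factor, and variable names are assumptions, not details from the submission.

```python
import numpy as np

def recon(kspace: np.ndarray) -> np.ndarray:
    """Magnitude image from a single 2D k-space slice via inverse FFT."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

def retrospective_undersample(kspace: np.ndarray, accel: int = 4,
                              center_lines: int = 24) -> np.ndarray:
    """Keep every `accel`-th phase-encode line plus a fully sampled center region."""
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::accel] = True
    mid = kspace.shape[0] // 2
    mask[mid - center_lines // 2: mid + center_lines // 2] = True
    return kspace * mask[:, None]

# Ground truth: image from the full k-space. Algorithm input: the undersampled data.
# The reconstruction is then scored against the ground truth, e.g. with the NRMSE
# ratio shown earlier.
# ground_truth = recon(full_kspace)
# acs_input    = retrospective_undersample(full_kspace)
```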

    8. The Sample Size for the Training Set

    • AI-assisted Compressed Sensing (ACS): 1,262,912 samples (from a variety of anatomies, image contrasts, and acceleration factors).
    • SparkCo: 24,866 spark slices (generated from 61 spark-free cases from 10 volunteers).
    • Inline ED/ES Phases Recognition: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."
    • Inline MOCO: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    • AI-assisted Compressed Sensing (ACS): Fully-sampled k-space data were collected and transformed to image space as the ground-truth. All data were manually quality controlled.
    • SparkCo: "The training dataset for the AI module in SparkCo was generated by simulating spark artifacts from spark-free raw data... a total of 24,866 spark slices, along with the corresponding ground truth (i.e., the location of spark points), were generated for training." This indicates a hybrid approach using real spark-free data to simulate and generate the ground truth for spark locations.
    • Inline ED/ES Phases Recognition: Not explicitly provided.
    • Inline MOCO: Not explicitly provided.
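
    Spark (spike) artifacts correspond to isolated high-amplitude points in raw k-space, so training pairs of the kind described above can be produced by injecting synthetic spikes into spark-free data and recording their coordinates as detection labels. The sketch below only illustrates that general approach; it is not UIH's pipeline, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_spark_spikes(clean_kspace: np.ndarray, n_spikes: int = 3,
                     amplitude_factor: float = 50.0):
    """Inject random high-amplitude spikes into a clean 2D k-space slice.

    Returns the corrupted k-space and the spike coordinates, which serve as the
    detection ground truth (the "location of spark points")."""
    corrupted = clean_kspace.copy()
    peak = np.abs(clean_kspace).max()
    coords = []
    for _ in range(n_spikes):
        r = rng.integers(0, clean_kspace.shape[0])
        c = rng.integers(0, clean_kspace.shape[1])
        phase = rng.uniform(0, 2 * np.pi)
        corrupted[r, c] += amplitude_factor * peak * np.exp(1j * phase)
        coords.append((r, c))
    return corrupted, coords

# Each (corrupted k-space, spike coordinates) pair becomes one training sample for
# the spark-detection module; the original spark-free slice is the correction target.
```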

    K Number
    K240744
    Device Name
    uMR 680
    Date Cleared
    2024-04-10

    (23 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards. The uMR 680 was previously cleared by FDA via K222755. The modifications performed on the uMR 680 (K222755) in this submission consist of the addition of the Breast Coil - 24, the epi_se_mre pulse sequence, and MRE (Magnetic Resonance Elastography). The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study results for the Shanghai United Imaging Healthcare Co., Ltd. uMR 680 Magnetic Resonance Diagnostic Device.

    Here's the breakdown of the information requested:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Item | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Breast Coil - 24 | | |
    | Surface heating | The maximum temperature of all temperature probes shall not exceed 41 °C. | Pass |
    | General electrical/mechanical safety | Conform with ANSI/AAMI ES60601-1 | Pass |
    | SNR and uniformity | SNR and uniformity shall fulfill the design specification. | Pass |
    | Biocompatibility | Materials of construction and manufacturing materials are exempt from testing according to the biocompatibility guidance (Attachment G), the 510(k) numbers for devices where these materials have been previously approved, or a full biocompatibility report (assessment of sensitization, irritation, and cytotoxicity risks) for components that have direct contact with the patient. | All materials of the patient-contacting components for the Breast Coil - 24 are identical to those of the uMR Omega (cleared in K230152) in formulation, processing, sterilization, and geometry, and no other chemicals have been added (e.g., plasticizers, fillers, additives, cleaning agents, mold release agents). |
    | EMC (immunity, electrostatic discharge) | Conform with IEC 60601-1-2 and IEC 60601-4-2 | Pass |
    | Clinical image quality | Image quality is sufficient for diagnostic use. | The U.S. Board Certified radiologist approved that image quality is sufficient for diagnostic use. |
    | MRE/epi_se_mre | | |
    | General electrical/mechanical safety | Conform with ANSI/AAMI ES60601-1 | Pass |
    | EMC | Conform with IEC 60601-1-2 and IEC 60601-4-2 | Pass |
    | Performance | Bias of accuracy, repeatability, reproducibility, and parameter sensitivity shall fulfill the design specification. | Pass |
    | Clinical image quality | Image quality is sufficient for diagnostic use. | The U.S. Board Certified radiologist approved that image quality is sufficient for diagnostic use. |
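
    For reference, coil SNR and uniformity acceptance tests of this kind are normally run on phantom images. The sketch below shows two simplified, commonly used estimates (a two-ROI SNR and a NEMA-style percent integral uniformity); the submission follows the NEMA methods cited in section 7, which may differ in procedure, so treat this as illustrative only.

```python
import numpy as np

def roi_snr(image: np.ndarray, signal_roi: tuple, noise_roi: tuple) -> float:
    """Simple two-ROI SNR: mean phantom signal over background noise standard deviation.

    `signal_roi` and `noise_roi` are (row_slice, col_slice) tuples."""
    signal = image[signal_roi].mean()
    noise_sd = image[noise_roi].std()
    return signal / noise_sd

def percent_integral_uniformity(roi_values: np.ndarray) -> float:
    """NEMA-style PIU = 100 * (1 - (max - min) / (max + min)) over a central ROI."""
    s_max, s_min = roi_values.max(), roi_values.min()
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

# Example with hypothetical ROI positions on a phantom image:
# snr = roi_snr(phantom, (slice(100, 150), slice(100, 150)),
#               (slice(0, 30), slice(0, 30)))
# piu = percent_integral_uniformity(phantom[110:140, 110:140])
```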

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not explicitly stated for each test beyond "phantom and volunteer test" for MRE performance and "the image generated by Breast Coil-24" for clinical image quality.
    • Data Provenance: Not specified. It does not mention country of origin or if the data was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: For clinical image quality for both Breast Coil - 24 and MRE, it states "The U.S. Board Certified radiologist approves." This implies at least one, but the exact number of radiologists is not specified.
    • Qualifications of Experts: "U.S. Board Certified radiologist."

    4. Adjudication method for the test set:

    • Adjudication Method: Not explicitly stated. The statement "The U.S. Board Certified radiologist approves" suggests a single expert's approval for clinical image quality assessment, rather than a multi-expert consensus method like 2+1 or 3+1.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned as being done for this submission. The studies detailed focus on system performance parameters and clinical image quality approval by a radiologist.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • This device is a Magnetic Resonance Diagnostic Device (MRDD), a hardware system that produces images and physical parameters to be interpreted by a trained physician. The performance tests described (surface heating, electrical/mechanical safety, SNR, uniformity, biocompatibility, EMC, performance, and clinical image quality) apply to the device itself and its components/functionalities, not to an AI algorithm operating in a standalone capacity. The question is therefore not directly applicable in terms of an AI algorithm, although system performance was evaluated independently of human interpretation in most of the tests.

    7. The type of ground truth used:

    • Clinical Image Quality: Expert assessment (approval by a U.S. Board Certified radiologist) that the image quality is sufficient for diagnostic use.
    • Biocompatibility: Demonstrated by using materials identical to a previously cleared device (uMR Omega, K230152) and meeting regulatory guidance.
    • Other performance metrics (Surface heating, electrical/mechanical safety, SNR, Uniformity, EMC, MRE Performance): Based on compliance with established engineering and safety standards (NEMA MS 14, ANSI/AAMI ES60601-1, NEMA MS 1, NEMA MS 3, NEMA MS 6, NEMA MS 9, IEC 60601-1-2, IEC 60601-4-2) and design specifications.

    8. The sample size for the training set:

    • The document does not detail any "training set" as it is describing a hardware medical device with specific new coils and pulse sequences rather than a machine learning or AI-driven diagnostic algorithm that would typically require a training set.

    9. How the ground truth for the training set was established:

    • Not applicable, as no training set for an AI/ML algorithm is mentioned.

    K Number
    K222755
    Device Name
    uMR 680
    Date Cleared
    2023-02-16

    (157 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the DeepRecon algorithm found in the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance (DeepRecon)

    | Evaluation Item | Acceptance Criteria | Reported Device Performance (Test Result) | Results |
    |---|---|---|---|
    | Image SNR | DeepRecon images achieve higher SNR compared to the images without DeepRecon (NADR) | NADR: 137.03; DeepRecon: 186.87 | PASS |
    | Image Uniformity | Uniformity difference between DeepRecon images and NADR images under 5% | 0.03% | PASS |
    | Image Resolution | DeepRecon images achieve 10% or higher resolution compared to the NADR images | 15.57% | PASS |
    | Image Contrast | Intensity difference between DeepRecon images and NADR images under 5% | 1.0% | PASS |
    | Structure Measurement | Measurements on NADR and DeepRecon images of the same structures; measurement difference under 5% | 0% | PASS |
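
    All of the DeepRecon criteria are relative comparisons against the non-DeepRecon (NADR) reconstruction. The exact ROIs and resolution metric are not described in the summary, so the sketch below only shows how such percentage comparisons might be computed; the SNR values mirror the table, while the uniformity values are hypothetical.

```python
def percent_difference(deeprecon_value: float, nadr_value: float) -> float:
    """Absolute percentage difference relative to the NADR baseline (criterion: < 5%)."""
    return 100.0 * abs(deeprecon_value - nadr_value) / nadr_value

def percent_improvement(deeprecon_value: float, nadr_value: float) -> float:
    """Signed percentage improvement of DeepRecon over NADR."""
    return 100.0 * (deeprecon_value - nadr_value) / nadr_value

# Values from the SNR row of the table:
snr_improvement = percent_improvement(186.87, 137.03)   # ~36.4% higher SNR -> PASS
# Hypothetical uniformity measurements illustrating the < 5% criterion:
uniformity_diff = percent_difference(87.03, 87.00)       # ~0.03% -> PASS
```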

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 68 US subjects.
    • Data Provenance: The test data was collected from various clinical sites in the US during separate time periods and on subjects different from the training data. The data specifically indicates demographic distributions for US subjects across various genders, age groups, ethnicities, and BMI groups.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document states that "DeepRecon images were evaluated by American Board of Radiologists certificated physicians." It does not specify the exact number of experts or provide details such as their years of experience; the only stated qualification is American Board of Radiologists certification.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1, none) used for the expert evaluation of the test set. It only mentions that "The evaluation reports from radiologists verified that DeepRecon meets the requirements of clinical diagnosis. All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." This suggests a qualitative review, but the specific consensus method is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text in terms of quantifying human reader improvement with AI assistance. The expert evaluation focused on whether DeepRecon images met clinical diagnosis requirements and were rated equivalent or higher in quality, rather than measuring a specific effect size of AI assistance on human reader performance.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone (algorithm only) performance evaluation was done. The "Acceptance Criteria and Reported Device Performance" table directly shows the performance metrics (Image SNR, Uniformity, Resolution, Contrast, Structure Measurement) of the DeepRecon algorithm itself, compared to images without DeepRecon (NADR). This indicates a standalone assessment of the algorithm's output characteristics.

    7. Type of Ground Truth Used for the Test Set

    For the quantitative metrics (SNR, uniformity, resolution, contrast, structure measurement), the "ground truth" for comparison appears to be images without DeepRecon (NADR) as a baseline, or potentially direct measurements on those images.

    For the qualitative assessment by radiologists, the ground truth was expert opinion/consensus by American Board of Radiologists certificated physicians regarding clinical diagnosis quality.

    8. Sample Size for the Training Set

    The training set for DeepRecon consisted of data from 264 volunteers. This resulted in a total of 165,837 cases.

    9. How the Ground Truth for the Training Set Was Established

    For the training dataset, the ground truth was established by collecting multiple-averaged images with high resolution and high SNR. These high-quality images served as the reference targets, and the corresponding input images were generated by sequentially reducing the SNR and resolution of the ground-truth images. All data included for training underwent manual quality control.
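
    In other words, each training pair is (degraded input, high-quality target). One plausible way to perform such degradation is to truncate outer k-space (lowering resolution) and add noise (lowering SNR); the sketch below only illustrates that idea, and the noise level, truncation fraction, and function names are assumptions rather than details from the submission.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(target: np.ndarray, noise_sigma: float = 0.05,
            keep_fraction: float = 0.5) -> np.ndarray:
    """Lower resolution by truncating outer k-space, then lower SNR by adding noise."""
    k = np.fft.fftshift(np.fft.fft2(target))
    rows, cols = target.shape
    keep_r, keep_c = int(rows * keep_fraction), int(cols * keep_fraction)
    mask = np.zeros_like(k, dtype=bool)
    r0, c0 = (rows - keep_r) // 2, (cols - keep_c) // 2
    mask[r0:r0 + keep_r, c0:c0 + keep_c] = True
    low_res = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))
    return low_res + rng.normal(0.0, noise_sigma * target.max(), target.shape)

# Training pair: (degrade(reference_image), reference_image), where the reference is
# a multiple-averaged, high-SNR, high-resolution acquisition.
```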
