Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K243547
    Device Name
    uMR Ultra
    Date Cleared
    2025-07-17

    (244 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332, K234154

    Intended Use

    The uMR Ultra system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, as well as spectroscopic images, displaying internal anatomical structure and/or function of the head, body, and extremities. These images, and the physical parameters derived from them, when interpreted by a trained physician, yield information that may assist diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    uMR Ultra is a 3T superconducting magnetic resonance diagnostic device with a 70 cm patient bore and a 2-channel RF transmit system. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. uMR Ultra is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the uMR Ultra device, based on the provided FDA 510(k) clearance letter.

    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of the document, which focuses on device clearance, multiple features are discussed. I will present the acceptance criteria and results for the AI-powered features, as these are the most relevant to the "AI performance" aspect.

    Acceptance Criteria and Device Performance for AI-Enabled Features

    ACS
    • Ratio of error NRMSE(output)/NRMSE(input) is always less than 1. Result: Pass.
    • ACS has higher SNR than CS. Result: Pass.
    • ACS has higher (standard deviation (SD) / mean value (S)) values than CS. Result: Pass.
    • Bland-Altman analysis of image intensities acquired using fully sampled and ACS shows less than 1% bias, with all sample points falling within the 95% confidence interval. Result: Pass.
    • Measurement differences on ACS and fully sampled images of the same structures under 5% are acceptable. Result: Pass.
    • Radiologists rate all ACS images with equivalent or higher scores in terms of diagnostic quality. Result: Verified that ACS meets the requirements of clinical diagnosis; all ACS images were rated with equivalent or higher scores.

    DeepRecon
    • DeepRecon images achieve higher SNR than NADR images. Result: NADR 343.63, DeepRecon 496.15 (Pass).
    • Uniformity difference between DeepRecon and NADR images under 5%. Result: 0.07% (Pass).
    • Intensity difference between DeepRecon and NADR images under 5%. Result: 0.2% (Pass).
    • Measurement difference on NADR and DeepRecon images of the same structures under 5%. Result: 0% (Pass).
    • Radiologists rate all DeepRecon images with equivalent or higher scores in terms of diagnostic quality. Result: Verified that DeepRecon meets the requirements of clinical diagnosis; all DeepRecon images were rated with equivalent or higher scores.

    EasyScan
    • No Fail cases and auto-position success rate P1/(P1+P2+F) exceeds 80% (P1: pass with auto positioning; P2: pass with user adjustment; F: fail). Result: 99.6%.

    t-ACS
    • AI prediction (AI module output) is much closer to the reference than the AI module input images. Result: Pass.
    • Better consistency between t-ACS and the reference than between CS and the reference. Result: Pass.
    • No large structural difference between t-ACS and the reference. Result: Pass.
    • Motion-time curves and Bland-Altman analysis show consistency between t-ACS and the reference. Result: Pass.

    AiCo
    • AiCo images exhibit improved PSNR and SSIM compared to the originals. Result: Pass.
    • No significant structural differences from the gold standard. Result: Pass.
    • Radiologists confirm image quality is diagnostically acceptable, with fewer motion artifacts and greater benefit for clinical diagnosis. Result: Confirmed.

    SparkCo
    • Average detection accuracy must be greater than 90%. Result: 94%.
    • Average PSNR of spark-corrected images must be higher than that of spark images. Result: 1.6 dB higher.
    • Spark artifacts must be reduced or corrected after enabling SparkCo. Result: Successfully corrected.

    ImageGuard
    • Success rate P/(P+F) exceeds 90% (P: prompt appears for motion, or no prompt for no motion; F: prompt fails to appear for motion, or appears for no motion). Result: 100%.

    EasyCrop
    • No Fail cases and pass rate P1/(P1+P2+F) exceeds 90% (P1: peripheral tissues cropped and result meets user requirements; P2: cropped images do not meet user requirements but can be re-cropped; F: EasyCrop fails or original images not saved). Result: 100%.

    EasyFACT
    • Satisfied-and-Acceptable ratio (S+A)/(S+A+F) exceeds 95% (S: all ROIs placed correctly; A: fewer than five ROIs placed correctly; F: ROIs positioned incorrectly or none placed). Result: 100%.

    Auto TI Scout
    • Average frame difference between auto-calculated TI and gold standard is ≤ 1 frame, and maximum frame difference is ≤ 2 frames. Result: Average 0.37-0.44 frames, maximum 1-2 frames (Pass).

    Inline MOCO
    • Average Dice coefficient of the left ventricular myocardium after motion correction is > 0.87. Result: Cardiac perfusion images 0.92; cardiac dark blood images 0.96.

    Inline ED/ES Phases Recognition
    • Average error between the algorithm-calculated ED/ES phase indices and the gold standard does not exceed 1 frame. Result: 0.13 frames.

    Inline ECV
    • No failure cases; satisfaction rate S/(S+A+F) > 95% (S: segmentation adheres to the myocardial boundary and blood pool ROI is correct; A: small missing/redundant areas but blood pool ROI correct; F: myocardial mask fails or blood pool ROI incorrect). Result: 100%.

    EasyRegister (Height Estimation)
    • Metrics: PH5 (percentage of height errors within 5%), PH15 (within 15%), MEAN_H (average height error); specific numerical criteria not explicitly stated. Result: PH5 92.4%, PH15 100%, MEAN_H 31.53 mm.

    EasyRegister (Weight Estimation)
    • Metrics: PW10 (percentage of weight errors within 10%), PW20 (within 20%), MEAN_W (average weight error); specific numerical criteria not explicitly stated. Result: PW10 68.64%, PW20 90.68%, MEAN_W 6.18 kg.

    EasyBolus
    • No Fail cases and success rate (P1+P2)/(P1+P2+F) reaches 100% (P1: monitoring point positioning meets user requirements with frame difference ≤ 1 frame; P2: meets user requirements with frame difference of 2 frames; F: auto positioning fails or frame difference > 2 frames). Result: P1 80%, P2 20%, failures 0%, overall pass 100%.
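
To make the ACS module-verification criterion concrete, here is a minimal sketch of how the NRMSE ratio could be checked. The helper names and toy data are hypothetical, not from the submission; only the criterion (NRMSE(output)/NRMSE(input) < 1) comes from the document.

```python
import numpy as np

def nrmse(estimate, reference):
    """Normalized RMSE of an image against a fully sampled reference."""
    err = np.sqrt(np.mean((estimate - reference) ** 2))
    return err / (reference.max() - reference.min())

def acs_module_check(ai_input, ai_output, reference):
    """ACS criterion: NRMSE(output)/NRMSE(input) must stay below 1."""
    ratio = nrmse(ai_output, reference) / nrmse(ai_input, reference)
    return ratio < 1.0

# Toy data: the "AI output" is closer to the reference than the input.
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
ai_input = reference + 0.10 * rng.standard_normal((64, 64))
ai_output = reference + 0.02 * rng.standard_normal((64, 64))
print(acs_module_check(ai_input, ai_output, reference))  # True
```

The normalization by the reference intensity range is one common NRMSE convention; the submission does not specify which variant was used.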

    For the rest of the questions, I will consolidate the information where possible, as some aspects apply across multiple AI features.

    2. Sample Sizes Used for the Test Set and Data Provenance

    • ACS: 749 samples from 25 volunteers. Diverse demographic distributions covering various genders, age groups, ethnicity (White, Asian, Black), and BMI (Underweight, Healthy, Overweight/Obesity). Data collected from various clinical sites during separated time periods.
    • DeepRecon: 25 volunteers (nearly 2200 samples). Diverse demographic distributions covering various genders, age groups, ethnicity (White, Asian, Black), and BMI. Data collected from various clinical sites during separated time periods.
    • EasyScan: 444 cases from 116 subjects. Diverse demographic distributions covering various genders, age groups, and ethnicities. Data acquired from UIH MRI equipment (1.5T and 3T). Data provenance not explicitly stated (e.g., country of origin), but given the company location (China) and "U.S. credentials" for evaluators, it likely includes data from both. The document states "The testing dataset was collected independently from the training dataset".
    • t-ACS: 1173 cases from 60 volunteers. Diverse demographic distributions covering various genders, age groups, ethnicities (White, Black, Asian) and BMI. Data acquired by uMR Ultra scanners. Data provenance not explicitly stated, but implies global standard testing.
    • AiCo: 218 samples from 24 healthy volunteers. Diverse demographic distributions covering various genders, age groups, BMI (Under/healthy weight, Overweight/Obesity), and ethnicity (White, Black, Asian). Data provenance not explicitly stated.
    • SparkCo: 59 cases from 15 patients for real-world spark raw data testing. Diverse demographic distributions including gender, age, BMI (Underweight, Healthy, Overweight, Obesity), and ethnicity (Asian, "N.A." for White, implying not tested as irrelevant). Data acquired by uMR 1.5T and uMR 3T scanners.
    • ImageGuard: 191 cases from 80 subjects. Diverse demographic distributions covering various genders, age groups, and ethnicities (White, Black, Asian). Data acquired from UIH MRI equipment (1.5T and 3T).
    • EasyCrop: Not explicitly stated as "subjects" vs. "cases," but tested on 5 intended imaging body parts. Sample size (N=65) implies 65 cases/scans, potentially from 65 distinct subjects or fewer if subjects had multiple scans. Diverse demographic distributions covering various genders, age groups, ethnicity (Asian, Black, White). Data acquired from UIH MRI equipment (1.5T and 3T).
    • EasyFACT: 25 cases from 25 volunteers. Diverse demographic distributions covering various genders, age groups, weight, and ethnicity (Asian, White, Black).
    • Auto TI Scout: 27 patients. Diverse demographic distributions covering various genders, age groups, ethnicity (Asian, White), and BMI. Data acquired from 1.5T and 3T scanners.
    • Inline MOCO: Cardiac Perfusion Images: 105 cases from 60 patients. Cardiac Dark Blood Images: 182 cases from 33 patients. Diverse demographic distributions covering age, gender, ethnicity (Asian, White, Black, Hispanic), BMI, field strength, and disease conditions (Positive, Negative, Unknown).
    • Inline ED/ES Phases Recognition: 95 cases from 56 volunteers, covering various genders, age groups, field strength, disease conditions (NOR, MINF, DCM, HCM, ARV), and ethnicity (Asian, White, Black).
    • Inline ECV: 90 images from 28 patients. Diverse demographic distributions covering gender, age, BMI, field strength, ethnicity (Asian, White), and health status (Negative, Positive, Unknown).
    • EasyRegister (Height/Weight Estimation): 118 cases from 63 patients. Diverse ethnic groups (Chinese, US, France, Germany).
    • EasyBolus: 20 subjects. Diverse demographic distributions covering gender, age, field strength, and ethnicity (Asia).

    Data Provenance (Retrospective/Prospective and Country of Origin):
    The document states "The testing dataset was collected independently from the training dataset, with separated subjects and during different time periods." This establishes that the validation data are distinct from the training data, though it does not strictly say whether collection was retrospective or prospective. For ACS and DeepRecon, "US subjects" are explicitly mentioned for some evaluations, but for many features the country of origin of the test set is not stated beyond "diverse ethnic groups" or "Asian", which could mean China (where the company is based) or other Asian populations. The use of "U.S. board-certified radiologists" and a "licensed MRI technologist with U.S. credentials" for evaluation suggests the data are intended to be representative of, or directly include, data relevant to the U.S. clinical context.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • ACS & DeepRecon: Evaluated by "American Board of Radiologists certificated physicians" (plural, implying multiple, at least 2). Not specified how many exactly, but strong qualifications.
    • EasyScan, ImageGuard, EasyCrop, EasyBolus: Evaluated by "licensed MRI technologist with U.S. credentials." For EasyBolus, it specifies "certified professionals in the United States." Number not explicitly stated beyond "the" technologist/professionals, but implying multiple for robust evaluation.
    • Inline MOCO & Inline ECV: Ground truth annotations done by a "well-trained annotator" and "finally, all ground truth was evaluated by three licensed physicians with U.S. credentials." This indicates a 3-expert consensus/adjudication.
    • SparkCo: "One experienced evaluator" for subjective image quality improvement.
    • For other features (t-ACS, EasyFACT, Auto TI Scout, Inline ED/ES Phases Recognition, EasyRegister), the ground truth seems to be based on physical measurements (for EasyRegister) or computational metrics (for t-ACS based on fully-sampled images, and for accuracy of ROI placement against defined standards), rather than human expert adjudication for ground truth.

    4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set

    • Inline MOCO & Inline ECV: "Evaluated by three licensed physicians with U.S. credentials." This implies a 3-expert consensus method for ground truth establishment.
    • ACS, DeepRecon, AiCo: "Evaluated by American Board of Radiologists certificated physicians" (plural). While not explicitly stated as 2+1 or 3+1, it suggests a multi-reader review, where consensus was likely reached for the reported diagnostic quality.
    • SparkCo: "One experienced evaluator" was used for subjective evaluation, implying no formal multi-reader adjudication for this specific metric.
    • For features like EasyScan, ImageGuard, EasyCrop, EasyBolus (evaluated by MRI technologists) and those relying on quantitative metrics against a reference (t-ACS, EasyFACT, Auto TI Scout, EasyRegister, Inline ED/ES Phases Recognition), the "ground truth" is either defined by the system's intended function (e.g., correct auto-positioning) or a mathematically derived reference, so a traditional human adjudication method is not applicable in the same way as for diagnostic image interpretation.
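
The workflow-feature criteria above all reduce to simple rate formulas such as P1/(P1+P2+F). A minimal sketch (hypothetical function names, toy counts, not from the submission) of how such a tally could be evaluated:

```python
def auto_position_success_rate(p1, p2, f):
    """EasyScan-style success rate: P1 / (P1 + P2 + F).
    P1 = pass with auto positioning, P2 = pass after user adjustment, F = fail."""
    total = p1 + p2 + f
    if total == 0:
        raise ValueError("no cases recorded")
    return p1 / total

def meets_criterion(p1, p2, f, threshold=0.80):
    """Acceptance requires no Fail cases and a rate above the threshold."""
    return f == 0 and auto_position_success_rate(p1, p2, f) > threshold

# Toy counts, chosen only to illustrate the formula.
rate = auto_position_success_rate(448, 2, 0)
print(round(rate * 100, 1))  # 99.6
```

The same structure applies to ImageGuard's P/(P+F), EasyFACT's (S+A)/(S+A+F), and EasyBolus's (P1+P2)/(P1+P2+F), with the categories and thresholds swapped in.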

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    The document does not explicitly state that a formal MRMC comparative effectiveness study was performed to quantify the effect size of how much human readers improve with AI vs. without AI assistance.

    Instead, the evaluations for ACS, DeepRecon, and AiCo involve "American Board of Radiologists certificated physicians" who "verified that [AI feature] meets the requirements of clinical diagnosis. All [AI feature] images were rated with equivalent or higher scores in terms of diagnosis quality." For AiCo, they confirmed images "exhibit fewer motion artifacts and offer greater benefits for clinical diagnosis." This is a qualitative assessment of diagnostic quality by experts, but not a comparative effectiveness study in the sense of measuring reader accuracy or confidence change with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, for many of the AI-enabled features, a standalone performance evaluation was conducted:

    • ACS: Performance was evaluated by comparing quantitative metrics (NRMSE, SNR, Resolution, Contrast, Uniformity, Structure Measurement) against fully-sampled images or CS. This is a standalone evaluation.
    • DeepRecon: Quantitative metrics (SNR, uniformity, contrast, structure measurement) were compared between DeepRecon and NADR (without DeepRecon) images. This is a standalone evaluation.
    • t-ACS: Quantitative tests (MAE, PSNR, SSIM, structural measurements, motion-time curves) were performed comparing t-ACS and CS results against a reference. This is a standalone evaluation.
    • AiCo: PSNR and SSIM values were quantitatively compared, and structural dimensions were assessed, between AiCo processed images and original/motionless reference images. This is a standalone evaluation.
    • SparkCo: Spark detection accuracy was calculated, and PSNR of spark-corrected images was compared to original spark images. This is a standalone evaluation.
    • Inline MOCO: Evaluated using Dice coefficient, a quantitative metric for segmentation accuracy. This is a standalone evaluation.
    • Inline ED/ES Phases Recognition: Evaluated by quantifying the error between algorithm output and gold standard phase indices. This is a standalone evaluation.
    • Inline ECV: Evaluated by quantitative scoring for segmentation accuracy (S, A, F criteria). This is a standalone evaluation.
    • EasyRegister (Height/Weight): Evaluated by quantitative error metrics (PH5, PH15, MEAN_H; PW10, PW20, MEAN_W) against physical measurements. This is a standalone evaluation.

    Features like EasyScan, ImageGuard, EasyCrop, and EasyBolus involve automated workflow assistance where the direct "diagnostic" outcome isn't solely from the algorithm, but the automated function's performance is evaluated in a standalone manner against defined success criteria.
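
The Dice coefficient used for the Inline MOCO acceptance criterion (average > 0.87) is a standard overlap measure on binary segmentation masks. A minimal sketch with NumPy (toy masks, not from the submission):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1-D "masks": 4 voxels overlap out of 5 predicted and 5 reference.
pred = np.array([1, 1, 1, 1, 1, 0, 0, 0])
ref  = np.array([0, 1, 1, 1, 1, 1, 0, 0])
print(round(dice_coefficient(pred, ref), 2))  # 0.8
```

In the actual study the masks would be the algorithm's motion-corrected left ventricular myocardium segmentation and the expert-annotated ground truth, averaged over all test cases.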

    7. The Type of Ground Truth Used

    The type of ground truth varies depending on the specific AI feature:

    • Reference/Fully-Sampled Data:
      • ACS, DeepRecon, t-ACS, AiCo: Fully-sampled k-space data transformed to image space served as "ground-truth" for training and as a reference for quantitative performance metrics in testing. For AiCo, "motionless data" served as gold standard.
      • SparkCo: Simulated spark artifacts generated from "spark-free raw data" provided ground truth for spark point locations in training.
    • Expert Consensus/Subjective Evaluation:
      • ACS, DeepRecon, AiCo: "American Board of Radiologists certificated physicians" provided qualitative assessment of diagnostic image quality ("equivalent or higher scores," "diagnostically acceptable," "fewer motion artifacts," "greater benefits for clinical diagnosis").
      • EasyScan, ImageGuard, EasyCrop, EasyBolus: "Licensed MRI technologist with U.S. credentials" or "certified professionals in the United States" performed subjective evaluation against predefined success criteria for the workflow functionality.
      • SparkCo: One experienced evaluator for subjective image quality improvement.
    • Anatomical/Physiological Measurements / Defined Standards:
      • EasyFACT: Defined criteria for ROI placement within liver parenchyma, avoiding borders/vascular structures.
      • Auto TI Scout, Inline ED/ES Phases Recognition: Gold standard phase indices were presumably established by expert review or a reference method.
      • Inline MOCO & Inline ECV: Ground truth for cardiac left ventricular myocardium segmentation was established by a "well-trained annotator" and "evaluated by three licensed physicians with U.S. credentials" (expert consensus based on anatomical boundaries).
      • EasyRegister (Height/Weight Estimation): "Precisely measured height/weight value" using "physical examination standards."

    8. The Sample Size for the Training Set

    • ACS: 1,262,912 samples (collected from variety of anatomies, image contrasts, and acceleration factors, scanned by UIH MRI systems).
    • DeepRecon: 165,837 samples (collected from 264 volunteers, scanned by UIH MRI systems for multiple body parts and clinical protocols).
    • EasyScan: Training data collection not explicitly detailed in the same way as ACS/DeepRecon (refers to "collected independently from the training dataset").
    • t-ACS: Datasets collected from 108 volunteers ("large number of samples").
    • AiCo: 140,000 images collected from 114 volunteers across multiple body parts and clinical protocols.
    • SparkCo: 24,866 spark slices generated from 61 cases collected from 10 volunteers.
    • EasyFACT, Auto TI Scout, Inline MOCO, Inline ED/ES Phases Recognition, Inline ECV, EasyRegister, EasyBolus: The document states that training data was independent of testing data but does not provide specific sample sizes for the training datasets for these features.

    9. How the Ground Truth for the Training Set was Established

    • ACS, DeepRecon, t-ACS, AiCo: "Fully-sampled k-space data were collected and transformed to image space as the ground-truth." For DeepRecon specifically, "multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." For AiCo, "motionless data" served as gold standard. All training data were "manually quality controlled."
    • SparkCo: "The training dataset... was generated by simulating spark artifacts from spark-free raw data... with the corresponding ground truth (i.e., the location of spark points)."
    • Inline MOCO & Inline ECV: The document states "all ground truth was annotated by a well-trained annotator. The annotator used an interactive tool to observe the image, and then labeled the left ventricular myocardium in the image."
    • For EasyScan, EasyFACT, Auto TI Scout, Inline ED/ES Phases Recognition, EasyRegister, and EasyBolus training ground truth establishment is not explicitly detailed, only that the testing data was independent of the training data. For EasyRegister, it implies physical measurements were the basis for ground truth.

    K Number
    K243397
    Device Name
    uMR 680
    Date Cleared
    2025-07-16

    (258 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332, K234154, K230152

    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, as well as spectroscopic images, displaying internal anatomical structure and/or function of the head, body, and extremities.

    These images, and the physical parameters derived from them, when interpreted by a trained physician, yield information that may assist diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) requests modifications to the previously cleared uMR 680 (K240744). The modifications performed on the uMR 680 in this submission include the following changes:
    (1) Addition of RF coils and corresponding accessories: Breast Coil -12, Biopsy Configuration, Head Coil-16, Positioning Couch-top, Coil Support.
    (2) Deletion of VSM (Wireless UIH Gating Unit REF 453564324621, ECG module Ref 989803163121, SpO2 module Ref 989803163111).
    (3) Modification of the dimensions of the detachable table: from width 826 mm, height 880 mm, length 2578 mm to width 810 mm, height 880 mm, length 2505 mm.
    (4) Addition and modification of pulse sequences
    a) New sequences: gre_snap, gre_quick_4dncemra, gre_pass, gre_mtp, gre_trass, epi_dwi_msh, epi_dti_msh, svs_hise.
    b) Added associated options for certain sequences: fse(add Silicone-Only Imaging, MicroView, MTC, MultiBand), fse_arms(add Silicone-Only Imaging), fse_ssh(add Silicone-Only Imaging), fse_mx(add CEST, T1rho, MicroView, MTC), fse_arms_dwi(add MultiBand), asl_3d(add multi-PLD), gre(add T1rho, MTC, output phase image), gre_fsp(add FSP+), gre_bssfp(add CASS, TI Scout), gre_fsp_c(add 3D LGE, DB/GB PSIR), gre_bssfp_ucs(add real time cine), gre_fq(add 4D Flow), epi_dwi(add IVIM), epi_dti(add DKI, DSI).
    c) Added additional accessory equipment required for certain sequences: gre_bssfp(add Virtual ECG Trigger).
    d) Name change of certain sequences: gre_fine(old name: gre_bssfp_fi).
    e) Added applicable body parts: gre_ute, gre_fine, fse_mx.
    (5) Addition of imaging reconstruction methods: AI-assisted Compressed Sensing (ACS), Spark artifact Correction (SparkCo).
    (6) Addition of imaging processing methods: Inline Cardiac Function, Inline ECV, Inline MRS, Inline MOCO, 4D Flow, SNAP, CEST, T1rho, FSP+, CASS, PASS, MTP.
    (7) Addition of workflow features: TI Scout, EasyCrop, ImageGuard, Mocap, EasyFACT, Auto Bolus tracker, Breast Biopsy and uVision.
    (8) Modification of workflow features: EasyScan (added applicable body parts)

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the uMR 680 Magnetic Resonance Imaging System outlines performance data for several new features and algorithms.

    Here's an analysis of the acceptance criteria and the studies that prove the device meets them for the AI-assisted Compressed Sensing (ACS), SparkCo, Inline ED/ES Phases Recognition, and Inline MOCO algorithms.


    1. Table of Acceptance Criteria and Reported Device Performance

    AI-assisted Compressed Sensing (ACS)
    • AI Module Verification Test: the ratio of error NRMSE(output)/NRMSE(input) is always less than 1. Result: Pass.
    • Image SNR: ACS has higher SNR than CS. Result: Pass (ACS performed better than CS in SNR).
    • Image Resolution: ACS has higher (standard deviation (SD) / mean value (S)) values than CS. Result: Pass (ACS performed better than CS in resolution).
    • Image Contrast: Bland-Altman analysis of image intensities acquired using fully sampled and ACS shows less than 1% bias, with all sample points within the 95% confidence interval. Result: Pass.
    • Image Uniformity: ACS achieves essentially the same image uniformity as the fully sampled image. Result: Pass.
    • Structure Measurement: measurement differences on ACS and fully sampled images of the same structures under 5% are acceptable. Result: Pass.
    • Clinical Evaluation: all ACS images rated with equivalent or higher scores in terms of diagnostic quality. Result: "All ACS images were rated with equivalent or higher scores in terms of diagnosis quality" (implicitly, a pass).

    SparkCo
    • Spark Detection Accuracy: the average detection accuracy must be larger than 90%. Result: 94%.
    • Spark Correction Performance (Simulated): the average PSNR of spark-corrected images must be higher than that of the spark images, and spark artifacts must be reduced or corrected. Result: average PSNR 1.6 dB higher; images with spark artifacts were successfully corrected.
    • Spark Correction Performance (Real-world): spark artifacts must be reduced or corrected (assessed by one experienced evaluator). Result: successfully corrected after enabling SparkCo.

    Inline ED/ES Phases Recognition
    • Error between algorithm and gold standard: the average error does not exceed 1 frame. Result: 0.13 frames, which does not exceed 1 frame.

    Inline MOCO
    • Dice coefficient of the left ventricular myocardium after motion correction, cardiac perfusion images: average must be greater than 0.87. Result: 0.92. Subgroup analysis showed good generalization: age 0.92-0.93; gender 0.92; ethnicity 0.91-0.92; BMI 0.91-0.95; magnetic field strength 0.92-0.93; disease conditions 0.91-0.93.
    • Dice coefficient of the left ventricular myocardium after motion correction, cardiac dark blood images: average must be greater than 0.87. Result: 0.96. Subgroup analysis showed good generalization: age 0.95-0.96; gender 0.96; ethnicity 0.95-0.96; BMI 0.96-0.98; magnetic field strength 0.96; disease conditions 0.96-0.97.
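
The ACS image-contrast criterion rests on Bland-Altman analysis (bias under 1%, sample points within the 95% limits of agreement). A sketch of the underlying computation (hypothetical helper and toy data, not from the submission):

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Return (bias, lower, upper) with 95% limits of agreement
    for paired measurements: bias +/- 1.96 * SD of the differences."""
    a = np.asarray(measure_a, float)
    b = np.asarray(measure_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy paired intensities: ACS reconstruction vs. fully sampled reference.
rng = np.random.default_rng(2)
fully_sampled = rng.random(100) + 1.0
acs = fully_sampled + 0.005 * rng.standard_normal(100)
bias, lo, hi = bland_altman(acs, fully_sampled)
# Relative bias against the mean signal, as a percentage:
print(abs(bias) / fully_sampled.mean() * 100 < 1.0)  # True
```

In the actual study the pairs would be region intensities measured on ACS and fully sampled images of the same subjects.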

    2. Sample Size Used for the Test Set and Data Provenance

    • AI-assisted Compressed Sensing (ACS):
      • Sample Size: 1724 samples from 35 volunteers.
      • Data Provenance: Diverse demographic distributions (gender, age groups, ethnicity, BMI) covering various clinical sites and separated time periods. Implied to be prospective or a carefully curated retrospective set, collected specifically for validation on the uMR 680 system, and independent of training data.
    • SparkCo:
      • Simulated Spark Testing Dataset: 159 spark slices (generated from spark-free raw data).
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
      • Data Provenance: Real-world data acquired from uMR 1.5T and uMR 3T scanners, covering representative clinical protocols. The report specifies "Asian" for 100% of the real-world dataset's ethnicity, noting that performance is "irrelevant with human ethnicity" due to the nature of spark signal detection. This is retrospective data.
    • Inline ED/ES Phases Recognition:
      • Sample Size: 95 cases from 56 volunteers.
      • Data Provenance: Includes various ages, genders, field strengths (1.5T, 3.0T), disease conditions (NOR, MINF, DCM, HCM, ARV), and ethnicities (Asian, White, Black). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.
    • Inline MOCO:
      • Sample Size: 287 cases in total (105 cardiac perfusion images from 60 patients, 182 cardiac dark blood images from 33 patients).
      • Data Provenance: Acquired from 1.5T and 3T magnetic resonance imaging equipment from UIH. Covers various ages, genders, ethnicities (Asian, White, Black, Hispanic), BMI, field strengths (1.5T, 3.0T), and disease conditions (Positive, Negative, Unknown). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • AI-assisted Compressed Sensing (ACS):
      • Number of Experts: More than one (plural "radiologists" used).
      • Qualifications: American Board of Radiologists certificated physicians.
    • SparkCo:
      • Number of Experts: One expert for real-world SparkCo evaluation.
      • Qualifications: "one experienced evaluator." (Specific qualifications like board certification or years of experience are not provided for this specific evaluator).
    • Inline ED/ES Phases Recognition:
      • Number of Experts: Not explicitly stated for ground truth establishment ("gold standard phase indices"). It implies a single, established method or perhaps a consensus by a team, but details are missing.
    • Inline MOCO:
      • Number of Experts: Three licensed physicians.
      • Qualifications: U.S. credentials.

    4. Adjudication Method for the Test Set

    • AI-assisted Compressed Sensing (ACS): Not explicitly stated, but implies individual review by "radiologists" to rate diagnostic quality.
    • SparkCo: For the real-world dataset, evaluation by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: Not explicitly stated; "gold standard phase indices" are referenced, implying a pre-defined or established method without detailing a multi-reader adjudication process.
    • Inline MOCO: "Finally, all ground truth was evaluated by three licensed physicians with U.S. credentials." This suggests an adjudication or confirmation process, but the specific method (e.g., 2+1, consensus) is not detailed beyond "evaluated by."

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No MRMC comparative effectiveness study was explicitly described to evaluate human reader improvement with AI assistance. The described studies focus on the standalone performance of the algorithms or a qualitative assessment of images by radiologists for diagnostic quality.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was done for all listed algorithms.
      • ACS: Evaluated quantitatively (SNR, Resolution, Contrast, Uniformity, Structure Measurement) and then qualitatively by radiologists. The quantitative metrics are standalone.
      • SparkCo: Quantitative metrics (Detection Accuracy, PSNR) and qualitative assessment by an experienced evaluator. The quantitative metrics are standalone.
      • Inline ED/ES Phases Recognition: Evaluated quantitatively as the error between algorithmic output and gold standard. This is a standalone performance metric.
      • Inline MOCO: Evaluated using the Dice coefficient, which is a standalone quantitative metric comparing algorithm output to ground truth.
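    For reference, the Dice coefficient used in the Inline MOCO evaluation is a standard overlap metric between a predicted segmentation mask and the ground-truth mask. A minimal sketch of how it is typically computed for binary masks (the toy masks below are illustrative, not data from the submission):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Sørensen-Dice overlap for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 "myocardium" masks (illustrative only)
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(gt, pred), 3))  # 0.909
```

    A Dice value of 1.0, as reported for the uOmnispace.MR comparison later in these results, means the two masks agree voxel for voxel.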

    7. The Type of Ground Truth Used

    • AI-assisted Compressed Sensing (ACS):
      • Quantitative: Fully-sampled k-space data transformed to image space.
      • Clinical: Radiologist evaluation ("American Board of Radiologists certificated physicians").
    • SparkCo:
      • Spark Detection Module: Location of spark points (ground truth for simulated data).
      • Spark Correction Module: Visual assessment by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: "Gold standard phase indices" (method for establishing this gold standard is not detailed, but implies expert-derived or a highly accurate reference).
    • Inline MOCO: Left ventricular myocardium segmentation annotated by a "well-trained annotator" and "evaluated by three licensed physicians with U.S. credentials." This is an expert-reviewed annotation (consensus-style) ground truth.

    8. The Sample Size for the Training Set

    • AI-assisted Compressed Sensing (ACS): 1,262,912 samples (from a variety of anatomies, image contrasts, and acceleration factors).
    • SparkCo: 24,866 spark slices (generated from 61 spark-free cases from 10 volunteers).
    • Inline ED/ES Phases Recognition: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."
    • Inline MOCO: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    • AI-assisted Compressed Sensing (ACS): Fully-sampled k-space data were collected and transformed to image space as the ground-truth. All data were manually quality controlled.
    • SparkCo: "The training dataset for the AI module in SparkCo was generated by simulating spark artifacts from spark-free raw data... a total of 24,866 spark slices, along with the corresponding ground truth (i.e., the location of spark points), were generated for training." This indicates a hybrid approach using real spark-free data to simulate and generate the ground truth for spark locations.
    • Inline ED/ES Phases Recognition: Not explicitly provided.
    • Inline MOCO: Not explicitly provided.
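    The submission states that SparkCo training ground truth was generated by simulating spark artifacts from spark-free raw data, without detailing the method. Spark (RF spike) artifacts are typically modeled as isolated high-amplitude outliers in k-space, so a hypothetical sketch might look like the following (`inject_spark` and all parameters are assumptions, not the vendor's implementation):

```python
import numpy as np

def inject_spark(kspace, n_points, amplitude, rng):
    """Hypothetical spark simulation: add high-amplitude spikes at random
    k-space locations. The returned coordinates would serve as the
    detection ground truth described in the submission."""
    corrupted = kspace.copy()
    rows = rng.integers(0, kspace.shape[0], size=n_points)
    cols = rng.integers(0, kspace.shape[1], size=n_points)
    corrupted[rows, cols] += amplitude
    return corrupted, list(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(0)
clean_k = np.fft.fft2(np.ones((32, 32)))        # toy "spark-free" k-space
spark_k, points = inject_spark(clean_k, n_points=3, amplitude=1e6, rng=rng)
print(len(points))  # 3
```

    Because the spike locations are chosen by the simulator, the ground truth comes for free, which matches the document's description of generating spark slices with known spark-point locations.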

    K Number
    K243122
    Device Name
    uMR Omega
    Date Cleared
    2025-05-21

    (233 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332, K234154

    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) is to request modifications for the cleared uMR Omega(K240540). The modifications performed on the uMR Omega in this submission are due to the following changes that include:

    1. Addition of RF coils and corresponding accessories: Breast Coil - 12, Biopsy Configuration, Head Coil - 16, Positioning Couch-top, Coil Support, Tx/Rx Head Coil.

    2. Modification of the mmw component name: from mmw100 to mmw101.

    3. Modification of the dimensions of detachable table: from width 826mm, height 880mm, length 2578mm to width 810mm, height 880mm, length 2505mm.

    4. Addition and modification of pulse sequences:

      • a) New sequences: gre_pass, gre_mtp, epi_dti_msh, gre_fsp_c(3D LGE).

      • b) Added Associated options for certain sequences: fse(MicroView), fse_mx(MicroView), gre(Output phase image), gre_swi(QSM),
        gre_fsp_c(DB/GB PSIR), gre_bssfp(TI Scout), gre_bssfp_ucs(Real Time Cine), epi_dwi(IVIM), epi_dti(DSI, DKI).

      • c) Added Additional accessory equipment required for certain sequences: gre_bssfp (Virtual ECG Trigger).

      • d) Added applicable body parts: epi_dwi_msh, gre_fine, fse_mx.

    5. Addition of imaging processing methods: Inline Cardiac function, Inline ECV, Inline MRS, Inline MOCO and MTP.

    6. Addition of workflow features: EasyFACT, TI Scout, EasyCrop, ImageGuard, MoCap and Breast Biopsy.

    7. Addition of image reconstruction methods: SparkCo.

    8. Modification of functions: uVision (adds Body Part Recognition), EasyScan (adds applicable body parts).

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided text describes modifications to an existing MR diagnostic device (uMR Omega) and performs non-clinical testing to demonstrate substantial equivalence to predicate devices. It specifically details the acceptance criteria and study results for two components: SparkCo (an AI algorithm for spark artifact correction) and Inline ECV (an image processing method for extracellular volume fraction calculation).

    Here's a breakdown of the requested information:


    Acceptance Criteria and Device Performance for uMR Omega

    1. Table of Acceptance Criteria and Reported Device Performance

    For SparkCo (Spark artifact Correction):

    | Test Part | Test Methods | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- | --- |
    | Spark detection accuracy | Based on the real-world testing dataset, calculate the detection accuracy by comparing the spark detection results with the ground truth. | The average detection accuracy needs to be larger than 90%. | The average detection accuracy is 94%. |
    | Spark correction performance | 1. Based on the simulated spark testing dataset, calculate the PSNR (peak signal-to-noise ratio) of the spark-corrected images versus the original spark images. 2. Based on the real-world spark dataset, have one experienced evaluator assess the image quality improvement between the spark-corrected images and the spark images. | 1. The average PSNR of the spark-corrected images needs to be higher than that of the spark images. 2. Spark artifacts need to be reduced or corrected after enabling SparkCo. | 1. The average PSNR of the spark-corrected images is 1.6 higher than that of the spark images. 2. The images with spark artifacts were successfully corrected after enabling SparkCo. |
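    PSNR, the quantitative metric in the SparkCo acceptance criteria, is conventionally computed as 10·log10(peak² / MSE) against a reference image. A minimal sketch (the spark outlier below is a toy illustration, not the vendor's simulation method):

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak**2 / MSE)."""
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    if peak is None:
        peak = ref.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: one high-amplitude "spark-like" outlier, then clipped away.
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
spark = clean.copy()
spark[10, 10] += 2000.0              # simulated outlier (illustrative)
corrected = np.clip(spark, 0, 255)   # stand-in for a correction step
print(psnr(clean, corrected) > psnr(clean, spark))  # True
```

    The acceptance criterion above is of exactly this form: PSNR of the corrected images must exceed PSNR of the uncorrected spark images.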

    For Inline ECV (Extracellular Volume Fraction):

    | Validation Type | Acceptance Criteria | Reported Device Performance (Summary from Subgroup Analysis) |
    | --- | --- | --- |
    | Passing rate | To verify the effectiveness of the algorithm, a subjective evaluation method was used: the segmentation result of each case was obtained with the algorithm, and the segmentation mask was graded against the criteria below. The test pass criteria were no Fail (F) cases and a satisfaction rate S/(S+A+F) exceeding 95%. | The segmentation algorithm performed as expected in the different subgroups. Total satisfaction rate (S): 100% for all monitored demographic and acquisition subgroups, i.e., no Fail (F) or Acceptable (A) cases were reported. |

    Grading criteria:

    • Satisfied (S): the segmented myocardial boundary adheres to the myocardial boundary, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
    • Acceptable (A): there are small, not obvious, missing or redundant areas in the myocardial segmentation, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
    • Fail (F): the myocardial mask does not adhere to the myocardial boundary, or the blood pool ROI is not within the blood pool, or the blood pool ROI contains papillary muscles.
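    The pass rule for Inline ECV, no Fail cases and a satisfaction rate S/(S+A+F) above 95%, can be expressed as a small helper (the case counts below are illustrative, not from the submission):

```python
def satisfaction_rate(s, a, f):
    """S / (S + A + F): fraction of Satisfied cases among all graded cases."""
    total = s + a + f
    if total == 0:
        raise ValueError("no graded cases")
    return s / total

def passes(s, a, f):
    """Pass criterion: no Fail cases and satisfaction rate above 95%."""
    return f == 0 and satisfaction_rate(s, a, f) > 0.95

print(passes(90, 0, 0))  # True  (matches the reported 100% Satisfied result)
print(passes(90, 5, 0))  # False (90/95 ~ 94.7% is below the threshold)
```

    Note that a single Fail case is disqualifying regardless of the rate, which is why the criterion lists "no failure cases" separately from the 95% threshold.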


    2. Sample Size Used for the Test Set and Data Provenance

    For SparkCo:

    • Test Set Sample Size:
      • Simulated Spark Testing Dataset: 159 spark slices.
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
    • Data Provenance:
      • Simulated Spark Testing Dataset: Generated by simulating spark artifacts from spark-free raw data (61 cases from 10 volunteers, various body parts and MRI sequences).
      • Real-world Spark Testing Dataset: Acquired using uMR 1.5T and uMR 3T scanners, covering representative clinical protocols (T1, T2, PD with/without fat saturation) from 15 patients. The ethnicity for this dataset is 100% Asian; the location is unspecified, but given the manufacturer's location (Shanghai, China), it is most likely China. The data appears to be retrospective, as existing patient data is referenced.

    For Inline ECV:

    • Test Set Sample Size: 90 images from 28 patients.
    • Data Provenance: The distribution table shows data from patients scanned at magnetic field strengths of 1.5T and 3T. Ethnicity is broken down into "Asia" (17 patients) and "USA" (11 patients), indicating a combined dataset potentially from multiple geographical locations; the data appears to be retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    For SparkCo:

    • Spark detection accuracy: The ground truth for spark detection accuracy was established by comparing to "ground-truth" spark locations, which were generated as part of the simulation process for the training data and likely also for evaluating the testing set during the simulation step. For the real-world dataset, the document mentions "comparing the spark detection results with the ground-truth" implying an existing ground truth, but doesn't specify how it was established or how many experts were involved.
    • Spark correction performance: "One experienced evaluator" was used for subjective evaluation of image quality improvement on the real-world spark dataset. No specific qualifications are provided for this evaluator beyond "experienced".

    For Inline ECV:

    • The document states, "The segmentation result of each case was obtained with the algorithm, and the segmentation mask was evaluated with the following criteria." It does not explicitly mention human experts establishing a distinct "ground truth" for each segmentation mask for the purpose of the acceptance criteria. Instead, the evaluation seems to be a subjective assessment against predefined criteria. No number of experts or qualifications are provided.

    4. Adjudication Method for the Test Set

    For SparkCo:

    • For spark detection accuracy, the comparison was against a presumed inherent "ground-truth" (likely derived from the simulation process).
    • For spark correction performance, a single "experienced evaluator" made the subjective assessment, implying no adjudication method (e.g., 2+1, 3+1) was explicitly used among multiple experts.

    For Inline ECV:

    • The evaluation was a "subjective evaluation method" against specific criteria. No information about multiple evaluators or an adjudication method is provided. It implies a single evaluator or an internal consensus without formal adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly mentioned for either SparkCo or Inline ECV. The studies were focused on the standalone performance of the algorithms.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Yes, for both SparkCo and Inline ECV, the studies described are standalone algorithm performance evaluations.
      • SparkCo focused on the algorithm's ability to detect and correct spark artifacts (objective metrics like PSNR and subjective assessment by one evaluator).
      • Inline ECV focused on the algorithm's segmentation accuracy (subjective evaluation of segmentation masks against criteria).

    7. The Type of Ground Truth Used

    For SparkCo:

    • Spark detection accuracy: Ground truth was generated by simulating spark artifacts from spark-free raw data, implying a simulated/synthetic ground truth for training and a comparison against this for testing. For real-world data, the "ground-truth" for detection is implied but not explicitly detailed how it was established.
    • Spark correction performance: For PSNR, the "ground truth" for comparison is the original spark images. For subjective evaluation, it's against the "spark images" and the expectation of correction, suggesting human expert judgment (by one evaluator) rather than a pre-established clinical ground truth for each case.

    For Inline ECV:

    • The ground truth for Inline ECV appears to be a subjective expert assessment (though the number of experts is not specified) of the algorithm's automatically generated segmentation masks against predefined "Satisfied," "Acceptable," and "Fail" criteria. It is not an independent, pre-established ground truth like pathology or outcomes data.

    8. The Sample Size for the Training Set

    For SparkCo:

    • Training dataset for the AI module: 61 cases from 10 volunteers. From this, a total of 24,866 spark slices along with corresponding "ground truth" (location of spark points) were generated for training.

    For Inline ECV:

    • The document states, "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, the sample size for the training set itself is not explicitly provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    For SparkCo:

    • The ground truth for the SparkCo training set was established by simulating spark artifacts from spark-free raw data. This simulation process directly provided the "location of spark points" as the ground truth.

    For Inline ECV:

    • The document mentions that the training data is independent of the test data, but it does not describe how the ground truth for the training set of the Inline ECV algorithm was established.

    K Number
    K234154
    Device Name
    uPMR 790
    Date Cleared
    2024-05-24

    (147 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332, K230152, K210001, K193241

    Intended Use

    The uPMR 790 system combines magnetic resonance diagnostic devices (MRDD) and Positron Emission Tomography (PET) scanners that provide registration and fusion of high resolution physiologic and anatomic information, acquired simultaneously and iso-centrically. The combined system maintains independent functionality of the MR and PET devices, allowing for single modality MR and/or PET imaging. The MR is intended to produce sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities. Contrast agents may be used depending on the reqion of interest of the scan. The PET provides distribution information of PET radiopharmaceuticals within the human body to assist healthcare providers in assessing the metabolic and physiological functions. The combined system utilizes the MR for radiation-free attenuation correction maps for PET studies. The system provides inherent anatomical reference for the fused PET and MR images due to precisely aligned MR and PET image coordinate systems.

    Device Description

    The uPMR 790 system is a combined Magnetic Resonance Diagnostic Device (MRDD) and Positron Emission Tomography (PET) scanner. It consists of components such as PET detector, 3.0T superconducting magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, vital signal module, and software etc.

    The uPMR 790 system provides simultaneous acquisition of high resolution metabolic and anatomic information from PET and MR. PET detectors are integrated into the MR bore for simultaneous, precisely aligned whole body MR and PET acquisition. The PET subsystem supports Time of Flight (ToF). The system software is used for patient management, data management, scan control, image reconstruction, and image archive. The uPMR 790 system is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) is to request modifications for the cleared uPMR 790(K222540). The modifications performed on the uPMR 790 (K222540) in this submission are due to the following changes that include:

    • (1) Addition of RF coils: SuperFlex Body 24, SuperFlex Large -12, SuperFlex Small -12.
    • (2) Addition and modification of pulse sequences:
      • (a) New sequences: gre_fine, fse_arms_dwi, fse_dwi, fse_mars_sle, grase, gre_bssfp_ucs, gre_fq, gre_pass, gre_quick_4dncemra, gre_snap, gre_trass, gre_rufis, epi_dwi_msh, svs_wfs, svs_stme.
      • (b) Added Associated options for certain sequences: QScan, MultiBand, Silicon-Only Imaging, MoCap-Monitoring, T1rho, CEST, Inline T2 mapping, CASS, inline FACT, uCSR, FSP+, whole heart coronary angiography imaging, mPLD (Only output original control/labeling images and PDw(Proton Density weighted) images, no quantification images are output).
      • (c) Name change of certain sequences: gre_ute (old name: gre_ute_sp), svs_press (old name: press), svs_steam (old name: steam), csi_press (old name: press), csi_hise (old name: hise).
    • (3) Addition of MR imaging processing methods: 2D Flow, 4D Flow, SNAP, CEST, T1rho, FSP+, CASS, PASS, Inline T2 Mapping and DeepRecon.
    • (4) Addition and modification of PET imaging processing methods:
      • (a) The new PET imaging processing methods: Hyper DPR (also named HYPER AiR) and Digital Gating (also named Self Gating).
      • (b) The modified method: HYPER Iterative.
    • (5) Addition of MR image reconstruction methods: AI-assisted Compressed Sensing (ACS).
    • (6) Addition and modification of workflow features:
      • (a) The new workflow features: EasyCrop, MoCap-Monitoring and QGuard-Imaging.
      • (b) The modified workflow feature: EasyScan.
    • (7) Addition Spectroscopy: Liver Spectroscopy, Breast Spectroscopy.
    • (8) Additional function: MR conditional implant mode.
    AI/ML Overview

    The provided text does not contain detailed acceptance criteria for the uPMR 790 device in the format of a table, nor does it describe a specific study proving the device meets these criteria in a comparative effectiveness study or standalone performance study as would typically be presented for an AI/ML medical device.

    The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report with specific performance metrics against acceptance criteria.

    However, based on the information available, I can extract and infer some aspects related to acceptance criteria and the performance study:

    Inferred Acceptance Criteria and Reported Device Performance (based on provided text):

    The device is an integrated MR-PET system. The modifications primarily involve new RF coils, pulse sequences, imaging processing methods, and workflow features. The performance data section describes non-clinical testing to verify that the proposed device met design specifications and is Substantially Equivalent (SE) to the predicate device.

    While explicit quantitative acceptance criteria are not tabulated, the text implies that the performance of the modified device (uPMR 790) must be at least equivalent to, or better than, the predicate and reference devices regarding image quality and functionality.

    Specifically for the new or modified features related to AI/ML (DeepRecon and ACS), the implicit acceptance criteria appear to be:

    • DeepRecon:
      • Equivalence in performance to DeepRecon on the uMR Omega.
      • Better performance than NADR (No DeepRecon) in SNR and resolution.
      • Maintenance of image qualities (contrast, uniformity).
      • Significantly same structural measurements between DeepRecon and NADR images.
    • ACS:
      • Equivalence in performance to ACS on the uMR Omega (K220332).
      • Better performance than CS in SNR and resolution.
      • Maintenance of image qualities (contrast, uniformity) compared to fully sampled data (golden standard).
      • Significantly same structural measurements between ACS and fully sampled images.

    Table of Inferred Acceptance Criteria and Reported Device Performance:

    | Feature/Metric | Acceptance Criteria (Inferred) | Reported Device Performance |
    | --- | --- | --- |
    | Overall Device | Substantial Equivalence (SE) to predicate device (K222540) in performance, safety, and effectiveness. | Found to have a safety and effectiveness profile similar to the predicate device. |
    | Image Performance | Meet all design specifications; generate diagnostic quality images. | Diagnostic quality images in accordance with MR guidance. |
    | DeepRecon (general) | Equivalent to DeepRecon on uMR Omega. | Performs equivalently to DeepRecon on uMR Omega. |
    | DeepRecon (SNR/Resolution) | Better than NADR. | Performs better than NADR. |
    | DeepRecon (Quality) | Maintain image qualities (contrast, uniformity). | Maintained image qualities (contrast, uniformity). |
    | DeepRecon (Structures) | Significantly same structural measurements as NADR. | Significantly same structural measurements as NADR. |
    | ACS (general) | Equivalent to ACS on uMR Omega (K220332). | Performs equivalently to ACS on uMR Omega. |
    | ACS (SNR/Resolution) | Better than CS. | Performs better than CS. |
    | ACS (Quality) | Maintain image qualities (contrast, uniformity) compared to fully sampled data. | Maintained image qualities (contrast, uniformity) compared to fully sampled data. |
    | ACS (Structures) | Significantly same structural measurements as fully sampled data. | Significantly same structural measurements as fully sampled images. |
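    The submission does not state how the SNR comparisons (DeepRecon vs. NADR, ACS vs. CS) were measured. One common bench approach is an ROI-based estimate, mean tissue signal divided by the standard deviation of a background noise region; the phantom and all names below are assumptions, not the vendor's protocol:

```python
import numpy as np

def roi_snr(image, signal_roi, noise_roi):
    """ROI-based SNR estimate: mean signal in a tissue ROI divided by the
    standard deviation of a background (air) ROI. One common bench method;
    NEMA MS 1 instead prescribes a two-acquisition subtraction approach."""
    return image[signal_roi].mean() / image[noise_roi].std(ddof=1)

# Synthetic phantom: background noise sigma = 2, "tissue" mean signal = 100.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 2.0, size=(64, 64))
img[16:48, 16:48] += 100.0

signal_mask = np.zeros((64, 64), dtype=bool)
signal_mask[20:44, 20:44] = True            # well inside the tissue square
noise_mask = np.zeros((64, 64), dtype=bool)
noise_mask[:16, :16] = True                 # background-only corner

print(roi_snr(img, signal_mask, noise_mask))  # roughly 50 (100 / 2)
```

    Under such a scheme, "better SNR than CS/NADR" simply means this ratio is higher for the AI-reconstructed image at matched acquisition settings.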

    Breakdown of the Study as described in the 510(k) Summary:

    2. Sample size used for the test set and the data provenance:

    • DeepRecon:

      • "The testing dataset for performance testing was collected independently from the training dataset, with separated subjects and during different time periods."
      • The exact sample size (number of subjects/cases) for the DeepRecon test set is not specified beyond being "independent."
      • Data Provenance: Implied to be from UIH MRI systems, likely from clinical or volunteer scans. No specific country of origin or retrospective/prospective nature is stated for the test datasets, but training data was "collected from 264 volunteers" and "165,837 cases" using "UIH MRI systems," which suggests internal company data, likely from China where the company is based. The testing data is independently collected.
    • ACS:

      • "The training and test datasets are collected from 35 volunteers, including 24 males and 11 females, ages ranging from 18 to 60. The samples from these volunteers are distributed randomly into training and test datasets."
      • "The validation dataset is collected from 15 volunteers, including 10 males and 5 females, whose ages range from 18 to 60."
      • It specifies "35 volunteers" for training+test and "15 volunteers" for validation. The text states "testing dataset for performance testing was collected independently from the training dataset," which contradicts the "distributed randomly into training and test datasets" statement for the 35 volunteers. This requires clarification, but assuming the 35 volunteers contributed to both, the total number used for testing is not explicitly broken out from the 35. The "validation dataset" of 15 volunteers seems to be an additional independent test set.
      • Data Provenance: Implied to be from UIH MRI systems. No specific country of origin or retrospective/prospective nature is stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Expert Review: "Sample clinical images for all clinical sequences and coils were reviewed by U.S. board-certified radiologist comparing the proposed device and predicate device."
      • Number of experts: Not specified, only "radiologist" (singular or plural not clear).
      • Qualifications: "U.S. board-certified radiologist." No mention of years of experience.
    • Quantitative/Objective Ground Truth: For DeepRecon and ACS, ground truth was not established by experts but rather by specific technical methods:
      • DeepRecon: "multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images."
      • ACS: "Fully-sampled k-space data were collected and transformed to image space as the ground-truth."
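    The "fully-sampled k-space data transformed to image space" ground truth corresponds, for a single-coil Cartesian acquisition, to a centered inverse 2D FFT. A minimal sketch under that assumption (real scanner reconstructions add coil combination, filtering, and more):

```python
import numpy as np

def kspace_to_image(kspace):
    """Reconstruct a magnitude image from fully sampled Cartesian k-space
    via a centered inverse 2D FFT (single-coil sketch only)."""
    img = np.fft.ifft2(np.fft.ifftshift(kspace))
    return np.abs(np.fft.fftshift(img))

# Round-trip check on a synthetic image: FFT to k-space, then back.
phantom = np.zeros((32, 32))
phantom[8:24, 8:24] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = kspace_to_image(kspace)
print(np.allclose(recon, phantom, atol=1e-10))  # True: round trip recovers it
```

    This is why fully-sampled data can serve as an objective reference: the transform is deterministic, so the only open question is how well the accelerated (undersampled) reconstruction matches it.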

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The document implies a technical assessment for AI performance (SNR, resolution, structural measurements). For the "U.S. board-certified radiologist" review, no specific adjudication method (e.g., 2+1 consensus) is mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No MRMC comparative effectiveness study involving human readers and AI assistance is described. The performance evaluation focuses on the technical imaging characteristics and comparison to the predicate device or baseline (NADR/CS). The "U.S. board-certified radiologist" review seems to be a qualitative assessment of diagnostic image quality rather than a structured MRMC study with quantitative outcomes.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, the performance tests for DeepRecon and ACS are described as standalone evaluations of the algorithms' effects on image quality (SNR, resolution, contrast, uniformity, structural measurements) by comparing them to NA (No Algorithm) or baseline (CS) methods.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • DeepRecon: "multiple-averaged images with high-resolution and high SNR" (objective, technical ground truth representing optimal image quality).
    • ACS: "Fully-sampled k-space data" (objective, technical ground truth representing complete data).
    • For the qualitative review by the radiologist, the "diagnostic quality images" from the predicate device implicitly served as a reference or ground truth for comparison.

    8. The sample size for the training set:

    • DeepRecon: "264 volunteers" resulting in "165,837 cases."
    • ACS: "35 volunteers" (randomly distributed into training and test datasets). The exact split for training is not specified but is part of this 35.

    9. How the ground truth for the training set was established:

    • DeepRecon: "the multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." "All data were manually quality controlled before included for training."
    • ACS: "Fully-sampled k-space data were collected and transformed to image space as the ground-truth." "All data were manually quality controlled before included for training."

    In summary, the provided document focuses on demonstrating technical equivalence and improved image characteristics for the AI components (DeepRecon, ACS) through non-clinical testing against technically derived ground truths, rather than a clinical multi-reader study with expert consensus ground truth or outcomes data. The human reader involvement seems to be a qualitative review of diagnostic image quality rather than a formal MRMC study.


    K Number
    K233186
    Device Name
    uOmnispace.MR
    Date Cleared
    2024-04-17

    (202 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332, K141480, K230152, K113456

    Intended Use

    uOmnispace.MR is a software solution intended to be used for viewing, manipulating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The uOmnispace.MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The uOmnispace.MR Dynamic application is intended to provide a general postprocessing tool for time course studies.

    The uOmnispace.MR MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The uOmnispace.MR MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The uOmnispace.MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.

    The uOmnispace.MR Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.

    The uOmnispace.MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.

    The uOmnispace.MR DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.

    The uOmnispace.MR United Neuro is intended to view and manipulate MR neurological images.

    The uOmnispace.MR Cardiac Function is intended for viewing and functional analysis of cardiac MR images.

    The uOmnispace.MR Flow Analysis is intended for viewing and flow analysis of MR flow images.

    Device Description

    The uOmnispace.MR is post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating, and analyzing MR images. It can run alone or with other commercially cleared advanced applications.

    This proposed device contains the following applications:

    • uOmnispace.MR Stitching
    • uOmnispace.MR Dynamic
    • uOmnispace.MR MRS
    • uOmnispace.MR MAPs
    • uOmnispace.MR Breast Evaluation
    • uOmnispace.MR Brain Perfusion
    • uOmnispace.MR Vessel Analysis
    • uOmnispace.MR DCE Analysis
    • uOmnispace.MR United Neuro
    • uOmnispace.MR Cardiac Analysis
    • uOmnispace.MR Flow Analysis
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Validation Type | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Dice | The proposed device's automatic ventricular segmentation results were compared with those of the predicate device's cardiac function application. The Sørensen-Dice coefficient is used to evaluate consistency; Dice > 0.95 is considered consistent between the two devices. | 1.00 |
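    The Sørensen-Dice coefficient used as the acceptance criterion above is twice the overlap of two binary masks divided by their combined size. A minimal sketch (array shapes and masks are illustrative, not from the submission):

    ```python
    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Sørensen-Dice coefficient between two binary segmentation masks."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        total = a.sum() + b.sum()
        if total == 0:
            # Both masks empty: treat as perfect agreement.
            return 1.0
        return 2.0 * np.logical_and(a, b).sum() / total

    # Toy 4x4 "ventricle" masks: identical masks give Dice = 1.0,
    # which is the value reported for the device comparison.
    a = np.zeros((4, 4), dtype=bool)
    a[1:3, 1:3] = True
    b = a.copy()
    print(dice_coefficient(a, b))  # 1.0
    ```

    A Dice of 1.00 against the predicate therefore means the two applications produced pixel-identical contours on the test set, comfortably above the 0.95 threshold.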

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 114 samples from 114 different patients.
    • Data Provenance: gender — 35 male, 20 female, 59 unknown; age — 5 aged 14-25, 12 aged 25-40, 22 aged 40-60, 13 aged 60-79, 62 unknown; region — 50 Europe, 53 Asia, 11 USA; scanner manufacturer — UIH (58), GE (2), Philips (2), Siemens (52); field strength — 1.5T (23), 3.0T (41), unknown (50). The text does not explicitly state whether the data was retrospective or prospective, but the statement that performance testing of the deep learning-based automatic ventricular segmentation algorithm "was performed on 114 subjects ... during the product development" implies a retrospective study using existing data to validate the developed algorithm.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The test set's ground truth was established by comparing the proposed device's results with those of the predicate device. The text does not explicitly state that human experts established the ground truth for the test set by manually segmenting the images for direct comparison against the algorithm's output. Instead, it seems the predicate device's output serves as the "ground truth" for the comparison of the new device's algorithm.

    However, for the training ground truth, the following was stated:

    • Number of Experts: Two cardiologists.
    • Qualifications: Both cardiologists had "more than 10 years of experience each."

    4. Adjudication Method for the Test Set

    The study does not describe an adjudication method for the test set in the conventional sense of multiple human readers independently assessing the cases. Instead, the comparison is made between the proposed device's algorithm output and the predicate device's output.

    For the training ground truth, the following adjudication method was used:

    • Manual tracing was performed by an experienced user.
    • Validation of these contours was done by two independent experts (each with more than 10 years of experience).
    • If there was a disagreement, a consensus between the experts was reached.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    No MRMC comparative effectiveness study was done to assess how much human readers improve with AI vs without AI assistance. The study focuses on comparing the proposed device's algorithm performance directly against a predicate device's cardiac function application based on the Dice coefficient.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance study was done for the "deep learning-based Automatic ventricular segmentation Algorithm" for the LV&RV Contour Segmentation feature. The device's algorithm output was directly compared to the output of the predicate device's cardiac function application using the Dice coefficient.

    7. The Type of Ground Truth Used

    For the test set, the "ground truth" for comparison was the output of the cardiac function application of the predicate device.

    For the training set, the ground truth was expert consensus based on manual tracing by an experienced user and validated by two independent cardiologists with over 10 years of experience.

    8. The Sample Size for the Training Set

    The document states: "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, it does not provide the specific sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established through manual annotation and expert consensus:

    • It was "manually drawn on short axis slices in diastole and systole by two cardiologists with more than 10 years of experience each."
    • "Manual tracing of the cardiac was performed by an experienced user."
    • "The validation of these contours was done by two independent expert (more than 10 years) in this domain."
    • "If there is a disagreement, a consensus between the experts was done."

    K Number
    K230152
    Device Name
    uMR Omega
    Date Cleared
    2023-05-23

    (124 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K220332

    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    This document is a 510(k) premarket notification for the uMR Omega Magnetic Resonance Diagnostic Device. It outlines modifications to a previously cleared device (K220332) and claims substantial equivalence to that predicate device. The information provided heavily focuses on technical characteristics and safety standards rather than detailed clinical performance studies with specific acceptance criteria related to diagnostic accuracy.

    Therefore, many of the requested details, such as specific acceptance criteria for diagnostic performance, exact device performance metrics against those criteria, details of a test set for diagnostic accuracy (sample size, provenance, expert qualifications, adjudication method), human-in-the-loop studies (MRMC), or a standalone performance study in the context of diagnostic accuracy, are not explicitly present in the provided text.

    The document primarily discusses technical specifications, safety, and the additions/modifications to the device. The "Performance Data" section mentions "Clinical performance evaluation" and "Performance evaluation report" for various sequences and imaging processing methods (4D Flow, MRE, CEST, T1rho, mPLD ASL, silica gel imaging). However, it does not provide the acceptance criteria for these evaluations or the results against such criteria in terms of diagnostic accuracy or clinical utility metrics. Instead, it concludes generally that "The test results demonstrated that the device performs as expected."

    Based on the provided text, here's what can be extracted and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state acceptance criteria for diagnostic performance in terms of sensitivity, specificity, accuracy, etc. for any specific medical condition. The reported performance is general compliance with technical standards and the device performing "as expected."

    | Acceptance Criteria Category (Implied/General) | Stated Performance (General) | Specific Value/Threshold (If Available) |
    | --- | --- | --- |
    | Electrical safety | Complies with ES 60601-1 | ES 60601-1 |
    | EMC (electromagnetic compatibility) | Complies with IEC 60601-1-2 | IEC 60601-1-2 |
    | SAR (specific absorption rate) | Complies with IEC 60601-2-33 | IEC 60601-2-33 |
    | dB/dt (time rate of change of magnetic field) | Complies with IEC 60601-2-33 | IEC 60601-2-33 |
    | Biocompatibility | Tested; demonstrated no cytotoxicity, irritation, or sensitization | ISO 10993-5, ISO 10993-10 (results imply compliance) |
    | Surface heating | Complies with NEMA MS 14 | NEMA MS 14 |
    | SNR (signal-to-noise ratio) | Compliance with standards acknowledged | NEMA MS 1-2008 (R2020), MS 6-2008 (R2014), MS 9-2008 (R2020) (no specific values reported) |
    | Image uniformity | Compliance with standards acknowledged | NEMA MS 3-2008 (R2020) (no specific values reported) |
    | Positioning error (with uVision) | ≤ ±5 cm | ≤ ±5 cm |
    | Overall device performance | Performs as expected; substantially equivalent to predicate | General statement; no specific diagnostic accuracy or clinical utility metrics provided |
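    The SNR compliance cited above refers to NEMA MS 1, which is commonly measured with the difference-image method: acquire the same phantom twice, take the mean signal in a region of interest, and estimate noise from the subtraction of the two images. A minimal sketch under those assumptions (function name, phantom values, and ROI are illustrative, not from the submission):

    ```python
    import numpy as np

    def difference_image_snr(img1: np.ndarray, img2: np.ndarray,
                             roi: np.ndarray) -> float:
        """SNR estimate from two back-to-back phantom acquisitions.

        `roi` is a boolean mask selecting the uniform measurement region.
        """
        signal = 0.5 * (img1[roi].mean() + img2[roi].mean())
        # Noise std. dev. from the subtraction image; sqrt(2) accounts for
        # noise adding in quadrature across the two acquisitions.
        noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2.0)
        return signal / noise

    # Simulated uniform phantom (signal 100, noise sigma 2 -> SNR near 50).
    rng = np.random.default_rng(0)
    phantom = np.full((64, 64), 100.0)
    img1 = phantom + rng.normal(0.0, 2.0, phantom.shape)
    img2 = phantom + rng.normal(0.0, 2.0, phantom.shape)
    roi = np.zeros(phantom.shape, dtype=bool)
    roi[16:48, 16:48] = True
    print(difference_image_snr(img1, img2, roi))
    ```

    The clearance letter acknowledges compliance with these standards but, as the table notes, reports no measured SNR values.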

    2. Sample size used for the test set and the data provenance

    Not explicitly stated for diagnostic performance evaluations. The "Clinical performance evaluation" and "Performance evaluation report" are mentioned, but details on the patient cohort (sample size, retrospective/prospective, country of origin) are missing. These mentions are likely referring to technical performance characteristics rather than clinical diagnostic accuracy.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not explicitly stated. Given the nature of the submission (510(k) for a device with modifications, primarily focusing on technical specifications and safety standards), a detailed ground truth establishment process for diagnostic accuracy studies is not commonly part of this type of documentation unless new clinical claims or algorithms affecting diagnostic interpretation are being introduced and require such validation. The document states that images are "interpreted by a trained physician," but this is a general statement about usage, not about expert panel for ground truth.


    4. Adjudication method for the test set

    Not explicitly stated.


    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No indication of an MRMC study. The document describes a Magnetic Resonance Diagnostic Device (MRI machine itself) and its embedded imaging processing methods, not an AI-assisted diagnostic tool that would typically undergo such a study.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No indication of a standalone diagnostic algorithm performance study. The listed "imaging processing methods" (4D Flow Quantification, MRE, SNAP, CEST, T1Rho, FSP+) are features of the MRI system, and their performance is implied to be evaluated as part of the overall system's technical function and image quality, not as standalone diagnostic algorithms with their own "ground truth" performance metrics.


    7. The type of ground truth used

    Not explicitly stated for diagnostic accuracy. For the technical performance aspects, the "ground truth" would be measurements against established physical standards and phantom data to ensure image quality, signal integrity, and safety parameters meet specifications.


    8. The sample size for the training set

    Not applicable/Not mentioned. This document describes an MRI machine, not a machine learning or AI algorithm that would typically have a separate training set. The "imaging processing methods" are embedded features or techniques, not typically AI models trained on large datasets in the way common AI diagnostics are.


    9. How the ground truth for the training set was established

    Not applicable/Not mentioned. (See point 8).

