
510(k) Data Aggregation

    K Number
    K243122
    Device Name
    uMR Omega
    Date Cleared
    2025-05-21

    (233 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and spectroscopic images, and that displays internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75 cm patient bore. It consists of components including a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) requests modifications to the previously cleared uMR Omega (K240540). The modifications to the uMR Omega in this submission comprise the following changes:

    1. Addition of RF coils and corresponding accessories: Breast Coil - 12, Biopsy Configuration, Head Coil - 16, Positioning Couch-top, Coil Support, Tx/Rx Head Coil.

    2. Modification of the mmw component name: from mmw100 to mmw101.

    3. Modification of the dimensions of detachable table: from width 826mm, height 880mm, length 2578mm to width 810mm, height 880mm, length 2505mm.

    4. Addition and modification of pulse sequences:

      • a) New sequences: gre_pass, gre_mtp, epi_dti_msh, gre_fsp_c(3D LGE).

      • b) Added associated options for certain sequences: fse(MicroView), fse_mx(MicroView), gre(Output phase image), gre_swi(QSM),
        gre_fsp_c(DB/GB PSIR), gre_bssfp(TI Scout), gre_bssfp_ucs(Real Time Cine), epi_dwi(IVIM), epi_dti(DSI, DKI).

      • c) Added additional accessory equipment required for certain sequences: gre_bssfp (Virtual ECG Trigger).

      • d) Added applicable body parts: epi_dwi_msh, gre_fine, fse_mx.

    5. Addition of imaging processing methods: Inline Cardiac function, Inline ECV, Inline MRS, Inline MOCO and MTP.

    6. Addition of workflow features: EasyFACT, TI Scout, EasyCrop, ImageGuard, MoCap and Breast Biopsy.

    7. Addition of image reconstruction methods: SparkCo.

    8. Modification of functions: uVision (adds Body Part Recognition), EasyScan (adds applicable body parts).

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided text describes modifications to an existing MR diagnostic device (uMR Omega) and performs non-clinical testing to demonstrate substantial equivalence to predicate devices. It specifically details the acceptance criteria and study results for two components: SparkCo (an AI algorithm for spark artifact correction) and Inline ECV (an image processing method for extracellular volume fraction calculation).

    Here's a breakdown of the requested information:


    Acceptance Criteria and Device Performance for uMR Omega

    1. Table of Acceptance Criteria and Reported Device Performance

    For SparkCo (Spark artifact Correction):

    Test Part: Spark detection accuracy
    • Test Methods: Based on the real-world testing dataset, the detection accuracy is calculated by comparing the spark detection results with the ground truth.
    • Acceptance Criteria: The average detection accuracy must be greater than 90%.
    • Reported Device Performance: The average detection accuracy is 94%.

    Test Part: Spark correction performance
    • Test Methods: (1) Based on the simulated spark testing dataset, the PSNR (peak signal-to-noise ratio) of the spark-corrected images and the original spark images is calculated. (2) Based on the real-world spark dataset, the image-quality improvement between the spark-corrected images and the spark images is evaluated by one experienced evaluator.
    • Acceptance Criteria: (1) The average PSNR of the spark-corrected images must be higher than that of the spark images. (2) Spark artifacts must be reduced or corrected after enabling SparkCo.
    • Reported Device Performance: (1) The average PSNR of the spark-corrected images is 1.6 higher than that of the spark images. (2) Images with spark artifacts were successfully corrected after enabling SparkCo.
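The PSNR criterion above can be made concrete with the standard definition of peak signal-to-noise ratio. The submission does not state its exact formula or pixel data range, so the following is a minimal sketch assuming the usual mean-squared-error form:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio of `test` against `reference`, in dB.

    `data_range` (the maximum possible pixel value) is an assumption here;
    the submission does not say which range was used.
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Under this definition, criterion 1 is met when the average PSNR computed over the spark-corrected images exceeds the average computed over the uncorrected spark images.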

    For Inline ECV (Extracellular Volume Fraction):

    Validation Type: Passing rate
    • Acceptance Criteria: To verify the effectiveness of the algorithm, a subjective evaluation method was used. The segmentation result of each case was obtained with the algorithm, and the segmentation mask was evaluated against the criteria below. The test pass criteria were: no failure cases, and a satisfaction rate S/(S+A+F) exceeding 95%.
      • Satisfied (S): the segmented myocardial boundary adheres to the myocardial boundary, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
      • Acceptable (A): there are small missing or redundant areas in the myocardial segmentation, but not obvious ones, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
      • Fail (F): the myocardial mask does not adhere to the myocardial boundary, or the blood pool ROI is not within the blood pool, or the blood pool ROI contains papillary muscles.
    • Reported Device Performance (summary from subgroup analysis): The segmentation algorithm performed as expected in the different subgroups. Total satisfaction rate (S) was 100% for all monitored demographic and acquisition subgroups, meaning no Fail (F) or Acceptable (A) cases were reported.
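The pass rule above (no Fail cases and S/(S+A+F) exceeding 95%) is simple to encode; a sketch, with "S"/"A"/"F" as assumed grade labels:

```python
from collections import Counter

def ecv_passing(grades):
    """Return (satisfaction rate S/(S+A+F), pass decision) per the criteria
    above: pass requires zero Fail cases and a rate exceeding 95%."""
    counts = Counter(grades)
    s, a, f = counts["S"], counts["A"], counts["F"]
    rate = s / (s + a + f)
    return rate, (f == 0 and rate > 0.95)
```

The reported result (every test case graded Satisfied) corresponds to a rate of 1.0 and a pass.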

    2. Sample Size Used for the Test Set and Data Provenance

    For SparkCo:

    • Test Set Sample Size:
      • Simulated Spark Testing Dataset: 159 spark slices.
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
    • Data Provenance:
      • Simulated Spark Testing Dataset: Generated by simulating spark artifacts from spark-free raw data (61 cases from 10 volunteers, various body parts and MRI sequences).
      • Real-world Spark Testing Dataset: Acquired using uMR 1.5T and uMR 3T scanners, covering representative clinical protocols (T1, T2, PD with/without fat saturation) from 15 patients. The ethnicity for this dataset is 100% Asian; the location is unspecified, but given the manufacturer's location (Shanghai, China), it is highly likely to be China. The data appear to be retrospective, as patient data are mentioned.

    For Inline ECV:

    • Test Set Sample Size: 90 images from 28 patients.
    • Data Provenance: The distribution table shows data from patients scanned at magnetic field strengths of 1.5T and 3T. Ethnicity is broken down into "Asia" (17 patients) and "USA" (11 patients). This indicates a combined dataset potentially from multiple geographical locations, and appears to be retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    For SparkCo:

    • Spark detection accuracy: The ground truth for spark detection accuracy was established by comparing to "ground-truth" spark locations, which were generated as part of the simulation process for the training data and likely also for evaluating the testing set during the simulation step. For the real-world dataset, the document mentions "comparing the spark detection results with the ground-truth" implying an existing ground truth, but doesn't specify how it was established or how many experts were involved.
    • Spark correction performance: "One experienced evaluator" was used for subjective evaluation of image quality improvement on the real-world spark dataset. No specific qualifications are provided for this evaluator beyond "experienced".

    For Inline ECV:

    • The document states, "The segmentation result of each case was obtained with the algorithm, and the segmentation mask was evaluated with the following criteria." It does not explicitly mention human experts establishing a distinct "ground truth" for each segmentation mask for the purpose of the acceptance criteria. Instead, the evaluation seems to be a subjective assessment against predefined criteria. No number of experts or qualifications are provided.

    4. Adjudication Method for the Test Set

    For SparkCo:

    • For spark detection accuracy, the comparison was against a presumed inherent "ground-truth" (likely derived from the simulation process).
    • For spark correction performance, a single "experienced evaluator" made the subjective assessment, implying no adjudication method (e.g., 2+1, 3+1) was explicitly used among multiple experts.

    For Inline ECV:

    • The evaluation was a "subjective evaluation method" against specific criteria. No information about multiple evaluators or an adjudication method is provided. It implies a single evaluator or an internal consensus without formal adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly mentioned for either SparkCo or Inline ECV. The studies were focused on the standalone performance of the algorithms.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Yes, for both SparkCo and Inline ECV, the studies described are standalone algorithm performance evaluations.
      • SparkCo focused on the algorithm's ability to detect and correct spark artifacts (objective metrics like PSNR and subjective assessment by one evaluator).
      • Inline ECV focused on the algorithm's segmentation accuracy (subjective evaluation of segmentation masks against criteria).

    7. The Type of Ground Truth Used

    For SparkCo:

    • Spark detection accuracy: Ground truth was generated by simulating spark artifacts from spark-free raw data, implying a simulated/synthetic ground truth for training and a comparison against this for testing. For real-world data, the "ground-truth" for detection is implied but not explicitly detailed how it was established.
    • Spark correction performance: For PSNR, the "ground truth" for comparison is the original spark images. For subjective evaluation, it's against the "spark images" and the expectation of correction, suggesting human expert judgment (by one evaluator) rather than a pre-established clinical ground truth for each case.

    For Inline ECV:

    • The ground truth for Inline ECV appears to be a subjective expert assessment (though the number of experts is not specified) of the algorithm's automatically generated segmentation masks against predefined "Satisfied," "Acceptable," and "Fail" criteria. It is not an independent, pre-established ground truth like pathology or outcomes data.

    8. The Sample Size for the Training Set

    For SparkCo:

    • Training dataset for the AI module: 61 cases from 10 volunteers. From this, a total of 24,866 spark slices along with corresponding "ground truth" (location of spark points) were generated for training.

    For Inline ECV:

    • The document states, "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, the sample size for the training set itself is not explicitly provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    For SparkCo:

    • The ground truth for the SparkCo training set was established by simulating spark artifacts from spark-free raw data. This simulation process directly provided the "location of spark points" as the ground truth.

    For Inline ECV:

    • The document mentions that the training data is independent of the test data, but it does not describe how the ground truth for the training set of the Inline ECV algorithm was established.

    K Number
    K240540
    Device Name
    uMR Omega
    Date Cleared
    2024-03-22

    (25 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and spectroscopic images, and that displays internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75 cm patient bore. It consists of components including a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    uMR Omega has been previously cleared by FDA via K230152. The modification performed on uMR Omega (K230152) in this submission is due to the addition of Breast Coil -24.

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided text describes the regulatory clearance of the uMR Omega magnetic resonance diagnostic device, specifically focusing on the addition of a new Breast Coil - 24. The submission demonstrates substantial equivalence to a previously cleared predicate device (uMR Omega, K230152).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based only on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Item: Surface heating
    • Acceptance Criteria: The maximum temperature of all temperature probes shall not exceed 41 °C.
    • Reported Device Performance: Pass

    Item: General electrical/mechanical safety
    • Acceptance Criteria: Conform with ANSI/AAMI ES60601-1.
    • Reported Device Performance: Pass

    Item: SNR and uniformity
    • Acceptance Criteria: SNR and uniformity shall fulfill the design specification.
    • Reported Device Performance: Pass

    Item: Biocompatibility
    • Acceptance Criteria: Materials of construction and manufacturing materials exempt from testing according to the Biocompatibility guidance..., the 510(k) numbers for devices where these materials have been previously approved, or full biocompatibility report... for components that have direct contact with the patient.
    • Reported Device Performance: All the materials of patient-contacting components for the Breast Coil - 24 are identical to the uMR Omega cleared in K230152 in formulation, processing, sterilization, and geometry, and no other chemicals have been added.

    Item: EMC (immunity, electrostatic discharge)
    • Acceptance Criteria: Conform with IEC 60601-1-2 and IEC 60601-4-2.
    • Reported Device Performance: Pass

    Item: Clinical image quality
    • Acceptance Criteria: Image quality is sufficient for diagnostic use.
    • Reported Device Performance: "The U.S. Board Certified radiologist approves that image quality is sufficient for diagnostic use."
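Most of the hardware criteria in this table are threshold or standards checks; the surface-heating rule, for example, reduces to a single comparison. A sketch, with hypothetical probe readings:

```python
def surface_heating_pass(probe_temps_c):
    """True if no temperature probe exceeds 41 degrees C, the acceptance
    criterion above. Probe values are illustrative, not from the submission."""
    return max(probe_temps_c) <= 41.0
```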

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not provide explicit details on the sample size for any of the tests, nor does it specify the data provenance (e.g., country of origin, retrospective or prospective) for the clinical image quality evaluation or any other performance tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    For the "Clinical image quality" test, the document states: "The U.S. Board Certified radiologist approves that image quality is sufficient for diagnostic use."

    • Number of experts: Singular ("The U.S. Board Certified radiologist"), implying one expert.
    • Qualifications of experts: "U.S. Board Certified radiologist." No information on years of experience is provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Based on the mention of "The U.S. Board Certified radiologist approves," the adjudication method for clinical image quality appears to be none, as a single expert's approval is noted. There is no indication of multiple reviewers or an adjudication process for disagreement.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study being done. This submission focuses on the performance of a medical device (uMR Omega with a new coil) rather than an AI-assisted diagnostic tool.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This document refers to a Magnetic Resonance Diagnostic Device (MRDD), which produces images for interpretation by a trained physician. It does not describe an algorithm with standalone performance, but rather a hardware component (MRI system with a new coil). Therefore, a standalone (algorithm only) performance study is not applicable and was not done in the context described.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "Clinical image quality" test, the ground truth is established by expert opinion/approval ("The U.S. Board Certified radiologist approves that image quality is sufficient for diagnostic use."). Other tests (Surface heating, electrical/mechanical safety, SNR and Uniformity, EMC) rely on defined physical and engineering standards/specifications. Biocompatibility refers to material identity with a previously approved device.

    8. The sample size for the training set

    This document does not mention a training set. The device is a Magnetic Resonance Diagnostic Device (hardware), not an algorithm that requires a training set in the conventional sense. The "training set" concept is typically relevant for machine learning or AI algorithms.

    9. How the ground truth for the training set was established

    As no training set is mentioned (see point 8), this information is not provided and is not applicable to the context of this device submission.


    K Number
    K230152
    Device Name
    uMR Omega
    Date Cleared
    2023-05-23

    (124 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and that displays internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75 cm patient bore. It consists of components including a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    This document is a 510(k) premarket notification for the uMR Omega Magnetic Resonance Diagnostic Device. It outlines modifications to a previously cleared device (K220332) and claims substantial equivalence to that predicate device. The information provided heavily focuses on technical characteristics and safety standards rather than detailed clinical performance studies with specific acceptance criteria related to diagnostic accuracy.

    Therefore, many of the requested details, such as specific acceptance criteria for diagnostic performance, exact device performance metrics against those criteria, details of a test set for diagnostic accuracy (sample size, provenance, expert qualifications, adjudication method), human-in-the-loop studies (MRMC), or a standalone performance study in the context of diagnostic accuracy, are not explicitly present in the provided text.

    The document primarily discusses technical specifications, safety, and the additions/modifications to the device. The "Performance Data" section mentions "Clinical performance evaluation" and "Performance evaluation report" for various sequences and imaging processing methods (4D Flow, MRE, CEST, T1rho, mPLD ASL, silica gel imaging). However, it does not provide the acceptance criteria for these evaluations or the results against such criteria in terms of diagnostic accuracy or clinical utility metrics. Instead, it concludes generally that "The test results demonstrated that the device performs as expected."

    Based on the provided text, here's what can be extracted and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state acceptance criteria for diagnostic performance in terms of sensitivity, specificity, accuracy, etc. for any specific medical condition. The reported performance is general compliance with technical standards and the device performing "as expected."

    • Electrical safety: comply with ES 60601-1.
    • EMC (electromagnetic compatibility): comply with IEC 60601-1-2.
    • SAR (specific absorption rate): comply with IEC 60601-2-33.
    • dB/dt (time rate of change of the magnetic field): comply with IEC 60601-2-33.
    • Biocompatibility: tested per ISO 10993-5 and ISO 10993-10; demonstrated no cytotoxicity, irritation, or sensitization.
    • Surface heating: NEMA MS 14.
    • SNR (signal-to-noise ratio): compliance with NEMA MS 1-2008 (R2020), MS 6-2008 (R2014), and MS 9-2008 (R2020) acknowledged; no specific values reported.
    • Image uniformity: compliance with NEMA MS 3-2008 (R2020) acknowledged; no specific values reported.
    • Positioning error (with uVision): ≤ ±5 cm.
    • Overall device performance: performs as expected and is substantially equivalent to the predicate; no specific metrics of diagnostic accuracy or clinical utility are provided.

    2. Sample size used for the test set and the data provenance

    Not explicitly stated for diagnostic performance evaluations. The "Clinical performance evaluation" and "Performance evaluation report" are mentioned, but details on the patient cohort (sample size, retrospective/prospective, country of origin) are missing. These mentions are likely referring to technical performance characteristics rather than clinical diagnostic accuracy.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not explicitly stated. Given the nature of the submission (510(k) for a device with modifications, primarily focusing on technical specifications and safety standards), a detailed ground truth establishment process for diagnostic accuracy studies is not commonly part of this type of documentation unless new clinical claims or algorithms affecting diagnostic interpretation are being introduced and require such validation. The document states that images are "interpreted by a trained physician," but this is a general statement about usage, not about expert panel for ground truth.


    4. Adjudication method for the test set

    Not explicitly stated.


    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No indication of an MRMC study. The document describes a Magnetic Resonance Diagnostic Device (MRI machine itself) and its embedded imaging processing methods, not an AI-assisted diagnostic tool that would typically undergo such a study.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No indication of a standalone diagnostic algorithm performance study. The listed "imaging processing methods" (4D Flow Quantification, MRE, SNAP, CEST, T1Rho, FSP+) are features of the MRI system, and their performance is implied to be evaluated as part of the overall system's technical function and image quality, not as standalone diagnostic algorithms with their own "ground truth" performance metrics.


    7. The type of ground truth used

    Not explicitly stated for diagnostic accuracy. For the technical performance aspects, the "ground truth" would be measurements against established physical standards and phantom data to ensure image quality, signal integrity, and safety parameters meet specifications.


    8. The sample size for the training set

    Not applicable/Not mentioned. This document describes an MRI machine, not a machine learning or AI algorithm that would typically have a separate training set. The "imaging processing methods" are embedded features or techniques, not typically AI models trained on large datasets in the way common AI diagnostics are.


    9. How the ground truth for the training set was established

    Not applicable/Not mentioned. (See point 8).


    K Number
    K220332
    Date Cleared
    2022-10-27

    (265 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and that displays internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The Dynamic application is intended to provide a general post-processing tool for time course studies.

    The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.

    MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.

    The Brain Perfusion application is intended to allow the visualizations in the dynamic susceptibility time series of MR datasets.

    MR Vessel Analysis is intended to provide a tool for viewing, and evaluating MR vascular images.

    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.

    The United Neuro is intended to view, manipulate, and evaluate MR neurological images.

    The MR Cardiac Analysis application is intended to be used for viewing, post-processing and quantitative evaluation of cardiac magnetic resonance data.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75 cm patient bore. It consists of components including a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    uWS-MR is a comprehensive software solution designed to process, review and analyze MR (Magnetic Resonance Imaging) studies. It can be used as a stand-alone SaMD or a post processing application option for cleared UIH (Shanghai United Imaging Healthcare Co.,Ltd.) MR Scanners.

    The uMR 780 is a 3.0T superconducting magnetic resonance diagnostic device with a 65 cm patient bore. It consists of components including a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR 780 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    The document describes the performance testing for the "DeepRecon" feature, an artificial intelligence (AI)-assisted image processing algorithm, of the uMR Omega with uWS-MR-MRS device.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    Evaluation Item: Image SNR
    • Acceptance Criteria: DeepRecon images achieve higher SNR compared to the images without DeepRecon (NADR).
    • Test Result: NADR: 209.41 ± 1.08; DeepRecon: 302.48 ± 0.78
    • Result: PASS

    Evaluation Item: Image uniformity
    • Acceptance Criteria: Uniformity difference between DeepRecon images and NADR images under 5%.
    • Test Result: 0.15%
    • Result: PASS

    Evaluation Item: Image contrast
    • Acceptance Criteria: Intensity difference between DeepRecon images and NADR images under 5%.
    • Test Result: 0.9%
    • Result: PASS

    Evaluation Item: Structure measurements
    • Acceptance Criteria: Measurements of the same structures on NADR and DeepRecon images; measurement difference under 5%.
    • Test Result: 0%
    • Result: PASS
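The four pass/fail decisions in this table reduce to simple comparisons. A sketch applying them to the reported figures (the function name and argument layout are illustrative):

```python
def deeprecon_pass(snr_nadr, snr_deeprecon, uniformity_diff_pct,
                   contrast_diff_pct, measurement_diff_pct):
    """Apply the four DeepRecon acceptance criteria from the table above."""
    return {
        "snr": snr_deeprecon > snr_nadr,             # higher SNR with DeepRecon
        "uniformity": uniformity_diff_pct < 5.0,     # difference under 5%
        "contrast": contrast_diff_pct < 5.0,         # difference under 5%
        "measurements": measurement_diff_pct < 5.0,  # difference under 5%
    }
```

With the reported values (SNR 209.41 vs 302.48, and differences of 0.15%, 0.9%, and 0%), every check passes.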

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: 77 US subjects.
    • Data Provenance: The testing data was collected from various clinical sites in the US, ensuring diverse demographic distributions covering various genders, age groups, ethnicities, and BMI groups. The data was collected during separated time periods and on subjects different from the training data, making it completely independent and having no overlap with the training data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not explicitly stated. The document mentions that "American Board of Radiologists certificated physicians" evaluated the DeepRecon images, which implies a group of such experts.
    • Qualifications of Experts: American Board of Radiologists certificated physicians.

    4. Adjudication method for the test set

    • The document states that "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality" by the radiologists. This suggests a consensus or rating process, but the specific adjudication method (e.g., majority vote, sequential review) is not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • The document implies a human-in-the-loop evaluation: "DeepRecon images were evaluated by American Board of Radiologists certificated physicians," who "verified that DeepRecon meets the requirements of clinical diagnosis," and "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." However, this is not described as a formal MRMC comparative effectiveness study designed to quantify reader improvement with versus without AI assistance; the focus is on the diagnostic quality of the DeepRecon images themselves. No effect size for human reader improvement with AI assistance is reported.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone (algorithm only) performance evaluation was conducted based on objective metrics like Image SNR, Image Uniformity, Image Contrast, and Structure Measurements, as detailed in Table b. The radiologist evaluation appears to be a subsequent step to confirm clinical utility.

    7. The type of ground truth used

    • For the objective performance metrics (SNR, uniformity, contrast, structure measurements), the ground truth for comparison appears to be the images "without DeepRecon (NADR)".
    • For the expert evaluation, the ground truth is implicitly based on the expert consensus of the American Board of Radiologists certificated physicians regarding the diagnostic quality of the images.
    • For the training data ground truth (see point 9), it was established using "multiple-averaged images with high-resolution and high SNR."

    8. The sample size for the training set

    • The training data for DeepRecon was collected from 264 volunteers.

    9. How the ground truth for the training set was established

    • The ground truth for the training dataset was established by collecting "multiple-averaged images with high-resolution and high SNR" from each subject. The input images for training were then generated by sequentially reducing the SNR and resolution of these high-quality ground-truth images. All data used for training underwent manual quality control.
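    The described training-pair generation can be sketched as follows. This is a hypothetical illustration under stated assumptions (k-space truncation for resolution loss, additive Gaussian noise for SNR loss); the vendor's actual degradation pipeline is not disclosed in the summary, and the function name and parameters are invented for the example.

    ```python
    import numpy as np

    def degrade(ground_truth, noise_sigma, keep_fraction, rng=None):
        """Make a lower-resolution, lower-SNR training input from a high-SNR
        ground-truth image: zero out the outer k-space (resolution loss),
        then add Gaussian noise (SNR loss). Illustrative only."""
        rng = rng or np.random.default_rng(0)
        k = np.fft.fftshift(np.fft.fft2(ground_truth))
        ny, nx = k.shape
        hy, hx = int(ny * keep_fraction) // 2, int(nx * keep_fraction) // 2
        mask = np.zeros((ny, nx), dtype=bool)
        mask[ny // 2 - hy:ny // 2 + hy, nx // 2 - hx:nx // 2 + hx] = True
        low_res = np.fft.ifft2(np.fft.ifftshift(np.where(mask, k, 0)))
        return np.abs(low_res) + rng.normal(0.0, noise_sigma, ground_truth.shape)

    # One (input, target) training pair: the degraded image is the network
    # input; the original high-quality image is the regression target.
    target = np.zeros((32, 32))
    target[8:24, 8:24] = 1.0
    model_input = degrade(target, noise_sigma=0.05, keep_fraction=0.5)
    ```

    Sweeping `noise_sigma` and `keep_fraction` over a grid would yield the "sequentially reduced" SNR and resolution levels the summary describes, with each degraded image paired against its multiple-averaged original.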

    K Number
    K193200
    Device Name
    uMR Omega
    Date Cleared
    2020-03-27

    (128 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with an ultra-wide patient bore design. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    The provided text is a 510(k) Summary for the uMR Omega Magnetic Resonance Diagnostic Device. It compares the proposed device to a predicate device (uMR 780) to demonstrate substantial equivalence, rather than detailing a study that proves the device meets specific acceptance criteria based on performance metrics such as sensitivity, specificity, or accuracy.

    The document focuses on technological characteristics, safety, and effectiveness comparisons to a previously cleared device. It does not contain information about acceptance criteria in the typical sense of quantitative performance metrics for disease detection or diagnosis, nor does it describe a study specifically designed to meet such criteria. Therefore, most of the requested information cannot be extracted from this document.

    However, based on the non-clinical and clinical tests mentioned, some inferences can be made, though they do not directly answer all the questions.

    Information that CANNOT be extracted from the provided text:

    • A table of acceptance criteria and the reported device performance: The document does not define specific acceptance criteria in terms of diagnostic performance metrics (e.g., sensitivity, specificity, AUC) for the uMR Omega. It primarily focuses on showing that its technical specifications and image quality are comparable to or better than the predicate device.
    • Sample size used for the test set and the data provenance: Not mentioned.
    • Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
    • Adjudication method for the test set: Not mentioned.
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance: Not mentioned. This device is a magnetic resonance diagnostic device, not an AI-assisted diagnostic tool in the sense of aiding human readers.
    • If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Not applicable, as this is a diagnostic imaging device, not primarily an AI algorithm for interpretation.
    • The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not mentioned for any performance evaluation.
    • The sample size for the training set: Not applicable, as this is primarily a hardware/software system for imaging, not a machine learning algorithm that requires a "training set" in the conventional sense.
    • How the ground truth for the training set was established: Not applicable for the reasons above.

    Information that can be extracted or inferred:

    1. A table of acceptance criteria and the reported device performance

    As mentioned, there isn't a table for diagnostic performance acceptance criteria. However, the document does list technical parameters and safety compliance as criteria. The "Reported Device Performance" here would refer to the device meeting these technical and safety standards, and demonstrating image quality comparable to the predicate.

    | Acceptance Criteria Category | Specific Criteria (Inferred/Stated) | Reported Device Performance (Inferred/Stated) |
    |---|---|---|
    | Safety Standards | Compliance with: ES60601-1 (Medical Electrical Equipment, Basic Safety); IEC 60601-1-2 (EMC); IEC 60601-2-33 (MR Equipment Particular Requirements); IEC 60825-1 (Laser Safety); ISO 10993-5 (Cytotoxicity); ISO 10993-10 (Irritation & Sensitization) | Demonstrated compliance with all listed standards. Patient-contact materials were tested and showed no cytotoxicity, irritation, or sensitization. |
    | Image Quality (Non-clinical) | Compliance with NEMA standards for: Signal-to-Noise Ratio (SNR) (MS 1, 6); Geometric Distortion (MS 2); Image Uniformity (MS 3, 6); Slice Thickness (MS 5) | Testing based on NEMA Standards (MS 1, 2, 3, 5, 6, 9) was performed ("The test results demonstrated that the device performs as expected"). Technical specifications such as magnet homogeneity (e.g., 2.3 ppm @ 50 cm DSV) and gradient amplitude (45 mT/m) are improvements over, or comparable to, the predicate and contribute to image quality. |
    | Physical Parameters | Field Strength: 3.0 Tesla; Magnet Type: Superconducting; Patient Bore: 75 cm; Max Gradient Amplitude: 45 mT/m; Max Slew Rate: 200 T/m/s; Receive Channels: up to 96; Max Supported Patient Weight: 310 kg | 3.0 Tesla (met); Superconducting (met); 75 cm (met; larger than predicate's 65 cm); 45 mT/m (met; higher than predicate's 42 mT/m); 200 T/m/s (met; same as predicate); up to 96 (met; more than predicate's 48); 310 kg (met; increased from predicate's 250 kg) |
    | Clinical Performance | Ability to generate diagnostic quality images. | "A volunteer study was used to determine the safety limits associated with gradient-induced nerve stimulation." "Sample clinical images were provided to support the ability of uMR Omega to generate diagnostic quality images in accordance with the MR guidance on premarket notification submissions." |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample size for clinical image review: Not specified, only "Sample clinical images were provided".
    • Data provenance: Not specified. The sponsor is Shanghai United Imaging Healthcare Co., Ltd. in China, but the origin of clinical images is not stated. It mentions a "volunteer study" but the number of volunteers is not specified.
    • Retrospective or prospective: Not specified for the clinical images, though the "volunteer study" would be prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: Not specified. The document states "when interpreted by a trained physician" in its Indications for Use, but this is general for MRDDs, not for the specific evaluation study.
    • Qualifications of experts: Not specified beyond "trained physician."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • For the non-clinical tests (e.g., SNR, uniformity), the "ground truth" is based on adherence to NEMA standards and physical measurements.
    • For the "clinical images," the document only states they were provided to "support the ability... to generate diagnostic quality images." It does not specify how "diagnostic quality" was formally assessed or what independent ground truth (e.g., correlating with pathology, outcomes, or expert consensus) was used to validate the diagnostic information derived from these images.
