Search Results

Found 58 results

510(k) Data Aggregation

    K Number: K252371
    Device Name: uMR 680
    Date Cleared: 2025-09-25 (57 days)
    Product Code: (not listed)
    Regulation Number: 892.1000
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, as well as spectroscopic images, that display internal anatomical structure and/or function of the head, body, and extremities.

    These images, and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70 cm patient bore. It consists of a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and a vital-signal module. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This Special 510(k) requests modifications to the previously cleared uMR 680 (K243397). The modifications to the uMR 680 in this submission comprise the following changes:

    (1) Addition of an RF coil: Tx/Rx Head Coil.
    (2) Addition of a mobile configuration.

    AI/ML Overview

    N/A


    K Number: K250045
    Device Name: uWS-Angio Pro
    Date Cleared: 2025-09-24 (257 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The uWS-Angio Pro is post-processing software based on the uWS-Angio Basic for viewing, processing, evaluating, and analyzing CT, MR, and 3D XA images, and for overlaying 2D medical images to support image guidance during interventional procedures. It has the following applications:

    The uWS-Angio Pro uTarget View application is intended to provide tools for adding a trajectory to the lesion. It also supports overlaying 3D datasets on 2D fluoroscopy images.

    The uWS-Angio Pro uEmbo View application is intended to provide users with tools for planning and guiding interventional procedures. It offers a suite of features to mark targets, detect feeder vessels, and overlay the 3D image with the feeders and the corresponding anatomical X-ray image. When the overlay function is activated, it supports automatic 2D-3D image registration specifically for head neuroimaging, while the manual registration function is not restricted to any specific anatomical region.

    The uWS-Angio Pro uCardiac View application is intended to provide users with tools to plan and guide interventional procedures. This application can use CT angiography images for vascular access analysis, aortic root analysis and angle management, and overlay-registration display with 2D XA images.

    Device Description

    uWS-Angio Pro is a software intended for viewing, processing, evaluating and analyzing medical images that comply with the DICOM 3.0 protocol. It supports interpretation and evaluation of examinations within healthcare institutions. It can be deployed on independent hardware such as a stand-alone diagnostic review and post-processing workstation. It can also be configured within a network to send and receive DICOM data. Furthermore, it can be deployed on systems of the United Imaging Healthcare Angiography system family.
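
    The DICOM send/receive capability described above is typically implemented with standard DICOM message exchanges such as C-STORE. As a general illustration of a C-STORE send, not of this product's internal implementation, here is a minimal sketch using the open-source pydicom/pynetdicom libraries (host, port, AE title, and file name are hypothetical placeholders):

        # Minimal DICOM C-STORE send (pynetdicom >= 2); all identifiers are placeholders.
        from pydicom import dcmread
        from pynetdicom import AE
        from pynetdicom.sop_class import CTImageStorage

        ae = AE(ae_title="WORKSTATION")
        ae.add_requested_context(CTImageStorage)

        ds = dcmread("ct_slice.dcm")                 # a DICOM 3.0 CT image file
        assoc = ae.associate("192.168.1.10", 104)    # peer PACS node (placeholder)
        if assoc.is_established:
            status = assoc.send_c_store(ds)          # Dataset holding the Status element
            print(f"C-STORE status: 0x{status.Status:04X}")
            assoc.release()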

    uWS-Angio Pro contains the following applications:

    • uWS-Angio Pro uTarget View
    • uWS-Angio Pro uEmbo View
    • uWS-Angio Pro uCardiac View
    AI/ML Overview

    N/A


    K Number: K251167
    Device Name: uDR Aurora CX
    Date Cleared: 2025-09-19 (157 days)
    Product Code: (not listed)
    Regulation Number: 892.1680
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    uDR Aurora CX is intended for use by a qualified technician to acquire X-ray images of the human body, for example two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs, and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.

    Device Description

    uDR Aurora CX is a digital medical X-ray imaging system developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). It comprises an X-ray generator and an X-ray imaging system. The X-ray generator produces controlled X-rays via a high-voltage generator and X-ray tube assembly, ensuring stable energy output for penetration of the human body. The X-ray imaging system converts X-ray photons into electrical signals via detectors and generates DICOM-standard images at the workstation, reflecting density variations of the human body.

    AI/ML Overview

    This document describes the acceptance criteria and study details for two features of the uDR Aurora CX device: uVision and uAid.


    1. Acceptance Criteria and Reported Device Performance

    Feature: uVision
    Acceptance criteria: When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. This demonstrates that uVision can effectively assist clinical technicians in positioning tasks, specifically by aiming to reduce retake rates attributed to incorrect positioning (which studies indicate can range from 9% to 28%).
    Reported device performance: In 95% of patient positioning processes, the light field and equipment position automatically set by uVision met clinical positioning and shooting requirements for chest PA, whole-spine, and whole-lower-limb stitching exams. In the remaining 5% of cases, manual adjustments by technicians were needed.

    Feature: uAid
    Acceptance criteria: The accuracy of non-standard image recognition (specifically, the rate of "Grade A" images recognized) should meet a 90% pass rate, aligning with industry standards derived from guidelines such as those from European Radiology and the ACR-AAPM-SPR Practice Parameter (which indicate Grade A image rates between 80% and 90% in public hospitals). This demonstrates that uAid can effectively assist clinical technicians in managing standardized image quality.
    Reported device performance: The uAid function can correctly identify four types of results (foreign object, incomplete lung fields, unexposed shoulder blades, and centerline deviation) and classify images as Green (qualified), Yellow (secondary), or Red (waste). It meets the requirement for checking examination and positioning quality.
    Quantitative performance (from "Summary"): average algorithm time of 1.359 seconds (longest not exceeding 2 seconds); maximum memory occupation of not more than 2 GB; for foreign body, lung field integrity, and scapula opening, both sensitivity and specificity exceed 0.9.
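
    The uAid sensitivity and specificity figures above are standard confusion-matrix quantities computed per finding type. A minimal sketch of the computation (the counts below are hypothetical, not the submission's data):

        # Sensitivity and specificity from confusion-matrix counts (Python).
        def sensitivity(tp: int, fn: int) -> float:
            return tp / (tp + fn)  # true-positive rate

        def specificity(tn: int, fp: int) -> float:
            return tn / (tn + fp)  # true-negative rate

        # Hypothetical counts for one finding type (e.g., foreign object):
        tp, fn, tn, fp = 2950, 130, 1020, 58
        print(f"sensitivity={sensitivity(tp, fn):.3f}, "
              f"specificity={specificity(tn, fp):.3f}")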

    2. Sample Size and Data Provenance for the Test Set

    Feature: uVision
    Test-set sample size: 348 cases (328 chest PA cases + 20 full-spine or full-lower-limb stitching cases) collected over one week, from 2024-12-17 to 2024-12-23. The device had been installed for over a year, with an average daily volume of ~80 patients, ~45 chest X-rays/day, and ~10-20 stitching cases/week.
    Data provenance: Prospective/retrospective hybrid. The data was collected prospectively from a device (serial number 11XT7E0001) in clinical use after installation and commissioning over a year prior to the reported test period. It was collected from individuals of all genders and varying heights (able to stand independently), in a real-world clinical setting. Country of origin: not explicitly stated, but the company is based in Shanghai, China, suggesting the data is likely from China.

    Feature: uAid
    Test-set sample size: Not explicitly stated as a single total number of cases. Instead, the data distribution is provided, indicating various counts for different conditions across gender and age groups. For example, "lung field segmentation" had 465 negative and 31 positive cases, and "foreign object" had 1078 negative and 3080 positive cases. The sum of these individual counts suggests a total dataset of several thousand images.
    Data provenance: Retrospective. Data collection for uAid started in October 2017, with a wide range of data sources, including different cooperative hospitals. The data was cleaned and stored in DICOM format. Country of origin: not explicitly stated, but the company is based in Shanghai, China, suggesting the data is likely from China.

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    Feature: uVision
    Number of experts: Not explicitly stated. The document says, "The results automatically set by the system are then statistically analyzed by clinical experts."
    Qualifications: "Clinical experts"; no specific qualifications (e.g., years of experience, specialty) are provided.

    Feature: uAid
    Number of experts: Not explicitly stated. The document mentions that "the study was approved by the institutional review board of the hospitals," which implies expert review but does not detail the number or roles of experts in establishing the ground-truth labels for the specific image characteristics tested.
    Qualifications: Not explicitly stated for establishing ground-truth labels.

    4. Adjudication Method (Test Set)

    uVision: Not explicitly stated. The data was "statistically analyzed by clinical experts"; the document does not specify whether multiple experts reviewed cases or how disagreements were resolved.
    uAid: Not explicitly stated. The process mentions data cleaning and sorting, and IRB approval, but not the specific adjudication method for individual image labels.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • uVision: No MRMC comparative effectiveness study was done to compare human readers with and without AI assistance. The study evaluates the AI's direct assistance in positioning, measured by compliance with clinical criteria, rather than comparing diagnostic performance of human readers.
    • uAid: No MRMC comparative effectiveness study was done. The study focuses on the standalone performance of the algorithm in identifying image quality issues, not on how it impacts human reader diagnostic accuracy or efficiency.

    6. Standalone Performance (Algorithm Only)

    • uVision: Yes, a standalone performance study was done. The "95% compliance" rate refers to the algorithm's direct ability to set system position and FOV that meet clinical technician criteria without a human actively adjusting or guiding the initial AI-generated settings during the compliance evaluation. Technicians could manually adjust those settings if needed.
    • uAid: Yes, a standalone performance study was done. The algorithm processes images and outputs a quality classification (Green, Yellow, Red) and identifies specific issues (foreign object, incomplete lung fields, etc.). Its sensitivity and specificity metrics are standalone performance indicators.

    7. Type of Ground Truth Used

    • uVision: Expert Consensus/Clinical Criteria: The ground truth for uVision's performance (i.e., whether the automatically set position/FOV was "compliant") was established by "clinical experts" based on "clinical technicians' criteria" for proper positioning and shooting requirements.
    • uAid: Expert Consensus/Manual Labeling: The ground truth for uAid's evaluation (e.g., presence of foreign objects, complete lung fields, open scapula, centerline deviation) was established through a "classification" process, implying manual labeling or consensus by experts after data collection and cleaning. The document mentions "negative" and "positive" data distributions for each criterion.

    8. Sample Size for the Training Set

    • uVision: Not explicitly stated in the provided text. The testing data was confirmed to be "collected independently from the training dataset, with separated subjects and during different time periods."
    • uAid: Not explicitly stated in the provided text. The document mentions "The data collection started in October 2017, with a wide range of data sources" for training, but does not provide specific numbers for the training set size.

    9. How Ground Truth for Training Set was Established

    • uVision: Not explicitly stated for the training set. It can be inferred that a similar process to the test set, involving expert review against clinical criteria, would have been used.
    • uAid: Not explicitly stated for the training set. Given that the data was collected from "different cooperative hospitals," "multiple cleaning and sorting" was performed, and the study was "approved by the institutional review board," it is highly likely that the ground truth for the training set involved manual labeling by clinical experts/radiologists, followed by a review process (potentially consensus-based or single-expert) to establish the labels for image characteristics and quality.

    K Number: K250040
    Device Name: uWS-Angio
    Date Cleared: 2025-09-12 (247 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    uWS-Angio is post-processing software based on the uWS-Angio Basic for viewing, processing, evaluating, and analyzing XA images. It has the following applications:

    • The uWS-Angio 2D Vessel Analysis application is intended to provide users a tool for quantifying the dimensions of coronary and peripheral vessels from 2D angiographic images. It supports vessel segmentation and outputs associated diameter parameters and curves.

    • The uWS-Angio Left Ventricular Analysis application is intended to provide users a tool for analyzing left ventricular function. It supports editing ventricular contours and outputs a set of parameters describing ventricular function and wall motion.

    • The uWS-Angio 2D Blood Flow Analysis application is intended to provide users a tool for analyzing blood flow in regions of interest in peripheral vessels. It supports adding regions of interest and outputs multiple blood-flow parameters for further perfusion analysis.

    • The uWS-Angio Stitching application is intended to provide users a tool for stitching bolus-chase-acquired lower-extremity images into a single stitched image.

    Device Description

    uWS-Angio is a post-processing software intended for viewing, processing, evaluating, and analyzing medical images that comply with the DICOM 3.0 protocol. It supports interpretation and evaluation of examinations within healthcare institutions. It can be deployed on independent hardware such as a stand-alone diagnostic review and post-processing workstation. It can also be configured within a network to send and receive DICOM data. Furthermore, it can be deployed on systems of the United Imaging Healthcare Angiography system family.

    uWS-Angio contains the following applications:

    • uWS-Angio 2D Vessel Analysis
    • uWS-Angio Left Ventricular Analysis
    • uWS-Angio 2D Blood Flow Analysis
    • uWS-Angio Stitching
    AI/ML Overview

    N/A


    K Number: K250246
    Device Name: uMR Jupiter
    Date Cleared: 2025-08-05 (190 days)
    Product Code: (not listed)
    Regulation Number: 892.1000
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The uMR Jupiter system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, as well as spectroscopic images, that display internal anatomical structure and/or function of the head, body, and extremities.

    These images, and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    The device is intended for patients > 20 kg/44 lbs.

    Device Description

    uMR Jupiter is a 5T superconducting magnetic resonance diagnostic device with a 60 cm patient bore and an 8-channel RF transmit system. It consists of a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and a vital-signal module. uMR Jupiter is designed to conform to NEMA and DICOM standards.

    The modifications performed on the uMR Jupiter in this submission comprise the following changes:

    1. Addition of RF coils: SuperFlex Large - 24 and Foot & Ankle Coil - 24.

    2. Addition of applied body part for certain coil: SuperFlex Small-24 (add imaging of ankle).

    3. Addition and modification of pulse sequences:

      • a) New sequences: fse_wfi, gre_fsp_c (3D), gre_bssfp_ucs, epi_fid(3D), epi_dti_msh.
      • b) Added associated options for certain sequences: asl_3d (add mPLD; only original images are output, no quantification images), gre_fsp_c (add Cardiac Cine, Cardiac Perfusion, PSIR, Cardiac mapping), gre_quick (add WFI, MRCA), gre_bssfp (add Cardiac Cine, Cardiac mapping), epi_dwi (add IVIM; only original images are output, no quantification images).
    4. Addition of function: EasyScan, EasyCrop, t-ACS, QScan, tFAST, DeepRecon and WFI.

    5. Addition of workflow: EasyFACT.

    AI/ML Overview

    This FDA 510(k) summary (K250246) for the uMR Jupiter provides details on several new AI-assisted features. Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Important note: the document does not describe an MRMC study comparing human readers with and without AI. Instead, it focuses on the performance of individual AI modules and their integration into the MRI system, often verified by radiologists' review of image quality.


    Acceptance Criteria and Reported Device Performance

    The document presents acceptance criteria implicitly through the "Test Result" or "Performance Verification" sections for each AI feature. The "Reported performance" entries below summarize the device's reported achievement against these criteria.

    Feature: WFI
    Acceptance criteria (implicit): Expected to produce diagnostic-quality images and effectively overcome water-fat swap artifacts, providing accurate initialization for the RIPE algorithm. All modes (default, standard, fast) should meet clinical diagnosis requirements.
    Reported performance: "Based on the clinical evaluation of this independent testing dataset by three U.S. certificated radiologists, all three WFI modes meet the requirements for clinical diagnosis. In summary, the WFI performed as intended and passed all performance evaluations."

    Feature: t-ACS
    Acceptance criteria (implicit): AI module test: the AI prediction output should be much closer to the reference than the AI-module input images. Integration test: better consistency between t-ACS and the reference than between CS and the reference; no large structural differences; motion-time curves and Bland-Altman analysis showing consistency.
    Reported performance: AI module test: "AI prediction (AI module output) was much closer to the reference comparing to the AI module input images in all t-ACS application types." Integration test: (1) "A better consistency between t-ACS and the reference than that between CS and the reference was shown in all t-ACS application types." (2) "No large structural difference appeared between t-ACS and the reference in all t-ACS application types." (3) "The motion-time curves and Bland-Altman analysis showed the consistency between t-ACS and the reference based on simulated and real acquired data in all t-ACS application types." Overall: "The t-ACS on uMR Jupiter was shown to perform better than traditional Compressed Sensing in the sense of discrepancy from fully sampled images and PSNR using images from various age groups, BMIs, ethnicities and pathological variations. The structure measurements on paired images verified that same structures of t-ACS and reference were significantly the same. And t-ACS integration tests in two applications proved that t-ACS had good agreement with the reference."

    Feature: DeepRecon
    Acceptance criteria (implicit): Expected to provide image de-noising and super-resolution, resulting in diagnostic-quality images with equivalent or higher scores than reference images in terms of diagnostic quality.
    Reported performance: "The DeepRecon has been validated to provide image de-noising and super-resolution processing using various ethnicities, age groups, BMIs, and pathological variations. In addition, DeepRecon images were evaluated by American Board of Radiologists certificated physicians, covering a range of protocols and body parts. The evaluation reports from radiologists verified that DeepRecon meets the requirements of clinical diagnosis. All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality."

    Feature: EasyFACT
    Acceptance criteria (implicit): Expected to effectively automate ROI placement and numerical statistics for FF and R2* values, with results subjectively evaluated as effective.
    Reported performance: "The subjective evaluation method was used [to verify effectiveness]." "The proposal of algorithm acceptance criteria and score processing are conducted by the licensed physicians with U.S. credentials." (Successful verification is implied from context.)

    Feature: EasyScan
    Acceptance criteria (implicit): Pass criterion of 99.3% for automatic slice-group positioning, meeting safety and effectiveness requirements.
    Reported performance: "The pass criteria of EasyScan feature is 99.3%, and the results evaluated by the licensed MRI technologist with U.S. credentials. Therefore, EasyScan meets the criteria for safety and effectiveness, and EasyScan can meet the requirements for automatic positioning of slice groups." (Reaching or exceeding 99.3% is implied.)

    Feature: EasyCrop
    Acceptance criteria (implicit): Pass criterion of 100% for automatic image cropping, meeting safety and effectiveness requirements.
    Reported performance: "The pass criteria of EasyCrop feature is 100%, and the results evaluated by the licensed MRI technologist with U.S. credentials. Therefore, EasyCrop meets the criteria for safety and effectiveness, and EasyCrop can meet the requirements for automatic cropping." (Reaching 100% is implied.)
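
    Several of the t-ACS criteria above rest on Bland-Altman agreement between reconstructed and reference images. A minimal sketch of the bias and 95% limits-of-agreement computation (NumPy; the paired values below are simulated, not the submission's data):

        import numpy as np

        def bland_altman(a, b):
            """Mean bias and 95% limits of agreement for paired measurements."""
            diff = np.asarray(a) - np.asarray(b)
            bias = diff.mean()
            half = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits
            return bias, bias - half, bias + half

        rng = np.random.default_rng(0)
        ref = rng.uniform(100, 200, 50)          # reference ROI intensities
        recon = ref + rng.normal(0, 2, 50)       # reconstructed ROI intensities
        bias, lo, hi = bland_altman(recon, ref)
        print(f"bias={bias:.2f}, 95% LoA=[{lo:.2f}, {hi:.2f}]")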

    Study Details

    1. Sample sizes used for the test set and the data provenance:

      • WFI: 144 cases from 28 volunteers. Data collected from UIH Jupiter. "Completely separated from the previous mentioned training dataset by collecting from different volunteers and during different time periods." (Retrospective for testing, though specific country of origin beyond "UIH MRI systems" is not explicitly stated for testing data, training data has "Asian" majority.)
      • t-ACS: 35 subjects (data from 76 volunteers used for overall training/validation/test split). Test data collected independently from the training data, with separated subjects and during different time periods. "White," "Black," and "Asian" ethnicities mentioned, implying potentially multi-country or diverse internal dataset.
      • DeepRecon: 20 subjects (2216 cases). "Diverse demographic distributions" including "White" and "Asian" ethnicities. "Collecting testing data from various clinical sites and during separated time periods."
      • EasyFACT: 5 subjects. "Data were acquired from 5T magnetic resonance imaging equipment from UIH," and "Asia" ethnicity is listed.
      • EasyScan: 30 cases from 18 "Asia" subjects (initial testing); 40 cases from 8 "Asia" subjects (validation on uMR Jupiter system).
      • EasyCrop: 5 subjects. "Data were acquired from 5T magnetic resonance imaging equipment from UIH," and "Asia" ethnicity is listed.

      Data provenance isn't definitively "retrospective" or "prospective" for the test sets, but the emphasis on "completely separated" and "independent" from training data collected at "different time periods" suggests these were distinct, potentially newly acquired or curated sets for evaluation. The presence of multiple ethnicities (White, Black, Asian) suggests potentially broader geographical origins than just China where the company is based, or a focus on creating diverse internal datasets.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • WFI: Three U.S. certificated radiologists. (Qualifications: U.S. board-certified radiologists).
      • t-ACS: No separate experts established ground truth for the test-set performance evaluation beyond the quantitative metrics (MAE, PSNR, SSIM) compared against "fully sampled images" as the reference (see the sketch after this list). The document states that fully-sampled k-space data transformed into the image domain served as the reference.
      • DeepRecon: American Board of Radiologists certificated physicians. (Qualifications: American Board of Radiologists certificated physicians).
      • EasyFACT: Licensed physicians with U.S. credentials. (Qualifications: Licensed physicians with U.S. credentials).
      • EasyScan: Licensed MRI technologist with U.S. credentials. (Qualifications: Licensed MRI technologist with U.S. credentials).
      • EasyCrop: Licensed MRI technologist with U.S. credentials. (Qualifications: Licensed MRI technologist with U.S. credentials).
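
      The quantitative metrics named for t-ACS (MAE, PSNR, SSIM against the fully sampled reference) have standard definitions. A minimal sketch, assuming magnitude images as NumPy arrays (SSIM is available in scikit-image):

          import numpy as np

          def mae(x, ref):
              return float(np.abs(x - ref).mean())

          def psnr(x, ref):
              mse = float(((x - ref) ** 2).mean())
              return 10.0 * float(np.log10(ref.max() ** 2 / mse))  # peak from reference

          # SSIM (scikit-image):
          # from skimage.metrics import structural_similarity as ssim
          # score = ssim(x, ref, data_range=float(ref.max() - ref.min()))

          rng = np.random.default_rng(1)
          ref = rng.uniform(0.0, 1.0, (64, 64))
          test = ref + rng.normal(0.0, 0.01, (64, 64))
          print(f"MAE={mae(test, ref):.4f}, PSNR={psnr(test, ref):.1f} dB")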
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      The document does not explicitly state an adjudication method (like 2+1 or 3+1) for conflict resolution among readers. For WFI, DeepRecon, EasyFACT, EasyScan, and EasyCrop, it implies a consensus or majority opinion model based on the "evaluation reports from radiologists/technologists." For t-ACS, the evaluation of the algorithm's output is based on quantitative metrics against a reference image ground truth.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      No, a traditional MRMC comparative effectiveness study where human readers interpret cases with AI assistance versus without AI assistance was not described. The studies primarily validated the AI features' standalone performance (e.g., image quality, accuracy of automated functions) or their output's equivalence/superiority to traditional methods, often through expert review of the AI-generated images. Therefore, no effect size of human reader improvement with AI assistance is provided.

    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      Yes, standalone performance was the primary focus for most AI features mentioned, though the output was often subject to human expert review.

      • WFI: The AI network provides initialization for the RIPE algorithm. The output image quality was then reviewed by radiologists.
      • t-ACS: Performance was evaluated quantitatively against fully sampled images (reference/ground truth), indicating a standalone algorithm evaluation.
      • DeepRecon: Evaluated based on images processed by the algorithm, with expert review of the output images.
      • EasyFACT, EasyScan, EasyCrop: These are features that automate parts of the workflow. Their output (e.g., ROI placement, slice positioning, cropping) was evaluated, often subjectively by experts, but the automation itself is algorithm-driven.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • WFI: Expert consensus/review by three U.S. certificated radiologists for "clinical diagnosis" quality. No "ground truth" for water-fat separation accuracy itself is explicitly stated, but the problem being solved (water-fat swap artifacts) implies the improved stability of the algorithm's output.
      • t-ACS: "Fully-sampled k-space data were collected and transformed into image domain as reference." This serves as the "true" or ideal image for comparison, not derived from expert interpretation or pathology.
      • DeepRecon: "Multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." Expert review confirms diagnostic quality of processed images.
      • EasyFACT: Subjective evaluation by licensed physicians with U.S. credentials, implying their judgment regarding the correctness of ROI placement and numerical statistics.
      • EasyScan: Evaluation by a licensed MRI technologist with U.S. credentials against the "correctness" of automatic slice positioning.
      • EasyCrop: Evaluation by a licensed MRI technologist with U.S. credentials against the "correctness" of automatic cropping.
    7. The sample size for the training set:

      • WFI AI module: 59 volunteers (2604 cases). Each scanned for multiple body parts and WFI protocols.
      • t-ACS AI module: Not specified as a distinct number, but "collected from a variety of anatomies, image contrasts, and acceleration factors... resulting in a large number of cases." The overall dataset for training, validation, and testing was 76 volunteers.
      • DeepRecon: 317 volunteers.
      • EasyFACT, EasyScan, EasyCrop: "The training data used for the training of the EasyFACT algorithm is independent of the data used to test the algorithm." For EasyScan and EasyCrop, it states "The testing dataset was collected independently from the training dataset," but does not provide specific training set sizes for these workflow features.
    8. How the ground truth for the training set was established:

      • WFI AI module: The AI network was trained to provide accurate initialization for the RIPE algorithm. The document implies that the RIPE algorithm itself with human oversight or internal validation would have been used to establish correct water/fat separation for training.
      • t-ACS AI module: "Fully-sampled k-space data were collected and transformed into image domain as reference." This served as the ground truth against which the AI was trained to reconstruct undersampled data.
      • DeepRecon: "The multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." This indicates that high-quality, non-denoised, non-super-resolved images were used as the ideal target for the AI.
      • EasyFACT, EasyScan, EasyCrop: Not explicitly detailed beyond stating that training data ground truth was established to enable the algorithms for automatic ROI placement, slice group positioning, and image cropping, respectively. It implies a process of manually annotating or identifying the correct ROIs/positions/crops on training data for the AI to learn from.
    Ask a Question

    Ask a specific question about this device

    K Number: K251839
    Device Name: uMI Panvivo
    Date Cleared: 2025-07-17 (31 days)
    Product Code: (not listed)
    Regulation Number: 892.1200
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, cardiology, neurology, infection and inflammation. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.

    The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.

    Device Description

    The proposed device uMI Panvivo combines a 295/235 mm axial field of view (FOV) PET and 160-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, and reconstruction computer, system software, reconstruction software), vital signal module and other accessories.

    The uMI Panvivo was previously cleared by FDA via K243538. The main modifications performed on the uMI Panvivo (K243538) in this submission are the addition of Deep MAC (also named AI MAC), Digital Gating (also named Self-gating), OncoFocus (also named uExcel Focus and RMC), NeuroFocus (also named HMC), DeepRecon.PET (also named HYPER DLR or DLR), uExcel DPR (also named HYPER DPR or HYPER AiR), and uKinetics. Details about the modifications are listed below:

    • Deep MAC, Deep Learning-based Metal Artifact Correction (also named AI MAC), is an image reconstruction algorithm that combines physical beam-hardening correction and deep learning technology. It is intended to correct artifacts caused by metal implants and external metal objects.

    • Digital Gating (also named Self-gating, cleared via K232712) can automatically extract a respiratory motion signal from the list-mode data during acquisition, a so-called data-driven (DD) method. The respiratory motion signal is calculated by tracking the location of the center-of-distribution (COD) within a body-cavity mask (see the sketch after this list). Using the respiratory motion signal, the system can perform gated reconstruction without a respiratory capture device.

    • OncoFocus (also named uExcel Focus and RMC, cleared via K232712) is an AI-based algorithm to reduce respiratory motion artifacts in PET/CT images and at the same time reduce the PET/CT misalignment.

    • NeuroFocus (also named HMC) is a head motion correction solution; it employs a statistics-based method that corrects motion artifacts automatically using the centroid-of-distribution (COD), without manual parameter tuning, to generate motion-free images.

    • DeepRecon.PET (also named HYPER DLR or DLR, cleared via K193210) uses a deep learning technique to produce images with better SNR (signal-to-noise ratio) in a post-processing procedure.

    • uExcel DPR (also named HYPER DPR or HYPER AiR, cleared via K232712) is a deep learning-based PET reconstruction algorithm designed to enhance the SNR of reconstructed images. High-SNR images improve clinical diagnostic efficacy, particularly under low-count acquisition conditions (e.g., low-dose radiotracer administration or fast scanning protocols).

    • uKinetics (cleared via K232712) is a kinetic modeling toolkit for indirect dynamic-image parametric analysis and direct parametric analysis of multi-pass dynamic data. An image-derived input function (IDIF) can be extracted from anatomical CT images and dynamic PET images. Both the IDIF and a population-based input function (PBIF) can be used as the input function of the Patlak model to generate kinetic images that reveal the biodistribution of the metabolized molecule, using indirect and direct methods (a minimal Patlak sketch follows below).
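
    The center-of-distribution tracking described for Digital Gating can be pictured as an axial centroid computed per short time frame. A minimal sketch, assuming time-binned count images and a precomputed body-cavity mask (the actual list-mode implementation is not described in the document):

        import numpy as np

        def cod_respiratory_signal(frames, mask):
            """Axial center-of-distribution per time frame.

            frames: (T, Z, Y, X) short-time histograms of PET counts (assumed input)
            mask:   (Z, Y, X) boolean body-cavity mask
            Returns a length-T trace whose oscillation tracks respiration.
            """
            z = np.arange(frames.shape[1], dtype=float)
            trace = np.empty(frames.shape[0])
            for t, f in enumerate(frames):
                w = np.where(mask, f, 0.0)      # counts inside the cavity mask
                per_slice = w.sum(axis=(1, 2))  # counts per axial slice
                trace[t] = (z * per_slice).sum() / per_slice.sum()  # axial centroid
            return trace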

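    The Patlak model named for uKinetics fits the late-time linear relation C_T(t)/C_p(t) = Ki * (integral of C_p)/C_p(t) + V0, whose slope Ki is the net influx rate. A minimal sketch of the indirect (TAC-level) fit, with hypothetical units and equilibrium-start time:

        import numpy as np

        def patlak_ki(t, ct, cp, t_star=20.0):
            """Indirect Patlak fit: returns slope Ki and intercept V0.

            t: frame mid-times (min); ct: tissue TAC; cp: plasma input function
            (IDIF or PBIF). t_star marks the assumed start of the linear portion.
            """
            t, ct, cp = (np.asarray(v, dtype=float) for v in (t, ct, cp))
            x = np.array([np.trapz(cp[: i + 1], t[: i + 1])
                          for i in range(len(t))]) / cp
            y = ct / cp
            late = t >= t_star
            ki, v0 = np.polyfit(x[late], y[late], 1)  # slope = Ki, intercept = V0
            return ki, v0
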
    AI/ML Overview

    The provided FDA 510(k) clearance letter describes the uMI Panvivo PET/CT System and mentions several new software functionalities (Deep MAC, Digital Gating, OncoFocus, NeuroFocus, DeepRecon.PET, uExcel DPR, and uKinetics). The document includes performance data for four of these functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC.

    The following analysis focuses on the acceptance criteria and study details for these four AI-based image processing/reconstruction algorithms as detailed in the document. The document presents these as "performance verification" studies.


    Overview of Acceptance Criteria and Device Performance (for DeepRecon.PET, uExcel DPR, OncoFocus, DeepMAC)

    The document details the evaluation of four specific software functionalities: DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC. Each of these has its own set of acceptance criteria and reported performance results, detailed below.

    1. Acceptance Criteria and Reported Device Performance

    Software functionality: DeepRecon.PET
    - Image consistency. Method: measure the mean SUV of phantom background and liver ROIs (regions of interest) and calculate bias; used to evaluate image bias. Acceptance criterion: the bias is less than 5%. Reported performance: Pass.
    - Image background noise. Method: (a) background variation (BV) in the IQ phantom; (b) liver and white-matter signal-to-noise ratio (SNR) in patient cases; used to evaluate noise-reduction performance. Acceptance criterion: DeepRecon.PET has lower BV and higher SNR than OSEM with Gaussian filtering. Reported performance: Pass.
    - Image contrast-to-noise ratio. Method: (a) contrast-to-noise ratio (CNR) of the hot spheres in the IQ phantom; (b) CNR of lesions. CNR is a measure of signal level in the presence of noise; used to evaluate lesion detectability. Acceptance criterion: DeepRecon.PET has higher CNR than OSEM with Gaussian filtering. Reported performance: Pass.

    Software functionality: uExcel DPR
    - Quantitative evaluation. Method: contrast recovery (CR), background variability (BV), and contrast-to-noise ratio (CNR) calculated using NEMA IQ phantom data reconstructed with uExcel DPR and OSEM under acquisition conditions of 1 to 5 minutes per bed; coefficient of variation (COV) calculated using uniform cylindrical phantom data reconstructed with both methods. Acceptance criteria: the averaged CR, BV, and CNR of the uExcel DPR images should be superior to those of the OSEM images; uExcel DPR requires fewer counts to achieve a matched COV compared to OSEM. Reported performance: Pass. NEMA IQ phantom analysis: an average noise reduction of 81% and an average SNR enhancement of 391% were observed. Uniform cylindrical phantom analysis: 1/10 of the counts can obtain a matching noise level.
    - Qualitative evaluation. Method: uExcel DPR images reconstructed at lower counts qualitatively compared with full-count OSEM images. Acceptance criterion: uExcel DPR reconstructions at reduced count levels demonstrate comparable or superior image quality relative to higher-count OSEM reconstructions. Reported performance: Pass. A 1.7~2.5 MBq/kg radiopharmaceutical injection combined with 2~3 minute whole-body scanning (4~6 bed positions) achieves comparable diagnostic image quality. Clinical evaluation by radiologists showed images sufficient for clinical diagnosis, with uExcel DPR exhibiting lower noise, better contrast, and superior sharpness compared to OSEM.

    Software functionality: OncoFocus
    - Volume relative to no motion correction (∆Volume). Method: calculate the volume relative to images without motion correction. Acceptance criterion: the ∆Volume value is less than 0%. Reported performance: Pass.
    - Maximum standardized uptake value relative to no motion correction (∆SUVmax). Method: calculate the SUVmax relative to images without motion correction. Acceptance criterion: the ∆SUVmax value is larger than 0%. Reported performance: Pass.

    Software functionality: DeepMAC
    - Quantitative evaluation. Method: for PMMA phantom data, the average CT value in the area affected by the metal object was compared with the same area of the control image before and after DeepMAC. Acceptance criterion: after using DeepMAC, the difference between the average CT value in the affected area and the same area of the control image does not exceed 10 HU. Reported performance: Pass.
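
    The phantom metrics above (BV, CNR, COV) are ROI-level statistics in the spirit of NEMA NU 2 image-quality analysis. A minimal sketch of their usual definitions (helper names are mine; the submission's exact formulas are not given):

        import numpy as np

        def background_variability(bg_means):
            """BV (%): SD of background-ROI means over their average."""
            bg_means = np.asarray(bg_means, dtype=float)
            return 100.0 * bg_means.std(ddof=1) / bg_means.mean()

        def cnr(hot_mean, bg_means):
            """Contrast-to-noise ratio of a hot sphere against background ROIs."""
            bg_means = np.asarray(bg_means, dtype=float)
            return (hot_mean - bg_means.mean()) / bg_means.std(ddof=1)

        def cov(uniform_roi):
            """Coefficient of variation (%) inside a uniform-phantom ROI."""
            uniform_roi = np.asarray(uniform_roi, dtype=float)
            return 100.0 * uniform_roi.std(ddof=1) / uniform_roi.mean()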

    2. Sample Sizes Used for the Test Set and Data Provenance

    • DeepRecon.PET:

      • Phantoms: NEMA IQ phantoms.
      • Clinical Patients: 20 volunteers.
      • Data Provenance: "collected from various clinical sites" and explicitly stated to be "different from the training data." The document does not specify country of origin or if it's retrospective/prospective, but "volunteers were enrolled" suggests prospective collection for the test set.
    • uExcel DPR:

      • Phantoms: Two NEMA IQ phantom datasets, two uniform cylindrical phantom datasets.
      • Clinical Patients: 19 human subjects.
      • Data Provenance: "derived from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites and during separated time periods," and "different from the training data." "Study cohort" and "human subjects" imply prospective collection for the test set.
    • OncoFocus:

      • Clinical Patients: 50 volunteers.
      • Data Provenance: "collected from general clinical scenarios" and explicitly stated to be "on cases different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.
    • DeepMAC:

      • Phantoms: PMMA phantom datasets.
      • Clinical Patients: 20 human subjects.
      • Data Provenance: "from uMI Panvivo and uMI Panvivo S," "collected from various clinical sites" and explicitly stated to be "different from the training data." "Volunteers were enrolled" suggests prospective collection for the test set.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state that experts established "ground truth" for the quantitative metrics (e.g., SUV, CNR, BV, CR, ∆Volume, ∆SUVmax, HU differences) for the test sets. These seem to be derived from physical measurements on phantoms or calculations from patient image data using established methods.

    • For qualitative evaluation/clinical diagnosis assessment:

      • DeepRecon.PET: Two American Board of Radiologists certified physicians.
      • uExcel DPR: Two American board-certified nuclear medicine physicians.
      • OncoFocus: Two American Board of Radiologists-certified physicians.
      • DeepMAC: Two American Board of Radiologists certified physicians.

      The exact years of experience for these experts are not provided, only their board certification status.

    4. Adjudication Method for the Test Set

    The document states that the radiologists/physicians evaluated images "independently" (uExcel DPR) or simply "were evaluated by" (DeepRecon.PET, OncoFocus, DeepMAC). There is no mention of an adjudication method (such as 2+1 or 3+1 consensus) for discrepancies between reader evaluations for any of the functionalities. The evaluations appear to be separate assessments, with no stated consensus mechanism.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • The document describes qualitative evaluations by radiologists/physicians comparing the AI-processed images to conventionally processed images (OSEM/no motion correction/no MAC). These are MRMC comparative studies in the sense that multiple readers evaluated multiple cases.
    • However, these studies were designed to evaluate the image quality (e.g., diagnostic sufficiency, noise, contrast, sharpness, lesion detectability, artifact reduction) of the AI-processed images compared to baseline images, rather than to measure an improvement in human reader performance (e.g., diagnostic accuracy, sensitivity, specificity, reading time) when assisted by AI vs. without AI.
    • Therefore, the studies were not designed as comparative effectiveness studies measuring the effect size of human reader improvement with AI assistance. They focus on the perceived quality of the AI-processed images themselves.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, for DeepRecon.PET, uExcel DPR, OncoFocus, and DeepMAC, quantitative (phantom and numerical) evaluations were conducted that represent the standalone performance of the algorithms in terms of image metrics (e.g., SUV bias, BV, SNR, CNR, CR, COV, ∆Volume, ∆SUVmax, HU differences). These quantitative results are directly attributed to the algorithm's output without human intervention for the measurement/calculation.
    • The qualitative evaluations by the physicians (described in point 3 above) also assess the output of the algorithm, but with human interpretation.

    7. The Type of Ground Truth Used

    • For Quantitative Evaluations:

      • Phantoms: The "ground truth" for phantom studies is implicitly the known physical properties and geometry of the NEMA IQ and PMMA phantoms, allowing for quantitative measurements (e.g., true SUV, true CR, true signal-to-noise).
      • Clinical Data (DeepRecon.PET, uExcel DPR): For these reconstruction algorithms, "ground-truth images were reconstructed from fully-sampled raw data" for the training set. For the test set, comparisons seem to be made against OSEM with Gaussian filtering or full-count OSEM images as reference/comparison points, rather than an independent "ground truth" established by an external standard.
      • Clinical Data (OncoFocus): Comparisons are made relative to "no motion correction images" (∆Volume and ∆SUVmax), implying these are the baseline for comparison, not necessarily an absolute ground truth.
      • Clinical Data (DeepMAC): Comparisons are made to a "control image" without metal artifacts for quantitative assessment of HU differences.
    • For Qualitative Evaluations:

      • The "ground truth" is based on the expert consensus / qualitative assessment by the American Board-certified radiologists/nuclear medicine physicians, who compared images for attributes like noise, contrast, sharpness, motion artifact reduction, and diagnostic sufficiency. This suggests a form of expert consensus, although no specific adjudication is described. There's no mention of pathology or outcomes data as ground truth.

    8. The Sample Size for the Training Set

    The document provides the following for the training sets:

    • DeepRecon.PET: "image samples with different tracers, covering a wide and diverse range of clinical scenarios." No specific number provided.
    • uExcel DPR: "High statistical properties of the PET data acquired by the Long Axial Field-of-View (LAFOV) PET/CT system enable the model to better learn image features. Therefore, the training dataset for the AI module in the uExcel DPR system is derived from the uEXPLORER and uMI Panorama GS PET/CT systems." No specific number provided.
    • OncoFocus: "The training dataset of the segmentation network (CNN-BC) and the mumap synthesis network (CNN-AC) in OncoFocus was collected from general clinical scenarios. Each subject was scanned by UIH PET/CT systems for clinical protocols. All the acquisitions ensure whole-body coverage." No specific number provided.
    • DeepMAC: Not explicitly stated for the training set. Only validation dataset details are given.

    9. How the Ground Truth for the Training Set Was Established

    • DeepRecon.PET: "Ground-truth images were reconstructed from fully-sampled raw data. Training inputs were generated by reconstructing subsampled data at multiple down-sampling factors." This implies that the "ground truth" for training was derived from high-quality, fully-sampled (and likely high-dose) PET data.
    • uExcel DPR: "Full-sampled data is used as the ground truth, while corresponding down-sampled data with varying down-sampling factors serves as the training input." As with DeepRecon.PET, high-quality full-sampled data served as the ground truth (see the sketch after this list).
    • OncoFocus:
      • For CNN-BC (body cavity segmentation network): "The input data of CNN-BC are CT-derived attenuation coefficient maps, and the target data of the network are body cavity region images." This suggests the target (ground truth) was pre-defined body cavity regions.
      • For CNN-AC (attenuation map (umap) synthesis network): "The input data are non-attenuation-corrected (NAC) PET reconstruction images, and the target data of the network are the reference CT attenuation coefficient maps." The ground truth was "reference CT attenuation coefficient maps," likely derived from actual CT scans.
    • DeepMAC: Not explicitly stated for the training set. The mention of pre-trained neural networks suggests an established training methodology, but the specific ground truth establishment is not detailed.
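
    Both DeepRecon.PET and uExcel DPR pair fully-sampled data (target) with down-sampled data (input) for training. For PET counts, such down-sampling is often emulated by statistical thinning; a minimal sketch under that assumption (the vendor's actual procedure is not specified):

        import numpy as np

        def thin_counts(counts, keep_fraction, rng=None):
            """Simulate a low-count acquisition by binomial thinning.

            counts: integer-valued fully-sampled count data (e.g., a sinogram);
            keep_fraction: e.g. 0.1 to emulate 1/10 of the counts, the level at
            which uExcel DPR matched OSEM noise. Thinning a Poisson count yields
            another Poisson count with the scaled mean.
            """
            rng = rng or np.random.default_rng()
            return rng.binomial(np.asarray(counts, dtype=np.int64), keep_fraction)

        # A training pair would then be:
        #   input  = reconstruct(thin_counts(raw, 0.1))   # low-count reconstruction
        #   target = reconstruct(raw)                     # full-count ground truth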

    K Number: K243547
    Device Name: uMR Ultra
    Date Cleared: 2025-07-17 (244 days)
    Product Code: (not listed)
    Regulation Number: 892.1000
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    The uMR Ultra system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, as well as spectroscopic images, that display internal anatomical structure and/or function of the head, body, and extremities. These images, and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    uMR Ultra is a 3T superconducting magnetic resonance diagnostic device with a 70 cm patient bore and a 2-channel RF transmit system. It consists of a magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and a vital-signal module. uMR Ultra is designed to conform to NEMA and DICOM standards.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the uMR Ultra device, based on the provided FDA 510(k) clearance letter.

    1. Acceptance Criteria and Reported Device Performance

    Given the nature of the document, which focuses on device clearance, multiple features are discussed. I will present the acceptance criteria and results for the AI-powered features, as these are the most relevant to the "AI performance" aspect.

    Acceptance Criteria and Device Performance for AI-Enabled Features

    Feature: ACS
    Acceptance criteria: (1) the ratio of error NRMSE(output)/NRMSE(input) is always less than 1; (2) ACS has higher SNR than CS; (3) ACS has higher standard deviation (SD) / mean value (S) values than CS; (4) Bland-Altman analysis of image intensities acquired using fully sampled and ACS shows less than 1% bias, with all sample points falling in the 95% confidence interval; (5) measurement differences between ACS and fully sampled images of the same structures under 5% are acceptable; (6) radiologists rate all ACS images with equivalent or higher scores in terms of diagnosis quality.
    Reported performance: Pass on criteria (1)-(5). Verified that ACS meets the requirements of clinical diagnosis; all ACS images were rated with equivalent or higher scores in terms of diagnosis quality.

    Feature: DeepRecon
    Acceptance criteria: (1) DeepRecon images achieve higher SNR compared to NADR images; (2) uniformity difference between DeepRecon and NADR images under 5%; (3) intensity difference between DeepRecon and NADR images under 5%; (4) measurement difference on NADR and DeepRecon images of the same structures under 5%; (5) radiologists rate all DeepRecon images with equivalent or higher scores in terms of diagnosis quality.
    Reported performance: (1) NADR: 343.63, DeepRecon: 496.15 (pass); (2) 0.07% (pass); (3) 0.2% (pass); (4) 0% (pass); (5) verified that DeepRecon meets the requirements of clinical diagnosis; all DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality.

    Feature: EasyScan
    Acceptance criteria: No fail cases, and the auto-position success rate P1/(P1+P2+F) exceeds 80%. (P1: pass with auto positioning; P2: pass with user adjustment; F: fail.)
    Reported performance: 99.6%.

    Feature: t-ACS
    Acceptance criteria: (1) AI prediction (AI module output) much closer to the reference than the AI-module input images; (2) better consistency between t-ACS and the reference than between CS and the reference; (3) no large structural difference between t-ACS and the reference; (4) motion-time curves and Bland-Altman analysis show consistency between t-ACS and the reference.
    Reported performance: Pass on all four criteria.

    Feature: AiCo
    Acceptance criteria: (1) AiCo images exhibit improved PSNR and SSIM compared to the originals; (2) no significant structural differences from the gold standard; (3) radiologists confirm image quality is diagnostically acceptable, with fewer motion artifacts and greater benefits for clinical diagnosis.
    Reported performance: Pass on (1) and (2); (3) confirmed.

    Feature: SparkCo
    Acceptance criteria: (1) average detection accuracy > 90%; (2) average PSNR of spark-corrected images higher than spark images; (3) spark artifacts reduced or corrected after enabling SparkCo.
    Reported performance: (1) 94%; (2) 1.6 higher; (3) successfully corrected.

    Feature: ImageGuard
    Acceptance criteria: Success rate P/(P+F) exceeds 90%. (P: pass if a prompt appears for motion / no prompt for no motion; F: fail if a prompt does not appear for motion / a prompt appears for no motion.)
    Reported performance: 100%.

    Feature: EasyCrop
    Acceptance criteria: No fail cases and pass rate P1/(P1+P2+F) exceeds 90%. (P1: other peripheral tissues cropped, meets user requirements; P2: cropped images do not meet user requirements but can be re-cropped; F: EasyCrop fails or original images are not saved.)
    Reported performance: 100%.

    Feature: EasyFACT
    Acceptance criteria: Satisfied-and-acceptable ratio (S+A)/(S+A+F) exceeds 95%. (S: all ROIs placed correctly; A: fewer than five ROIs placed correctly; F: ROIs positioned incorrectly or none placed.)
    Reported performance: 100%.

    Feature: Auto TI Scout
    Acceptance criteria: Average frame difference between the auto-calculated TI and the gold standard is ≤ 1 frame, and the maximum frame difference is ≤ 2 frames.
    Reported performance: Average: 0.37-0.44 frames; maximum: 1-2 frames (pass).

    Feature: Inline MOCO
    Acceptance criteria: Average Dice coefficient of the left ventricular myocardium after motion correction is > 0.87.
    Reported performance: Cardiac perfusion images: 0.92; cardiac dark-blood images: 0.96.

    Feature: Inline ED/ES Phases Recognition
    Acceptance criteria: The average error between the algorithm-calculated ED and ES phase indices and the gold-standard phase indices does not exceed 1 frame.
    Reported performance: 0.13 frames.

    Feature: Inline ECV
    Acceptance criteria: No failure cases; satisfaction rate S/(S+A+F) > 95%. (S: segmentation adheres to the myocardial boundary and the blood-pool ROI is correct; A: small missing/redundant areas but blood-pool ROI correct; F: myocardial mask fails or blood-pool ROI incorrect.)
    Reported performance: 100%.

    Feature: EasyRegister (height estimation)
    Acceptance criteria: PH5 (percentage of height errors within 5%), PH15 (percentage of height errors within 15%), MEAN_H (average height error). (Specific numerical thresholds are not stated beyond these metrics.)
    Reported performance: PH5: 92.4%; PH15: 100%; MEAN_H: 31.53 mm.

    Feature: EasyRegister (weight estimation)
    Acceptance criteria: PW10 (percentage of weight errors within 10%), PW20 (percentage of weight errors within 20%), MEAN_W (average weight error). (Specific numerical thresholds are not stated beyond these metrics.)
    Reported performance: PW10: 68.64%; PW20: 90.68%; MEAN_W: 6.18 kg.

    Feature: EasyBolus
    Acceptance criteria: No fail cases and success rate (P1+P2)/(P1+P2+F) of 100%. (P1: monitoring-point positioning meets user requirements, frame difference ≤ 1 frame; P2: monitoring-point positioning meets user requirements, frame difference = 2 frames; F: auto positioning fails or frame difference > 2 frames.)
    Reported performance: P1: 80%; P2: 20%; total failure rate: 0%; pass: 100%.
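
    Several of the criteria above are simple ratio metrics. Minimal sketches of the pass rate, the PH/PW-style thresholded-error percentage, and the Dice coefficient (helper names are mine):

        import numpy as np

        def pass_rate(p1, p2, f):
            """Auto-positioning success rate P1/(P1+P2+F), as used for EasyScan."""
            return p1 / (p1 + p2 + f)

        def pct_within(rel_errors, tol):
            """Share of cases with |relative error| <= tol (PH5 uses tol=0.05)."""
            rel_errors = np.asarray(rel_errors, dtype=float)
            return 100.0 * float((np.abs(rel_errors) <= tol).mean())

        def dice(a, b):
            """Dice coefficient between boolean masks (Inline MOCO criterion > 0.87)."""
            a, b = np.asarray(a, bool), np.asarray(b, bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())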

    For the rest of the questions, I will consolidate the information where possible, as some aspects apply across multiple AI features.

    2. Sample Sizes Used for the Test Set and Data Provenance

    • ACS: 749 samples from 25 volunteers. Diverse demographic distributions covering various genders, age groups, ethnicity (White, Asian, Black), and BMI (Underweight, Healthy, Overweight/Obesity). Data collected from various clinical sites during separated time periods.
    • DeepRecon: 25 volunteers (nearly 2200 samples). Diverse demographic distributions covering various genders, age groups, ethnicity (White, Asian, Black), and BMI. Data collected from various clinical sites during separated time periods.
    • EasyScan: 444 cases from 116 subjects. Diverse demographic distributions covering various genders, age groups, and ethnicities. Data acquired from UIH MRI equipment (1.5T and 3T). Country of origin is not explicitly stated, but given the company's location (China) and the "U.S. credentials" of the evaluators, the data likely include both Chinese and U.S. subjects. The document states "The testing dataset was collected independently from the training dataset".
    • t-ACS: 1173 cases from 60 volunteers. Diverse demographic distributions covering various genders, age groups, ethnicities (White, Black, Asian) and BMI. Data acquired by uMR Ultra scanners. Country of origin is not explicitly stated.
    • AiCo: 218 samples from 24 healthy volunteers. Diverse demographic distributions covering various genders, age groups, BMI (Under/healthy weight, Overweight/Obesity), and ethnicity (White, Black, Asian). Data provenance not explicitly stated.
    • SparkCo: 59 cases from 15 patients for real-world spark raw data testing. Diverse demographic distributions including gender, age, and BMI (Underweight, Healthy, Overweight, Obesity); ethnicity is reported as Asian, with White listed as "N.A." (the submission treats ethnicity as irrelevant to spark detection). Data acquired by uMR 1.5T and uMR 3T scanners.
    • ImageGuard: 191 cases from 80 subjects. Diverse demographic distributions covering various genders, age groups, and ethnicities (White, Black, Asian). Data acquired from UIH MRI equipment (1.5T and 3T).
    • EasyCrop: 65 samples (N=65) across the 5 intended imaging body parts; the document does not state whether these are 65 distinct subjects or fewer subjects with multiple scans. Diverse demographic distributions covering various genders, age groups, and ethnicities (Asian, Black, White). Data acquired from UIH MRI equipment (1.5T and 3T).
    • EasyFACT: 25 cases from 25 volunteers. Diverse demographic distributions covering various genders, age groups, weight, and ethnicity (Asian, White, Black).
    • Auto TI Scout: 27 patients. Diverse demographic distributions covering various genders, age groups, ethnicity (Asian, White), and BMI. Data acquired from 1.5T and 3T scanners.
    • Inline MOCO: Cardiac Perfusion Images: 105 cases from 60 patients. Cardiac Dark Blood Images: 182 cases from 33 patients. Diverse demographic distributions covering age, gender, ethnicity (Asian, White, Black, Hispanic), BMI, field strength, and disease conditions (Positive, Negative, Unknown).
    • Inline ED/ES Phases Recognition: 95 cases from 56 volunteers, covering various genders, age groups, field strength, disease conditions (NOR, MINF, DCM, HCM, ARV), and ethnicity (Asian, White, Black).
    • Inline ECV: 90 images from 28 patients. Diverse demographic distributions covering gender, age, BMI, field strength, ethnicity (Asian, White), and health status (Negative, Positive, Unknown).
    • EasyRegister (Height/Weight Estimation): 118 cases from 63 patients. Subjects drawn from diverse regions (China, US, France, Germany).
    • EasyBolus: 20 subjects. Diverse demographic distributions covering gender, age, field strength, and ethnicity (Asian).

    Data Provenance (Retrospective/Prospective and Country of Origin):
    The document states "The testing dataset was collected independently from the training dataset, with separated subjects and during different time periods." This establishes that the validation data are distinct from the training data, though it does not say whether collection was retrospective or prospective. For ACS and DeepRecon, the document explicitly mentions "US subjects" for some evaluations; for many other features, the country of origin of the test set is not stated beyond "diverse ethnic groups" or "Asian," which could refer to China (where the company is based) or other Asian populations. The use of "U.S. board-certified radiologists" and a "licensed MRI technologist with U.S. credentials" for evaluation suggests the data are intended to be representative of, or to directly include, the U.S. clinical context.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • ACS & DeepRecon: Evaluated by "American Board of Radiologists certificated physicians" (plural, implying at least two). The exact number is not specified, but the qualifications are strong.
    • EasyScan, ImageGuard, EasyCrop, EasyBolus: Evaluated by a "licensed MRI technologist with U.S. credentials"; for EasyBolus, by "certified professionals in the United States." The exact number of evaluators is not stated.
    • Inline MOCO & Inline ECV: Ground truth annotations done by a "well-trained annotator" and "finally, all ground truth was evaluated by three licensed physicians with U.S. credentials." This indicates a 3-expert consensus/adjudication.
    • SparkCo: "One experienced evaluator" for subjective image quality improvement.
    • For the other features (t-ACS, EasyFACT, Auto TI Scout, Inline ED/ES Phases Recognition, EasyRegister), the ground truth appears to be based on physical measurements (for EasyRegister) or computational references (for t-ACS, fully-sampled images; for ROI placement, defined standards) rather than human expert adjudication.

    4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set

    • Inline MOCO & Inline ECV: "Evaluated by three licensed physicians with U.S. credentials." This implies a 3-expert consensus method for ground truth establishment.
    • ACS, DeepRecon, AiCo: "Evaluated by American Board of Radiologists certificated physicians" (plural). While not explicitly stated as 2+1 or 3+1, it suggests a multi-reader review, where consensus was likely reached for the reported diagnostic quality.
    • SparkCo: "One experienced evaluator" was used for subjective evaluation, implying no formal multi-reader adjudication for this specific metric.
    • For features evaluated by MRI technologists (EasyScan, ImageGuard, EasyCrop, EasyBolus) and features scored against quantitative references (t-ACS, EasyFACT, Auto TI Scout, EasyRegister, Inline ED/ES Phases Recognition), the "ground truth" is either defined by the system's intended function (e.g., correct auto-positioning) or derived mathematically, so a traditional human adjudication method does not apply in the same way as it does for diagnostic image interpretation.
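
    None of these descriptions specifies how the multiple expert reviews were merged. For segmentation ground truth of the Inline MOCO/ECV type, one common consensus scheme (purely hypothetical here, since the submission does not state the method) is pixelwise majority voting across expert masks:

        import numpy as np

        def majority_vote(masks):
            """Pixelwise strict-majority consensus of binary expert masks."""
            votes = np.stack([np.asarray(m, bool) for m in masks]).sum(axis=0)
            return votes > len(masks) / 2

        # Three hypothetical 4x4 expert masks
        rng = np.random.default_rng(0)
        masks = [rng.integers(0, 2, size=(4, 4)) for _ in range(3)]
        print(majority_vote(masks).astype(int))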

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    The document does not explicitly state that a formal MRMC comparative effectiveness study was performed to quantify the effect size of how much human readers improve with AI vs. without AI assistance.

    Instead, the evaluations for ACS, DeepRecon, and AiCo involve "American Board of Radiologists certificated physicians" who "verified that [AI feature] meets the requirements of clinical diagnosis. All [AI feature] images were rated with equivalent or higher scores in terms of diagnosis quality." For AiCo, they confirmed images "exhibit fewer motion artifacts and offer greater benefits for clinical diagnosis." This is a qualitative assessment of diagnostic quality by experts, but not a comparative effectiveness study in the sense of measuring reader accuracy or confidence change with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, for many of the AI-enabled features, a standalone performance evaluation was conducted:

    • ACS: Performance was evaluated by comparing quantitative metrics (NRMSE, SNR, Resolution, Contrast, Uniformity, Structure Measurement) against fully-sampled images or CS. This is a standalone evaluation.
    • DeepRecon: Quantitative metrics (SNR, uniformity, contrast, structure measurement) were compared between DeepRecon and NADR (without DeepRecon) images. This is a standalone evaluation.
    • t-ACS: Quantitative tests (MAE, PSNR, SSIM, structural measurements, motion-time curves) were performed comparing t-ACS and CS results against a reference. This is a standalone evaluation.
    • AiCo: PSNR and SSIM values were quantitatively compared, and structural dimensions were assessed, between AiCo processed images and original/motionless reference images. This is a standalone evaluation.
    • SparkCo: Spark detection accuracy was calculated, and PSNR of spark-corrected images was compared to original spark images. This is a standalone evaluation.
    • Inline MOCO: Evaluated using Dice coefficient, a quantitative metric for segmentation accuracy. This is a standalone evaluation.
    • Inline ED/ES Phases Recognition: Evaluated by quantifying the error between algorithm output and gold standard phase indices. This is a standalone evaluation.
    • Inline ECV: Evaluated by quantitative scoring for segmentation accuracy (S, A, F criteria). This is a standalone evaluation.
    • EasyRegister (Height/Weight): Evaluated by quantitative error metrics (PH5, PH15, MEAN_H; PW10, PW20, MEAN_W) against physical measurements. This is a standalone evaluation.

    Features like EasyScan, ImageGuard, EasyCrop, and EasyBolus involve automated workflow assistance where the direct "diagnostic" outcome isn't solely from the algorithm, but the automated function's performance is evaluated in a standalone manner against defined success criteria.
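
    The PSNR and SSIM comparisons cited for AiCo and SparkCo are standard reference-based metrics. A minimal sketch using scikit-image, assuming co-registered 2D magnitude images on a common scale (the library calls are standard; the example data are synthetic):

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def quality_vs_reference(test_img, ref_img):
            """Return (PSNR, SSIM) of a test image against a reference image."""
            data_range = float(ref_img.max() - ref_img.min())
            psnr = peak_signal_noise_ratio(ref_img, test_img, data_range=data_range)
            ssim = structural_similarity(ref_img, test_img, data_range=data_range)
            return psnr, ssim

        # Synthetic demo: a noisy copy of a reference image
        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        noisy = ref + 0.05 * rng.standard_normal((64, 64))
        print(quality_vs_reference(noisy, ref))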

    7. The Type of Ground Truth Used

    The type of ground truth varies depending on the specific AI feature:

    • Reference/Fully-Sampled Data:
      • ACS, DeepRecon, t-ACS, AiCo: Fully-sampled k-space data transformed to image space served as "ground-truth" for training and as a reference for quantitative performance metrics in testing. For AiCo, "motionless data" served as gold standard.
      • SparkCo: Simulated spark artifacts generated from "spark-free raw data" provided ground truth for spark point locations in training.
    • Expert Consensus/Subjective Evaluation:
      • ACS, DeepRecon, AiCo: "American Board of Radiologists certificated physicians" provided qualitative assessment of diagnostic image quality ("equivalent or higher scores," "diagnostically acceptable," "fewer motion artifacts," "greater benefits for clinical diagnosis").
      • EasyScan, ImageGuard, EasyCrop, EasyBolus: "Licensed MRI technologist with U.S. credentials" or "certified professionals in the United States" performed subjective evaluation against predefined success criteria for the workflow functionality.
      • SparkCo: One experienced evaluator for subjective image quality improvement.
    • Anatomical/Physiological Measurements / Defined Standards:
      • EasyFACT: Defined criteria for ROI placement within liver parenchyma, avoiding borders/vascular structures.
      • Auto TI Scout, Inline ED/ES Phases Recognition: Gold standard phase indices were presumably established by expert review or a reference method.
      • Inline MOCO & Inline ECV: Ground truth for cardiac left ventricular myocardium segmentation was established by a "well-trained annotator" and "evaluated by three licensed physicians with U.S. credentials" (expert consensus based on anatomical boundaries).
      • EasyRegister (Height/Weight Estimation): "Precisely measured height/weight value" using "physical examination standards."
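
    The "fully-sampled k-space data ... transformed to image space" ground truth used by several features above is, at its core, an inverse Fourier transform of the raw data. A minimal single-coil sketch is shown below; real reconstructions also handle multi-coil combination and filtering, which the submission does not detail.

        import numpy as np

        def kspace_to_image(kspace):
            """Reconstruct a magnitude image from fully sampled 2D k-space."""
            img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
            return np.abs(img)

        # Round-trip check on a synthetic phantom
        rng = np.random.default_rng(0)
        phantom = rng.random((128, 128))
        kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
        print(np.allclose(kspace_to_image(kspace), phantom))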

    8. The Sample Size for the Training Set

    • ACS: 1,262,912 samples (collected from variety of anatomies, image contrasts, and acceleration factors, scanned by UIH MRI systems).
    • DeepRecon: 165,837 samples (collected from 264 volunteers, scanned by UIH MRI systems for multiple body parts and clinical protocols).
    • EasyScan: Training-set size not explicitly detailed; the document notes only that the testing dataset was "collected independently from the training dataset".
    • t-ACS: Datasets collected from 108 volunteers ("large number of samples").
    • AiCo: 140,000 images collected from 114 volunteers across multiple body parts and clinical protocols.
    • SparkCo: 24,866 spark slices generated from 61 cases collected from 10 volunteers.
    • EasyFACT, Auto TI Scout, Inline MOCO, Inline ED/ES Phases Recognition, Inline ECV, EasyRegister, EasyBolus: The document states that training data was independent of testing data but does not provide specific sample sizes for the training datasets for these features.

    9. How the Ground Truth for the Training Set was Established

    • ACS, DeepRecon, t-ACS, AiCo: "Fully-sampled k-space data were collected and transformed to image space as the ground-truth." For DeepRecon specifically, "multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." For AiCo, "motionless data" served as gold standard. All training data were "manually quality controlled."
    • SparkCo: "The training dataset... was generated by simulating spark artifacts from spark-free raw data... with the corresponding ground truth (i.e., the location of spark points)."
    • Inline MOCO & Inline ECV: The document states "all ground truth was annotated by a well-trained annotator. The annotator used an interactive tool to observe the image, and then labeled the left ventricular myocardium in the image."
    • For EasyScan, EasyFACT, Auto TI Scout, Inline ED/ES Phases Recognition, EasyRegister, and EasyBolus, training ground truth establishment is not explicitly detailed; the document states only that the testing data were independent of the training data. For EasyRegister, physical measurements are implied as the basis for ground truth.
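
    The SparkCo training-set description (simulated spark artifacts generated from spark-free raw data, with spark point locations as ground truth) corresponds to injecting high-amplitude spikes into clean k-space. A hedged sketch of that idea follows; the spike amplitude, count, and placement are arbitrary illustrations, since the actual simulation parameters are not disclosed.

        import numpy as np

        def add_spark(kspace, rng, amplitude=50.0):
            """Inject one synthetic spark (a high-amplitude k-space spike);
            return the corrupted k-space and the spike location (ground truth)."""
            ky = int(rng.integers(0, kspace.shape[0]))
            kx = int(rng.integers(0, kspace.shape[1]))
            corrupted = kspace.copy()
            corrupted[ky, kx] += amplitude * np.abs(kspace).max()
            return corrupted, (ky, kx)

        rng = np.random.default_rng(0)
        clean_kspace = np.fft.fft2(rng.random((64, 64)))
        spark_kspace, spark_location = add_spark(clean_kspace, rng)
        print("ground-truth spark location:", spark_location)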

    K Number
    K243397
    Device Name
    uMR 680
    Date Cleared
    2025-07-16

    (258 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third party, Expedited review
    Intended Use

    The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) is to request modifications for the cleared uMR 680(K240744). The modifications performed on the uMR 680 in this submission are due to the following changes that include:
    (1) Addition of RF coils and corresponding accessories: Breast Coil -12, Biopsy Configuration, Head Coil-16, Positioning Couch-top, Coil Support.
    (2) Deletion of VSM (Wireless UIH Gating Unit REF 453564324621, ECG module Ref 989803163121, SpO2 module Ref 989803163111).
    (3) Modification of the dimensions of Detachable table: from width 826mm, height 880mm, length 2578mm to width 810mm, height 880mm, length 2505mm.
    (4) Addition and modification of pulse sequences
    a) New sequences: gre_snap, gre_quick_4dncemra, gre_pass, gre_mtp, gre_trass, epi_dwi_msh, epi_dti_msh, svs_hise.
    b) Added associated options for certain sequences: fse(add Silicone-Only Imaging, MicroView, MTC, MultiBand), fse_arms(add Silicone-Only Imaging), fse_ssh(add Silicone-Only Imaging), fse_mx(add CEST, T1rho, MicroView, MTC), fse_arms_dwi(add MultiBand), asl_3d(add multi-PLD), gre(add T1rho, MTC, output phase image), gre_fsp(add FSP+), gre_bssfp(add CASS, TI Scout), gre_fsp_c(add 3D LGE, DB/GB PSIR), gre_bssfp_ucs(add real time cine), gre_fq(add 4D Flow), epi_dwi(add IVIM), epi_dti(add DKI, DSI).
    c) Added additional accessory equipment required for certain sequences: gre_bssfp(add Virtual ECG Trigger).
    d) Name change of certain sequences: gre_fine(old name: gre_bssfp_fi).
    e) Added applicable body parts: gre_ute, gre_fine, fse_mx.
    (5) Addition of imaging reconstruction methods: AI-assisted Compressed Sensing (ACS), Spark artifact Correction (SparkCo).
    (6) Addition of imaging processing methods: Inline Cardiac Function, Inline ECV, Inline MRS, Inline MOCO, 4D Flow, SNAP, CEST, T1rho, FSP+, CASS, PASS, MTP.
    (7) Addition of workflow features: TI Scout, EasyCrop, ImageGuard, Mocap, EasyFACT, Auto Bolus tracker, Breast Biopsy and uVision.
    (8) Modification of workflow features: EasyScan(add applicable body parts)

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the uMR 680 Magnetic Resonance Imaging System outlines performance data for several new features and algorithms.

    Here's an analysis of the acceptance criteria and the studies that prove the device meets them for the AI-assisted Compressed Sensing (ACS), SparkCo, Inline ED/ES Phases Recognition, and Inline MOCO algorithms.


    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Algorithm: AI-assisted Compressed Sensing (ACS)
    • AI Module Verification Test. Acceptance Criteria: the error ratio NRMSE(output)/NRMSE(input) is always less than 1. Reported Performance: Pass.
    • Image SNR. Acceptance Criteria: ACS has higher SNR than CS. Reported Performance: Pass (ACS shown to perform better than CS in SNR).
    • Image Resolution. Acceptance Criteria: ACS has higher (standard deviation (SD) / mean value (S)) values than CS. Reported Performance: Pass (ACS shown to perform better than CS in resolution).
    • Image Contrast. Acceptance Criteria: Bland-Altman analysis of image intensities acquired using fully sampled and ACS acquisitions shows less than 1% bias, with all sample points falling within the 95% confidence interval. Reported Performance: Pass (less than 1% bias; all sample points within the 95% confidence interval).
    • Image Uniformity. Acceptance Criteria: ACS achieves essentially the same image uniformity as the fully sampled image. Reported Performance: Pass.
    • Structure Measurement. Acceptance Criteria: differences in measurements of the same structures on ACS and fully sampled images are under 5%. Reported Performance: Pass.
    • Clinical Evaluation. Acceptance Criteria: all ACS images are rated with equivalent or higher scores in terms of diagnostic quality. Reported Performance: "All ACS images were rated with equivalent or higher scores in terms of diagnosis quality" (implicitly, Pass).

    Feature/Algorithm: SparkCo
    • Spark Detection Accuracy. Acceptance Criteria: average detection accuracy larger than 90%. Reported Performance: average detection accuracy of 94%.
    • Spark Correction Performance (Simulated). Acceptance Criteria: average PSNR of spark-corrected images higher than the spark images; spark artifacts reduced or corrected. Reported Performance: average PSNR of spark-corrected images is 1.6 dB higher than the spark images; images with spark artifacts were successfully corrected after enabling SparkCo.
    • Spark Correction Performance (Real-world). Acceptance Criteria: spark artifacts reduced or corrected (evaluated by one experienced evaluator assessing image-quality improvement). Reported Performance: images with spark artifacts were successfully corrected after enabling SparkCo.

    Feature/Algorithm: Inline ED/ES Phases Recognition
    • Error between algorithm and gold standard. Acceptance Criteria: the average error does not exceed 1 frame. Reported Performance: the error between the frame indices calculated by the algorithm for the ED and ES of all test data and the gold-standard frame indices is 0.13 frames, which does not exceed 1 frame.

    Feature/Algorithm: Inline MOCO
    • Dice Coefficient (left ventricular myocardium after motion correction), Cardiac Perfusion Images. Acceptance Criteria: average Dice coefficient greater than 0.87. Reported Performance: 0.92. Subgroup analysis also showed good generalization: age 0.92-0.93; gender 0.92; ethnicity 0.91-0.92; BMI 0.91-0.95; magnetic field strength 0.92-0.93; disease conditions 0.91-0.93.
    • Dice Coefficient (left ventricular myocardium after motion correction), Cardiac Dark Blood Images. Acceptance Criteria: average Dice coefficient greater than 0.87. Reported Performance: 0.96. Subgroup analysis also showed good generalization: age 0.95-0.96; gender 0.96; ethnicity 0.95-0.96; BMI 0.96-0.98; magnetic field strength 0.96; disease conditions 0.96-0.97.
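
    Two of the ACS criteria above have simple closed forms: the AI-module check (the ratio NRMSE(output)/NRMSE(input) must stay below 1) and the Bland-Altman contrast analysis (bias under 1%, sample points within the 95% limits of agreement). A minimal sketch under the textbook definitions follows; the submission does not publish its exact implementations, so the normalization choice and example data here are assumptions.

        import numpy as np

        def nrmse(estimate, reference):
            """Root-mean-square error normalized by the reference intensity range."""
            rmse = np.sqrt(np.mean((estimate - reference) ** 2))
            return rmse / (reference.max() - reference.min())

        def nrmse_ratio(module_output, module_input, reference):
            """ACS AI-module verification: this ratio must always be below 1."""
            return nrmse(module_output, reference) / nrmse(module_input, reference)

        def bland_altman(a, b):
            """Return (bias, lower, upper), where lower/upper are the 95% limits
            of agreement for paired measurements."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, bias - half_width, bias + half_width

        # Hypothetical demo: a lightly corrupted output vs. a heavily corrupted input
        rng = np.random.default_rng(0)
        ref = rng.random((32, 32))
        out = ref + 0.01 * rng.standard_normal(ref.shape)
        inp = ref + 0.10 * rng.standard_normal(ref.shape)
        print(nrmse_ratio(out, inp, ref))          # expected: well below 1
        print(bland_altman(out.ravel(), ref.ravel()))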

    2. Sample Size Used for the Test Set and Data Provenance

    • AI-assisted Compressed Sensing (ACS):
      • Sample Size: 1724 samples from 35 volunteers.
      • Data Provenance: Diverse demographic distributions (gender, age groups, ethnicity, BMI) covering various clinical sites and separated time periods. Implied to be prospective or a carefully curated retrospective set, collected specifically for validation on the uMR 680 system, and independent of training data.
    • SparkCo:
      • Simulated Spark Testing Dataset: 159 spark slices (generated from spark-free raw data).
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
      • Data Provenance: Real-world data acquired from uMR 1.5T and uMR 3T scanners, covering representative clinical protocols. The report specifies "Asian" for 100% of the real-world dataset's ethnicity, noting that performance is "irrelevant with human ethnicity" due to the nature of spark signal detection. This is retrospective data.
    • Inline ED/ES Phases Recognition:
      • Sample Size: 95 cases from 56 volunteers.
      • Data Provenance: Includes various ages, genders, field strengths (1.5T, 3.0T), disease conditions (NOR, MINF, DCM, HCM, ARV), and ethnicities (Asian, White, Black). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.
    • Inline MOCO:
      • Sample Size: 287 cases in total (105 cardiac perfusion images from 60 patients, 182 cardiac dark blood images from 33 patients).
      • Data Provenance: Acquired from 1.5T and 3T magnetic resonance imaging equipment from UIH. Covers various ages, genders, ethnicities (Asian, White, Black, Hispanic), BMI, field strengths (1.5T, 3.0T), and disease conditions (Positive, Negative, Unknown). The data is independent of the training data. Implied to be retrospective from UIH MRI systems.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • AI-assisted Compressed Sensing (ACS):
      • Number of Experts: More than one (plural "radiologists" used).
      • Qualifications: American Board of Radiologists certificated physicians.
    • SparkCo:
      • Number of Experts: One expert for real-world SparkCo evaluation.
      • Qualifications: "one experienced evaluator." (Specific qualifications like board certification or years of experience are not provided for this specific evaluator).
    • Inline ED/ES Phases Recognition:
      • Number of Experts: Not explicitly stated for ground truth establishment ("gold standard phase indices"). It implies a single, established method or perhaps a consensus by a team, but details are missing.
    • Inline MOCO:
      • Number of Experts: Three licensed physicians.
      • Qualifications: U.S. credentials.

    4. Adjudication Method for the Test Set

    • AI-assisted Compressed Sensing (ACS): Not explicitly stated, but implies individual review by "radiologists" to rate diagnostic quality.
    • SparkCo: For the real-world dataset, evaluation by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: Not explicitly stated; "gold standard phase indices" are referenced, implying a pre-defined or established method without detailing a multi-reader adjudication process.
    • Inline MOCO: "Finally, all ground truth was evaluated by three licensed physicians with U.S. credentials." This suggests an adjudication or confirmation process, but the specific method (e.g., 2+1, consensus) is not detailed beyond "evaluated by."

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No MRMC comparative effectiveness study was explicitly described to evaluate human reader improvement with AI assistance. The described studies focus on the standalone performance of the algorithms or a qualitative assessment of images by radiologists for diagnostic quality.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was done for all listed algorithms.
      • ACS: Evaluated quantitatively (SNR, Resolution, Contrast, Uniformity, Structure Measurement) and then qualitatively by radiologists. The quantitative metrics are standalone.
      • SparkCo: Quantitative metrics (Detection Accuracy, PSNR) and qualitative assessment by an experienced evaluator. The quantitative metrics are standalone.
      • Inline ED/ES Phases Recognition: Evaluated quantitatively as the error between algorithmic output and gold standard. This is a standalone performance metric.
      • Inline MOCO: Evaluated using the Dice coefficient, which is a standalone quantitative metric comparing algorithm output to ground truth.
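
    The Dice coefficient cited for Inline MOCO measures overlap between the algorithm's myocardial mask and the ground-truth mask. A minimal sketch of the standard definition, with hypothetical toy masks:

        import numpy as np

        def dice(mask_a, mask_b, eps=1e-8):
            """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

        # Hypothetical check against the 0.87 acceptance threshold
        gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
        pred = np.zeros((8, 8), bool); pred[2:6, 3:6] = True
        score = dice(pred, gt)
        print(f"Dice = {score:.2f}, pass: {score > 0.87}")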

    7. The Type of Ground Truth Used

    • AI-assisted Compressed Sensing (ACS):
      • Quantitative: Fully-sampled k-space data transformed to image space.
      • Clinical: Radiologist evaluation ("American Board of Radiologists certificated physicians").
    • SparkCo:
      • Spark Detection Module: Location of spark points (ground truth for simulated data).
      • Spark Correction Module: Visual assessment by "one experienced evaluator."
    • Inline ED/ES Phases Recognition: "Gold standard phase indices" (method for establishing this gold standard is not detailed, but implies expert-derived or a highly accurate reference).
    • Inline MOCO: Left ventricular myocardium segmentation annotated by a "well-trained annotator" and "evaluated by three licensed physicians with U.S. credentials." This is an expert-consensus ground truth based on anatomical boundaries.

    8. The Sample Size for the Training Set

    • AI-assisted Compressed Sensing (ACS): 1,262,912 samples (from a variety of anatomies, image contrasts, and acceleration factors).
    • SparkCo: 24,866 spark slices (generated from 61 spark-free cases from 10 volunteers).
    • Inline ED/ES Phases Recognition: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."
    • Inline MOCO: Not explicitly provided, but stated to be "independent of the data used to test the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    • AI-assisted Compressed Sensing (ACS): Fully-sampled k-space data were collected and transformed to image space as the ground-truth. All data were manually quality controlled.
    • SparkCo: "The training dataset for the AI module in SparkCo was generated by simulating spark artifacts from spark-free raw data... a total of 24,866 spark slices, along with the corresponding ground truth (i.e., the location of spark points), were generated for training." This indicates a hybrid approach using real spark-free data to simulate and generate the ground truth for spark locations.
    • Inline ED/ES Phases Recognition: Not explicitly provided.
    • Inline MOCO: Not explicitly provided.

    K Number
    K243666
    Device Name
    uOmniscan
    Date Cleared
    2025-06-17

    (202 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third party, Expedited review
    Intended Use

    uOmniscan is a software application for providing real-time communication between remote and local users, providing remote read-only or fully controlled access to connected medical imaging devices, including the ability to remotely initiate MR scans. It is also used for training medical personnel in the use of medical imaging devices. It is a vendor-neutral solution. Access must be authorized by the onsite user operating the system. Images reviewed remotely are not for diagnostic use.

    Device Description

    uOmniscan is a medical software designed to address the skill differences among technicians and their need for immediate support, allowing them to interact directly with remote experts connected to the hospital network. By collaboration between on-site technicians and remote experts, it enables technicians or radiologists located in different geographic areas to remotely assist in operating medical imaging devices. uOmniscan provides healthcare professionals with a private, secure communication platform for real-time image viewing and collaboration across multiple sites and organizations.

    uOmniscan establishes remote connections with Modality through application, KVM (Keyboard, Video, Mouse) switch, or UIH's proprietary access tool uRemote Assistant. The connection can be made in full control or read-only mode, assisting on-site technicians in seeking guidance and real-time support on scan-related issues, including but not limited to training, protocol evaluation, and scan parameter management, with the capability to remotely initiate scans for MR imaging equipment. In addition to supporting remote access and control of modality scanners, uOmniscan also supports common communication methods including real-time video, as well as audio calls and text chats between users.

    Images viewed remotely are not for diagnostic purposes.

    It is a vendor-neutral solution compatible with existing multimodality equipment in healthcare networks, allowing healthcare professionals to share expertise and increase work efficiency, while enhancing the communication capabilities among healthcare professionals at different locations.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the uOmniscan device focuses primarily on demonstrating substantial equivalence to a predicate device, as opposed to proving novel clinical efficacy or diagnostic accuracy. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context are related to performance verification and usability testing to ensure the device performs its intended functions safely and effectively, similar to the predicate device.

    The document states that "No clinical study was required," and "No animal study was required." This indicates that the device's function (remote control and communication for medical imaging) does not require a traditional clinical outcomes study in the same way a diagnostic AI algorithm might. Therefore, the "study that proves the device meets the acceptance criteria" refers to the software verification and validation testing, performance verification, and usability studies conducted.

    Here's a breakdown of the requested information based on the provided text:

    Acceptance Criteria and Device Performance

    The acceptance criteria for this type of device are primarily functional and usability-based, ensuring it performs its tasks reliably and is safe for user interaction.

    Functional Verification:
    1. Establish real-time communication between remote and local users. Reported Performance: testing conducted; results indicate successful establishment of real-time communication.
    2. Establish a fully controlled session with a medical imaging device via uRemote Assistant. Reported Performance: testing conducted; results indicate successful establishment of fully controlled sessions.
    3. Establish a fully controlled session with a medical imaging device via KVM switch. Reported Performance: testing conducted; results indicate successful establishment of fully controlled sessions.
    4. Network status identification. Reported Performance: testing conducted; results indicate successful network status identification.
    5. Performance evaluation under different network conditions/speeds. Reported Performance: testing conducted; results indicate satisfactory performance across varying network conditions.
    6. Indicating network state to users. Reported Performance: testing conducted; results indicate successful indication of network state to users.

    Usability Verification:
    1. Design of user interface and manual effectively decreases the probability of use errors. Reported Performance: usability study results: "Design of user interface and manual effectively decrease the probability of use errors."
    2. All risk control measures are implementable and understood across user expertise levels. Reported Performance: usability study results: "All risk control measures are implementable and understood across user expertise levels."
    3. No unacceptable residual use-related risks. Reported Performance: usability study results: "The product has no unacceptable residual use-related risks."

    Study Details

    Given the nature of the device and the FDA's clearance pathway, the "study" referred to is a series of engineering and usability tests rather than a clinical trial.

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not specify quantitative sample sizes for the functional performance or usability testing datasets (e.g., number of communication sessions tested, specific network conditions, or number of users in usability testing). It broadly states that "Evaluation testing was conducted to verify the functions" and "Expert review for formative evaluation and usability testing for summative evaluation were conducted."
    • Data Provenance: Not explicitly stated, but given the manufacturer is "Shanghai United Imaging Healthcare Co., Ltd." in China, it's reasonable to infer that the testing likely occurred in a controlled environment by the manufacturer or their designated testing facilities, potentially in China or other regions where their systems are developed/used. The tests conducted (software V&V, performance verification, usability) are typically internal, controlled studies rather than real-world data collection. The data would be prospective in the sense that the tests were designed and executed to evaluate the device.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Ground Truth Establishment: For functional and usability testing of a remote control/communication software, "ground truth" isn't established in the clinical sense (e.g., disease presence). Instead, the "truth" is whether the software correctly performs its programmed functions and is usable.
    • Experts: The usability study involved "participation of experts and user representatives." The specific number or detailed qualifications of these "experts" (e.g., 'radiologist with 10 years experience') are not specified in this summary. They would likely be human factors engineers, software testers, and potentially medical professionals (radiologists, technologists) acting as user representatives.

    4. Adjudication method for the test set

    • Adjudication Method: Not applicable or specified in the traditional sense of medical image interpretation (e.g., 2+1 radiology review). For software functional testing, results are typically binary (pass/fail) based on predefined test cases. For usability, a consensus or qualitative analysis of user feedback and observations would be used, but a formal "adjudication method" as seen in clinical reading studies is not mentioned.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states: "No clinical study was required." This type of study is typically performed for AI or CAD devices that assist with diagnostic interpretation, which is not the primary function of uOmniscan. The device is for remote control and communication, and "Images reviewed remotely are not for diagnostic use."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: The "Performance Verification" section details tests of the software's functional capabilities (e.g., establishing communication, network status). These could be considered "standalone" in the sense that they verify the software's inherent ability to perform these tasks. However, the device's core purpose is "real-time communication between remote and local users" and "access to connected medical imaging devices," implying a human-in-the-loop for its intended use. The testing confirms the software's readiness for this human-in-the-loop interaction rather than a pure standalone diagnostic performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: As noted, traditional "ground truth" (e.g., pathology, clinical outcomes, expert consensus on disease) is not applicable for this device's verification. Instead, the ground truth for performance testing is the predefined functional requirements and expected system behavior, as well as the principles of human factors engineering and usability standards for the usability study.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable/not specified. The uOmniscan device is described as "software only solution" for remote control and communication. There is no mention of it being an AI/ML algorithm that requires a "training set" in the context of machine learning model development. The verification and validation data are for testing the implemented software features, not for training a model.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set Establishment: Not applicable. As the device does not appear to be an AI/ML system requiring a training set, the concept of establishing ground truth for a training set does not apply here.

    K Number
    K243122
    Device Name
    uMR Omega
    Date Cleared
    2025-05-21

    (233 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Predicate For
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third party, Expedited review
    Intended Use

    The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and spectroscopic images, and that display internal anatomical structure and/or function of the head, body and extremities.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.

    Device Description

    The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.

    This traditional 510(k) is to request modifications for the cleared uMR Omega(K240540). The modifications performed on the uMR Omega in this submission are due to the following changes that include:

    1. Addition of RF coils and corresponding accessories: Breast Coil - 12, Biopsy Configuration, Head Coil - 16, Positioning Couch-top, Coil Support, Tx/Rx Head Coil.

    2. Modification of the mmw component name: from mmw100 to mmw101.

    3. Modification of the dimensions of detachable table: from width 826mm, height 880mm, length 2578mm to width 810mm, height 880mm, length 2505mm.

    4. Addition and modification of pulse sequences:

      • a) New sequences: gre_pass, gre_mtp, epi_dti_msh, gre_fsp_c(3D LGE).

      • b) Added Associated options for certain sequences: fse(MicroView), fse_mx(MicroView), gre(Output phase image), gre_swi(QSM),
        gre_fsp_c(DB/GB PSIR), gre_bssfp(TI Scout), gre_bssfp_ucs(Real Time Cine), epi_dwi(IVIM), epi_dti(DSI, DKI).

      • c) Added Additional accessory equipment required for certain sequences: gre_bssfp (Virtual ECG Trigger).

      • d) Added applicable body parts: epi_dwi_msh, gre_fine, fse_mx.

    5. Addition of imaging processing methods: Inline Cardiac function, Inline ECV, Inline MRS, Inline MOCO and MTP.

    6. Addition of workflow features: EasyFACT, TI Scout, EasyCrop, ImageGuard, MoCap and Breast Biopsy.

    7. Addition of image reconstruction methods: SparkCo.

    8. Modification of function: uVision (add Body Part Recognition), EasyScan(add applicable body parts).

    The modification does not affect the intended use or alter the fundamental scientific technology of the device.

    AI/ML Overview

    The provided text describes modifications to an existing MR diagnostic device (uMR Omega) and the non-clinical testing performed to demonstrate substantial equivalence to predicate devices. It specifically details the acceptance criteria and study results for two components: SparkCo (an AI algorithm for spark artifact correction) and Inline ECV (an image processing method for extracellular volume fraction calculation).

    Here's a breakdown of the requested information:


    Acceptance Criteria and Device Performance for uMR Omega

    1. Table of Acceptance Criteria and Reported Device Performance

    For SparkCo (Spark artifact Correction):

    Test Part: Spark detection accuracy
    Test Method: based on the real-world testing dataset, calculate the detection accuracy by comparing the spark detection results with the ground truth.
    Acceptance Criteria: the average detection accuracy needs to be larger than 90%.
    Reported Device Performance: the average detection accuracy is 94%.

    Test Part: Spark correction performance
    Test Methods: (1) based on the simulated spark testing dataset, calculate the PSNR (peak signal-to-noise ratio) of the spark-corrected images and the original spark images; (2) based on the real-world spark dataset, have one experienced evaluator assess the image-quality improvement between the spark-corrected images and the spark images.
    Acceptance Criteria: (1) the average PSNR of spark-corrected images needs to be higher than the spark images; (2) spark artifacts need to be reduced or corrected after enabling SparkCo.
    Reported Device Performance: (1) the average PSNR of spark-corrected images is 1.6 dB higher than the spark images; (2) the images with spark artifacts were successfully corrected after enabling SparkCo.
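
    The spark-detection accuracy above compares detected spark locations against ground-truth locations. The submission does not state the matching rule, so the sketch below assumes exact coordinate matching purely for illustration:

        def detection_accuracy(detected, ground_truth):
            """Fraction of ground-truth spark locations found by the detector.

            Assumes exact (ky, kx) matching; the submission does not specify
            the tolerance actually used.
            """
            detected = set(detected)
            hits = sum(1 for point in ground_truth if point in detected)
            return hits / len(ground_truth) if ground_truth else 1.0

        print(detection_accuracy(detected=[(3, 7), (10, 2)],
                                 ground_truth=[(3, 7), (10, 2), (5, 5)]))  # ~0.67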

    For Inline ECV (Extracellular Volume Fraction):

    Validation Type: Passing rate
    Acceptance Criteria: to verify the effectiveness of the algorithm, a subjective evaluation method was used. The segmentation result of each case was obtained with the algorithm, and the segmentation mask was evaluated against the criteria below. The test pass criteria were: no failure cases, and a satisfaction rate S/(S+A+F) exceeding 95%.
    • Satisfied (S): the segmented myocardial boundary adheres to the myocardial boundary, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
    • Acceptable (A): there are small missing or redundant areas in the myocardial segmentation, but not obvious ones, and the blood pool ROI is within the blood pool, excluding the papillary muscles.
    • Fail (F): the myocardial mask does not adhere to the myocardial boundary, or the blood pool ROI is not within the blood pool, or the blood pool ROI contains papillary muscles.
    Reported Device Performance (Summary from Subgroup Analysis): the segmentation algorithm performed as expected across subgroups. Total satisfaction rate (S) was 100% for all monitored demographic and acquisition subgroups, meaning no Fail (F) or Acceptable (A) cases were reported.
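
    The submission does not spell out how the ECV value itself is computed from the segmented regions. The conventional ECV formula from pre- and post-contrast T1 and hematocrit, sketched below, is an assumption included for orientation, not a quotation of the device's method:

        def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
            """Conventional ECV estimate (assumed, not quoted from the submission):

                ECV = (1 - Hct) * (delta(1/T1)_myocardium / delta(1/T1)_blood)

            T1 values in ms, hematocrit as a fraction.
            """
            d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
            d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
            return (1.0 - hematocrit) * d_r1_myo / d_r1_blood

        # Hypothetical 3T myocardial/blood T1 values, pre and post contrast
        print(f"ECV = {ecv(1200, 450, 1900, 300, 0.42):.1%}")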

    2. Sample Size Used for the Test Set and Data Provenance

    For SparkCo:

    • Test Set Sample Size:
      • Simulated Spark Testing Dataset: 159 spark slices.
      • Real-world Spark Testing Dataset: 59 cases from 15 patients.
    • Data Provenance:
      • Simulated Spark Testing Dataset: Generated by simulating spark artifacts from spark-free raw data (61 cases from 10 volunteers, various body parts and MRI sequences).
    • Real-world Spark Testing Dataset: Acquired using uMR 1.5T and uMR 3T scanners, covering representative clinical protocols (T1, T2, PD with/without fat saturation) from 15 patients. The ethnicity for this dataset is 100% Asian; the location is unspecified, but given the manufacturer's location (Shanghai, China), it is very likely China. The data appear to be retrospective, as existing patient data are referenced.

    For Inline ECV:

    • Test Set Sample Size: 90 images from 28 patients.
    • Data Provenance: The distribution table shows data from patients scanned at magnetic field strengths of 1.5T and 3T. Ethnicity is broken down into "Asia" (17 patients) and "USA" (11 patients), indicating a combined dataset potentially from multiple geographical locations; the data appear to be retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    For SparkCo:

    • Spark detection accuracy: The ground truth for spark detection accuracy was established by comparing to "ground-truth" spark locations, which were generated as part of the simulation process for the training data and likely also for evaluating the testing set during the simulation step. For the real-world dataset, the document mentions "comparing the spark detection results with the ground-truth" implying an existing ground truth, but doesn't specify how it was established or how many experts were involved.
    • Spark correction performance: "One experienced evaluator" was used for subjective evaluation of image quality improvement on the real-world spark dataset. No specific qualifications are provided for this evaluator beyond "experienced".

    For Inline ECV:

    • The document states, "The segmentation result of each case was obtained with the algorithm, and the segmentation mask was evaluated with the following criteria." It does not explicitly mention human experts establishing a distinct "ground truth" for each segmentation mask for the purpose of the acceptance criteria. Instead, the evaluation seems to be a subjective assessment against predefined criteria. No number of experts or qualifications are provided.

    4. Adjudication Method for the Test Set

    For SparkCo:

    • For spark detection accuracy, the comparison was against a presumed inherent "ground-truth" (likely derived from the simulation process).
    • For spark correction performance, a single "experienced evaluator" made the subjective assessment, implying no adjudication method (e.g., 2+1, 3+1) was explicitly used among multiple experts.

    For Inline ECV:

    • The evaluation was a "subjective evaluation method" against specific criteria. No information about multiple evaluators or an adjudication method is provided. It implies a single evaluator or an internal consensus without formal adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not explicitly mentioned for either SparkCo or Inline ECV. The studies were focused on the standalone performance of the algorithms.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Yes, for both SparkCo and Inline ECV, the studies described are standalone algorithm performance evaluations.
      • SparkCo focused on the algorithm's ability to detect and correct spark artifacts (objective metrics like PSNR and subjective assessment by one evaluator).
      • Inline ECV focused on the algorithm's segmentation accuracy (subjective evaluation of segmentation masks against criteria).

    7. The Type of Ground Truth Used

    For SparkCo:

    • Spark detection accuracy: Ground truth was generated by simulating spark artifacts from spark-free raw data, implying a simulated/synthetic ground truth for training and a comparison against this for testing. For real-world data, the "ground-truth" for detection is implied but not explicitly detailed how it was established.
    • Spark correction performance: For PSNR, the "ground truth" for comparison is the original spark images. For subjective evaluation, it's against the "spark images" and the expectation of correction, suggesting human expert judgment (by one evaluator) rather than a pre-established clinical ground truth for each case.

    For Inline ECV:

    • The ground truth for Inline ECV appears to be a subjective expert assessment (though the number of experts is not specified) of the algorithm's automatically generated segmentation masks against predefined "Satisfied," "Acceptable," and "Fail" criteria. It is not an independent, pre-established ground truth like pathology or outcomes data.

    8. The Sample Size for the Training Set

    For SparkCo:

    • Training dataset for the AI module: 61 cases from 10 volunteers. From this, a total of 24,866 spark slices along with corresponding "ground truth" (location of spark points) were generated for training.

    For Inline ECV:

    • The document states, "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, the sample size for the training set itself is not explicitly provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    For SparkCo:

    • The ground truth for the SparkCo training set was established by simulating spark artifacts from spark-free raw data. This simulation process directly provided the "location of spark points" as the ground truth.

    For Inline ECV:

    • The document mentions that the training data is independent of the test data, but it does not describe how the ground truth for the training set of the Inline ECV algorithm was established.
