510(k) Data Aggregation
(190 days)
uMR Jupiter
The uMR Jupiter system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and spectroscopic images, and that displays internal anatomical structure and/or function of the head, body, and extremities.
These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
The device is intended for patients > 20 kg/44 lbs.
uMR Jupiter is a 5T superconducting magnetic resonance diagnostic device with a 60 cm patient bore and an 8-channel RF transmit system. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital-signal module. uMR Jupiter is designed to conform to NEMA and DICOM standards.
The modifications to the uMR Jupiter in this submission comprise the following changes:
- Addition of RF coils: SuperFlex Large - 24 and Foot & Ankle Coil - 24.
- Addition of an applied body part for an existing coil: SuperFlex Small-24 (adds imaging of the ankle).
- Addition and modification of pulse sequences:
  - a) New sequences: fse_wfi, gre_fsp_c (3D), gre_bssfp_ucs, epi_fid (3D), epi_dti_msh.
  - b) Added associated options for certain sequences: asl_3d (add mPLD; only original images are output, no quantification images), gre_fsp_c (add Cardiac Cine, Cardiac Perfusion, PSIR, Cardiac mapping), gre_quick (add WFI, MRCA), gre_bssfp (add Cardiac Cine, Cardiac mapping), epi_dwi (add IVIM; only original images are output, no quantification images).
- Addition of functions: EasyScan, EasyCrop, t-ACS, QScan, tFAST, DeepRecon and WFI.
- Addition of workflow: EasyFACT.
This FDA 510(k) summary (K250246) for the uMR Jupiter provides details on several new AI-assisted features. Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Important Note: The document does not describe an MRMC study comparing human readers with and without AI assistance. Instead, it focuses on the performance of individual AI modules and their integration into the MRI system, often verified by radiologists' review of image quality.
Acceptance Criteria and Reported Device Performance
The document presents acceptance criteria implicitly through the "Test Result" or "Performance Verification" sections for each AI feature. The "Performance" column below summarizes the device's reported achievement for these criteria.
Feature | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|
WFI | Expected to produce diagnostic quality images and effectively overcome water-fat swap artifacts, providing accurate initialization for the RIPE algorithm. Modes (default, standard, fast) should meet clinical diagnosis requirements. | "Based on the clinical evaluation of this independent testing dataset by three U.S. certificated radiologists, all three WFI modes meet the requirements for clinical diagnosis. In summary, the WFI performed as intended and passed all performance evaluations." |
t-ACS | AI Module Test: AI prediction output should be much closer to the reference than the AI module input images. Integration Test: better consistency between t-ACS and the reference than between CS and the reference; no large structural differences; motion-time curves and Bland-Altman analysis showing consistency. | AI Module Test: "AI prediction (AI module output) was much closer to the reference comparing to the AI module input images in all t-ACS application types." Integration Test: "A better consistency between t-ACS and the reference than that between CS and the reference was shown in all t-ACS application types." "No large structural difference appeared between t-ACS and the reference in all t-ACS application types." "The motion-time curves and Bland-Altman analysis showed the consistency between t-ACS and the reference based on simulated and real acquired data in all t-ACS application types." Overall: "The t-ACS on uMR Jupiter was shown to perform better than traditional Compressed Sensing in the sense of discrepancy from fully sampled images and PSNR using images from various age groups, BMIs, ethnicities and pathological variations. The structure measurements on paired images verified that same structures of t-ACS and reference were significantly the same. And t-ACS integration tests in two applications proved that t-ACS had good agreement with the reference." |
DeepRecon | Expected to provide image de-noising and super-resolution, resulting in diagnostic quality images, with equivalent or higher scores than reference images in terms of diagnostic quality. | "The DeepRecon has been validated to provide image de-noising and super-resolution processing using various ethnicities, age groups, BMIs, and pathological variations. In addition, DeepRecon images were evaluated by American Board of Radiologists certificated physicians, covering a range of protocols and body parts. The evaluation reports from radiologists verified that DeepRecon meets the requirements of clinical diagnosis. All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." |
EasyFACT | Expected to effectively automate ROI placement and numerical statistics for FF and R2* values, with results subjectively evaluated as effective. | "The subjective evaluation method was used [to verify effectiveness]." "The proposal of algorithm acceptance criteria and score processing are conducted by the licensed physicians with U.S. credentials." (Successful verification is implied from context.) |
EasyScan | Pass criteria of 99.3% for automatic slice group positioning, meeting safety and effectiveness requirements. | "The pass criteria of EasyScan feature is 99.3%, and the results evaluated by the licensed MRI technologist with U.S. credentials. Therefore, EasyScan meets the criteria for safety and effectiveness, and EasyScan can meet the requirements for automatic positioning locates slice groups." (Implied reaching or exceeding 99.3%.) |
EasyCrop | Pass criteria of 100% for automatic image cropping, meeting safety and effectiveness requirements. | "The pass criteria of EasyCrop feature is 100%, and the results evaluated by the licensed MRI technologist with U.S. credentials. Therefore, EasyCrop meets the criteria for safety and effectiveness, and EasyCrop can meet the requirements for automatic cropping." (Implied reaching or exceeding 100%.) |
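The quantitative measures cited for t-ACS (discrepancy from the fully sampled reference, PSNR, and Bland-Altman agreement) are standard image-comparison metrics. A minimal numpy sketch of how such metrics are typically computed; the function names and formulations are illustrative, not taken from the submission:

```python
import numpy as np

def mae(img, ref):
    """Mean absolute error between a reconstruction and its reference."""
    return np.mean(np.abs(img - ref))

def psnr(img, ref):
    """Peak signal-to-noise ratio in dB, using the reference's peak value."""
    mse = np.mean((img - ref) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between two paired measurement sets."""
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std()
    return bias, bias - spread, bias + spread
```

A lower MAE and a higher PSNR against the fully sampled reference, plus Bland-Altman limits tightly centered on zero bias, correspond to the kinds of "closer to the reference" claims quoted in the table.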
Study Details
- Sample sizes used for the test set and the data provenance:
- WFI: 144 cases from 28 volunteers. Data collected from UIH Jupiter. "Completely separated from the previous mentioned training dataset by collecting from different volunteers and during different time periods." (Retrospective for testing; the specific country of origin beyond "UIH MRI systems" is not stated for the test data, while the training data has an "Asian" majority.)
- t-ACS: 35 subjects (data from 76 volunteers used for overall training/validation/test split). Test data collected independently from the training data, with separated subjects and during different time periods. "White," "Black," and "Asian" ethnicities mentioned, implying potentially multi-country or diverse internal dataset.
- DeepRecon: 20 subjects (2216 cases). "Diverse demographic distributions" including "White" and "Asian" ethnicities. "Collecting testing data from various clinical sites and during separated time periods."
- EasyFACT: 5 subjects. "Data were acquired from 5T magnetic resonance imaging equipment from UIH," and "Asia" ethnicity is listed.
- EasyScan: 30 cases from 18 "Asia" subjects (initial testing); 40 cases from 8 "Asia" subjects (validation on uMR Jupiter system).
- EasyCrop: 5 subjects. "Data were acquired from 5T magnetic resonance imaging equipment from UIH," and "Asia" ethnicity is listed.
Data provenance isn't definitively "retrospective" or "prospective" for the test sets, but the emphasis on "completely separated" and "independent" from training data collected at "different time periods" suggests these were distinct, potentially newly acquired or curated sets for evaluation. The presence of multiple ethnicities (White, Black, Asian) suggests potentially broader geographical origins than just China where the company is based, or a focus on creating diverse internal datasets.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- WFI: Three U.S. certificated radiologists. (Qualifications: U.S. board-certified radiologists).
- t-ACS: No separate experts establishing ground truth for the test set performance evaluation are mentioned beyond the quantitative metrics (MAE, PSNR, SSIM) compared against "fully sampled images" (reference/ground truth). The document states that fully-sampled k-space data transformed into image domain served as the reference.
- DeepRecon: American Board of Radiologists certificated physicians. (Qualifications: American Board of Radiologists certificated physicians).
- EasyFACT: Licensed physicians with U.S. credentials. (Qualifications: Licensed physicians with U.S. credentials).
- EasyScan: Licensed MRI technologist with U.S. credentials. (Qualifications: Licensed MRI technologist with U.S. credentials).
- EasyCrop: Licensed MRI technologist with U.S. credentials. (Qualifications: Licensed MRI technologist with U.S. credentials).
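The SSIM metric named alongside MAE and PSNR for t-ACS measures structural similarity between a reconstruction and its reference. A hedged numpy-only sketch of the global (single-window) form of the standard SSIM formula; practical implementations average it over local sliding windows:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM: the standard formula applied to whole images.
    Uses the conventional stabilizing constants c1 and c2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; structural disagreement pulls the score down, which is why SSIM against the fully sampled reference is a reasonable proxy for the "same structures" claims in the summary.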
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
The document does not explicitly state an adjudication method (like 2+1 or 3+1) for conflict resolution among readers. For WFI, DeepRecon, EasyFACT, EasyScan, and EasyCrop, it implies a consensus or majority opinion model based on the "evaluation reports from radiologists/technologists." For t-ACS, the evaluation of the algorithm's output is based on quantitative metrics against a reference image ground truth.
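For reference, the "2+1" scheme named in the question works by having two primary readers rate each case, with a third reader adjudicating only disagreements. This is an illustrative sketch of the concept, not something the submission describes using:

```python
def adjudicate_2plus1(reader1, reader2, reader3):
    """2+1 adjudication: two primary readers; a third breaks ties.

    Each argument is one reader's rating for a single case
    (e.g. 'pass'/'fail'); the return value is the adjudicated rating.
    """
    if reader1 == reader2:
        return reader1
    return reader3
```

A 3+1 scheme generalizes this: three primary readers, majority vote, and a fourth reader only if all three disagree on a non-binary scale.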
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No, a traditional MRMC comparative effectiveness study where human readers interpret cases with AI assistance versus without AI assistance was not described. The studies primarily validated the AI features' standalone performance (e.g., image quality, accuracy of automated functions) or their output's equivalence/superiority to traditional methods, often through expert review of the AI-generated images. Therefore, no effect size of human reader improvement with AI assistance is provided.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
Yes, standalone performance was the primary focus for most AI features mentioned, though the output was often subject to human expert review.
- WFI: The AI network provides initialization for the RIPE algorithm. The output image quality was then reviewed by radiologists.
- t-ACS: Performance was evaluated quantitatively against fully sampled images (reference/ground truth), indicating a standalone algorithm evaluation.
- DeepRecon: Evaluated based on images processed by the algorithm, with expert review of the output images.
- EasyFACT, EasyScan, EasyCrop: These are features that automate parts of the workflow. Their output (e.g., ROI placement, slice positioning, cropping) was evaluated, often subjectively by experts, but the automation itself is algorithm-driven.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- WFI: Expert consensus/review by three U.S. certificated radiologists for "clinical diagnosis" quality. No "ground truth" for water-fat separation accuracy itself is explicitly stated, but the problem being solved (water-fat swap artifacts) implies the improved stability of the algorithm's output.
- t-ACS: "Fully-sampled k-space data were collected and transformed into image domain as reference." This serves as the "true" or ideal image for comparison, not derived from expert interpretation or pathology.
- DeepRecon: "Multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." Expert review confirms diagnostic quality of processed images.
- EasyFACT: Subjective evaluation by licensed physicians with U.S. credentials, implying their judgment regarding the correctness of ROI placement and numerical statistics.
- EasyScan: Evaluation by a licensed MRI technologist with U.S. credentials against the "correctness" of automatic slice positioning.
- EasyCrop: Evaluation by a licensed MRI technologist with U.S. credentials against the "correctness" of automatic cropping.
- The sample size for the training set:
- WFI AI module: 59 volunteers (2604 cases). Each scanned for multiple body parts and WFI protocols.
- t-ACS AI module: Not specified as a distinct number, but "collected from a variety of anatomies, image contrasts, and acceleration factors... resulting in a large number of cases." The overall dataset for training, validation, and testing was 76 volunteers.
- DeepRecon: 317 volunteers.
- EasyFACT, EasyScan, EasyCrop: "The training data used for the training of the EasyFACT algorithm is independent of the data used to test the algorithm." For EasyScan and EasyCrop, it states "The testing dataset was collected independently from the training dataset," but does not provide specific training set sizes for these workflow features.
- How the ground truth for the training set was established:
- WFI AI module: The AI network was trained to provide accurate initialization for the RIPE algorithm. The document implies that the RIPE algorithm itself with human oversight or internal validation would have been used to establish correct water/fat separation for training.
- t-ACS AI module: "Fully-sampled k-space data were collected and transformed into image domain as reference." This served as the ground truth against which the AI was trained to reconstruct undersampled data.
- DeepRecon: "The multiple-averaged images with high-resolution and high SNR were collected as the ground-truth images." This indicates that high-quality, non-denoised, non-super-resolved images were used as the ideal target for the AI.
- EasyFACT, EasyScan, EasyCrop: Not explicitly detailed beyond stating that training data ground truth was established to enable the algorithms for automatic ROI placement, slice group positioning, and image cropping, respectively. It implies a process of manually annotating or identifying the correct ROIs/positions/crops on training data for the AI to learn from.
(162 days)
uMR Jupiter
uMR Jupiter is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and spectroscopic images, and that displays internal anatomical structure and/or function of the head, body, and extremities. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
The device is intended for patients > 20 kg/44 lbs.
uMR Jupiter is a 5T superconducting magnetic resonance diagnostic device with a 60 cm patient bore and an 8-channel RF transmit system. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital-signal module. uMR Jupiter is designed to conform to NEMA and DICOM standards.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) submission information for the uMR Jupiter.
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the uMR Jupiter largely revolve around its non-inferiority to the predicate device (uMR Omega) and its ability to produce diagnostic quality images while ensuring safety. The performance evaluation focused on various aspects of MRI image quality and the functionality of new or enhanced features, especially the AI-assisted Compressed Sensing (ACS).
Acceptance Criteria Category | Specific Criteria/Tests | Reported Device Performance (uMR Jupiter) |
---|---|---|
General Comparison to Predicate | The proposed device should have similar indications for use, performance, safety equivalence, and effectiveness as the predicate device. Differences should not raise new safety and effectiveness concerns. | The submission concludes that "the proposed device has similar indications for use, performance, safety equivalence, and effectiveness as the predicate device. The differences above between the proposed device and predicate device do not affect the intended use, technology characteristics, safety, and effectiveness. And no issues are raised regarding to safety and effectiveness." This broadly states that all differences (field strength, bore dimensions, magnet homogeneity, gradient amplitude, RF system, coils, etc.) were evaluated and deemed not to raise new safety or effectiveness concerns. |
Image Quality (Non-Clinical) | Conformance to NEMA standards (MS 1, MS 2, MS 3, MS 5, MS 6, MS 9) for SNR, geometric distortion, image uniformity, slice thickness, and characterization of phased array coils. | Non-clinical testing, including image performance tests, was conducted to verify that the proposed device met all design specifications. This implies adherence to the mentioned NEMA standards. |
Safety (Non-Clinical) | Conformance to IEC 60601-1 (General), IEC 60601-1-2 (EMC), IEC 60601-2-33 (Magnetic Resonance Equipment), IEC 60825-1 (Laser Safety), IEC 60601-1-6 (Usability), IEC 62304 (Software Life Cycle), IEC 62464-1 (Image Quality Parameters), NEMA MS 8 (SAR), NEMA MS 10 (Local SAR), NEMA MS 14 (RF Coil Heating), IEC 60601-4-2 (EMC Immunity). Control of peripheral nerve stimulation (PNS) and cardiac stimulation. SAR control for patients > 20kg. | Electrical safety and EMC tests were performed, claiming conformance to the listed IEC and NEMA standards. A volunteer study was conducted to determine nerve stimulation thresholds, and observed parameters were used to set PNS threshold levels as required by IEC 60601-2-33. The device's software controls SAR based on simulations for humans at least 20kg. |
Software Functionality | Functionality of new/enhanced features (e.g., Inline T2 Mapping using MASS, CASS, PASS, MoCap-Monitoring). | Performance evaluation reports were provided for ACS, 3D ASL, MoCap-Monitoring, FACT, 2D Flow, CEST, T1rho, Multiband, Inline T1 mapping, Inline T2 mapping, Inline T2* mapping, Liver MRS, Prostate MRS, and Brain MRS. The submission explicitly states that MASS, CASS, and PASS are "substantially equivalent" to existing techniques and MoCap-Monitoring provides real-time motion monitoring. |
AI (ACS) Performance | ACS (AI-assisted Compressed Sensing) should perform at least equivalently to Compressed Sensing (CS) in terms of SNR and resolution. Image qualities (contrast, uniformity) should be maintained compared to fully sampled data (golden standard). Structural measurements on paired images (ACS vs. fully sampled) should be significantly the same. Performance should be equivalent to ACS on the predicate device (uMR Omega). | ACS on uMR Jupiter was shown to "perform better than CS by measuring SNR and resolution" across diverse demographics and pathological variations. Results demonstrated that ACS "maintained image qualities, such as contrast and uniformity, as compared against fully sampled data as golden standards." Structural measurements verified that ACS and fully sampled images of the same structures were "significantly the same." The test results demonstrate that "ACS on uMR Jupiter performs equivalently to that on uMR Omega." |
Clinical Image Quality | The device should generate diagnostic quality images in accordance with MR guidance on premarket notification submissions. | Sample clinical images for all clinical sequences and coils were reviewed by three U.S. board-certified radiologists, and it was shown that the proposed device can generate diagnostic quality images comparable to the predicate. |
Biocompatibility | Conformance to ISO 10993-5 (In vitro cytotoxicity), ISO 10993-10 (Skin Sensitization), ISO 10993-23 (Irritation), and ISO 10993-1 (Evaluation and testing within a risk management process). | Claims conformance to the listed ISO 10993 standards. |
Risk Management | Conformance to ISO 14971 (Application of risk management to medical devices). | Claims conformance to ISO 14971. |
Quality System | Conformance to 21 CFR Part 820 (Quality System Regulation). | Claims conformance to 21 CFR Part 820. |
Study Details (Focusing on ACS as the AI component)
The document primarily details the performance evaluation of the AI-assisted Compressed Sensing (ACS) module, as it represents a key software enhancement.
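The SNR comparisons behind the ACS claims follow the NEMA-style approach of measuring signal and noise from repeated acquisitions. A hedged numpy sketch of the common two-acquisition difference method (region-of-interest handling simplified to whole-image statistics):

```python
import numpy as np

def snr_difference_method(img1, img2):
    """NEMA MS 1 style SNR estimate from two repeated acquisitions.

    Signal: mean of the averaged image.
    Noise: std of the difference image, divided by sqrt(2) because
    subtracting two independent acquisitions doubles the noise variance.
    """
    signal = 0.5 * (img1 + img2).mean()
    noise = (img1 - img2).std() / np.sqrt(2)
    return signal / noise
```

This is the kind of objective measurement that lets a submission claim one reconstruction "performs better than CS by measuring SNR" without relying on reader opinion.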
1. A table of acceptance criteria and the reported device performance (see table above).
2. Sample sizes used for the test set and the data provenance:
- Test Set (for ACS performance): 25 subjects.
- Demographic Breakdown:
- Gender: 15 Male, 10 Female
- Age: 5 (18-28), 7 (29-40), 13 (>41)
- Ethnicity: 4 White, 21 Asian
- BMI: 2 Underweight (<18.5); the remaining 23 subjects fall in the Normal (18.5–24.9) range or above.
- Data Provenance: "coming from different countries with diverse demographic distributions covering various genders, age groups, ethnicities, and BMI groups." The document does not explicitly state the specific countries, but the ethnicity breakdown suggests a diverse geographic origin. It's implied this was prospectively collected as a designated test set, and crucially, it was collected independently from the training dataset, with separated subjects and during different time periods.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the ACS performance evaluation: The "ground-truth" for ACS was derived from fully-sampled k-space data, which was then transformed into image space. This indicates a technical, objective ground truth based on the full data acquisition, not directly on expert consensus for the ACS performance metrics (SNR, resolution, contrast, uniformity).
- For overall clinical image quality review: "Sample clinical images for all clinical sequences and coils were reviewed by three U.S. board-certified radiologists comparing the proposed device and predicate device." No specific years of experience are listed beyond "board-certified." This review established diagnostic quality.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- For ACS performance metrics (SNR, resolution, contrast, uniformity): No explicit adjudication method among experts is mentioned for these quantitative metrics, as the ground truth was "fully sampled data" which is a direct technical reference.
- For the overall clinical image quality review by radiologists: While three radiologists reviewed images, the document states "it was shown that the proposed device can generate diagnostic quality images," implying a consensus or satisfactory individual assessment, but no specific adjudication rule (e.g., majority vote, binding senior reviewer) is detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text for human readers assisted by AI. The evaluation focused on the standalone performance of the ACS algorithm itself relative to conventional CS and fully sampled data, and the overall clinical image quality review. There is no information about an effect size related to human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance evaluation of the ACS algorithm was a primary component. The comparison of ACS against conventional CS and fully sampled data (golden standard) in terms of SNR, resolution, contrast, uniformity, and structural measurements directly assesses the algorithm's performance without a human in the loop for those specific quantitative metrics. The phrase "ACS on uMR Jupiter was shown to perform better than CS by measuring SNR and resolution" confirms this.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Technical/Objective Ground Truth: For the ACS performance evaluation, the ground truth was fully-sampled k-space data that was converted to image space. This serves as the "golden standard" against which the accelerated ACS images were compared for quantitative metrics like SNR, resolution, contrast, and uniformity.
- Expert Consensus (Implicit/Qualitative): For the overall assessment of diagnostic image quality, the review by three U.S. board-certified radiologists served as the qualitative ground truth for "diagnostic quality."
8. The sample size for the training set:
- Training Dataset (for AI module in ACS): Collected from 35 volunteers, comprising 24 males and 11 females, aged 18 to 60.
9. How the ground truth for the training set was established:
- The ground truth for the AI module's training data for ACS was established using fully-sampled k-space data converted to image space.
- "Fully-sampled k-space data were collected and transformed to image space as the ground-truth."
- "Input data [for training] were generated by sub-sampling the fully-sampled k-space data with different parallel imaging acceleration factors and partial Fourier factors."
- "All data were manually quality controlled before included for training." This manual quality control step likely involved expert review to ensure the quality of the "fully sampled" ground truth images.
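The training-pair generation described above, sub-sampling fully-sampled k-space with parallel-imaging acceleration and partial Fourier factors, can be sketched as follows. The mask logic is an illustrative simplification, not UIH's actual sampling scheme:

```python
import numpy as np

def undersampling_mask(n_lines, accel=2, partial_fourier=1.0, n_center=16):
    """1D Cartesian phase-encode mask: keep every `accel`-th line plus a
    fully sampled center block, then drop the trailing (1 - partial_fourier)
    fraction of lines to mimic a partial Fourier acquisition."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[::accel] = True                                  # uniform acceleration
    c = n_lines // 2
    mask[c - n_center // 2 : c + n_center // 2] = True    # calibration region
    mask[int(n_lines * partial_fourier):] = False         # partial Fourier cut
    return mask

def make_training_pair(kspace_full, accel=2, partial_fourier=0.75):
    """Zero-filled undersampled input paired with the fully sampled target."""
    mask = undersampling_mask(kspace_full.shape[0], accel, partial_fourier)
    return kspace_full * mask[:, None], kspace_full
```

Each pair gives the network a degraded input whose ideal output is known exactly, which is why the fully sampled data can serve as training ground truth without any manual annotation beyond the quality-control step the submission mentions.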