510(k) Data Aggregation
(25 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
MAGNETOM Viato.Mobile with software syngo MR XA51A and XQ gradient system includes new hardware compared to the predicate device, MAGNETOM Viato.Mobile with software syngo MR XA51A and XJ gradient system. A high-level summary of the modified hardware is provided below:
Hardware
Modified Hardware
- Gradient Coil
- Gradient Power Amplifier
This document is a 510(k) summary for the MAGNETOM Viato.Mobile system from Siemens Medical Solutions USA, Inc. The submission is for a modification to an already cleared device, primarily involving new hardware (Gradient Coil and Gradient Power Amplifier) and an XQ gradient system option. This is a claim of substantial equivalence to an existing predicate device, not a new device requiring a full de novo study. Therefore, the information provided focuses on demonstrating that the modified device performs as safely and effectively as the predicate, rather than presenting a detailed study proving the device meets specific acceptance criteria in a clinical context.
Here's an analysis based on the provided text, addressing your questions where information is available:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present a table of specific acceptance criteria (e.g., sensitivity, specificity, accuracy targets) for diagnostic performance or a direct "reported device performance" in terms of clinical outcomes. Instead, it focuses on demonstrating that the modified hardware maintains the same safety and performance profile as the predicate device and reference device. The acceptance is based on compliance with standards and non-clinical testing.
Performance Test / Acceptance Criteria | Reported Device Performance |
---|---|
Nonclinical Tests: | |
Performance bench test | "performs as intended" |
Verification and validation | "performs as intended" |
Electrical safety and EMC (IEC 60601-1-2) | "performs as intended" |
ISO 14971 (Risk Management) | "ensured via a risk analysis" |
IEC 60601-1 series (Electrical/Mechanical Hazards) | "minimizes electrical and mechanical hazards" |
IEC 60601-2-33 (MR equipment safety) | Device conforms |
NEMA MS 4:2010 (Acoustic Noise) | Device conforms |
2. Sample Size for the Test Set and Data Provenance
Since this is a submission for a hardware modification and claims substantial equivalence based on non-clinical testing and comparison to a predicate, there is no "test set" in the traditional sense of a patient cohort or imaging dataset used to assess diagnostic performance. The testing involved the new hardware itself.
- Sample Size for Test Set: Not applicable in a clinical diagnostic performance sense. Non-clinical hardware tests were performed.
- Data Provenance: Not applicable for diagnostic performance.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not applicable. The ground truth for this submission concerns the performance and safety of the hardware modification, established through engineering tests and compliance with recognized standards. There were no experts establishing ground truth for diagnostic interpretations for this specific submission.
4. Adjudication Method
- Adjudication Method: Not applicable. There was no clinical ground truth requiring adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No. The document explicitly states: "No clinical study and no additional clinical tests were conducted to support substantial equivalence for the subject device." This means there was no MRMC study to compare human readers with or without AI assistance, as the device itself is an MR scanner, not an AI-assisted diagnostic tool.
6. Standalone Performance Study
- Standalone Performance Study: No. This submission focuses on hardware safety and performance modifications of an MR scanner, not the standalone diagnostic performance of an AI algorithm.
7. Type of Ground Truth Used
The "ground truth" for this submission is established through:
- Engineering test results and measurements: For performance bench tests, electrical safety, and electromagnetic compatibility.
- Compliance with recognized international standards: Like IEC 60601 series, ISO 14971, and NEMA MS 4:2010, which define safety and performance requirements for medical electrical equipment and MR systems.
- Comparison to the established safety and performance profile of the predicate and reference devices: The core argument is substantial equivalence, meaning the new hardware does not introduce new questions of safety or effectiveness compared to legally marketed devices.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. This is not an AI-driven image analysis algorithm that requires a training set of images. It is a hardware modification to an MR scanner.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set Was Established: Not applicable, as there is no training set for an AI algorithm in this submission.
(174 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
The subject device, MAGNETOM Cima.X Fit with software syngo MR XA61A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).
A high-level summary of the new and modified hardware and software is provided below:
For MAGNETOM Cima.X Fit with syngo MR XA61:
Hardware
New Hardware:
- 3D Camera
Modified Hardware:
- Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
- MaRS (Measurement and Reconstruction System)
- Gradient Coil
- Cover
- Cooling/ACSC
- SEP
- GPA
- RFCEL Temp
- Body Coil
- Tunnel light
Software
New Features and Applications:
- GRE_PC
- Physio logging
- Deep Resolve Boost HASTE
- Deep Resolve Boost EPI Diffusion
- Open Recon
- Ghost reduction (DPG)
- Fleet Ref Scan
- Manual Mode
- SAMER
- MR Fingerprinting (MRF)1
Modified Features and Applications:
- BEAT_nav (re-naming only)
- myExam Angio Advanced Assist (Test Bolus)
- Beat Sensor (all sequences)
- Stimulation monitoring
- Complex Averaging
I am sorry, but the provided text does not contain the acceptance criteria and the comprehensive study details you requested for the "MAGNETOM Cima.X Fit" device, particularly point-by-point information on a multi-reader multi-case (MRMC) comparative effectiveness study or specific quantitative acceptance criteria for its AI features like Deep Resolve Boost or Deep Resolve Sharp.
The document is a 510(k) summary for a Magnetic Resonance Diagnostic Device (MRDD), highlighting its substantial equivalence to a predicate device. While it mentions AI features and their training/validation, it does not provide the detailed performance metrics or study design to fully answer your request.
Here's what can be extracted based on the provided text, and where information is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
The document mentions that the impact of the AI networks (Deep Resolve Boost and Deep Resolve Sharp) has been characterized by "several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM)," and evaluated by "visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels" and "perceptual loss." For Deep Resolve Sharp, "an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp" was also conducted.
However, specific numerical acceptance criteria (e.g., PSNR > X, SSIM > Y), or the actual reported performance values against these criteria are not provided in the text. The document states that the conclusions from the non-clinical data suggest that the features bear an equivalent safety and performance profile to that of the predicate device, but no quantitative data to support this for the AI features is included in this summary.
AI Feature | Acceptance Criteria (not explicitly stated with numerical values in the text) | Reported Device Performance (no quantitative results provided in the text) |
---|---|---|
Deep Resolve Boost | PSNR (implied to be high); SSIM (implied to be high); visual comparisons (e.g., absence of aliasing artifacts, good image sharpness, effective denoising levels) | Impact characterized by these metrics and visual comparisons. Claims of equivalent safety and performance profile to the predicate device. No specific quantitative performance values (e.g., actual PSNR/SSIM scores) are reported in this document. |
Deep Resolve Sharp | PSNR (implied to be high); SSIM (implied to be high); perceptual loss; visual rating; image sharpness by intensity profile comparisons (reconstructions with and without Deep Resolve Sharp) | Impact characterized by these metrics, verified and validated by in-house tests. Claims of equivalent safety and performance profile to the predicate device. No specific quantitative performance values are reported in this document. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Deep Resolve Boost:
- Test Set Description: The text mentions that "the performance was evaluated by visual comparisons." It does not explicitly state a separate test set size beyond the validation data used during development. It implies the performance evaluation was based on the broad range of data covered during training and validation.
- Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation and augmentation."
- Deep Resolve Sharp:
- Test Set Description: The text mentions "in-house tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp." Similar to Deep Resolve Boost, a separate test set size is not explicitly stated. It implies these tests were performed on data from the more than 10,000 high-resolution 2D images used for training and validation.
- Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document mentions "visual comparisons" and "visual rating" as part of the evaluation but does not detail how many experts were involved or their qualifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study is not mentioned in this document as being performed to establish substantial equivalence for the AI features. The document relies on technical metrics and visual comparisons of image quality to demonstrate equivalence.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- The evaluations mentioned, using metrics like PSNR, SSIM, perceptual loss, and intensity profile comparisons, are indicative of standalone algorithm performance in terms of image quality. Visual comparisons and ratings would involve human observers, but the primary focus described is on the image output quality itself from the algorithm. However, no specific "standalone" study design with comparative performance metrics (e.g., standalone diagnostic accuracy) is detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies the high-quality, full-data MRI scans before artificial undersampling or noise addition served as the ground truth. This is a technical ground truth based on the original acquired MRI data, not a clinical ground truth like pathology or expert consensus on a diagnosis.
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Deep Resolve Boost, this refers to technical ground truth from high-resolution 2D images before manipulation.
8. The sample size for the training set:
- Deep Resolve Boost:
- TSE: more than 25,000 slices
- HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
- EPI Diffusion: more than 1,000,000 slices
- Deep Resolve Sharp: on more than 10,000 high resolution 2D images.
9. How the ground truth for the training set was established:
- Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
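The retrospective input-creation process quoted above can be sketched in a few lines. This is an illustrative reconstruction of the general technique under simplifying assumptions (2D Cartesian single-coil data, numpy FFTs); it is not Siemens' actual pipeline, and the function names and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_kspace(image):
    """Forward 2D FFT with centered k-space (DC component in the middle)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))

def from_kspace(kspace):
    """Inverse of to_kspace; returns a complex-valued image."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def boost_style_input(image, accel=2, noise_std=0.01):
    """Boost-style degradation: discard k-space lines (retrospective
    undersampling) and lower the SNR by adding complex Gaussian noise.
    The original image remains the training target (ground truth)."""
    k = to_kspace(image)
    k[1::accel, :] = 0.0  # drop every accel-th phase-encode line
    k += noise_std * (rng.standard_normal(k.shape) + 1j * rng.standard_normal(k.shape))
    return np.abs(from_kspace(k))

def sharp_style_input(image, keep_frac=0.5):
    """Sharp-style degradation: keep only the central k-space block,
    yielding a low-resolution input; the full-resolution image is the target."""
    k = to_kspace(image)
    ny, nx = k.shape
    cy, cx = ny // 2, nx // 2
    hy = max(1, int(ny * keep_frac) // 2)
    hx = max(1, int(nx * keep_frac) // 2)
    mask = np.zeros_like(k)
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1.0
    return np.abs(from_kspace(k * mask))
```

Each degraded image is paired with its fully sampled original, which is exactly why the submission calls the acquired datasets themselves the "ground truth."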
In summary, the document focuses on the technical aspects of the AI features and their development, demonstrating substantial equivalence through non-clinical performance tests and image quality assessments, rather than clinical efficacy studies with specific diagnostic accuracy endpoints or human-AI interaction evaluations.
(123 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
The subject devices, MAGNETOM Sola and MAGNETOM Altea with software syngo MR XA61A, consist of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA51A (K221733).
A high-level summary of the new and modified hardware and software is provided below:
Hardware
Modified Hardware:
- Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
- MaRS (Measurement and Reconstruction System) computer – for MAGNETOM Sola only
- myExam 3D Camera
Software
New Features and Applications:
- GRE_PC
- Physiologging
- Deep Resolve Boost HASTE
- Deep Resolve Boost EPI Diffusion
- Complex Averaging
- myExam Implant Suite
Modified Features and Applications:
- OpenRecon Framework.
- BEAT_nav (re-naming only).
- Low SAR Protocols – SAR-adaptive MR protocols to perform knee, spine, heart and brain examinations with 50% of the max allowed SAR values in normal mode for head and whole-body SAR.
The provided text describes the Siemens Medical Solutions USA, Inc. MAGNETOM Sola and MAGNETOM Altea with software syngo MR XA61A, which are Magnetic Resonance Diagnostic Devices (MRDD). The submission (K232535) claims substantial equivalence to a predicate device (MAGNETOM Sola with syngo MR XA51A, K221733).
Based on the provided information, the acceptance criteria and study details for the AI features (Deep Resolve Boost and Deep Resolve Sharp) are as follows:
1. Table of Acceptance Criteria and Reported Device Performance
Feature | Acceptance Criteria (Stated) | Reported Device Performance and Metrics |
---|---|---|
Deep Resolve Boost | The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Most importantly, the performance was evaluated by visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels. | Performance was evaluated by visual comparisons to evaluate aliasing artifacts, image sharpness, and denoising levels, in addition to quantitative metrics like PSNR and SSIM. The document states, "The results from each set of tests demonstrate that the devices perform as intended and are thus substantially equivalent to the predicate device to which it has been compared," implying these metrics met the internal acceptance criteria for substantial equivalence. No specific numerical thresholds are provided. |
Deep Resolve Sharp | The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss. In addition, the feature has been verified and validated by inhouse tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp. | Performance was evaluated by visual rating and intensity profile comparisons for image sharpness, along with quantitative metrics like PSNR, SSIM, and perceptual loss. The document states, "The results from each set of tests demonstrate that the devices perform as intended and are thus substantially equivalent to the predicate device to which it has been compared," implying these metrics met the internal acceptance criteria for substantial equivalence. No specific numerical thresholds are provided. |
2. Sample Size Used for the Test Set and Data Provenance
- Deep Resolve Boost:
- TSE: more than 25,000 slices (implicitly for both training/validation and testing, as no separate test set is explicitly mentioned).
- HASTE: more than 10,000 HASTE slices (refined from TSE dataset).
- EPI Diffusion: more than 1,000,000 slices.
- Data Provenance: Retrospective creation from acquired datasets. The data "covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength." Country of origin is not specified but given the manufacturer (Siemens Healthcare GmbH, Germany, and Siemens Shenzhen Magnetic Resonance LTD, China) and typical medical device development, it likely includes international data.
- Deep Resolve Sharp:
- 2D images: more than 10,000 high resolution 2D images.
- Data Provenance: Retrospective creation from acquired datasets. The data "covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength." Country of origin is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts or their qualifications for establishing ground truth for the test set specifically. It mentions that "visual comparisons" and "visual rating" were part of the evaluation for both Deep Resolve Boost and Deep Resolve Sharp, which implies human expert review. However, details about these experts are not provided.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It refers to "visual comparisons" and "visual rating" as part of the evaluation, which suggests expert review, but the process for resolving disagreements or reaching consensus is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No MRMC comparative effectiveness study involving human readers with and without AI assistance is reported for the substantial equivalence submission. The non-clinical tests focus on performance metrics and visual comparisons of image quality produced by the AI feature versus predicate device features. The "Publications" section lists several clinical feasibility studies, but these are noted as "referenced to provide information" and are not direct evidence of human reader improvement with AI for this specific submission's evaluation. The submission states, "No clinical tests were conducted to support substantial equivalence for the subject devices."
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, standalone performance was evaluated through quantitative image quality metrics (PSNR, SSIM, perceptual loss) and direct comparison of images produced by the AI-enhanced sequences against the predicate device's features. The "Test Statistics and Test Results Summary" for both Deep Resolve Boost and Deep Resolve Sharp detail these algorithm-only evaluations.
7. The Type of Ground Truth Used
The ground truth for both Deep Resolve Boost and Deep Resolve Sharp was established from acquired datasets (raw MRI data). This data was then retrospectively manipulated to create input data for the models:
- Deep Resolve Boost: Input data was "retrospectively created from the ground truth by data manipulation and augmentation," including undersampling k-space lines, lowering SNR, and mirroring k-space data. The acquired datasets themselves "represent the ground truth for the training and validation."
- Deep Resolve Sharp: Input data was "retrospectively created from the ground truth by data manipulation," specifically by cropping k-space data to use only the center part, which created corresponding low-resolution input data and high-resolution output/ground truth data. The acquired datasets "represent the ground truth for the training and validation."
Essentially, the "ground truth" refers to the high-quality, fully sampled/non-accelerated raw or reconstructed MRI data from which the training and validation inputs were derived.
8. The Sample Size for the Training Set
The sample sizes mentioned under "Training and Validation data" are implicitly for training, as they refer to the datasets from which both training and validation data were derived:
- Deep Resolve Boost:
- TSE: more than 25,000 slices
- HASTE: more than 10,000 HASTE slices (refined)
- EPI Diffusion: more than 1,000,000 slices
- Deep Resolve Sharp:
- more than 10,000 high resolution 2D images
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established from acquired datasets (raw MRI data). As explained in point 7, this involved:
- Deep Resolve Boost: Using the acquired datasets as the "ground truth." Input data for training was then generated by manipulating this ground truth (e.g., undersampling, adding noise).
- Deep Resolve Sharp: Using the acquired datasets as the "ground truth." Input data was then generated by manipulating the k-space data of the ground truth to create corresponding low-resolution inputs and high-resolution ground truth outputs for the model.
(201 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
The subject device, MAGNETOM Cima.X with software syngo MR XA61A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).
A high-level summary of the new and modified hardware and software is provided below:
For MAGNETOM Cima.X with syngo MR XA61:
Hardware
New Hardware:
- 3D Camera
Modified Hardware:
- Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
- MaRS (Measurement and Reconstruction System)
- Gradient Coil
- Cover
- Cooling/ACSC
- SEP
- GPA
- RFCEL Temp
- Body Coil
- Tunnel light
Software
New Features and Applications:
- GRE_PC
- Physio logging
- Deep Resolve Boost HASTE
- Deep Resolve Boost EPI Diffusion
- Open Recon
- Ghost reduction (DPG)
- Fleet Ref Scan
- Manual Mode
- SAMER
Modified Features and Applications:
- BEAT_nav (re-naming only)
- myExam Angio Advanced Assist (Test Bolus)
- Beat Sensor (all sequences)
- Stimulation monitoring
- Complex Averaging
Additionally, the pulse sequence MR Fingerprinting (MRF) (K213805) is now available for the subject device MAGNETOM Cima.X with syngo MR XA61A.
The provided text is a 510(k) Summary for a medical device (MAGNETOM Cima.X) and outlines how the device, particularly its AI features, meets acceptance criteria through studies.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the performance characteristics used to evaluate the AI features. The reported device performance is presented in terms of quality metrics and visual evaluations.
Acceptance Criterion (Implied) | Reported Device Performance |
---|---|
Deep Resolve Boost (TSE, HASTE, EPI Diffusion): image quality (e.g., aliasing artifacts, sharpness, denoising levels) | Characterized by peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM); evaluated by visual comparisons to assess aliasing artifacts, image sharpness, and denoising levels. |
Deep Resolve Sharp: image quality (e.g., sharpness) | Characterized by PSNR, SSIM, and perceptual loss; verified and validated by in-house tests, including visual rating and evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp. |
2. Sample Sizes Used for the Test Set and Data Provenance
The document does not explicitly delineate a separate "test set" with a dedicated sample size after the training and validation phase for Deep Resolve Boost and Deep Resolve Sharp. Instead, it seems the "validation" mentioned in the context of training and validation data encompasses the evaluation of device performance.
- Deep Resolve Boost:
- TSE: More than 25,000 slices (used for training and validation).
- HASTE: Pre-trained on TSE dataset and refined with more than 10,000 HASTE slices (used for training and validation).
- EPI Diffusion: More than 1,000,000 slices (used for training and validation).
- Data Provenance: Retrospectively created from acquired datasets. The document does not specify the country of origin.
- Deep Resolve Sharp:
- Sample Size: More than 10,000 high-resolution 2D images (used for training and validation).
- Data Provenance: Retrospectively created from acquired datasets. The document does not specify the country of origin.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not mention the use of experts to establish ground truth for the test set of the AI features. The "visual comparisons" and "visual rating" described are internal evaluations for feature performance but are not linked to expert-established ground truth for a formal test set described as such.
4. Adjudication Method for the Test Set
Not applicable, as no external expert-adjudicated test set is explicitly described for the AI features. The evaluations mentioned (visual comparisons, visual rating) appear to be internal.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No MRMC comparative effectiveness study is mentioned in the provided text for the AI features. The document focuses on the technical performance of the AI algorithms rather than their impact on human reader performance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
Yes, the performance evaluation for Deep Resolve Boost and Deep Resolve Sharp appears to be standalone algorithm performance. The metrics (PSNR, SSIM, perceptual loss) and visual comparisons/ratings are related to the image quality produced by the algorithm itself, without direct assessment of human-in-the-loop performance.
7. The Type of Ground Truth Used
- Deep Resolve Boost: The acquired datasets (MRI raw data or images) were considered the "ground truth" for training and validation. Input data for the AI was then retrospectively created from this ground truth by data manipulation and augmentation (discarding k-space lines, lowering SNR, mirroring k-space data) to simulate different acquisition conditions.
- Deep Resolve Sharp: The acquired datasets (high-resolution 2D images) were considered the "ground truth" for training and validation. Low-resolution input data for the AI was retrospectively created from this ground truth by cropping k-space data, so the high-resolution data served as the output/ground truth.
8. The Sample Size for the Training Set
The document combines training and validation data, so the sample sizes listed in point 2 apply:
- Deep Resolve Boost:
- TSE: More than 25,000 slices
- HASTE: More than 10,000 HASTE slices (refined)
- EPI Diffusion: More than 1,000,000 slices
- Deep Resolve Sharp: More than 10,000 high-resolution 2D images
9. How the Ground Truth for the Training Set Was Established
- Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies that the raw, original MRI data or images acquired under standard, full-sampling conditions were considered the reference. The AI was then trained to recover information from artificially degraded or undersampled versions of this ground truth.
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Deep Resolve Boost, the original high-resolution acquired 2D images were used as the ground truth. Low-resolution data was then derived from these high-resolution images to create the input for the AI, with the original high-resolution images serving as the target output (ground truth).

(160 days)
MAGNETOM Free.Max system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal, and oblique cross-sectional images that display the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.
MAGNETOM Free.Max may also be used for imaging during interventional procedures when performed with MR-compatible devices such as MR Safe biopsy needles.
MAGNETOM Free.Star system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal, and oblique cross-sectional images that display the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.
MAGNETOM Free.Max and MAGNETOM Free.Star with syngo MR XA60A include new and modified features compared to the predicate devices MAGNETOM Free.Max and MAGNETOM Free.Star with syngo MR XA50A (K220575, cleared on June 24, 2022).
Below is a high-level summary of the new and modified hardware and software features compared to the predicate devices MAGNETOM Free.Max and MAGNETOM Free.Star with syngo MR XA50A:
Hardware
New hardware features:
- Contour Knee coil
- Respiratory Sensor
Modified hardware features:
- myExam 3D Camera
- Host computer
- MaRS
Software
New Features and Applications:
- Injector coupling
- Respiratory Sensor Support
- myExam RT Assist (only for MAGNETOM Free.Max)
- myExam Autopilot Hip
- Deep Resolve Boost
- Complex Averaging
- HASTE_Interactive (only for MAGNETOM Free.Max)
- BEAT_Interactive (only for MAGNETOM Free.Max)
- Needle Intervention AddIn (only for MAGNETOM Free.Max)
Modified Features and Applications:
- Deep Resolve Sharp
- Deep Resolve Gain
- SMS Averaging
Other Modifications:
- Indications for Use (only for MAGNETOM Free.Max)
- MAGNETOM Free.Max RT Edition marketing bundle (only for MAGNETOM Free.Max)
The provided text describes information about the submission of the MAGNETOM Free.Max and MAGNETOM Free.Star MRI systems for FDA 510(k) clearance, and references a specific AI feature called "Deep Resolve Boost." However, it does not contain acceptance criteria or a detailed study proving the device meets specific performance criteria for the AI feature.
The section titled "Test statistics and test results" for Deep Resolve Boost (Table 1, page 7) mentions that the impact of the network was characterized by quality metrics such as PSNR and SSIM, and visual inspection. It also states: "After successful passing of the quality metrics tests, work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners." This suggests internal testing and evaluation, but does not provide the specific numerical acceptance criteria or the detailed results of these tests.
Therefore, I cannot fully complete the requested table and answer all questions due to the lack of this specific information in the provided document.
However, I can extract the available information regarding the AI feature "Deep Resolve Boost" as much as possible:
1. Table of acceptance criteria and the reported device performance:
| Metric / Criteria | Acceptance Criteria (Stated or Implied) | Reported Device Performance (Specifics not provided in document) |
|---|---|---|
| **Deep Resolve Boost** | | |
| Peak Signal-to-Noise Ratio (PSNR) | Must pass initial quality metrics tests. | Quantified, but specific numerical values are not reported. |
| Structural Similarity Index (SSIM) | Must pass initial quality metrics tests. | Quantified, but specific numerical values are not reported. |
| Visual inspection for artifacts | Must detect potential artifacts that are not well captured by PSNR/SSIM. | Images were visually inspected. |
| Clinical evaluation | Must be evaluated in clinical settings with cooperation partners. | "Work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners." (No specific results or findings reported in this document.) |
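For reference, PSNR and a simplified SSIM can be computed from two images as follows. This is a generic illustration of the metrics named in the table, not the vendor's implementation; the standard SSIM averages a sliding-window SSIM map, whereas this sketch uses global image statistics to stay short:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def global_ssim(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """SSIM computed from global image statistics (the usual formulation
    averages a local, windowed SSIM map; the formula's structure is the same)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_r, mu_t = ref.mean(), test.mean()
    var_r, var_t = ref.var(), test.var()
    cov = np.mean((ref - mu_r) * (test - mu_t))
    return float(((2 * mu_r * mu_t + c1) * (2 * cov + c2))
                 / ((mu_r ** 2 + mu_t ** 2 + c1) * (var_r + var_t + c2)))
```

In practice the AI-reconstructed image would be compared against the fully sampled reference with both metrics, alongside visual inspection.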
2. Sample size used for the test set and the data provenance:
- Test Set (Validation set for AI feature Deep Resolve Boost):
- Sample Size: 1,874 2D slices.
- Data Provenance: "in-house measurements and collaboration partners." The document does not specify the country of origin.
- Retrospective or Prospective: Retrospectively created from ground truth by data manipulation and augmentation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of experts: Not specified.
- Qualifications of experts: The document states the "acquired datasets represent the ground truth for the training and validation," but it does not specify how this ground truth was established in terms of expert involvement for the test set. It mentions "clinical settings with cooperation partners" for evaluation, but this is distinct from ground truth establishment.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified. The document states "acquired datasets represent the ground truth," suggesting pre-existing data or a different method of ground truth establishment than explicit reader adjudication for this AI feature.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- The document states "No clinical tests were conducted to support substantial equivalence for the subject device" (page 10). It mentions that "work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners," but this is not described as an MRMC comparative effectiveness study, nor are any results on human reader improvement reported.
6. If a standalone (i.e. algorithm only, without human-in-the-loop performance) evaluation was done:
- The performance of the "Deep Resolve Boost" AI feature was characterized by "quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM)" and visual inspection, which suggests a standalone evaluation of the algorithm's output against a reference standard. Specific results are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Deep Resolve Boost: "The acquired datasets represent the ground truth for the training and validation." Input data for training was "retrospectively created from the ground truth by data manipulation and augmentation." This implies that high-quality, likely clinical-grade, MRI scans acquired without the AI feature were considered the "ground truth" to which the AI-processed images were compared. It is not explicitly stated whether this "ground truth" itself was established by expert consensus; it is instead inferred from the high-quality clinical acquisition.
8. The sample size for the training set:
- For Deep Resolve Boost: 24,599 2D slices.
9. How the ground truth for the training set was established:
- "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
- This indicates that "ground truth" was established by using full, high-quality MR images. The "input data" for the AI model (which the AI then "boosts") was intentionally degraded (under-sampled, noised) from this high-quality ground truth. The AI's task is to reconstruct the degraded input data back to resemble the original high-quality "ground truth."
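The degradation scheme quoted above (discarding k-space lines and lowering SNR with added noise) can be mimicked with a short NumPy sketch. The function name, the regular undersampling pattern, and the noise level are illustrative assumptions, not details taken from the submission:

```python
import numpy as np

def degrade(image: np.ndarray, acceleration: int = 2,
            noise_std: float = 0.02, seed: int = 0) -> np.ndarray:
    """Create a degraded network input from a ground-truth image by
    discarding k-space lines (regular undersampling) and lowering the
    SNR with additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Keep every `acceleration`-th phase-encode line, zero the rest.
    undersampled = np.zeros_like(kspace)
    undersampled[::acceleration, :] = kspace[::acceleration, :]

    # Reconstruct the (aliased) image, then add noise in image space.
    recon = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return recon + noise_std * rng.standard_normal(recon.shape)
```

The network is then trained to map `degrade(gt)` back toward the original `gt`, which is exactly why the acquired data itself can serve as the ground truth.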
(146 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
The subject devices, MAGNETOM Aera (including MAGNETOM Aera Mobile), MAGNETOM Skyra, MAGNETOM Prisma, MAGNETOM Prisma fit, MAGNETOM Vida, and MAGNETOM Lumina with software syngo MR XA60A, consist of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).
This FDA 510(k) summary describes several updates to existing Siemens Medical Solutions MRI systems (MAGNETOM Vida, Lumina, Aera, Skyra, Prisma, and Prisma fit), primarily focusing on software updates (syngo MR XA60A) and some modified/new hardware components. The document highlights the evaluation of new AI features, specifically "Deep Resolve Boost" and "Deep Resolve Sharp."
Here's an analysis of the acceptance criteria and the study details for the AI features:
1. Table of Acceptance Criteria and Reported Device Performance
The document provides a general overview of the evaluation metrics used but does not explicitly state acceptance criteria in a quantitative format (e.g., "Deep Resolve Boost must achieve a PSNR of X" or "Deep Resolve Sharp must achieve Y SSIM"). Instead, it describes the types of metrics used and qualitative assessments.
| AI Feature | Acceptance Criteria (Implicit from Evaluation) | Reported Device Performance (Summary) |
|---|---|---|
| Deep Resolve Boost | Preservation of image quality (aliasing artifacts, image sharpness, denoising levels) compared to the original; impact characterized by PSNR and SSIM. | The impact of the network was characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Most importantly, performance was evaluated by visual comparisons of, e.g., aliasing artifacts, image sharpness, and denoising levels. |
| Deep Resolve Sharp | Preservation of image quality (image sharpness) compared to the original; impact characterized by PSNR, SSIM, and perceptual loss; verification and validation by visual rating and evaluation of image sharpness via intensity profile comparisons. | The impact of the network was characterized by several quality metrics such as PSNR, SSIM, and perceptual loss. In addition, the feature was verified and validated by in-house tests, including visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp. |
2. Sample Size Used for the Test Set and Data Provenance
- Deep Resolve Boost: The document doesn't explicitly state a separate "test set" size. It mentions the "Training and Validation data" which includes:
- TSE: more than 25,000 slices
- HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
- EPI Diffusion: more than 1,000,000 slices
- Data Provenance: The data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength. No specific country of origin is mentioned, but the manufacturer (Siemens Healthcare GmbH) is based in Germany, and Siemens Medical Solutions USA, Inc. is the submitter. The data was "retrospectively created from the ground truth by data manipulation and augmentation."
- Deep Resolve Sharp: The document doesn't explicitly state a separate "test set" size. It mentions "Training and Validation data" from "on more than 10,000 high resolution 2D images."
- Data Provenance: Similar to Deep Resolve Boost, the data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength. Data was "retrospectively created from the ground truth by data manipulation." No specific country of origin is mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not specified. The document states that the acquired datasets "represent the ground truth." There is no mention of expert involvement in establishing ground truth for the test sets. The focus is on technical metrics (PSNR, SSIM) and "visual comparisons" or "visual rating" which implies expert review, but the number and qualifications are not provided.
4. Adjudication Method for the Test Set
Not explicitly stated. The document mentions "visual comparisons" for Deep Resolve Boost and "visual rating" for Deep Resolve Sharp. This suggests subjective human review, but no specific adjudication method (like 2+1 or 3+1 consensus) is detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance
No MRMC comparative effectiveness study is described for the AI features. The studies mentioned (sections 8 and 9) focus on evaluating the technical performance and image quality of the AI algorithms themselves, not on their impact on human reader performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, standalone performance evaluation of the algorithms was conducted. The "Test Statistics and Test Results Summary" for both Deep Resolve Boost and Deep Resolve Sharp detail the evaluation of the network's impact using quantitative metrics (PSNR, SSIM, perceptual loss) and qualitative assessments ("visual comparisons," "visual rating," "intensity profile comparisons"). This represents the algorithm's performance independent of a human reader's diagnostic accuracy.
7. The Type of Ground Truth Used
The ground truth used for both Deep Resolve Boost and Deep Resolve Sharp was the acquired datasets themselves, representing the original high-quality or reference images/slices.
- For Deep Resolve Boost, input data was "retrospectively created from the ground truth by data manipulation and augmentation," including undersampling k-space lines, lowering SNR, and mirroring k-space data. The original acquired data serves as the target "ground truth" for the AI to reconstruct/denoise.
- For Deep Resolve Sharp, input data was "retrospectively created from the ground truth by data manipulation," specifically by cropping k-space data to create low-resolution input, with the original high-resolution data serving as the "output / ground truth" for training and validation.
8. The Sample Size for the Training Set
- Deep Resolve Boost:
- TSE: more than 25,000 slices
- HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
- EPI Diffusion: more than 1,000,000 slices
- Deep Resolve Sharp: more than 10,000 high resolution 2D images.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established as the acquired, unaltered raw imaging data; lower-quality network inputs were then simulated from it (e.g., by removing k-space lines).
- For Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation." This implies that the original, high-quality scans were considered the ground truth, and the AI was trained to restore manipulated, lower-quality versions to this original quality.
- For Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation." Similar to Boost, the original, higher-resolution scans served as the ground truth.
(147 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
MAGNETOM Amira and MAGNETOM Sempra with syngo MR XA50M include new and modified features compared to the predicate devices MAGNETOM Amira and MAGNETOM Sempra with syngo MR XA12M (K183221, cleared on February 14, 2019).
The provided document is a 510(k) summary for the Siemens MAGNETOM Amira and Sempra MR systems, detailing their substantial equivalence to predicate devices. It describes new and modified hardware and software features, including AI-powered "Deep Resolve Boost" and "Deep Resolve Sharp."
However, the document does not contain the detailed information necessary to fully answer the specific questions about acceptance criteria and a study proving the device meets those criteria, particularly in the context of AI performance. The provided text is a summary for regulatory clearance, not a clinical study report.
Specifically, it lacks:
- Concrete, quantifiable acceptance criteria for the AI features (e.g., a specific PSNR threshold that defines "acceptance").
- A comparative effectiveness study (MRMC) to show human reader improvement with AI assistance.
- Stand-alone algorithm performance metrics for the AI features (beyond general quality metrics like PSNR/SSIM, which are not explicitly presented as acceptance criteria).
- Details on expert involvement, adjudication, or ground truth establishment for a test set used for regulatory acceptance, as the "test statistics and test results" section refers to quality metrics and visual inspection, and "clinical settings with cooperation partners" rather than a formal test set for regulatory submission.
The "Test statistics and test results" section for Deep Resolve Boost mentions "After successful passing of the quality metrics tests, work-in-progress packages of the network were delivered and evaluated in clinical settings with cooperation partners." It also mentions "seven peer-reviewed publications" covering 427 patients which "concluded that the work-in-progress package and the reconstruction algorithm can be beneficially used for clinical routine imaging." This indicates real-world evaluation but does not provide specific acceptance criteria or detailed study results for the regulatory submission itself.
Based on the provided text, here's what can be extracted and what is missing:
1. Table of acceptance criteria and reported device performance:
The document does not explicitly state quantifiable "acceptance criteria" for the AI features (Deep Resolve Boost and Deep Resolve Sharp) that were used for regulatory submission. Instead, it describes general successful evaluation methods:
| Acceptance Criteria (Inferred/Methods Used) | Reported Device Performance (Summary) |
|---|---|
| **Deep Resolve Boost:** successful passing of quality metrics tests (PSNR, SSIM); visual inspection to detect potential artifacts; evaluation in clinical settings with cooperation partners; no misinterpretation, alteration, suppression, or introduction of anatomical information reported. | Impact characterized by PSNR and SSIM; visual inspection conducted for artifacts. Evaluated in clinical settings with cooperation partners. Seven peer-reviewed publications (427 patients on 1.5T and 3T systems, covering prostate, abdomen, liver, knee, hip, ankle, shoulder, hand, and lumbar spine) concluded the feature can be beneficially used for clinical routine imaging, with no reported cases of misinterpreted, altered, suppressed, or introduced anatomical information, and with significant time savings in most cases by enabling faster image acquisition. |
| **Deep Resolve Sharp:** successful passing of quality metrics tests (PSNR, SSIM, perceptual loss); in-house visual rating; evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp. | Impact characterized by PSNR, SSIM, and perceptual loss. Verified and validated by in-house tests, including visual rating and an evaluation of image sharpness by intensity profile comparisons; both tests showed increased edge sharpness. |
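The intensity-profile comparison reported for Deep Resolve Sharp can be illustrated with a simple 10%-to-90% rise-distance measure on a 1-D profile drawn across an edge. This is a generic sketch, not the method described in the submission; the metric and function name are assumptions, and it presumes a monotonically rising edge:

```python
import numpy as np

def edge_width(profile: np.ndarray) -> int:
    """Number of samples a rising 1-D intensity profile needs to climb from
    10% to 90% of its total step. Smaller values indicate a sharper edge, so
    a sharpening reconstruction should reduce this width."""
    lo, hi = float(profile.min()), float(profile.max())
    t10 = lo + 0.1 * (hi - lo)
    t90 = lo + 0.9 * (hi - lo)
    i10 = int(np.argmax(profile >= t10))  # first sample above the 10% level
    i90 = int(np.argmax(profile >= t90))  # first sample above the 90% level
    return i90 - i10
```

Comparing `edge_width` at the same profile location in reconstructions with and without sharpening gives a crude quantitative counterpart to the visual sharpness rating described above.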
2. Sample size used for the test set and the data provenance:
The document mixes "training" and "validation" datasets. It doesn't explicitly refer to a separate "test set" for regulatory evaluation with clear sample sizes for that purpose. The "Test statistics and test results" section refers to general evaluations and published studies.
- "Validation" Datasets (internal validation, not explicitly a regulatory test set):
- Deep Resolve Boost: 1,874 2D slices
- Deep Resolve Sharp: 2,057 2D slices
- Data Provenance (Training/Validation):
- Source: For Deep Resolve Boost: "in-house measurements and collaboration partners." For Deep Resolve Sharp: "in-house measurements."
- Origin: Not specified by country.
- Retrospective/Prospective: "Input data was retrospectively created from the ground truth by data manipulation and augmentation" (for Boost) and "retrospectively created from the ground truth by data manipulation" (for Sharp). This implies the underlying acquired datasets were retrospective.
- "Clinical Settings" / Publications (Implied real-world evaluation, not a regulatory test set):
- Deep Resolve Boost: a total of seven peer-reviewed publications covering 427 patients
- Data Provenance: Not specified by origin or retrospective/prospective for these external evaluations.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided in the document. It mentions "visual inspection" and "visual rating," but does not detail the number or qualifications of experts involved in these processes for the "validation" sets or any dedicated regulatory "test set." For the "seven peer-reviewed publications," the expertise of the authors is implied but not detailed as part of the regulatory submission.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
A formal MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance is not described in this document. The document focuses on the technical performance of the AI features themselves and their general clinical utility as reported in external publications (e.g., faster imaging, no misinterpretation), but not a comparative study of human performance with and without the AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Yes, the sections on "Test statistics and test results" for both Deep Resolve Boost and Deep Resolve Sharp describe evaluation of the algorithm's performance using quality metrics (PSNR, SSIM, perceptual loss) and visual/intensity profile comparisons. This implies standalone algorithm evaluation. No specific quantifiable results for these metrics are provided as acceptance criteria, only that tests were successfully passed and showed increased sharpness for Deep Resolve Sharp.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
The ground truth for the AI training and validation datasets is described as:
- Deep Resolve Boost: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation." This implies that the original, full-quality MR images serve as the ground truth.
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation." Similarly, the original, high-resolution MR images are the ground truth.
This indicates the ground truth is derived directly from the originally acquired (presumably high-quality/standard) MRI data, rather than an independent clinical assessment like pathology or expert consensus. The AI's purpose is to reconstruct a high-quality image from manipulated or undersampled input, so the "truth" is the original high-quality image.
8. The sample size for the training set:
- Deep Resolve Boost: 24,599 2D slices
- Deep Resolve Sharp: 11,920 2D slices
Note that the document states: "due to reasons of data privacy, we did not record how many individuals the datasets belong to. Gender, age and ethnicity distribution was also not recorded during data collection."
9. How the ground truth for the training set was established:
As described in point 7:
- Deep Resolve Boost: The "acquired datasets" (original, full-quality MR images) served as the ground truth. Input data for the AI model was then "retrospectively created from the ground truth by data manipulation and augmentation," including undersampling, adding noise, and mirroring k-space data.
- Deep Resolve Sharp: The "acquired datasets" (original MR images) served as the ground truth. Input data was "retrospectively created from the ground truth by data manipulation," specifically by cropping k-space data so only the center part was used as low-resolution input, with the original full data as the high-resolution output/ground truth.