510(k) Data Aggregation
MAGNETOM Cima.X Fit (174 days)
The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.
The subject device, MAGNETOM Cima.X Fit with software syngo MR XA61A, consists of new and modified software and hardware that are similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).
A high-level summary of the new and modified hardware and software is provided below:
For MAGNETOM Cima.X Fit with syngo MR XA61:
Hardware
New Hardware:
- 3D Camera

Modified Hardware:
- Host computers (syngo MR Acquisition Workplace (MRAWP) and syngo MR Workplace (MRWP))
- MaRS (Measurement and Reconstruction System)
- Gradient Coil
- Cover
- Cooling/ACSC
- SEP
- GPA
- RFCEL Temp
- Body Coil
- Tunnel light
Software
New Features and Applications:
- GRE_PC
- Physio logging
- Deep Resolve Boost HASTE
- Deep Resolve Boost EPI Diffusion
- Open Recon
- Ghost reduction (DPG)
- Fleet Ref Scan
- Manual Mode
- SAMER
- MR Fingerprinting (MRF)

Modified Features and Applications:
- BEAT nav (re-naming only)
- myExam Angio Advanced Assist (Test Bolus)
- Beat Sensor (all sequences)
- Stimulation monitoring
- Complex Averaging
The provided text does not contain the acceptance criteria and comprehensive study details you requested for the "MAGNETOM Cima.X Fit" device, in particular point-by-point information on a multi-reader multi-case (MRMC) comparative effectiveness study or specific quantitative acceptance criteria for its AI features such as Deep Resolve Boost and Deep Resolve Sharp.
The document is a 510(k) summary for a Magnetic Resonance Diagnostic Device (MRDD), highlighting its substantial equivalence to a predicate device. While it mentions AI features and their training/validation, it does not provide the detailed performance metrics or study design needed to fully answer your request.
Here's what can be extracted based on the provided text, and where information is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
The document mentions that the impact of the AI networks (Deep Resolve Boost and Deep Resolve Sharp) has been characterized by "several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM)," and evaluated by "visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels" and "perceptual loss." For Deep Resolve Sharp, "an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp" was also conducted.
However, specific numerical acceptance criteria (e.g., PSNR > X, SSIM > Y) and the actual reported performance values against such criteria are not provided in the text. The document states that the conclusions from the non-clinical data suggest the features bear an equivalent safety and performance profile to that of the predicate device, but no quantitative data supporting this for the AI features is included in this summary.
| AI Feature | Acceptance Criteria (not explicitly stated with numerical values in the text) | Reported Device Performance (no quantitative results provided in the text) |
|---|---|---|
| Deep Resolve Boost | PSNR (implied to be high); SSIM (implied to be high); visual comparisons (e.g., absence of aliasing artifacts, good image sharpness, effective denoising levels) | Impact characterized by these metrics and visual comparisons. Claims of equivalent safety and performance profile to the predicate device. No specific quantitative performance values (e.g., actual PSNR/SSIM scores) are reported in this document. |
| Deep Resolve Sharp | PSNR (implied to be high); SSIM (implied to be high); perceptual loss; visual rating; image sharpness by intensity profile comparisons (reconstructions with and without Deep Resolve Sharp) | Impact characterized by these metrics, verified and validated by in-house tests. Claims of equivalent safety and performance profile to the predicate device. No specific quantitative performance values are reported in this document. |
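For context, PSNR and SSIM are standard full-reference image-quality metrics computed between a reference image and a reconstruction. The sketch below is not from the 510(k) summary; it is a minimal illustration, using scikit-image, of how such metrics are typically computed, with illustrative function and variable names.

```python
# Hypothetical sketch of how PSNR/SSIM-style quality metrics are typically
# computed; this is NOT the manufacturer's evaluation code.
# Requires numpy and scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def image_quality_metrics(reference: np.ndarray, reconstruction: np.ndarray):
    """Return (PSNR in dB, SSIM) for two magnitude images of the same shape."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
    ssim = structural_similarity(reference, reconstruction, data_range=data_range)
    return psnr, ssim


if __name__ == "__main__":
    # Synthetic stand-ins for a fully sampled reference slice and an
    # accelerated, denoised reconstruction of the same slice.
    rng = np.random.default_rng(0)
    reference = rng.random((256, 256))
    reconstruction = reference + 0.01 * rng.standard_normal((256, 256))
    print(image_quality_metrics(reference, reconstruction))
```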
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Deep Resolve Boost:
- Test Set Description: The text mentions that "the performance was evaluated by visual comparisons." It does not explicitly state a separate test set size beyond the validation data used during development. It implies the performance evaluation was based on the broad range of data covered during training and validation.
- Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation and augmentation."
- Deep Resolve Sharp:
- Test Set Description: The text mentions "in-house tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp." Similar to Deep Resolve Boost, a separate test set size is not explicitly stated. It implies these tests were performed on data from the more than 10,000 high-resolution 2D images used for training and validation.
- Data Provenance: Not specified (country of origin or retrospective/prospective). The data was "retrospectively created from the ground truth by data manipulation."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document mentions "visual comparisons" and "visual rating" as part of the evaluation but does not detail how many experts were involved or their qualifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study is not mentioned in this document as being performed to establish substantial equivalence for the AI features. The document relies on technical metrics and visual comparisons of image quality to demonstrate equivalence.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The evaluations mentioned, using metrics such as PSNR, SSIM, perceptual loss, and intensity profile comparisons, are indicative of standalone algorithm performance in terms of image quality. Visual comparisons and ratings would involve human observers, but the primary focus described is the image output quality of the algorithm itself. However, no specific standalone study design with comparative performance metrics (e.g., standalone diagnostic accuracy) is detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies the high-quality, full-data MRI scans before artificial undersampling or noise addition served as the ground truth. This is a technical ground truth based on the original acquired MRI data, not a clinical ground truth like pathology or expert consensus on a diagnosis.
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Deep Resolve Boost, this refers to technical ground truth from high-resolution 2D images before manipulation.
8. The sample size for the training set:
- Deep Resolve Boost:
- TSE: more than 25,000 slices
- HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
- EPI Diffusion: more than 1,000,000 slices
- Deep Resolve Sharp: on more than 10,000 high resolution 2D images.
9. How the ground truth for the training set was established:
- Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
- Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
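As a hedged illustration of the retrospective input-creation process quoted above, the sketch below (an assumption for clarity, not the manufacturer's actual pipeline; names are illustrative) degrades fully sampled k-space data by discarding phase-encoding lines and adding noise, as described for Deep Resolve Boost, or by keeping only the central k-space region, as described for Deep Resolve Sharp.

```python
# Hypothetical sketch (assumptions, not the manufacturer's pipeline) of creating
# degraded training inputs retrospectively from fully sampled "ground truth"
# data: discarding k-space lines and adding noise (Deep Resolve Boost) or
# keeping only the central k-space region (Deep Resolve Sharp).
# Mirroring/augmentation steps mentioned in the summary are omitted.
import numpy as np


def undersample_and_add_noise(kspace: np.ndarray, accel: int = 2, noise_std: float = 0.01):
    """Discard phase-encoding lines (keep every `accel`-th) and add complex noise."""
    undersampled = kspace.copy()
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::accel] = True
    undersampled[~mask, :] = 0  # discarded k-space lines
    noise = noise_std * (np.random.randn(*kspace.shape) + 1j * np.random.randn(*kspace.shape))
    return undersampled + noise  # lower-SNR, undersampled input


def crop_kspace_center(kspace: np.ndarray, keep_fraction: float = 0.5):
    """Keep only the central (low-frequency) part of k-space as a low-resolution input."""
    ny, nx = kspace.shape
    keep_y, keep_x = int(ny * keep_fraction), int(nx * keep_fraction)
    y0, x0 = (ny - keep_y) // 2, (nx - keep_x) // 2
    return kspace[y0:y0 + keep_y, x0:x0 + keep_x]


# Ground-truth image -> k-space -> degraded inputs -> lower-quality images.
ground_truth = np.random.rand(256, 256)  # stand-in for an acquired, fully sampled slice
kspace = np.fft.fftshift(np.fft.fft2(ground_truth))
boost_input = np.abs(np.fft.ifft2(np.fft.ifftshift(undersample_and_add_noise(kspace))))
sharp_input = np.abs(np.fft.ifft2(np.fft.ifftshift(crop_kspace_center(kspace))))
```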
In summary, the document focuses on the technical aspects of the AI features and their development, demonstrating substantial equivalence through non-clinical performance tests and image quality assessments, rather than clinical efficacy studies with specific diagnostic accuracy endpoints or human-AI interaction evaluations.