
510(k) Data Aggregation

    K Number
    K260265


    Date Cleared
    2026-02-23

    (26 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
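A K number like the one above can be cross-checked against FDA's public records programmatically. Below is a minimal sketch using openFDA's public device/510k.json endpoint; the endpoint, the `search` and `limit` parameters, and the `k_number` field are part of the published openFDA API, while the helper name is our own:

```python
# Sketch: build a lookup URL for a 510(k) K number via the public openFDA API.
# `openfda_510k_url` is a hypothetical helper name, not an official client.
from urllib.parse import quote

OPENFDA_510K = "https://api.fda.gov/device/510k.json"

def openfda_510k_url(k_number: str) -> str:
    """Return the openFDA query URL for a single 510(k) record."""
    return f'{OPENFDA_510K}?search=k_number:"{quote(k_number)}"&limit=1'

# Example: look up the first record on this page.
url = openfda_510k_url("K260265")
```

Fetching that URL (e.g. with `urllib.request` or `requests`) returns a JSON record whose fields include the applicant, decision date, and product code, which can be compared against the values shown here.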
    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and that displays, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    MAGNETOM Flow.Ace & MAGNETOM Flow.Plus with software syngo MR XB10 include new and modified hardware and software compared to the predicate devices, MAGNETOM Flow.Ace & MAGNETOM Flow.Plus with software syngo MR XA70A.

    New compared to predicate:

    • Spine support respiratory (Cushion) as a part of BM Spine Coil Set 1.5T (including new surface material)
    • Gradient Configuration Upgrade

    Modified compared to predicate, with a new claim introduced:

    • PETRA (new claim for the existing sequence)

    In addition, the following hardware and software are transferred from the reference device MAGNETOM Flow.Neo with software syngo MR XB10 (K252838) to the subject devices without any modifications:

    Hardware (New compared to predicate, same as reference):

    • BioMatrix Dockable Table with / without eDrive
    • Comfort Sound: Cushion

    Hardware (Modified compared to predicate, same as reference):

    • Comfort Sound: BM Head/Neck Coil
    • Relocatable Option

    Software (New compared to predicate, same as reference):

    • Open Workflow

    Software (Modified compared to predicate, same as reference):

    • BioMatrix Motion Sensor (SAMER)
    • CS_VIBE
    • SPAIR FatSat Improvements: SPAIR "Abdomen & Pelvis" mode and SPAIR Breast mode
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE

    Other Modifications and / or Minor Changes (New compared to predicate, same as reference):

    • Eco Power Mode Pro

    Other Modifications and / or Minor Changes (Modified compared to predicate, same as reference):

    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • ID Gain (re-naming)
    • Marketing bundle "myExam Companion"
    AI/ML Overview

    Acceptance Criteria and Study Details for Siemens MAGNETOM Flow.Ace and Flow.Plus

    Based on the provided 510(k) clearance letter, the acceptance criteria and study details are as follows. It's important to note that this document primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed de novo clinical trial for device efficacy. Therefore, specific metrics like sensitivity, specificity, or AUC for diagnostic performance are not explicitly stated as acceptance criteria in the typical sense for a new diagnostic claim.

    The acceptance criteria are generally focused on demonstrating that the new and modified features of the MAGNETOM Flow.Ace and Flow.Plus systems maintain equivalent safety and performance to the predicate device.

    1. Table of Acceptance Criteria and Reported Device Performance

    • Software Verification and Validation
      Criterion: New and modified software features conform to design specifications and perform as intended.
      Performance: Testing demonstrated that the new and modified software features performed as intended, supporting substantial equivalence.
    • Functionality of New/Modified Hardware
      Criterion: New hardware ("Spine support respiratory (Cushion)") and modified hardware ("BioMatrix Dockable Table with/without eDrive", "Comfort Sound: BM Head/Neck Coil", "Relocatable Option") perform as intended and safely.
      Performance: Testing demonstrated that the new and modified hardware features performed as intended and safely, supporting substantial equivalence.
    • Biocompatibility
      Criterion: The surface of applied parts (Spine support respiratory cushion and Comfort Sound cushion) in contact with patients is biocompatible.
      Performance: Biocompatibility testing (per ISO 10993-1) was conducted, demonstrating compliance.
    • Electrical Safety and Electromagnetic Compatibility (EMC)
      Criterion: The complete system complies with relevant safety and EMC standards.
      Performance: Electrical safety and EMC testing (per IEC 60601-1 and related collateral standards) was conducted, demonstrating compliance.
    • Acoustic Noise
      Criterion: The device meets acoustic noise limits.
      Performance: Acoustic noise measurement procedures (per NEMA MS 4-2010) were followed.
    • Compliance with General Standards
      Criterion: All modifications comply with recognized industry standards (e.g., IEC 60601-1 series, ISO 14971, IEC 62304, NEMA, DICOM).
      Performance: The device conforms to the listed FDA-recognized and international IEC, ISO, and NEMA standards.
    • Clinical Equivalence (through sample images and comparative literature)
      Criterion: New features (e.g., Deep Resolve Boost, CS_VIBE, PETRA, BioMatrix Motion Sensor) provide information that assists in diagnosis, maintaining the existing indications for use.
      Performance: Sample clinical images were provided as claim evidence, and clinical publications were referenced to support the use and performance of specific features (SAMER, CS_VIBE, PETRA). The conclusion was that the features bear an equivalent safety and performance profile.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a specific sample size for a test set in the context of clinical images or patient data.

    • Data Provenance: "Sample clinical images" were provided as "claim evidence." The origin (e.g., country) and whether this data was retrospective or prospective are not specified in the provided text. The clinical publications referenced are peer-reviewed articles, which would typically involve patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not provide information regarding the number of experts, their qualifications, or how ground truth was established for any "test set" of images. The phrase "interpreted by a trained physician" is used in the Indications for Use, which is a general statement about MR diagnostic devices.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for a test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, an MRMC comparative effectiveness study was not explicitly done or reported in this document. The submission relies on "sample clinical images" as "claim evidence" and references existing clinical publications for certain features to demonstrate substantial equivalence, rather than a direct comparative study showing improvement with AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Study was Done

    The document discusses "software verification and validation" and "nonclinical tests" which would imply standalone performance testing of the algorithms and features. However, it does not provide specific metrics or results for standalone algorithm performance (e.g., sensitivity, specificity, or AUC if applicable to specific features). The focus is on the performance of the integrated system.

    7. The Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any specific evaluation. The "Indications for Use" mention that images "when interpreted by a trained physician yield information that may assist in diagnosis." For the referenced clinical publications, the ground truth would depend on the methodology of those studies (e.g., pathology, clinical follow-up, expert consensus in a research setting), but this information is external to the 510(k) summary itself.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size for a training set. This is a 510(k) submission for an updated MR system, not a de novo AI/ML device where training data details are typically prominent. While features like "Deep Resolve Boost" and "Deep Resolve Sharp" likely leverage AI/ML, the details of their development and training are not disclosed in this summary.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide any information on how ground truth was established for a training set.


    K Number
    K250436


    Date Cleared
    2025-06-16

    (122 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and that displays, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    MAGNETOM Flow.Ace and MAGNETOM Flow.Plus are 60cm-bore MRI systems with quench pipe-free, sealed magnets utilizing DryCool technology. They are equipped with BioMatrix technology and run on Siemens' syngo MR XA70A software platform. The systems include Eco Power Mode for reduced energy and helium consumption. They have different gradient configurations suitable for all body regions, with stronger configurations supporting advanced cardiac imaging. Compared to the predicate device, new hardware includes a new magnet, gradient coil, RF system, local coils, patient tables, and computer systems. New software features include AutoMate Cardiac, Quick Protocols, BLADE with SMS acceleration for non-diffusion imaging, Deep Resolve Swift Brain, Fast GRE Reference Scan, Ghost reduction, Fleet Reference Scan, SMS Averaging, Select&GO extension, myExam Spine Autopilot, and New Startup-Timer. Modified features include improvements for Pulse Sequence Type SPACE, improved Gradient ECO Mode Settings, and Inline Image Filter switchable for users.

    AI/ML Overview

    The provided 510(k) clearance letter and summary describe the acceptance criteria and supporting studies for the MAGNETOM Flow.Ace and MAGNETOM Flow.Plus devices, particularly focusing on their AI features: Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document uses quality metrics like PSNR, SSIM, and NMSE as indicators of performance and implicitly as acceptance criteria. Visual inspection and clinical evaluations are also mentioned.

    • Deep Resolve Boost
      Quality metrics (acceptance criteria): Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM)
      Reported performance: Most metrics passed.
    • Deep Resolve Sharp
      Quality metrics (acceptance criteria): PSNR, SSIM, perceptual loss, visual rating, image sharpness evaluation by intensity profile comparisons
      Reported performance: Verified and validated by in-house tests, including visual rating and evaluation of image sharpness.
    • Deep Resolve Swift Brain
      Quality metrics (acceptance criteria): PSNR, SSIM, Normalized Mean Squared Error (NMSE), visual inspection
      Reported performance: After the quality metrics tests passed, work-in-progress packages were delivered and evaluated in clinical settings with collaboration partners. Potential artifacts not well captured by the metrics were detected via visual inspection.
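For context, the scalar metrics cited above are standard full-reference image-quality measures. A minimal numpy sketch of PSNR and NMSE follows; it is illustrative only (SSIM is usually computed with a library such as scikit-image, and the vendor's exact metric definitions and pass thresholds are not disclosed in the summary):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) of a test image against a reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

def nmse(ref, test):
    """Normalized mean squared error: ||ref - test||^2 / ||ref||^2."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    return np.sum((ref - test) ** 2) / np.sum(ref**2)
```

For example, comparing a uniform reference image against a copy scaled by 0.9 (with `data_range=1.0`) yields a PSNR of 20.0 dB and an NMSE of 0.01.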

    2. Sample Sizes Used for the Test Set and Data Provenance

    The document refers to "Training and Validation data" and does not explicitly state how many cases, if any, were strictly reserved as a separate test set distinct from the validation sets. Given the separately reported slice counts, however, the "Validation" slices for Deep Resolve Swift Brain can reasonably be treated as its test set.

    • Deep Resolve Boost:
      • TSE: >25,000 slices
      • HASTE: >10,000 HASTE slices (refined)
      • EPI Diffusion: >1,000,000 slices
      • Data Provenance: Retrospectively created from acquired datasets. Data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength.
    • Deep Resolve Sharp:
      • Data: >10,000 high resolution 2D images
      • Data Provenance: Retrospectively created from acquired datasets. Data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength.
    • Deep Resolve Swift Brain:
      • 1.5T Validation: 3,616 slices (This functions as a test set for 1.5T)
      • 3T Validation: 6,048 slices (This functions as a test set for 3T)
      • Data Provenance: Retrospectively created from acquired datasets.

    The document does not explicitly state the country of origin for the data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number or qualifications of experts used to establish the ground truth for the test sets. For Deep Resolve Swift Brain, it mentions "evaluated in clinical settings with collaboration partners," implying clinical experts were involved in the evaluation, but details are not provided. For Boost and Sharp, the "acquired datasets...represent the ground truth," suggesting the raw imaging data itself, rather than expert annotations on that data, served as ground truth.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (e.g., 2+1, 3+1). For Deep Resolve Swift Brain, it mentions "visually inspected" and "evaluated in clinical settings with collaboration partners," suggesting a qualitative assessment, but details on consensus or adjudication are missing.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    A formal MRMC comparative effectiveness study demonstrating human reader improvement with AI vs. without AI assistance is not described in the provided text. The studies focus on the AI's standalone performance in terms of image quality metrics and internal validation.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Study was Done

    Yes, standalone performance was done for the AI features. The "Test Statistics and Test Results Summary" for Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain describe the evaluation of the algorithm's output using quantitative metrics (PSNR, SSIM, NMSE) and visual inspection against reference standards, which is characteristic of standalone performance evaluation.

    7. The Type of Ground Truth Used

    For Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain, the ground truth used was the acquired high-quality datasets themselves. The input data for training and validation was then retrospectively created from this ground truth by manipulating or augmenting it (e.g., undersampling k-space, adding noise, cropping, using only the center part of k-space). This means the original, higher-quality MR images or k-space data served as the reference for what the AI models should reconstruct or improve upon.
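The retrospective scheme described above, in which the fully sampled acquisition is the ground truth and network inputs are derived from it, can be sketched in a few lines of numpy. This is an illustrative example only; the function name, the random sampling pattern, and the parameter choices are our assumptions, not Siemens' actual data pipeline:

```python
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Derive a degraded training input from a fully sampled image:
    transform to k-space, randomly drop phase-encode lines (always keeping
    the low-frequency centre, which carries image contrast), and
    reconstruct. The original image remains the ground-truth target."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_rows = kspace.shape[0]
    mask = rng.random(n_rows) < (1.0 / acceleration)
    c = max(1, int(n_rows * center_fraction / 2))
    mask[n_rows // 2 - c : n_rows // 2 + c] = True  # keep centre lines
    degraded_k = kspace * mask[:, None]
    degraded = np.abs(np.fft.ifft2(np.fft.ifftshift(degraded_k)))
    return degraded, mask
```

Each (degraded, original) pair produced this way becomes one (input, target) training or validation example, which is consistent with the document's statement that inputs were created by undersampling k-space or otherwise manipulating the acquired data.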

    8. The Sample Size for the Training Set

    • Deep Resolve Boost:
      • TSE: >25,000 slices
      • HASTE: pre-trained on the TSE dataset and refined with >10,000 HASTE slices
      • EPI Diffusion: >1,000,000 slices
    • Deep Resolve Sharp: >10,000 high resolution 2D images
    • Deep Resolve Swift Brain: 20,076 slices

    9. How the Ground Truth for the Training Set Was Established

    For Deep Resolve Boost, Deep Resolve Sharp, and Deep Resolve Swift Brain, the "acquired datasets (as described above) represent the ground truth for the training and validation." This implies that high-quality, fully acquired MRI data was considered the ground truth. The input data used during training (e.g., undersampled, noisy, or lower-resolution versions) was then derived or manipulated from this original ground truth. Essentially, the "ground truth" was the optimal, full-data acquisition before any degradation was simulated for the AI's input.

