
510(k) Data Aggregation

    K Number: K253057
    Date Cleared: 2026-01-22 (122 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: All
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    AI-Rad Companion Brain MR is a post-processing image analysis software that assists clinicians in viewing, analyzing, and evaluating MR brain images.

    AI-Rad Companion Brain MR provides the following functionalities:
    • Automated segmentation and quantitative analysis of individual brain structures and white matter hyperintensities
    • Quantitative comparison of each brain structure with normative data from a healthy population
    • Presentation of results for reporting that includes all numerical values as well as visualization of these results

    Device Description

    AI-Rad Companion Brain MR runs two distinct and independent algorithms: one for Brain Morphometry analysis and one for White Matter Hyperintensities (WMH) segmentation. Overall, the device comprises four main algorithmic features:

    • Brain Morphometry
    • Brain Morphometry follow-up
    • White Matter Hyperintensities (WMH)
    • White Matter Hyperintensities (WMH) follow-up

    The Brain Morphometry feature has been available since the first version of the device (VA2x); segmentation of White Matter Hyperintensities was added in VA4x, and the follow-up analysis for both has been available since VA5x. The brain morphometry and brain morphometry follow-up features have not been modified and remain identical to the previous VA5x mainline version.

    AI-Rad Companion Brain MR VA60 is an enhancement to the predicate, AI-Rad Companion Brain MR VA50 (K232305). Just as in the predicate, the brain morphometry feature of AI-Rad Companion Brain MR addresses the automatic quantification and visual assessment of the volumetric properties of various brain structures based on T1 MPRAGE datasets. For a predefined list of brain structures (e.g., Hippocampus, Caudate, Left Frontal Gray Matter), volumetric properties are calculated as absolute volumes and as volumes normalized to the total intracranial volume. The normalized values are compared against age-matched means and standard deviations obtained from a population of healthy reference subjects. Deviation from this reference population can be visualized as a 3D overlay map or as an out-of-range flag next to the quantitative values.
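The normative comparison described above boils down to a z-score against an age-matched reference distribution. The following sketch illustrates the idea only; the function name, the 1.96 threshold, and all numbers in the example are illustrative assumptions, not values from the submission.

```python
def out_of_range_flag(structure_volume_ml, total_icv_ml,
                      ref_mean_pct, ref_std_pct, z_threshold=1.96):
    """Normalize a structure volume by total intracranial volume (ICV) and
    flag deviations from an age-matched healthy reference distribution.

    Reference mean/STD are expressed as percentages of total ICV.
    Returns (normalized volume in %, z-score, out-of-range flag).
    """
    normalized_pct = 100.0 * structure_volume_ml / total_icv_ml
    z = (normalized_pct - ref_mean_pct) / ref_std_pct
    return normalized_pct, z, abs(z) > z_threshold

# Hypothetical example: a 3.2 ml hippocampus in a 1400 ml intracranial volume,
# compared to an assumed reference of 0.25% +/- 0.02% of ICV.
norm, z, flagged = out_of_range_flag(3.2, 1400.0, 0.25, 0.02)
```

Here `norm` is about 0.229% of ICV and `z` is about -1.07, so the structure would not be flagged under the assumed threshold.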

    Additionally, identical to the predicate, the white matter hyperintensities feature addresses the automatic quantification and visual assessment of white matter hyperintensities on the basis of T1 MPRAGE and T2-weighted FLAIR datasets. Detected WMH can be visualized as a 3D overlay map and are quantified by lesion count and volume across four brain regions in the report.
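Lesion count and volume are typically derived by connected-component analysis of the binary WMH mask. A minimal 2D sketch under that assumption (the function name and 4-connectivity choice are illustrative, not from the submission; production tools would use a 3D mask and a library such as SciPy's `ndimage.label`):

```python
def lesion_count_and_volume(mask, voxel_volume_ml):
    """Count 4-connected lesions in a 2D binary mask (list of lists of 0/1)
    and total their volume given the volume of one voxel in ml."""
    rows, cols = len(mask), len(mask[0])
    visited = [[False] * cols for _ in range(rows)]
    count, voxels = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not visited[r][c]:
                count += 1                      # new connected component found
                stack = [(r, c)]
                visited[r][c] = True
                while stack:                    # flood-fill this component
                    y, x = stack.pop()
                    voxels += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
    return count, voxels * voxel_volume_ml

# Two separate lesions totalling four voxels of 0.001 ml each.
demo_mask = [[1, 1, 0, 0],
             [0, 0, 0, 1],
             [0, 0, 0, 1]]
count, volume_ml = lesion_count_and_volume(demo_mask, 0.001)
```

A per-region report would simply run the same routine on the mask restricted to each of the four brain regions.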

    AI/ML Overview

    Here's a structured overview of the acceptance criteria and study details for the AI-Rad Companion Brain MR, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    WMH Feature: WMH Segmentation Accuracy
    • Pearson correlation coefficient between WMH volumes and ground truth annotation: 0.96
    • Intraclass correlation coefficient between WMH volumes and ground truth annotation: 0.94
    • Dice score: 0.60; F1-score: 0.67
    • Dice scores in detail: mean 0.60, median 0.62, STD 0.14, 95% CI [0.57, 0.63]
    • ASSD scores in detail: mean 0.05, median 0.00, STD 0.15, 95% CI [0.02, 0.08]

    WMH Follow-up Feature: New or Enlarged WMH Segmentation Accuracy
    • Pearson correlation coefficient between new or enlarged WMH volumes and ground truth annotation: 0.76
    • Average Dice score: 0.59; average F1-score: 0.71
    • Dice scores by vendor: Siemens mean 0.64, median 0.67, STD 0.15, 95% CI [0.60, 0.69]; GE mean 0.56, median 0.60, STD 0.14, 95% CI [0.51, 0.61]; Philips mean 0.55, median 0.59, STD 0.16, 95% CI [0.50, 0.61]
    • ASSD scores by vendor: Siemens mean 0.02, median 0.00, STD 0.06, 95% CI [0.00, 0.04]; GE mean 0.09, median 0.01, STD 0.23, 95% CI [0.03, 0.19]; Philips mean 0.04, median 0.00, STD 0.11, 95% CI [0.00, 0.08]
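For reference, the Dice and Pearson metrics reported above have standard definitions that can be computed directly from binary masks and per-subject volume pairs. This is a generic stdlib sketch of those definitions, not code from the submission:

```python
def dice_score(pred, truth):
    """Dice overlap 2*|A∩B| / (|A|+|B|) for two flat binary sequences.
    Returns 1.0 when both masks are empty (perfect agreement by convention)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def pearson_r(x, y):
    """Pearson correlation between paired per-subject volumes
    (e.g., device-measured WMH volume vs. ground-truth volume)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

The average symmetric surface distance (ASSD) additionally requires the lesion surface voxels and nearest-neighbor distances, so it is omitted here.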

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • White Matter Hyperintensities (WMH) Feature: 100 subjects (Multiple Sclerosis patients (MS), Alzheimer's patients (AD), cognitive impaired (CI), and healthy controls (HC)).
      • White Matter Hyperintensities (WMH) Follow-up Feature: 165 subjects (Multiple Sclerosis patients (MS) and Alzheimer's patients (AD)).
      • Data Provenance: Data were acquired on Siemens, GE, and Philips scanners. The testing data had a balanced distribution with respect to patient gender and age (per the target patient population) and field strength (1.5T and 3T). This indicates a retrospective, multi-vendor dataset; the countries or sites of origin are not stated.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

      • Number of Experts: Three radiologists.
      • Qualifications: Not stated beyond "radiologists"; neither board certification nor years of experience is specified.
    3. Adjudication Method for the Test Set:

      • For each dataset, three sets of ground truth annotations were created manually.
      • Each set was annotated by a disjoint group consisting of an annotator, a reviewer, and a clinical expert.
      • The clinical expert was randomly assigned per case to minimize annotation bias.
      • The clinical expert reviewed and corrected the initial annotation of the changed WMH areas according to a specified annotation protocol. Significant corrections led to re-communication with the annotator and re-review.
      • This describes a structured annotate-review-adjudicate workflow: each of the three ground-truth sets was produced by an annotator, checked by a reviewer, and corrected by a clinical expert, rather than a conventional 2+1 or 3+1 consensus read.
    4. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done:

      • No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not done. The study focuses on the standalone performance of the AI algorithm against expert ground truth.
    5. If a Standalone Study (i.e., algorithm only, without human-in-the-loop performance) Was Done:

      • Yes, a standalone performance study was done: "Accuracy was validated by comparing the results of the device to manual annotated ground truth from three radiologists." This evaluates the algorithm's performance directly.
    6. The Type of Ground Truth Used:

      • Expert Consensus / Manual Annotation: The ground truth for both WMH and WMH follow-up features was established through "manual annotated ground truth from three radiologists" and involved a "standard annotation process" with annotators, reviewers, and clinical experts.
    7. The Sample Size for the Training Set:

      • The document states that the "training data used for the fine tuning the hyper parameters of WMH follow-up algorithm is independent of the data used to test the white matter hyperintensity algorithm follow up algorithm." However, the specific sample size for the training set is not provided in the given text.
    8. How the Ground Truth for the Training Set Was Established:

      • The document implies that the WMH follow-up algorithm "does not include any machine learning/ deep learning component," suggesting a rule-based or conventional image processing algorithm. Therefore, "training" might refer to parameter tuning rather than machine learning model training.
      • For the "fine-tuning the hyper parameters of WMH follow-up algorithm," the ground truth establishment method for this training data is not explicitly detailed in the provided text. It only states that this data was "independent of the data used to test" the algorithm.
    Intended Use / Indications for Use

    Indications for Use for MAGNETOM Vida, MAGNETOM Lumina, MAGNETOM Vida Fit, MAGNETOM Sola, MAGNETOM Altea, MAGNETOM Sola Fit, MAGNETOM Viato.Mobile:

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Indications for Use for MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, MAGNETOM Flow.Rise:

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays, depending on optional local coils that have been configured with the system, the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Vida with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA60A (K231560).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • 3D Whole Heart
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • Complex Averaging
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Open Workflow

    Modified features and applications:

    • GRE_PC
    • myExam RT Assist workflow improvements
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Lumina with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Lumina with syngo MR XA60A (K231560). A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • Complex Averaging
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Compressed Sensing Cardiac Cine
    • Open Workflow

    Modified Features and Applications:

    • GRE_PC
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling


    The subject device, MAGNETOM Vida Fit with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA60A (K231560).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • myExam 3D Camera
    • Beat Sensor
    • BM Contour XL Coil

    Modified Hardware:

    • RF Transmitter TBX3 3T (TX Box 3)
    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • myExam Implant Suite
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • BioMatrix Motion Sensor
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • GRE_PC
    • Open Recon 2.0
    • 3D Whole Heart
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • myExam Autopilot Spine
    • myExam Autopilot Brain and myExam Autopilot Knee
    • Deep Resolve for EPI
    • Deep Resolve for HASTE
    • Physiologging
    • Complex Averaging
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • myExam Angio Advanced Assist (Test Bolus)
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • 3D Whole Heart
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow
    • Complex Averaging

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • myExam Angio Advanced Assist (Test Bolus)
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling


    The subject device, MAGNETOM Altea with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Altea with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)

    Software

    New Features and Applications:

    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Compressed Sensing Cardiac Cine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • GRE_PC
    • myExam RT Assist workflow improvements
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    The subject device, MAGNETOM Sola Fit with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola Fit with syngo MR XA70A (K250443).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • Brachytherapy Support for use with MR conditional applicators
    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam RT Assist workflow improvements
    • myExam Implant Suite
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling


    The subject device, MAGNETOM Viato.Mobile with software Syngo MR XB10, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Viato.Mobile with syngo MR XA70A (K250443).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • BM Contour XL Coil

    Modified Hardware:

    • MaRS (Measurement and reconstruction system)
    • Host computers

    Software

    New Features and Applications:

    • CS Vibe
    • DANTE blood suppression
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • RF pulse optimization with VERSE
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • ASNR recommended protocols for imaging of ARIA
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • myExam Implant Suite
    • GRE_PC
    • Open Recon 2.0
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Deep Resolve Swift Brain
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE

    With the subject software version, Syngo MR XB10, we are also introducing the following new 1.5T devices, which are part of our MAGNETOM Flow platform:

    • MAGNETOM Flow.Elite
    • MAGNETOM Flow.Neo
    • MAGNETOM Flow.Rise

    The subject devices, MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, and MAGNETOM Flow.Rise with software Syngo MR XB10, consist of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola with syngo MR XA61A (K232535).

    A high-level summary of the new and modified hardware and software is provided below:

    New Hardware:

    • Magnet
    • MREF (Magnet Refrigerator)
    • Gradient system
    • Gradient Coil
    • RF System
    • System Cover
    • Patient Table
    • MaRS (Measurement and Reconstruction System)
    • Select&GO Display (TPAN_3G) and Control Panel (CPAN_2G)
    • Body Coil
    • Head/Neck Coil
    • BM Head/Neck Coil (with ComfortSound)
    • BM Contour S Coil
    • BM Contour M Coil
    • BM Contour L Coil
    • BM Contour XL Coil
    • Foot/Ankle Coil
    • BM Spine Coil
    • iTx Extremity 18 Flare
    • Multi-Index MR-RT Positioning (a part of "RT Pro Edition" marketing bundle) (not available for MAGNETOM Flow.Rise)

    Modified Hardware:

    • Gradient Power Amplifier (GPA)
    • SAR Monitoring
    • In-Vivo Shim

    Software

    New Features and Applications:

    • CS Vibe
    • BioMatrix Motion Sensor
    • SPAIR FatSat Improvements: SPAIR "Abdomen&Pelvis" mode and SPAIR Breast mode
    • Deep Resolve Boost for FL3D_VIBE and SPACE
    • Deep Resolve Sharp for FL3D_VIBE and SPACE
    • Preview functionality for Deep Resolve Boost
    • EP2D_FID_PHS
    • EP_SEG_FID_PHS
    • AutoMate Cardiac (Cardiac AI Scan Companion)
    • DANTE blood suppression
    • SMS Averaging for TSE
    • SMS Averaging for TSE_DIXON
    • SMS for BLADE without diffusion function
    • Ghost reduction (Dual polarity Grappa (DPG))
    • Fleet Reference Scan
    • Deep Resolve Swift Brain
    • Quick Protocols
    • myExam Autopilot Spine
    • Open Workflow

    Modified features and applications:

    • myExam Implant Suite
    • GRE_PC
    • myExam RT Assist workflow improvements (not available for MAGNETOM Flow.Rise)
    • Open Recon 2.0
    • Deep Resolve Boost for TSE
    • "MTC Mode" for SPACE
    • SPACE Improvement: high bandwidth IR pulse
    • SPACE Improvement: increase gradient spoiling

    New (general) Software / Platform / Workflow:

    • Select&GO extension (coil-based Iso Centering, Patient Registration at the touch display, Start Scan at the touch display)
    • New Startup-Timer
    • myExam RT Assist (not available for MAGNETOM Flow.Rise)
    • myExam Brain RT-Autopilot (not available for MAGNETOM Flow.Rise)
    • Eco Power Mode Pro

    Modified (general) Software / Platform:

    • Improved Gradient ECO Mode Settings

    Furthermore, the following minor updates and changes were conducted for the subject devices MAGNETOM Vida, MAGNETOM Lumina, MAGNETOM Vida Fit, MAGNETOM Sola, MAGNETOM Altea:

    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • Inline Image Filter
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Select&Go Display (Touch Display (TPAN))
    • Marketing bundle "myExam Companion"

    The following minor updates and changes were conducted for the subject devices MAGNETOM Sola Fit and MAGNETOM Viato.Mobile:

    • Off-Center Planning Support
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Select&Go Display (Touch Display (TPAN))
    • Marketing bundle "myExam Companion"

    The following minor updates and changes were conducted for the subject devices MAGNETOM Flow.Elite, MAGNETOM Flow.Neo, MAGNETOM Flow.Rise:

    • Off-Center Planning Support
    • Flip Angle Optimization (Lock TR and FA)
    • Inline Image Filter
    • Automatic System Shutdown (ASS) sensor (Smoke Detector)
    • ID Gain (re-naming)
    • Marketing bundle "myExam Companion"
    • Marketing Bundle "RT Pro Edition" (not available for MAGNETOM Flow.Rise)

    AI/ML Overview

    This FDA 510(k) clearance letter pertains to several MAGNETOM MRI systems with software Syngo MR XB10. The document primarily focuses on demonstrating substantial equivalence to predicate devices through non-clinical testing of new and modified hardware and software features, particularly those involving Artificial Intelligence (AI) such as "Deep Resolve" functionalities.

    Here's an analysis of the acceptance criteria and the studies that prove the devices meet them, specifically for the AI features:

    1. Table of Acceptance Criteria and Reported Device Performance for AI Features

    The document does not explicitly state "acceptance criteria" for the AI features in a numerical format that would typically be seen for a device's performance metrics (e.g., minimum sensitivity, specificity). Instead, the acceptance criteria are implicitly defined by the evaluation methods and the "Test result summary" for each Deep Resolve feature, which aim to demonstrate equivalent or improved image quality compared to conventional methods.

    AI Feature: Deep Resolve Swift Brain
    • Acceptance criteria (implied): quantitative quality metrics (PSNR, SSIM, NMSE) to characterize the network's impact; visual inspection to ensure no undetected artifacts; evaluation in clinical settings with collaboration partners.
    • Reported device performance: "Impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and normalized mean squared error (NMSE)." "Images were inspected visually to ensure that potential artefacts are detected that are not well captured by the metrics." "Work-in-progress packages of the network were delivered and evaluated in clinical settings with collaboration partners."
    • Comments: The results indicate the criteria were met, suggesting the AI feature performs as intended, with acceptable quantitative metrics and no negative impact on image quality.

    AI Feature: Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE
    • Acceptance criteria (implied): quantitative evaluations (SSIM, PSNR, MSE) showing convergence of training and improvements over conventional parallel imaging; visual inspection confirming no negative impact on image quality; the function should allow faster acquisition or improved image quality.
    • Reported device performance: "Quantitative evaluations of structural similarity index (SSIM), peak signal-to-noise ratio (PSNR) and mean squared error (MSE) metrics showed a convergence of the training and improvements compared to conventional parallel imaging." "An inspection of the test images did not reveal any negative impact to the image quality." "The function has been used either to acquire images faster or to improve image quality."
    • Comments: The results indicate successful performance, demonstrating quantitative improvements and confirming the user benefit (faster acquisition or improved image quality) without negative visual impact.

    AI Feature: Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE
    • Acceptance criteria (implied): quantitative quality metrics (PSNR, SSIM, perceptual loss); rating and evaluation of image sharpness by intensity-profile comparisons; demonstration of increased edge sharpness and reduced Gibbs artifacts.
    • Reported device performance: "The impact of the Deep Resolve Sharp network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss." "The tests include rating and an evaluation of image sharpness by intensity profile comparisons of reconstruction with and without Deep Resolve Sharp. Both tests show increased edge sharpness and reduced Gibb's artifacts."
    • Comments: The results directly confirm improved image sharpness and reduced artifacts, meeting the implied performance criteria.

    AI Feature: Deep Resolve Boost for TSE
    • Acceptance criteria (implied): metrics (PSNR, SSIM, LPIPS) similar to the cleared predicate network, with both outperforming conventional GRAPPA; statistically significant reduction of banding artifacts; no significant changes in sharpness and detail visibility; radiologist evaluation confirming no difference in suitability for clinical diagnostics.
    • Reported device performance: "The evaluation on the test dataset confirmed very similar metrics in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and learned perceptual image patch similarity metrics (LPIPS) for the predicate and the modified network with both outperforming conventional GRAPPA as the reference." "Visual evaluations confirmed statistically significant reduction of banding artifacts with no significant changes in sharpness and detail visibility." "In addition, the radiologist evaluation revealed no difference in suitability for clinical diagnostics between updated and cleared predicate network."
    • Comments: This AI feature directly demonstrates equivalent or improved performance compared to the predicate, with a specific "radiologist evaluation" supporting clinical suitability.
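    For reference, the image quality metrics cited above can be computed as in the sketch below. This is illustrative only: it uses global image statistics for SSIM (production evaluations typically average the same formula over local windows), and it is not the vendor's implementation.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between reference and test images."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

def nmse(ref, test):
    """Normalized mean squared error: ||ref - test||^2 / ||ref||^2."""
    return np.sum((ref - test) ** 2) / np.sum(ref ** 2)

def ssim_global(ref, test, data_range=1.0):
    """SSIM evaluated once over the whole image (single global window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = np.mean((ref - mu_x) * (test - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

    Higher PSNR/SSIM and lower NMSE indicate closer agreement between a network reconstruction and its reference image.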

    2. Sample Size Used for the Test Set and Data Provenance

    Since the document distinguishes between training, validation, and testing datasets, the "test set" here refers to the data used for final evaluation of the AI model's performance.

    • Deep Resolve Swift Brain:

      • Test Set Sample Size: The document lists "Validation: 3,616 slices (1.5T validation); 6,048 slices (3T validation)" as part of the data split. It also mentions that "work-in-progress packages of the network were delivered and evaluated in clinical settings with collaboration partners," implying additional testing, but no numerical sample size is given for this external evaluation. The validation splits therefore serve as the primary "test set" for the reported performance metrics.
      • Data Provenance: "in-house measurement," implying retrospective data collected at Siemens' facilities. The document notes that "attributes like gender, age and ethnicity are not relevant to the training data" due to network architecture, but no specific country of origin is stated beyond "in-house."
    • Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE:

      • Test Set Sample Size: The document states that 19% of the 1265 measurements (roughly 240) were used for validation. It also explicitly mentions "collaboration partners (testing)," indicating an external test set, but no numerical breakdown for it is provided.
      • Data Provenance: "in-house measurements (training and validation) and collaboration partners (testing)." This suggests a mix of retrospective data potentially from various countries where Siemens has collaboration, though specific locations are not listed.
    • Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE:

      • Test Set Sample Size: 30% of the 500 measurements are listed for validation, which serves as a test set. This equates to 150 measurements.
      • Data Provenance: "in-house measurements," implying retrospective data from Siemens' research facilities. Specific country not mentioned.
    • Deep Resolve Boost for TSE:

      • Test Set Sample Size: "Additional test dataset for banding artifact reduction: more than 2000 slices."
      • Data Provenance: "in-house measurements and collaboration partners" for training/validation. The "additional test dataset for banding artifact reduction" likely follows the same provenance. Retrospective data.

    3. Number of Experts Used and Qualifications for Ground Truth

    The document does not explicitly state the number of experts used to establish ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience") for any of the Deep Resolve features.

    However, for Deep Resolve Boost for TSE, it mentions:

    • "Visual evaluations confirmed statistically significant reduction of banding artifacts... "
    • "In addition, the radiologist evaluation revealed no difference in suitability for clinical diagnostics..."

    This indicates that radiologists were involved in evaluating the Deep Resolve Boost for TSE feature, presumably serving as experts to assess clinical suitability; their exact number and detailed qualifications are not provided. For the other features, the ground truth is the acquired raw data (or manipulated versions of it), with no explicit mention of expert review in the ground truth establishment process.


    4. Adjudication Method (for the test set)

    The document does not specify an adjudication method like "2+1" or "3+1" for establishing ground truth or evaluating the test set for any of the AI features. The ground truth for training and validation is derived from the "acquired datasets" which are considered the ground truth due to data manipulation and augmentation from these high-quality source images. For Deep Resolve Boost for TSE, a "radiologist evaluation" is mentioned, implying expert review without detailing a specific adjudication protocol.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to measure the improvement of human readers with AI assistance versus without AI assistance. The evaluations focus on the standalone performance of the AI algorithms in improving image quality metrics and, in one instance (Deep Resolve Boost for TSE, radiologist evaluation), the suitability for clinical diagnostics, rather than the impact on human reader performance.


    6. Standalone (Algorithm Only) Performance

    Yes, standalone (algorithm-only) performance testing was done. The descriptions for each Deep Resolve feature focus entirely on the algorithm's performance in terms of quantitative image quality metrics (PSNR, SSIM, NMSE, MSE, LPIPS), visual inspection for artifacts, and improvements over conventional techniques. There is no mention of a "human-in-the-loop" component in the described performance evaluations, except for the "radiologist evaluation" for Deep Resolve Boost for TSE, which assessed the clinical suitability of the output images rather than reader performance with the AI.


    7. Type of Ground Truth Used

    • For Deep Resolve Swift Brain, Deep Resolve Boost for FL3D_VIBE & SPACE, and Deep Resolve Sharp for FL3D_VIBE & SPACE:

      • The ground truth used was the acquired datasets (raw MRI data). The input data for the AI models was then "retrospectively created from the ground truth by data manipulation and augmentation" (e.g., undersampling k-space, adding noise, cropping, creating sub-volumes, cropping k-space to simulate low-resolution input from high-resolution output). This means the AI models were trained to learn the mapping from manipulated (e.g., noisy, low-resolution, undersampled) inputs to the original, high-quality acquired image data.
    • For Deep Resolve Boost for TSE:

      • Similar to above, the "acquired training/validation datasets" were considered the ground truth. Input data was generated by "data manipulation and augmentation" (e.g., discarding k-space lines, lowering SNR, mirroring k-space data).

    In essence, the AI models are trained to restore or enhance images to resemble the high-quality, fully acquired MRI data that serves as the reference ground truth.


    8. Sample Size for the Training Set

    • Deep Resolve Swift Brain: 20,076 slices
    • Deep Resolve Boost for FL3D_VIBE and Deep Resolve Boost for SPACE: 81% of 1265 measurements (approximately 1025 measurements).
    • Deep Resolve Sharp for FL3D_VIBE and Deep Resolve Sharp for SPACE: 70% of 500 measurements. (This equates to 350 measurements).
    • Deep Resolve Boost for TSE: More than 23,250 slices (93% of the total dataset).
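    The percentage splits quoted above imply the following absolute counts; this is simple arithmetic on the figures in the filing, not numbers the filing reports directly.

```python
# Training/validation counts implied by the percentages in the filing.
boost_total = 1265                       # FL3D_VIBE / SPACE Boost measurements
boost_train = round(0.81 * boost_total)  # ~1025 training measurements
boost_val = boost_total - boost_train    # ~240 validation measurements

sharp_total = 500                        # Sharp measurements
sharp_train = round(0.70 * sharp_total)  # 350 training measurements
sharp_val = sharp_total - sharp_train    # 150 validation measurements
```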

    9. How the Ground Truth for the Training Set Was Established

    For all Deep Resolve features, the ground truth for the training set was established from acquired MRI datasets (either "in-house measurements" or from "collaboration partners"). These acquired datasets are implicitly considered the "true" or "high-quality" images. The AI models are designed to process inputs that mimic suboptimal acquisition conditions (e.g., undersampled k-space, lower SNR, lower resolution) and generate outputs that match these high-quality acquired images, which serve as the ground truth for learning. The process involved:

    • Retrospective creation: Input data was created retrospectively from the acquired ground truth data.
    • Data manipulation and augmentation: This involved techniques such as:
      • Discarding k-space lines (undersampling).
      • Lowering the SNR level by adding Gaussian noise to k-space data.
      • Uniformly-random cropping of training data.
      • Creating sub-volumes of acquired data.
      • Cropping k-space to generate low-resolution inputs corresponding to high-resolution ground truth.
      • Mirroring of k-space data.

    This is a self-supervised learning paradigm: the ground truth is derived directly from the complete, high-fidelity raw data, and the AI is trained to reconstruct or enhance images from synthetically degraded inputs to match this reference.
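    The retrospective degradation steps listed above (discarding k-space lines, adding noise to lower SNR) can be sketched as follows. This is a hypothetical illustration: the function name `degrade`, the acceleration factor, and the noise level are assumptions, not details from the filing.

```python
import numpy as np

def degrade(image, accel=2, noise_std=0.01, seed=0):
    """Create a degraded network input from a ground-truth image by
    retrospectively undersampling k-space and injecting complex Gaussian
    noise. The unmodified `image` serves as the training target."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))          # image -> k-space
    keep = np.zeros(image.shape[0], dtype=bool)
    keep[::accel] = True                             # keep every accel-th line
    k = k * keep[:, None]                            # discard phase-encode lines
    k = k + noise_std * (rng.normal(size=k.shape)    # lower SNR by adding
                         + 1j * rng.normal(size=k.shape))  # complex noise
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))  # degraded input image
```

    A training pair is then simply `(degrade(image), image)`: the network learns to map the degraded input back to the fully sampled acquisition.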


    K Number
    K252608

    Date Cleared
    2025-09-09

    (22 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    AI-Rad Companion Prostate MR is indicated for the processing and annotation of DICOM MR prostate images acquired in adult male populations that demonstrate indications of oncological abnormalities in the prostate.

    The AI-Rad Companion Prostate MR software aims to support the radiologist and provides the following functionality:
    • Viewing, analyzing, evaluating prostate MR images including DCE, ADC, T2 and DWI
    • Hosting application for and provides interface to external Prostate MR AI plug-in device
    • Accept/reject/edit the results generated by the plug-in software Prostate MR AI

    Device Description

    AI-Rad Companion Prostate MR is a diagnostic aid in the interpretation of prostate MRI examinations acquired according to the PI-RADS standard.

    AI-Rad Companion Prostate MR provides quantitative and qualitative information based on bi or multiparametric prostate MR DICOM images. It displays information on the segmented gland, prostate volume, and segmented lesions along with their classifications. This information can be used to support the reading and reporting of prostate MR studies, as well as the planning of prostate biopsies in the case of ultrasound guided MR-US fusion biopsies of the prostate gland.

    The primary features of AI-Rad Companion Prostate MR include:
    • Display of Automatic Segmentation and volume of the prostate gland as well as display of automatic segmentation, quantification and classification of lesions
    • Manual Adjustment of gland and lesion segmentation and editing of lesion scores, diameter, and localization of the automated generated lesions
    • Marking of new lesions
    • Export of results as RTSS format for import into supporting ultrasound or fusion biopsy planning systems

    AI/ML Overview

    Based on the provided FDA 510(k) clearance letter for AI-Rad Companion Prostate MR (K252608), there is no specific study described that proves the device meets predefined acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy). The document primarily focuses on demonstrating substantial equivalence to a predicate device (AI-Rad Companion Prostate MR K193283) and adherence to non-clinical verification and validation standards for software development and risk management.

    The document explicitly states: "No clinical tests were conducted to test the performance and functionality of the modifications introduced within AI-Rad Companion Prostate MR."

    Therefore, a table of acceptance criteria and reported device performance, information about sample sizes, expert ground truth establishment, adjudication methods, multi-reader multi-case studies, standalone performance, and training set details are not available in this document as no clinical performance study for the modified device was performed.

    The document emphasizes that modifications and improvements were verified and validated through non-clinical tests (software verification and validation, unit, system, and integration tests), which demonstrated conformity to industry standards and the predicate device's existing safety and effectiveness.

    Here’s a breakdown of what is stated in the document regarding testing:

    1. A table of acceptance criteria and the reported device performance:

    • Not provided. The document does not include a table of specific clinical acceptance criteria (e.g., target sensitivity or specificity values) or reported device performance metrics against such criteria. The focus is on demonstrating that software enhancements do not adversely affect safety and effectiveness, assuming the predicate device's performance was already acceptable.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

    • Not provided. Since no clinical performance study was conducted for this specific submission, details on test set sample sizes and data provenance are not presented.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    • Not applicable. As no clinical study is reported, this information is not available.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance:

    • Not done. The document explicitly states "No clinical tests were conducted."

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Not explicitly stated for the modified device. While the device description mentions automatic segmentation and classification, the overall context emphasizes a "diagnostic aid" that "aims to support the radiologist" and has functionality to "Accept/reject/edit the results generated by the plug-in software Prostate MR AI." This suggests an interactive workflow where standalone performance is not the primary claim for this particular submission. The separate product, "Prostate MR AI (K241770)," which performs the core AI tasks, is likely where standalone performance would be detailed, but not in this document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Not applicable for this submission, as no new clinical performance study is detailed for the modified device. The original predicate device's performance would have relied on a ground truth, but that information is not part of this document.

    8. The sample size for the training set:

    • Not provided. Since this submission is for an updated version of an already cleared device and no new clinical performance study is detailed, the training set size for the underlying AI model (likely part of K241770 or the predicate K193283) is not included here.

    9. How the ground truth for the training set was established:

    • Not provided. This information would typically be detailed in the original submission for the AI algorithm (likely K241770 or K193283), not in this update focused on software enhancements and substantial equivalence.

    In summary, the provided document focuses on demonstrating that the enhancements and modifications to the AI-Rad Companion Prostate MR do not adversely affect the safety and effectiveness of the existing predicate device. It relies on non-clinical software verification and validation, and substantial equivalence arguments, rather than presenting a de novo clinical performance study with new acceptance criteria and results.


    K Number
    K250443

    Date Cleared
    2025-06-16

    (122 days)

    Product Code
    Regulation Number
    892.1000
    Age Range
    All
    Predicate For
    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Avanto Fit with software syngo MR XA70A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Avanto Fit with syngo MR XA50A (K220151).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Avanto Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera
    BM Head/Neck 20

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    myExam Autopilot Brain
    myExam Autopilot Knee
    3D Whole Heart
    HASTE_interactive
    GRE_PC
    Open Recon
    Deep Resolve Gain
    Fleet Reference Scan
    Physio logging
    complex averaging
    AutoMate Cardiac
    Ghost Reduction
    BLADE diffusion
    Beat Sensor
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode
    myExam Angio Advanced Assist (Test Bolus)

    The subject device, MAGNETOM Skyra Fit with software syngo MR XA70A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Skyra Fit with syngo MR XA50A (K220589).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Skyra Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    Beat Sensor
    HASTE_interactive
    GRE_PC
    3D Whole Heart
    Deep Resolve Gain
    Open Recon
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    HASTE diffusion
    Physio logging
    complex averaging
    Deep Resolve Swift Brain
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    SVS_EDIT

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode
    myExam Angio Advanced Assist (Test Bolus)

    The subject device, MAGNETOM Sola Fit with software syngo MR XA70A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Sola Fit with syngo MR XA51A (K221733).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Sola Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    GRE_PC
    3D Whole Heart
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    Physio logging
    Open Recon
    Complex averaging
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    Implant suite

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode

    The subject device, MAGNETOM Viato.Mobile with software syngo MR XA70A, consists of new and modified software and hardware that is similar to what is currently offered on the predicate device, MAGNETOM Viato.Mobile with syngo MR XA51A (K240608).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Viato.Mobile with syngo MR XA70:

    Hardware

    New Hardware:
    n.a.

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    GRE_PC
    3D Whole Heart
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    Physio logging
    Open Recon
    Complex averaging
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    Implant suite

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode

    Furthermore, the following minor updates and changes were conducted for the subject devices:

    Low SAR Protocol minor update (for all subject devices except MAGNETOM Skyra Fit): the goal of the SAR-adaptive protocols was to perform knee, spine, heart, and brain examinations at 50% of the maximum allowed SAR values in normal mode for head and whole-body SAR. The SAR reduction was achieved through parameter adaptations such as flip angle, TR, RF pulse type, turbo factor, and concatenations. For cardiac imaging, clinically accepted alternative imaging contrasts are used (submitted with K232494).

    Implementation of image sorting prepare for PACS (submitted with K231560).

    Implementation of improved DICOM color support (submitted with K232494).

    Needle intervention AddIn was added to all subject devices (submitted with K232494).

    Inline Image Filter switchable for users: in the subject device, users have the ability to switch the "Inline image filter" (implicite Filter) on or off. This filter is an image-based filter that can be applied to specific pulse sequence types. The function of the filter remains unchanged from the previous device MAGNETOM Sola with syngo MR XA61A (K232535).

    SVS_EDIT is newly added for MAGNETOM Skyra Fit, but without any changes (submitted with K203443)

    Brain Assist received an improvement and is identical to that of syngo MR XA61A (K232535).

    Open Recon is introduced for all systems. The function of Open Recon remains unchanged from the previous submissions (submitted with K221733).

    Lock TR and FA in Bold received a minor UI update

    Implant Suite is newly introduced for MAGNETOM Sola Fit and MAGNETOM Viato.Mobile, but without any changes (submitted with K232535)

    myExam Autopilot Brain and myExam Autopilot Knee are newly introduced for the subject device MAGNETOM Avanto Fit and are unchanged from previous submissions (submitted with K221733).

    myExam Angio Advanced Assist (Test Bolus) received bug fixes and minimal UI improvements.

    AI/ML Overview

    The provided text is an FDA 510(k) clearance letter for various MAGNETOM MRI Systems. While it details new and modified software and hardware features, it does not include specific acceptance criteria or a study that "proves the device meets the acceptance criteria" in terms of performance metrics like sensitivity, specificity, or accuracy for a diagnostic task.

    Instead, the document focuses on demonstrating substantial equivalence to predicate devices. This is achieved by:

    • Stating that the indications for use are the same.
    • Listing numerous predicate and reference devices.
    • Detailing hardware and software changes.
    • Mentioning non-clinical tests like software verification and validation, sample clinical images, and image quality assessment to show that the new features maintain an "equivalent safety and performance profile" to the predicate devices.
    • Referencing scientific publications for certain features to support their underlying principles and utility.
    • Briefly describing the training and validation data for two AI features: Deep Resolve Boost and Deep Resolve Sharp, but without performance acceptance criteria or detailed results.

    Therefore, much of the requested information cannot be extracted from this document because it is not a study report detailing clinical performance against predefined acceptance criteria for a specific diagnostic outcome.

    However, I can extract the information related to the AI features as best as possible from the "AI Features/Applications training and validation" section (Page 16).


    Acceptance Criteria and Study Details (Limited to AI Features)

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature: Deep Resolve Boost
    • Acceptance criteria: not explicitly stated as specific numerical thresholds, but implied through the evaluation metrics.
    • Reported device performance: "The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Most importantly, the performance was evaluated by visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels." (Exact numerical results not provided.)

    Feature: Deep Resolve Sharp
    • Acceptance criteria: not explicitly stated as specific numerical thresholds, but implied through evaluation metrics and verification activities.
    • Reported device performance: "The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss. In addition, the feature has been verified and validated by inhouse tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp." (Exact numerical results not provided.)

    2. Sample size used for the test set and the data provenance

    • Deep Resolve Boost:
      • Test Set Sample Size: Not explicitly stated as a separate "test set" size. The document mentions "training and validation data" for over 25,000 TSE slices, over 10,000 HASTE slices (for refinement), and over 1,000,000 EPI Diffusion slices. It's unclear what proportion of this was used specifically for final testing, or if the "validation" mentioned includes the final performance evaluation.
      • Data Provenance: Retrospective, described as "Input data was retrospectively created from the ground truth by data manipulation and augmentation." Country of origin is not specified.
    • Deep Resolve Sharp:
      • Test Set Sample Size: Not explicitly stated as a separate "test set" size. The document mentions "training and validation" on more than 10,000 high resolution 2D images. Similar to Deep Resolve Boost, it's unclear what proportion was specifically for final testing.
      • Data Provenance: Retrospective, described as "Input data was retrospectively created from the ground truth by data manipulation." Country of origin is not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. The definition of "ground truth" for the AI features refers to the acquired datasets themselves rather than expert-labeled annotations. Visual comparisons are mentioned as part of the evaluation, but without details on expert involvement or qualifications.

    4. Adjudication method for the test set

    This information is not provided in the document. While "visual comparisons" and "visual rating" are mentioned, no specific adjudication method (e.g., 2+1, 3+1) is described.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance

    No, a MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance is not described in this document. The focus of the AI features (Deep Resolve Boost and Deep Resolve Sharp) is on image quality enhancement (denoising, sharpness) and reconstruction rather than assisting human readers in a diagnostic task that can be quantified by an effect size.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    Yes, the evaluation of Deep Resolve Boost and Deep Resolve Sharp, based on metrics like PSNR, SSIM, and perceptual loss, and "visual comparisons" or "visual rating" appears to be an assessment of the algorithm's performance in enhancing image quality in a standalone capacity, without direct human-in-the-loop interaction for diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies the original, full-quality, unaltered MRI scan data. Further, "Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Boost, this refers to original, high-resolution MRI scan data. For training, "k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."

    8. The sample size for the training set

    • Deep Resolve Boost:
      • TSE: more than 25,000 slices
      • HASTE (for refinement): more than 10,000 HASTE slices
      • EPI Diffusion: more than 1,000,000 slices
    • Deep Resolve Sharp: more than 10,000 high resolution 2D images.

    9. How the ground truth for the training set was established

    • Deep Resolve Boost: The ground truth was established by the "acquired datasets" themselves (full-quality MRI scans). The training input data was then derived from this ground truth by simulating degraded images (e.g., under-sampling, adding noise).
    • Deep Resolve Sharp: Similarly, the ground truth was the "acquired datasets" (high-resolution MRI scans). The training input data was derived by cropping k-space data to create corresponding low-resolution inputs.
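The k-space cropping described for Deep Resolve Sharp can be illustrated with a small numpy sketch. This is our reconstruction of the general technique, not the submission's pipeline; the function name, default crop fraction, and zero-padding choice are our assumptions:

```python
import numpy as np

def lowres_from_kspace(image, keep_frac=0.5):
    """Create a low-resolution network input from a high-resolution
    slice by keeping only the central fraction of k-space; the original
    image then serves as the training ground truth."""
    k = np.fft.fftshift(np.fft.fft2(image))  # centered k-space
    ny, nx = k.shape
    cy, cx = int(ny * keep_frac) // 2, int(nx * keep_frac) // 2
    # Zero out everything outside the central window so that the
    # low-resolution input keeps the same grid as the ground truth.
    padded = np.zeros_like(k)
    padded[ny // 2 - cy:ny // 2 + cy, nx // 2 - cx:nx // 2 + cx] = \
        k[ny // 2 - cy:ny // 2 + cy, nx // 2 - cx:nx // 2 + cx]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(padded)))
```

Discarding the k-space periphery removes high spatial frequencies, so fine detail is lost while the overall contrast is preserved, which is exactly the input/target asymmetry a sharpening network is trained to invert.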

    K Number
    K242551

    Date Cleared
    2025-04-03

    (219 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    syngo Dynamics is a multimodality, vendor agnostic Cardiology image and information system intended for medical image management and processing that provides capabilities relating to the review and digital processing of medical images.

    syngo Dynamics supports clinicians by providing post image processing functions for image manipulation, and/or quantification that are intended for use in the interpretation and analysis of medical images for disease detection, diagnosis, and/or patient management within the healthcare institution's network.

    syngo Dynamics is not intended to be used for display or diagnosis of digital mammography images in the U.S.

    Device Description

    syngo Dynamics is a software only medical device which is used with common IT hardware. Recommended configurations are defined for the hardware required to run the device, and hardware is not considered as part of the medical device.

    syngo Dynamics is intended to be used by trained healthcare professionals in a professional healthcare facility to review, edit, and manipulate image data, as well as to generate quantitative data, qualitative data, and diagnostic reports.

    syngo Dynamics is a digital image display and reporting system with flexible deployment – it can function as a standalone medical device that includes a DICOM Server or as an integrated module within an Electronic Health Record (EHR) System with a DICOM Archive that receives images from digital image acquisition devices such as ultrasound and x-ray angiography machines. There are three deployments: Standalone, EHR/EHS Integrated, and Multi-Modality Cardiovascular (MMCV). The MMCV deployment functions as a standalone medical device with the capability to natively support 2D and 3D CT and MR image types.

    The use of syngo Dynamics is focused on cardiac ultrasound (echocardiography), angiography (x-ray), cardiac nuclear medicine (NM), CT and MR studies that cover both adult and pediatric medicine. Also supported is vascular ultrasound and ultrasound in Obstetrics/Gynecology and Maternal Fetal Medicine (fetal echocardiography during pregnancy).

    syngo Dynamics is based on a client-server architecture. The syngo Dynamics server processes the data from the connected imaging modalities, and stores data and images to a DICOM server and routes them for permanent storage, printing, and review. The client provides the user interface for interactive image viewing, reporting, and processing; and can be installed on network connected workstations.

    syngo Dynamics provides various semi-automated anatomical visualization tools.

    syngo Dynamics offers multiple access strategies: A Workplace that provides full functionality for reading and reporting; A Remote Workplace that provides additionally compressed images with access to full fidelity images for reading and reporting; and a browser based WebViewer that provides access to additionally compressed images and reports from compatible devices (including mobile devices).

    In the United States, monitors (displays) should not be used for diagnosis, unless the monitor (display) has specifically received 510(k) clearance for this purpose.

    AI/ML Overview

    This FDA 510(k) clearance letter pertains to syngo Dynamics (Version VA41D), a Medical Image Management and Processing System (MIMPS). While the document broadly discusses the device's substantial equivalence to a predicate device (syngo Dynamics VA40F) and its general functionalities, the only specific AI/ML-enabled function for which performance data and acceptance criteria are detailed is the Auto EF algorithm for calculating left ventricular ejection fraction from ultrasound images.

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based specifically on the Auto EF algorithm information provided:


    1. Table of Acceptance Criteria and Reported Device Performance (Auto EF Algorithm)

    The document states that "Additional acceptance criteria were defined with a total of 12 predetermined acceptance criteria," but only explicitly details one primary statistical criterion and provides summarized performance for a few other aspects.

    | Acceptance Criterion | Reported Device Performance (syngo Dynamics VA41D) |
    | Pearson's correlation coefficient (r) between biplane EF generated by Auto EF and ground truth $\ge 0.800$ | 0.822 (compared to 0.826 for predicate VA40F) |
    | Increased percentage of cases with biplane EF results | 93.3% (140 of 150 cases, compared to 92.0% for predicate VA40F) |
    | Bias of absolute EF | Minimal, -0.2% (unchanged from predicate VA40F) |
    | Percentage of cases where absolute biplane EF delta between Auto EF and GT $\le$ 10% | 87.9% (compared to 83.7% for predicate VA40F) |
    | All 12 predetermined acceptance criteria | Exceeded all 12 defined acceptance criteria. |
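The statistical criteria reported for Auto EF are straightforward to compute from paired EF measurements. The helper below is our illustration (the function name and return keys are ours); only the r ≥ 0.800 threshold is taken from the document:

```python
import numpy as np

def auto_ef_acceptance(auto_ef, gt_ef):
    """Evaluate Auto-EF-style acceptance metrics against ground truth
    for paired ejection-fraction values (in percent)."""
    auto_ef = np.asarray(auto_ef, float)
    gt_ef = np.asarray(gt_ef, float)
    r = np.corrcoef(auto_ef, gt_ef)[0, 1]        # Pearson's r
    bias = float(np.mean(auto_ef - gt_ef))       # mean signed error
    within_10 = float(np.mean(np.abs(auto_ef - gt_ef) <= 10.0))
    return {"pearson_r": r, "bias": bias,
            "frac_within_10": within_10, "r_meets_0800": r >= 0.800}
```

Note that a high Pearson's r alone does not rule out a systematic offset, which is why the submission also reports the bias and the fraction of cases within a 10-point EF delta.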

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: n = 150 cases.
    • Data Provenance: The test data originated from 3 sites in the U.S., representing geographic diversity from 2 different regions. The data was collected retrospectively and was independent of the training data. The document states it is "representative of the intended use population for Auto EF" and balanced for gender, covering ages 21-93 years and BMIs 16.5-48.8. It also included data from three ultrasound manufacturers (Philips, GE, and Siemens).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 2 experienced sonographers.
    • Qualifications: "experienced sonographers." Specific details regarding their years of experience or board certifications are not provided in the document.

    4. Adjudication Method for the Test Set

    • Adjudication Method: The two sonographers worked independently to establish the ground truth. There is no mention of a formal adjudication process (e.g., 2+1, 3+1), arbitration by a third expert, or a consensus meeting after independent readings. They "did not have access to Auto EF when establishing the ground truth."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No. The document explicitly states, "No clinical studies were carried out for syngo Dynamics (Version VA41D). All performance testing was conducted in a non-clinical fashion as part of the verification and validation activities for the medical device." The evaluation focused on the algorithm's performance against ground truth, not on human reader performance with or without AI assistance.
    • Effect Size of Human Reader Improvement: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm-Only) Performance Study

    • Standalone Study: Yes. The performance validation of the Auto EF algorithm was conducted in a standalone manner. The "Auto EF results with the subject device" were compared directly against the established ground truth. The algorithm processed the images and generated biplane EF values without human intervention in the calculation process, although the system allows users to "review, edit or reject the results."

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus with a conventional manual method based on the "Method of Disks" (MOD), also known as the Modified Simpson's Rule. The ground truth was established by two independent sonographers calculating left ventricular volumes and ejection fraction.
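The biplane Method of Disks can be sketched numerically: the ventricle is sliced into N disks of elliptical cross-section whose two diameters come from orthogonal (4-chamber and 2-chamber) views. This is a minimal illustration of the formula, not the sonographers' actual tooling; the disk count and units are our assumptions:

```python
import math

def biplane_mod_volume(diam_4ch, diam_2ch, length):
    """LV volume by the biplane Method of Disks (Modified Simpson's
    Rule): sum of elliptical disks, pi/4 * a_i * b_i * (L / N)."""
    assert len(diam_4ch) == len(diam_2ch)
    h = length / len(diam_4ch)  # disk thickness
    return sum(math.pi / 4 * a * b * h
               for a, b in zip(diam_4ch, diam_2ch))

def ejection_fraction(edv, esv):
    """EF (%) from end-diastolic and end-systolic volumes."""
    return (edv - esv) / edv * 100.0
```

With end-diastolic and end-systolic volumes computed this way from independently traced contours, each sonographer obtains an EF against which the algorithm's biplane EF was compared.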

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The document mentions the algorithm was "re-trained with more training data" compared to the predicate device, but does not provide a specific number.

    9. How the Ground Truth for the Training Set Was Established

    • Training Set Ground Truth Establishment: Not explicitly detailed. The document only states that the "LV auto contouring algorithm has been updated with pre-training and additional annotated training data." It does not specify the method (e.g., expert consensus, manual contouring) or the number/qualifications of experts involved in annotating the training data. However, given that the test set ground truth was established by sonographers using the Method of Disks, it is highly probable that a similar methodology was used for the training data annotation.

    K Number
    K242745

    Date Cleared
    2025-03-27

    (197 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR pre-defined structures using deep-learning-based algorithms.

    Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.

    The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.

    The software is not intended to automatically detect or contour lesions.

    Device Description

    AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.

    CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, generated contours in DICOMRTSTRUCT format are reviewed in a confirmation window, allowing clinical user to confirm or reject the contours before sending to the target system. Optionally, the user may select to directly transfer the contours to a configurable DICOM node (e.g., the Treatment Planning System (TPS), which is the standard location for the planning of radiation therapy).

    AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. Then the output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals, who can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and relevant literature/cleared devices. The primary metrics used are Dice coefficient and Absolute Symmetric Surface Distance (ASSD).

    Table 3: Acceptance Criteria of AIRC Organs RT VA50

    | Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Summary) |
    | Organs in Predicate Device | All organs segmented in the predicate device are also segmented in the subject device. | Confirmed. The device continued to segment all organs previously handled by the predicate. |
    | Organs in Predicate Device | The average (AVG) Dice score difference between the subject and predicate device is < 3%. | Confirmed. "For existing organs, the average (AVG) Dice score difference between the subject device and predicate device is smaller than 3%." |
    | New Organs for Subject Device | The subject device in the selected reference metric has a higher value than the defined baseline value. | Confirmed. "The performance results of the subject device for the new CT organs are comparable to the reference literature & cleared devices. Here equivalence for the new organs is defined such that the selected reference metric has a higher value than the defined baseline." |

    Table 3: Performance Summary of the Subject Device CT Contouring (Overall Average Dice Coefficients)

    | Anatomic Region | Avg Dice (%) | Std Dice (%) | 95% CI |
    | Head & Neck | 76.1 | 14.3 | [75.1, 77.2] |
    | Head & Neck lymph nodes | 69.3 | 13.9 | [68.7, 70.0] |
    | Thorax | 76.9 | 15.8 | [76.2, 77.6] |
    | Abdomen | 87.3 | 10.1 | [86.3, 88.2] |
    | Pelvis | 85.7 | 9.6 | [85.0, 86.5] |
    | Cardiac | 75.6 | 15.1 | [74.1, 77.1] |

    Table 4: Detailed Performance Evaluation of the New Organs in the Subject Device (Selected Examples)

    | Organ Name | No. | AVG Dice (%) | STD Dice (%) | MED Dice (%) | 95%CI Dice | AVG ASSD (mm) | STD ASSD (mm) | MED ASSD (mm) | 95%CI ASSD |
    | Left Breast | 30 | 90.4 | 3.8 | 91 | [89, 91.8] | 2.4 | 2.2 | 1.8 | [1.5, 3.2] |
    | Right Breast | 30 | 90.2 | 3.7 | 90.8 | [88.8, 91.5] | 1.9 | 0.7 | 1.8 | [1.7, 2.2] |
    | Bowel Bag | 33 | 95 | 3.6 | 96.5 | [93.7, 96.3] | 1.9 | 1.5 | 1.4 | [1.4, 2.5] |
    | Pituitary | 30 | 75.8 | 7.4 | 77 | [73.1, 78.6] | 0.7 | 0.3 | 0.6 | [0.5, 0.8] |
    | Brainstem | 30 | 88.4 | 2.5 | 88.8 | [87.5, 89.3] | 1 | 0.3 | 0.9 | [0.9, 1.1] |
    | Esophagus | 30 | 85.6 | 4.2 | 86 | [84, 87.2] | 0.6 | 0.3 | 0.6 | [0.5, 0.7] |
    | MEDIASTINAL LN 9L | 31 | 38.3 | 21.1 | 42.9 | [30.6, 46.1] | 5.3 | 4.4 | 3.7 | [3.7, 6.9] |

    (Note: The full Table 4 from the document provides detailed performance for all 37 new organs. This table includes a selection for illustrative purposes.)
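The two reported metrics can be reproduced on binary masks. The sketch below is a simplified 2D numpy illustration (a real evaluation would use 3D masks with physical voxel spacing, and these helper names are ours):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_points(mask):
    """Boundary pixels: foreground pixels with a background 4-neighbour."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:] & m)
    return np.argwhere(m & ~interior)

def assd(a, b):
    """Average symmetric surface distance, in pixel units: mean of the
    nearest-surface distances taken in both directions."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Dice rewards volumetric overlap, while ASSD penalizes boundary deviations, so the two together catch both gross mismatches and contour-level errors.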

    2. Sample Sizes and Data Provenance

    • Test Set Sample Size:
      • CT Contouring Algorithm: N = 579 cases
      • MR Contouring Algorithm: The MR algorithm is unchanged from the predicate, so its performance is unchanged. The predicate was validated using 66 cases.
    • Data Provenance (CT Contouring Algorithm Test Set):
      • Geographic Origin (Overall N=579): Data from multiple clinical sites across North America, South America, Asia, Australia, and Europe.
      • Example Cohorts (Table 5: Validation Testing Data Information based on Cohort):
        • Cohort A.1 (N=73): Germany (14), Brazil (59)
        • Cohort A.2 (N=40): Canada (40)
        • Cohort A.3 (N=301): South/North America (184), EU (44), Asia (33), Australia (28), Unknown (12)
        • Cohort B (N=165): South/North America (100), EU (51), Asia (6), Australia (3), Unknown (5)
      • Retrospective/Prospective: "retrospective performance study on CT data previously acquired for RT treatment planning."

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Number of Experts for Ground Truth: "a team of experienced annotators mentored by radiologists or radiation oncologists" for initial manual annotation. "a board-certified radiation oncologist" performed a quality assessment including review and correction of each annotation. The document does not specify an exact number of individuals for these teams, but describes the roles and qualifications.
    • Qualifications of Experts:
      • "experienced annotators"
      • "radiologists or radiation oncologists" (mentors for annotators)
      • "board-certified radiation oncologist" (for quality assessment/review)

    4. Adjudication Method for the Test Set

    The document describes the ground truth establishment process as: "manual annotation" by experienced annotators mentored by radiologists/radiation oncologists, followed by a "quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist." This indicates a hierarchical review/correction process rather than a multi-reader consensus adjudication between equally-weighted readers (e.g., 2+1 or 3+1). The final accepted contour after the board-certified radiation oncologist's review served as the ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was described. The study focused on the standalone performance of the AI algorithm against established ground truth and comparison with a predicate device and literature. The document does not mention an effect size of how much human readers improve with AI vs. without AI assistance. The intended use specifies that the AI-generated contours must be reviewed, edited, and accepted by trained medical professionals, implying a human-in-the-loop workflow, but the validation study presented focuses on the AI's autonomous segmentation accuracy.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was done. The performance metrics (Dice coefficient, ASSD) and the comparison to an expert-established ground truth demonstrate the algorithm's autonomous segmentation capability. The study validated the "autocontouring algorithms" and their performance.

    7. Type of Ground Truth Used

    The ground truth used for the test set was expert consensus / manual annotation based on clinical guidelines. Specifically: "Ground truth annotations were established following RTOG and clinical guidelines using manual annotation." This was further reviewed and corrected by a board-certified radiation oncologist.

    8. Sample Size for the Training Set

    The document provides the sample sizes for the training set for new organs introduced:

    • Table 6: Training Dataset Characteristics (Examples):
      • Lacrimal Glands Left/Right: 247
      • Pituitary Gland: 247
      • Humeral Head Left/Right: 207
      • Bowel Bag: 544
      • Pelvic Bone Left/Right: 160
      • Sacrum: 160
      • Mediastinal LN (various): 136
      • Femoral Head Left/Right: 160
      • Brainstem: 247
      • Esophagus: 247
      • Breast Left/Right: 172
      • Supraglottic Larynx: 247
      • Glottis: 247

    The total training set size for all organs is not explicitly summed, but these numbers indicate the scale of the training data used for the specific new organs.

    9. How the Ground Truth for the Training Set Was Established

    "In both the annotation process for the training and validation testing data, the annotation protocols for the OAR were defined following the applicable guidelines. The ground truth annotations were drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool. Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."

    This indicates the same rigorous process of expert manual annotation and review was applied to establish ground truth for the training set as for the test set. The validation testing and training data were explicitly stated to be independent.


    K Number
    K241770

    Date Cleared
    2025-03-05

    (258 days)

    Product Code
    Regulation Number
    892.2090
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Prostate MR AI is a plug-in Radiological Computer Assisted Detection and Diagnosis Software device intended to be used
    • with a separate hosting application
    • as a concurrent reading aid to assist radiologists in the interpretation of a prostate MRI examination acquired according to the PI-RADS standard
    • in adult men (40 years and older) with suspected cancer in treatment naïve prostate glands

    The plug-in software analyzes non-contrast T2 weighted (T2W) and diffusion weighted image (DWI) series to segment the prostate gland and to provide an automatic detection and segmentation of regions suspicious for cancer. For each suspicious region detected, the algorithm moreover provides a lesion Score, by way of PI-RADS interpretation suggestion. Outputs of the device should be interpreted consistently with ACR recommendations using all available MR data (e.g., dynamic contrast enhanced images [if available]). Patient management decisions should not be made solely based on analysis by the Prostate MR AI algorithm.

    Device Description

    This premarket notification addresses the Siemens Healthineers Prostate MR AI (VA10A) Radiological Computer Assisted Detection and Diagnosis Software (CADe/CADx). Prostate MR AI is a Computer Assisted Detection and Diagnosis algorithm designed to plug into a hosting workflow that assists radiologists in the detection of suspicious lesions and their classification. It is used as a concurrent reading aid to assist radiologists in the interpretation of a prostate MRI examination acquired according to the PI-RADS standard. The automatic lesion detection requires transversal T2W and DWI series as inputs. The device automatically exports a list of detected prostate regions that are suspicious for cancer (each list entry consists of contours and a classification by Score and Level of Suspicion (LoS)), a computed suspicion map, and a per-case LoS. The results of the Prostate MR AI plug-in (with the case-level LoS, lesion center points, lesion diameters, lesion ADC median, lesion 10th percentile, suspicion map, and non-PZ segmentation considered optional) are to be shown in a hosting application that allows the radiologist to view the original case, as well as confirm, reject, or edit lesion candidates with their contours and Scores as generated by the Prostate MR AI plug-in. Moreover, the radiologist can add lesions with contours and PI-RADS scores and finalize the case. In addition, the outputs include an automatically computed prostate segmentation, as well as sub-segmentations of the peripheral zone and the rest of the prostate (non-PZ). The algorithm will augment the prostate workflow of currently cleared syngo.MR General Engine if activated via a separate license on the General Engine.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    | Automatic Prostate Segmentation | |
    | Median Dice score between AI algorithm results and ground truth masks exceeds 0.9. | The median of the Dice score between the AI algorithm results and the corresponding ground truth masks exceeds the threshold of 0.9. |
    | Median normalized volume difference between algorithm results and ground truth masks is within ±5%. | The median of the normalized volume difference between the algorithm results and the corresponding ground truth masks is within a ±5% range. |
    | AI algorithm results are statistically non-inferior to individual reader variability (5% margin of error, 5% significance level). | The AI algorithm results as compared to any individual reader are statistically non-inferior based on variabilities that existed among the individual readers within the 5% margin of error and 5% significance level. |
    | Prostate Lesion Detection and Classification | |
    | Case-level sensitivity of lesion detection ≥ 0.80 for both radiology and pathology ground truth. | The case-level sensitivity of the lesion detection is equal or greater than 0.80 for both radiology and pathology ground truth. |
    | False positive rate per case of lesion detection < 1 false positive per case for radiology ground truth. | The false positive rate per case of the lesion detection is smaller than one false positive per case for radiology ground truth. |
    | Accuracy of PI-RADS classification of radiology ground truth lesions (detected by algorithm) ≥ 0.8. | The accuracy of the PI-RADS classification of radiology ground truth lesions detected by the algorithm is equal or greater than 0.8. |
    | Non-inferior performance in GE vs Siemens and African American vs non-African American cases, and in cases with peripheral zone vs non-peripheral lesions. | The non-inferior performance of the subject device in GE vs Siemens and African American vs non-African American cases, and in cases with peripheral zone vs non-peripheral lesions was demonstrated. (Note: Specific metrics for this non-inferiority are not explicitly stated as distinct numerical criteria but are stated as "met".) |
    | Clinical Performance (Reader Study - Case-level discrimination of Gleason Grade Group ≥ 1) | |
    | Statistically significant improvement in AUROC for aided reading vs unaided reading. | Fully Inclusive Analysis: AUROC improved from 0.6758 (unaided) to 0.7010 (aided), difference of 0.0252 (95% C.I. [0.0011, 0.0493]; P=0.040). Maximally Restrictive Analysis: AUROC improved from 0.6579 (unaided) to 0.6948 (aided), difference of 0.0368 (95% C.I. [0.0108, 0.0628]; P=0.006). In both analyses, the improvement was statistically significant and the primary endpoint thus met. |
    | Clinical Performance (Reader Study - Lesion-level reading performance) | |
    | Statistically significant improvement in AUwAFROC for aided reading vs unaided reading. | Fully Inclusive Analysis: AUwAFROC improved in aided reading by 0.0350 (95% C.I. [0.0020, 0.0681], P=0.037). Maximally Restrictive Analysis: AUwAFROC improved in aided vs. unaided reading by 0.0302 (95% C.I. [0.0080, 0.0520], P=0.008). In both analyses, the improvement was statistically significant and the secondary endpoint thus met. |
    | Statistically significant improvement in Fleiss' Kappa for interreader agreement in per-case PI-RADS scores for aided reading vs unaided reading. | Fleiss' Kappa improved from 0.283 (unaided) to 0.371 (aided), with a difference of 0.087 (95% C.I. [0.051, 0.125]). The improvement was statistically significant (P<0.0001). |
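The interreader-agreement statistic in the criteria above, Fleiss' Kappa, can be computed from a case × category count matrix. This is a minimal sketch with illustrative data, not the study's ratings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for inter-reader agreement.
    counts[i, j] = number of readers assigning case i to category j
    (e.g., PI-RADS 1-5); every row must sum to the same reader count n."""
    counts = np.asarray(counts, float)
    n = counts.sum(axis=1)[0]                    # readers per case
    # Per-case agreement: fraction of concordant reader pairs
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()                           # observed agreement
    p_j = counts.sum(axis=0) / counts.sum()      # category prevalence
    p_e = np.square(p_j).sum()                   # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

A value of 1 means perfect agreement and 0 means chance-level agreement, so the reported rise from 0.283 to 0.371 indicates moderately higher consistency among aided readers.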

    Study Information

    2. Sample size used for the test set and the data provenance:

    • Automatic Prostate Segmentation: 222 transversal T2 series.
      • Provenance: More than 10 clinical sites.
      • Retrospective/Prospective: Not explicitly stated, but the comparison against previously generated ground truth implies retrospective use of existing scans.
    • Prostate Lesion Detection and Classification (Standalone Performance):
      • 105 cases from 6 sites (against radiology ground truth).
      • 115 cases from 6 sites (against pathology ground truth).
      • 340 cases from the multi-reader multi-case study (also used for the standalone evaluation; the cases were retrospectively selected).
      • Provenance: 6 sites (for 105 and 115 cases), and two US sites (for 340 cases).
      • Retrospective/Prospective: The cases for the lesion detection and classification evaluation were used to compare against established ground truths, suggesting retrospective analysis of existing data. The cases for the reader study were retrospectively selected.
    • Multi-Reader Multi-Case (MRMC) Study: 340 cases.
      • Provenance: Two US sites. Cases were consecutive and specifically included additional consecutive patient cases from men of African descent to ensure at least 13% Black or African American ethnicity.
      • Retrospective/Prospective: Cases were selected retrospectively.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Automatic Prostate Segmentation: 3 expert radiologists. No specific years of experience or subspecialty are stated beyond their description as "expert" radiologists.
    • Prostate Lesion Detection and Classification (Radiology Ground Truth): 3 expert radiologists in prostate MRI reading.
    • MRMC Study (Lesion-level reference standard): 3 experienced radiologists acting as Truthers.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Automatic Prostate Segmentation: Pixel-wise consensus among the 3 expert radiologists.
    • Prostate Lesion Detection and Classification (Radiology Ground Truth): Consensus reading of the 3 expert radiologists.
    • MRMC Study (Case-level reference standard): Biopsy results (Gleason Grade Group GGG ≥ 1), or for cases without biopsy, PSA density and follow-up data.
    • MRMC Study (Lesion-level reference standard): Consensus lesions with a consensus PI-RADS of at least 3 from majority voting among the 3 experienced radiologists. (This implies a form of consensus/majority vote).

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    Yes, an MRMC study was done with a paired split-plot design, combining two fully-crossed MRMC sub-studies.

    • Case-level AUROC improvement (discriminating Gleason Grade Group ≥ 1):
      • Fully Inclusive Analysis: +0.0252 (from 0.6758 unaided to 0.7010 aided).
      • Maximally Restrictive Analysis: +0.0368 (from 0.6579 unaided to 0.6948 aided).
    • Lesion-level AUwAFROC improvement:
      • Fully Inclusive Analysis: +0.0350.
      • Maximally Restrictive Analysis: +0.0302.
    • Fleiss' Kappa (interreader agreement in per-case PI-RADS scores) improvement: +0.087 (from 0.283 unaided to 0.371 aided).
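The case-level endpoint, AUROC, has a direct rank-based interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. The sketch below computes it via this Mann-Whitney identity on illustrative scores, not the study's data:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    P(score of a positive > score of a negative), ties counting half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Under this interpretation, the reported aided-reading improvement of roughly 0.025-0.037 in AUROC corresponds to a 2.5-3.7 percentage-point gain in that pairwise discrimination probability.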

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    Yes, standalone performance was evaluated for:

    • Automatic Prostate Segmentation: Compared algorithm results to ground truth generated by radiologists.
    • Prostate Lesion Detection and Classification: Compared automatic detection and classification results to radiology ground truth and pathology ground truth.
    • MRMC Study (AI Standalone reference): The ROC curves shown graphically include a "grey curve [that] denotes AI standalone performance."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For Automatic Prostate Segmentation: Pixel-wise consensus from 3 expert radiologists.
    • For Prostate Lesion Detection and Classification:
      • Consensus reading of 3 expert radiologists (radiology ground truth).
      • Biopsy results for the same patient (pathology ground truth).
    • For MRMC Study (Case-level): Biopsy results (Gleason Grade Group GGG ≥ 1), and in cases where biopsy was unavailable, PSA density and follow-up (12 months negative by PSA or MRI).
    • For MRMC Study (Lesion-level): Consensus lesions with a consensus PI-RADS of at least 3 from majority voting among 3 experienced radiologists.

    8. The sample size for the training set:

    The document states: "The cases for the reader study were kept completely separate from those used for the training of the Prostate MR AI algorithm." However, it does not specify the sample size for the training set. It only mentions that the AI algorithm was "trained on a database of prostate MR image series acquired according to the PI-RADS standard (non-contrast T2W and DWI image series), and corresponding radiological and/or biopsy findings."

    9. How the ground truth for the training set was established:

    The ground truth for the training set was established based on "corresponding radiological and/or biopsy findings." Specific details on the adjudication method (e.g., number of experts, consensus process) for the training set are not provided in this document, only the source of the ground truth.


    K Number
    K240796

    Date Cleared
    2024-08-06

    (137 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    MyAblation Guide is a software application for image processing, 2D/3D visualization, and comparison of medical images imported from multiple imaging modalities.

    The software is controlled by the end user interface on a workstation with DICOM connectivity or as an integrated version on a Siemens CT scanner workstation.

    The application is used to assist in the preparation and performance of ablative procedures, including identification of ablation targets, virtual ablation probe placement, and contouring of ablated areas, as well as supporting the User in their assessment of the treatment. The application can only be used by trained Users.

    The software is not intended for diagnosis and is not intended to predict ablation volumes or predict ablation success.

    Device Description

    myAblation Guide is a software medical device that is used in the context of percutaneous ablative procedures with straight instruments. It is used by clinical professionals on hospital premises; it can be deployed either on compatible CT scanners or on a computer workstation.

    The application is operated by medical professionals such as Interventional Radiologists and medical technologists with current license and/or certification as required by regional authority. myAblation Guide allows operating functions in an arbitrary sequence. In addition, it includes a structured sequence of steps for ease of utility.

    The application supports anatomical datasets from CT, MR, CBCT, as well as PET/CT.

    The application includes means and functionalities to support:

    • Multimodality viewing and contouring of anatomical and multi-parametric images such as CT, CBCT, PET/CT, MRI
    • Multiplanar reconstruction (MPR) thin/thick, minimum intensity projection (MIP), volume rendering technique (VRT)
    • Freehand and semi-automatic contouring of regions-of-interest on any orientation, including oblique
    • Manual and semi-automatic registration using rigid and deformable registration
    • Expansion of created contour structures to visualize a safety margin
    • Creation of virtual ablation needle paths and associated virtual ablation zones derived from manufacturer data
    • Export of virtual needle paths in the DICOM SSO format
    • Comparing, contouring, and ablation needle planning based on datasets acquired with different imaging modalities
    • Multimodality image fusion
    • The user's procedure flow via a task stepper

    Thermal ablation cannot be triggered from myAblation Guide.

    AI/ML Overview

    The provided text details the 510(k) submission for the myAblation Guide (VB80A) device. It includes information on non-clinical testing performed to demonstrate the device meets established design criteria.

    Here's an organized breakdown of the acceptance criteria and study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    • Lesion Segmentation
      • Acceptance criterion (implied): Dice score 0.82 (from the Moltz et al. study)
      • Reported performance: Dice score 0.65 (all lesion types); sensitivity 0.82 (all lesion types)
    • Ablation Zone Segmentation
      • Acceptance criterion: none explicitly stated
      • Reported performance: Dice score 0.65; sensitivity 0.95

    Note on Acceptance Criteria: The document implies the Moltz et al. study's Dice coefficient of 0.82 on liver metastases as a benchmark, stating "the algorithm effectively demonstrated the segmentation of both hyperdense and hypodense lesions... With a Dice coefficient (Dice similarity index) of 0.82". For the internal study, the reported Dice scores and sensitivities appear to be the performance metrics being presented to demonstrate functionality rather than explicitly stated "acceptance criteria" that must be met. However, for the purpose of this exercise, we can infer that these reported values demonstrate the device's acceptable performance.
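
    For reference, the Dice score and sensitivity figures above are standard overlap metrics for segmentation. A minimal sketch of how they are computed from binary masks; the toy masks below are illustrative, not study data:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * overlap / denom if denom else 1.0

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-level sensitivity: fraction of ground-truth voxels recovered."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return np.logical_and(pred, truth).sum() / truth.sum() if truth.any() else 1.0

# Toy 2D masks standing in for one lesion slice.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 4x4 lesion
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True    # shifted 4x4
print(dice_score(pred, truth))   # → 0.5625 (overlap 3x3 = 9; 2*9/32)
print(sensitivity(pred, truth))  # → 0.5625
```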

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Lesion Segmentation (Moltz et al. study): 5 different datasets comprising 10 liver metastases. Data provenance is not specified (e.g., country of origin, retrospective/prospective).
      • Lesion Segmentation (Internal Study): 50 patients. Data provenance is not specified (e.g., country of origin, retrospective/prospective), but it is referred to as an "internal study," suggesting it was conducted by the manufacturer or an affiliated entity.
      • Ablation Zone Segmentation: 33 patients with 41 available ablation zones. Data provenance is not specified.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • The document does not specify the number of experts or their qualifications for establishing ground truth for the test sets in either the Moltz et al. study or the internal studies.
    3. Adjudication Method for the Test Set:

      • The document does not provide details on any adjudication method (e.g., 2+1, 3+1, none) used for the test sets. It only mentions the comparison of algorithm performance against a reference.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No MRMC comparative effectiveness study was done. The document explicitly states: "No clinical studies were carried out for the subject device, and therefore, no such clinical data is provided within this submission." The study focuses on "algorithm's performance" and "semi-automatic liver ablation zone segmentation."
    5. Standalone (Algorithm Only Without Human-in-the-Loop) Performance:

      • Yes, the performance data presented (Dice scores, Sensitivity) are indicative of standalone (algorithm only) performance for the semi-automatic segmentation algorithms. The phrasing "To assess the algorithm's performance" and "The internal analysis of the lesion segmentation" supports this. The device is a "software application for image processing," and the described tests evaluate the segmentation algorithms within this software.
    6. Type of Ground Truth Used:

      • The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for segmentation tasks, ground truth is typically established by expert manual annotation or referencing pathology for pathological confirmation. Given the context of "assessed" cases and "segmentation," it is highly probable that the ground truth was established by expert review/annotation of the medical images.
    7. Sample Size for the Training Set:

      • The document does not specify the sample size for the training set. The provided information relates only to the test sets used for evaluating the semi-automatic segmentation algorithms.
    8. How the Ground Truth for the Training Set Was Established:

      • The document does not provide information on how the ground truth for the training set was established, as the size and specifics of the training set are not mentioned.

    K Number
    K240294

    Date Cleared
    2024-05-23

    (112 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Syngo Carbon Enterprise Access is indicated for display and rendering of medical data within healthcare institutions.

    Device Description

    Syngo Carbon Enterprise Access is a software only medical device which is intended to be installed on recommended common IT Hardware. The hardware is not seen as part of the medical device. Syngo Carbon Enterprise Access is intended to be used in clinical image and result distribution for diagnostic purposes by trained medical professionals and provides standardized generic interfaces to connect to medical devices without controlling or altering their functions.

    Syngo Carbon Enterprise Access provides an enterprise-wide web application for viewing DICOM, non-DICOM, multimedia data and clinical documents to facilitate image and result distribution.

    AI/ML Overview

    The provided text is a 510(k) summary for the Syngo Carbon Enterprise Access (VA40A) device. It describes the device, its intended use, and compares it to a predicate device (Syngo Carbon Space VA30A). However, it explicitly states:

    "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device."

    Therefore, the requested information regarding acceptance criteria, sample sizes, expert involvement, adjudication, MRMC studies, standalone performance, and ground truth establishment cannot be provided for a clinical study.

    The document outlines "Non-clinical Performance Testing" and "Software Verification and Validation." These sections describe how the device's functionality was tested and validated to ensure it meets specifications and is substantially equivalent to the predicate device.

    Here's what can be extracted about the "study" that proves the device meets "acceptance criteria" from a non-clinical performance testing and software verification/validation perspective:

    1. Table of acceptance criteria and reported device performance:

    The document broadly states that "The testing results support that all the software specifications have met the acceptance criteria" and "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." However, it does not provide a specific, detailed table of acceptance criteria and quantitative performance metrics for each criterion. It rather focuses on demonstrating equivalence through feature comparison and general assertions of testing success.

    2. Sample size used for the test set and the data provenance:

    The document mentions "non-clinical tests" and "software verification and validation testing." It does not specify the sample size for the test set or the provenance of the data used for this non-clinical testing (e.g., country of origin, retrospective or prospective). This testing would typically involve various types of simulated data, pre-recorded medical images, and functional tests rather than patient studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    Since no clinical studies were performed and the testing was non-clinical and technical, there's no mention of "experts" establishing ground truth in the way it would be done for a clinical performance study. The "ground truth" for non-clinical software testing would be derived from the product's functional specifications and expected outputs.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    Not applicable, as no clinical study with human interpretation/adjudication was conducted.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    Not applicable, as no clinical studies, especially MRMC studies, were conducted. The device is a medical image management and processing system, not an AI-assisted diagnostic tool.

    6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance was evaluated:

    The testing was on the device's functionality as a standalone software, but this refers to its technical performance in image management and display, not a diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    For this type of device and testing, the "ground truth" would be the functional specifications and expected behavior of the software based on its design documents. For example, if a function is to display a DICOM image, the ground truth is that the image should be displayed correctly according to DICOM standards. It's not based on clinical or pathological findings.

    8. The sample size for the training set:

    Not applicable. This device is a medical image management and processing system, not an AI/ML diagnostic algorithm that requires a "training set."

    9. How the ground truth for the training set was established:

    Not applicable, as there is no training set for this type of device.

    In summary, the provided document focuses on demonstrating substantial equivalence through a comparison of features with a predicate device and general non-clinical verification and validation activities, rather than a clinical performance study with defined acceptance criteria and statistical analysis of human or AI diagnostic performance.


    K Number
    K233753

    Date Cleared
    2024-03-21

    (120 days)

    Product Code
    Regulation Number
    892.1750
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    AI-Rad Companion (Pulmonary) is image processing software that provides quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support radiologists and physicians from specialty care and general practice in the evaluation and assessment of disease of the lungs.

    It provides the following functionality:

    • Segmentation and measurements of complete lung and lung lobes
    • Identification of areas with lower Hounsfield values in comparison to a predefined threshold for complete lung and lung lobes
    • Providing an interface to the external medical device syngo.CT Lung CAD
    • Segmentation and measurements of solid and sub-solid lung nodules
    • Dedication of found lung nodules to the corresponding lung lobe
    • Correlation of segmented lung nodules of the current scan with known priors and quantitative assessment of changes of the correlated data
    • Identification of areas with elevated Hounsfield values, where areas with elevated versus high opacities are distinguished

    The software has been validated for data from Siemens Healthineers (filtered backprojection and iterative reconstruction), GE Healthcare (filtered backprojection reconstruction), and Philips (filtered backprojection reconstruction).

    Only DICOM images of adult patients are considered to be valid input.

    Device Description

    The subject device AI-Rad Companion (Pulmonary) is an image processing software that utilizes machine learning and deep learning algorithms to provide quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support qualified clinicians in the evaluation and assessment of disease of the thorax. AI-Rad Companion (Pulmonary) builds on platform functionality provided by the AI-Rad Companion Engine and cloud/edge functionality provided by the Siemens Healthineers teamplay digital platform. AI-Rad Companion (Pulmonary) is an adjunct tool and does not replace the role of a qualified medical professional. AI-Rad Companion (Pulmonary) is also not designed to detect the presence of radiographic findings other than the prespecified list. Qualified medical professionals should review original images for all suspected pathologies.

    AI-Rad Companion (Pulmonary) offers:

    • Segmentation of lungs
    • Segmentation of lung lobes
    • Parenchyma evaluation
    • Parenchyma ranges
    • Pulmonary density
    • Visualization of segmentation and parenchyma results
    • Interface to LungCAD
    • Lesion segmentation
    • Visualization of lesion segmentation results
    • Lesion follow-up

    AI-Rad Companion (Pulmonary) requires images of patients of 22 years and older.

    AI-Rad Companion (Pulmonary) SW version VA40 is an enhancement to the previously cleared device AI-Rad Companion (Pulmonary) (K213713) that utilizes machine and deep learning algorithms to provide quantitative and qualitative analysis to computed tomography DICOM images to support qualified clinicians in the evaluation and assessment of disease of the thorax.

    As an update to the previously cleared device, the following modifications have been made:

    • Sub-solid Lung Nodule Segmentation

    This feature provides the ability to segment and measure all subtypes of lesions including solid and sub-solid lesions.

    • Modified Indications for Use Statement

    The indications for use statement was updated to include descriptive text for the sub-solid lung nodule addition.

    • Updated Subject Device Claims List

    The claims list was updated to reflect the new device functionality.

    • Updated Limitations for Use

    Additional limitations for use have been added to the subject device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Failure Rate
      • Target: < 1%
      • Reported: not explicitly stated as a percentage, but implied to have passed since overall performance goals were met
    • Accuracy for Solid & Calcified Nodules
      • Target: average DICE ≥ predicate; bias & RMSE ≤ predicate; LoA in line with predicate LoA
      • Reported: average DICE coefficient was greater than that of the predicate (bias, RMSE, and LoA not explicitly detailed beyond "met goals")
    • Accuracy for Sub-solid Nodules
      • Target: performance analyzed in relation to the predicate's solid-nodule segmentation performance; median nodule size range for sub-solid is 10-20 mm; LoA for 10-20 mm ≥ 95% for all three diameter metrics
      • Reported: average DICE coefficient for sub-solid nodules was superior to the predicate's average DICE coefficient for solid nodules (LoA not explicitly detailed beyond "met goals")
    • DICE Score
      • Target: average DICE score for sub-solid nodules > average DICE for predicate solid nodules
      • Reported: average DICE coefficient for sub-solid nodules was superior to that of the predicate device for solid nodules (restates the previous row, reinforcing the direct comparison)
    • Consistency of Subgroup Results
      • Target: average DICE not smaller than the overall cohort's DICE minus 1 SD; bias of the three metrics not exceeding ±1 SD; RMSE of the three metrics not exceeding the overall cohort's RMSE + 1 SD each
      • Reported: the subject device met its individual subgroup analysis acceptance criterion for all subgroups
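
    The subgroup-consistency criteria above reduce to simple threshold checks against the overall cohort's statistics. A minimal sketch under that assumed reading; the function name and per-case values are illustrative, not from the submission:

```python
import numpy as np

def rmse(errors: np.ndarray) -> float:
    """Root-mean-square error over per-case error values."""
    return float(np.sqrt(np.mean(np.square(errors))))

def subgroup_consistent(overall_dice, sub_dice,
                        overall_bias, sub_bias,
                        overall_err, sub_err) -> bool:
    """Threshold checks mirroring the stated subgroup criteria. Assumed
    reading: all thresholds derive from the overall cohort's mean/SD."""
    dice_ok = sub_dice.mean() >= overall_dice.mean() - overall_dice.std()
    bias_ok = abs(sub_bias.mean() - overall_bias.mean()) <= overall_bias.std()
    rmse_ok = rmse(sub_err) <= rmse(overall_err) + overall_err.std()
    return bool(dice_ok and bias_ok and rmse_ok)

# Illustrative per-case values for an overall cohort and one subgroup.
overall_dice = np.array([0.80, 0.85, 0.90, 0.75])
overall_bias = np.array([0.10, -0.10, 0.05, -0.05])
overall_err = np.array([0.10, 0.20, 0.10, 0.20])
print(subgroup_consistent(overall_dice, np.array([0.82, 0.80]),
                          overall_bias, np.array([0.02, -0.02]),
                          overall_err, np.array([0.15, 0.15])))  # → True
```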

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 273 subjects from the United States and 254 subjects from Germany, for a total of 527 subjects.
    • Data Provenance: The data originated from the United States (69% of cases) and Germany (31% of cases). The data was retrospective, as it refers to "previously acquired Computed Tomography DICOM images."
    • Imaging Vendors: The test data included images from Canon/Toshiba (18%), GE (35%), Philips (15%), and Siemens (32%).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Two board-certified radiologists, with a third radiologist for adjudication.
    • Qualifications:
      • Radiologist 1: 10 years of experience (board-certified)
      • Radiologist 2: 7 years of experience (board-certified)
      • Adjudicating Radiologist 3: 9 years of experience

    4. Adjudication Method for the Test Set

    • Method: 2+1 (Two experts independently established ground truth, and in case of disagreement, a third expert served as an adjudicator.)
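
    The 2+1 scheme reduces to a simple rule: a concordant read stands, and a discordant read is decided by the adjudicator. A minimal sketch; the interface is hypothetical:

```python
def adjudicate_2plus1(read1, read2, third_reader):
    """2+1 adjudication: two independent primary reads stand when they
    agree; otherwise a third expert decides. The third reader is modeled
    as a zero-argument callable so it is only consulted on disagreement."""
    if read1 == read2:
        return read1
    return third_reader()

print(adjudicate_2plus1("nodule", "nodule", lambda: "no nodule"))  # → nodule
print(adjudicate_2plus1("nodule", "no nodule", lambda: "nodule"))  # → nodule
```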

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • Was it done?: No, a traditional MRMC comparative effectiveness study involving human readers was not performed in the context of this specific submission. The study focuses on the standalone performance of the AI algorithm in comparison to the predicate device's performance, particularly for the new sub-solid nodule segmentation feature. The device is described as an "adjunct tool," but the presented study validates the algorithm's performance against expert consensus, not against human readers with and without AI.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    • Was it done?: Yes, the performance testing described directly evaluates the AI-Rad Companion (Pulmonary) lesion segmentation algorithm's accuracy (measured by DICE score, bias, and RMSE) against established ground truth. This is a standalone performance evaluation of the algorithm.

    7. The Type of Ground Truth Used

    • Type: Expert Consensus. The ground truth annotations for the test data were established independently by two board-certified radiologists, with a third radiologist serving as an adjudicator in cases of disagreement.

    8. The Sample Size for the Training Set

    • The sample size for the training set is not explicitly stated in the provided document. However, it is mentioned that "None of the clinical sites providing the test data provided data for training of any of the algorithms. Therefore there is a clear independence on site level between training and test data." This indicates that a distinct training set (or sets) was used.

    9. How the Ground Truth for the Training Set was Established

    • The document does not explicitly state how the ground truth for the training set was established. It only emphasizes the independence of the training and test data sites.
