
510(k) Data Aggregation

    K Number: K210611
    Date Cleared: 2021-07-01 (122 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices
    Reference Devices: K202014, K082331

    Intended Use

    The MAGNETOM MR system is indicated for use as a magnetic resonance diagnostic device (MRDD), which produces transverse, sagittal, coronal, and oblique cross sectional images that display the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images may also be produced. Depending on the region of interest, contrast agents may be used.

    These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist in diagnosis.

    Device Description

    The subject device, MAGNETOM Free.Max with software syngo MR XA40A, is an 80 cm bore Magnetic Resonance Imaging system with an actively shielded 0.55T superconducting magnet; it is the first 0.55T MRI system for clinical use in the U.S.

    AI/ML Overview

    This FDA 510(k) summary for the MAGNETOM Free.Max MRI system focuses on demonstrating substantial equivalence to a predicate device rather than providing specific acceptance criteria for a new AI/CADe algorithm. Therefore, much of the requested information regarding AI performance metrics, sample sizes for test/training sets, expert qualifications, and ground truth establishment is not present in this document.

    However, I can extract the information that is available:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly define acceptance criteria in terms of numerical thresholds for specific performance metrics (e.g., sensitivity, specificity for a diagnostic task). Instead, the performance testing aims to demonstrate equivalence in image quality and safety to the predicate device.

    Performance Test Type | Tested Hardware or Software | Rationale/Goal
    Sample clinical images | Coils, new and modified software features, pulse sequence types | Guidance for Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices (to show comparable image quality)
    Image quality assessments using sample clinical images (including comparison with predicate device features) | New/modified pulse sequence types and algorithms | Guidance for Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices (to demonstrate equivalent image quality/quantitative data)
    Performance bench test | SNR and image uniformity measurements for coils; heating measurements for coils | Implicitly, to ensure performance within expected limits and safety standards (see the sketch below this table)
    Software verification and validation | Mainly new and modified software features | Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (to ensure software functions as intended and safely)
    Peripheral Nerve Stimulation (PNS) effects study | Subject system | To understand and assess PNS effects
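
    The summary lists SNR and image-uniformity bench measurements for the coils but does not give numerical acceptance thresholds. For context only, the sketch below shows how such phantom metrics are commonly computed (a NEMA-style two-acquisition difference method for SNR and percent integral uniformity). The function names, ROI, and synthetic phantom data are illustrative assumptions, not values or methods taken from this submission.

```python
import numpy as np

def snr_difference_method(img1: np.ndarray, img2: np.ndarray, roi: tuple) -> float:
    """SNR from two back-to-back phantom acquisitions (NEMA-style difference method)."""
    # Mean signal taken over a uniform ROI inside the phantom.
    signal = 0.5 * (img1[roi].mean() + img2[roi].mean())
    # Noise from the pixel-wise difference image; sqrt(2) corrects for the
    # variance doubling introduced by subtracting two independent images.
    noise = (img1[roi] - img2[roi]).std(ddof=1) / np.sqrt(2)
    return float(signal / noise)

def integral_uniformity(img: np.ndarray, roi: tuple) -> float:
    """Percent integral uniformity (PIU) over an ROI."""
    region = img[roi]
    s_max, s_min = region.max(), region.min()
    return float(100.0 * (1.0 - (s_max - s_min) / (s_max + s_min)))

# Hypothetical usage on synthetic phantom data (illustrative only).
rng = np.random.default_rng(0)
phantom = np.full((128, 128), 1000.0)
roi = (slice(32, 96), slice(32, 96))
acq1 = phantom + rng.normal(0, 20, phantom.shape)
acq2 = phantom + rng.normal(0, 20, phantom.shape)
print(f"SNR ~ {snr_difference_method(acq1, acq2, roi):.1f}")   # ~50 for this noise level
print(f"PIU ~ {integral_uniformity(acq1, roi):.1f}%")
```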

    2. Sample size used for the test set and the data provenance

    • Test Set (for PNS study): 12 individuals
    • Data Provenance: Not explicitly stated, but the PNS study was a "clinical study," suggesting prospective data collection. The software verification and validation would likely use a mix of internally generated and potentially simulated data. Sample clinical images would be from human subjects, but their precise origin isn't detailed beyond being "sample clinical images."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not specified. The document does not describe the establishment of a "ground truth" by experts in the context of an algorithmic diagnostic performance study. The images are "interpreted by a trained physician" as per the Indications for Use, which is general clinical practice, not a specific ground truth establishment for algorithm evaluation.

    4. Adjudication method for the test set

    • Not applicable. No expert adjudication method is described for an algorithmic performance evaluation.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance is mentioned. This submission is for the MRI system itself, not an AI-powered diagnostic aid that assists human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Not applicable. The "software verification and validation" would assess the software's functional performance, but not in the context of a standalone diagnostic algorithm providing a clinical output that would typically be evaluated for sensitivity/specificity. The Deep Resolve Gain and Deep Resolve Sharp features hint at image processing algorithms, but their standalone diagnostic performance is not presented.

    7. The type of ground truth used

    • For the PNS study, the "ground truth" would be the physiological response of the individuals.
    • For image quality assessments, the ground truth is subjective visual assessment and objective metrics (SNR, uniformity) compared against engineering specifications and predicate device performance.
    • For software verification and validation, the ground truth is adherence to design specifications and expected functional behavior.
    • No "expert consensus, pathology, or outcomes data" ground truth is described in the context of validating a diagnostic algorithm's performance against clinical findings.

    8. The sample size for the training set

    • Not specified. The document refers to "Deep Resolve Gain" and "Deep Resolve Sharp" which are likely AI-based image processing features. However, the details of their training data (sample size, origin, ground truth) are not provided. The listed clinical publications for these features (e.g., "Residual Dense Network for Image Super-Resolution") suggest they are based on deep learning techniques that would require training data.
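
    The summary cites "Residual Dense Network for Image Super-Resolution" among the publications listed for these features, which suggests a residual-dense-network (RDN) style building block. The sketch below is only an illustration of that published architecture pattern in PyTorch; the class name, layer counts, and channel sizes are assumptions, and it does not represent Siemens' actual Deep Resolve implementation or training setup.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Minimal residual dense block in the spirit of RDN (Zhang et al., 2018).

    Each conv layer receives the concatenation of all preceding feature maps
    (dense connections); a 1x1 conv fuses them, and a residual connection adds
    the block input back (local residual learning).
    """

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                )
            )
        # 1x1 local feature fusion back to the block's channel count.
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fuse(torch.cat(features, dim=1))  # local residual connection

# Hypothetical usage on a feature map derived from an MR image patch:
block = ResidualDenseBlock(channels=64)
patch = torch.randn(1, 64, 48, 48)
print(block(patch).shape)  # torch.Size([1, 64, 48, 48])
```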

    9. How the ground truth for the training set was established

    • Not specified. As noted above, details about training data for any potential AI components (like Deep Resolve) and their ground truth are not included in this summary.

    Summary of what the document indicates about the device:

    This 510(k) submission for the MAGNETOM Free.Max MRI system focuses on demonstrating substantial equivalence to an existing predicate device (MAGNETOM Sempra) by:

    • Comparing technological characteristics (hardware and software).
    • Performing non-clinical tests (sample clinical images, image quality assessments, bench tests for SNR/uniformity/heating, and general software V&V) to ensure the new device performs effectively and safely in a manner equivalent to the predicate.
    • Conducting a small clinical study on Peripheral Nerve Stimulation (PNS) effects for safety.
    • Referencing clinical publications for various new software features, implying that the underlying scientific principles and expected clinical utility of these features are generally accepted.

    The document does not detail the validation of a specific AI/CADe diagnostic algorithm with acceptance criteria related to clinical diagnostic performance metrics. If "Deep Resolve Gain" and "Deep Resolve Sharp" involve AI, their validation is presented as part of overall system performance and image quality rather than as a standalone diagnostic aid.
