K Number
K153343
Date Cleared
2016-04-15 (148 days)

Product Code
Regulation Number
892.1000
Panel
RA
Reference & Predicate Devices
Intended Use

The MAGNETOM systems are indicated for use as magnetic resonance diagnostic devices (MRDD) that produce transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that display the internal structure and/or function of the head, body or extremities.

Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician, yield information that may assist in diagnosis.

The MAGNETOM systems described above may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room display and MR-Safe biopsy needles.

Device Description

The subject device, syngo MR E11C system software, is being made available for the following MAGNETOM MR Systems:

  • MAGNETOM Aera,
  • MAGNETOM Skyra,
  • MAGNETOM Prisma and
  • MAGNETOM Prisma™

The syngo MR E11C SW includes new sequences, new features and minor modifications of already existing features.

AI/ML Overview

The provided text describes a 510(k) premarket notification for new software (syngo MR E11C) for Siemens MAGNETOM MR systems. However, it does not contain the detailed information needed to answer every question below about acceptance criteria and a study proving device performance, as would typically be expected for an AI/ML device submission.

This submission is for a software update to existing Magnetic Resonance Diagnostic Devices (MRDDs), and the focus is on demonstrating substantial equivalence to previously cleared predicate devices. The "study" mentioned is primarily non-clinical performance testing and software verification/validation, rather than a clinical study with acceptance criteria for specific diagnostic outcomes.

Here's an attempt to extract and infer information based on the provided text, highlighting what is present and what is missing:


1. Table of acceptance criteria and the reported device performance

The document does not explicitly state quantitative acceptance criteria for diagnostic performance or specific metrics. Instead, it relies on demonstrating that the new software's features perform "as intended" and maintain "equivalent safety and performance profile" compared to predicate devices.

| Acceptance Criterion | Reported Device Performance |
| --- | --- |
| Qualitative Image Quality Assessment | New/modified sequences and algorithms underwent image quality assessments, and the results "demonstrate that the device performs as intended." |
| Acoustic Noise Reduction (for qDWI) | Acoustic noise measurements were performed for quiet sequences, implying that the qDWI sequence met its objective of being "noise reduced." |
| Functionality as Intended | "Results from each set of tests demonstrate that the device performs as intended and is thus substantially equivalent to the predicate devices..." |
| Software Verification and Validation | Completed in accordance with FDA guidance, implying the software meets specified requirements. |
| Safety and Effectiveness Equivalence | "The features with different technological characteristics from the predicate devices bear an equivalent safety and performance profile as that of the predicate and secondary predicate devices." |

2. Sample size used for the test set and the data provenance

  • Test Set Sample Size: "Sample clinical images were taken for particular new and modified sequences." The specific number or characteristics of these images (sample size) is not provided.
  • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only mentions "sample clinical images," suggesting clinical data was used for assessment.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • This information is not provided. The document states "Image quality assessments... were completed," but does not detail who performed these assessments or how ground truth was established for them. For a diagnostic device, interpretation by a "trained physician" is mentioned in the Indications for Use, but this is a general statement about the device's usage, not specific to the assessment of the new software.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • This information is not provided.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "No clinical tests were conducted to support the subject device and the substantial equivalence argument..."
  • This submission is not for an AI-enhanced diagnostic tool in the sense of providing automated interpretations or assisting human readers in a measurable way with specific diagnostic outcomes. It's an update to MR imaging acquisition software. Therefore, the concept of "how much human readers improve with AI vs without AI assistance" does not apply in this context.

6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done

  • The device is a Magnetic Resonance Diagnostic Device (MRDD) software update. Its output is images and/or spectra that are "interpreted by a trained physician" to "assist in diagnosis." As such, it is inherently a human-in-the-loop system. The non-clinical tests involved "Image quality assessments" and "Acoustic noise measurements," which are performance evaluations of the acquisition capabilities, not a standalone diagnostic interpretation by the algorithm.
  • Therefore, a standalone diagnostic performance evaluation (algorithm only) in the context of providing a diagnosis was not performed or described.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • For "Image quality assessments," the type of ground truth is not explicitly stated. It can be inferred that it would likely involve visual assessment by experts against what is considered normal or expected for an MR image, potentially comparing to images acquired with predicate software or known anatomical/pathological features. However, specific ground truth methods like pathology or long-term outcomes data are not mentioned.

8. The sample size for the training set

  • The document does not mention a separate training set or details about its size. This submission focuses on software changes and their verification, not on the development of a new AI model that requires a distinct training phase.

9. How the ground truth for the training set was established

  • Since a separate training set is not mentioned, the method for establishing its ground truth is also not provided.

Summary of what's present and what's missing:

This 510(k) submission primarily focuses on demonstrating that new software features (such as quiet diffusion imaging, improved fast TSE, simultaneous multi-slice imaging, and a short-acquisition-time brain examination protocol) for existing MR systems maintain the fundamental technological characteristics, safety, and effectiveness of the predicate devices. The "study" here is a series of non-clinical tests (image quality review, acoustic noise measurements, software V&V) rather than a clinical trial measuring diagnostic accuracy or reader performance.

The level of detail asked about above — clinical study design elements such as sample size, expert reader qualifications, adjudication methods, and ground-truth establishment for diagnostic output — is typically found in submissions for AI/ML diagnostic tools that directly interpret images or provide diagnostic assistance, which is not the primary claim of this particular device update.
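As general background (not taken from the submission itself): simultaneous multi-slice (SMS) acceleration shortens 2D single-shot EPI acquisitions, such as diffusion imaging, because each excitation acquires several slices at once, reducing the minimum TR needed to cover the slice stack. A minimal sketch of this scan-time arithmetic, with entirely hypothetical protocol numbers:

```python
import math

def dwi_scan_time_ms(n_slices, slice_time_ms, mb_factor, n_volumes):
    """Approximate scan time for 2D single-shot EPI diffusion imaging.

    Each excitation acquires `mb_factor` slices simultaneously, so the
    minimum TR is the time to excite every slice group once; total scan
    time is that TR multiplied by the number of diffusion volumes.
    """
    tr_ms = slice_time_ms * math.ceil(n_slices / mb_factor)
    return tr_ms * n_volumes

# Hypothetical protocol: 60 slices, 100 ms per excitation, 30 volumes.
baseline = dwi_scan_time_ms(60, 100, mb_factor=1, n_volumes=30)  # 180000 ms (3.0 min)
sms2 = dwi_scan_time_ms(60, 100, mb_factor=2, n_volumes=30)      # 90000 ms (1.5 min)
```

With a multiband factor of 2 the slice stack is covered in half as many excitations, so the scan time halves at fixed coverage; the trade-off (not modeled here) is SNR loss and slice-leakage artifacts, which is why image quality assessments accompany such features.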

§ 892.1000 Magnetic resonance diagnostic device.

(a) Identification. A magnetic resonance diagnostic device is intended for general diagnostic use to present images which reflect the spatial distribution and/or magnetic resonance spectra which reflect frequency and distribution of nuclei exhibiting nuclear magnetic resonance. Other physical parameters derived from the images and/or spectra may also be produced. The device includes hydrogen-1 (proton) imaging, sodium-23 imaging, hydrogen-1 spectroscopy, phosphorus-31 spectroscopy, and chemical shift imaging (preserving simultaneous frequency and spatial information).

(b) Classification. Class II (special controls). A magnetic resonance imaging disposable kit intended for use with a magnetic resonance diagnostic device only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.