
510(k) Data Aggregation

    K Number: K231560
    Date Cleared: 2023-10-23 (146 days)
    Product Code
    Regulation Number: 892.1000

    Device Name: MAGNETOM Vida; MAGNETOM Lumina; MAGNETOM Aera; MAGNETOM Skyra; MAGNETOM Prisma; MAGNETOM Prisma fit

    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross-sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject devices, MAGNETOM Aera (including MAGNETOM Aera Mobile), MAGNETOM Skyra, MAGNETOM Prisma, MAGNETOM Prisma fit, MAGNETOM Vida, and MAGNETOM Lumina with software syngo MR XA60A, consist of new and modified software and hardware similar to what is currently offered on the predicate device, MAGNETOM Vida with syngo MR XA50A (K213693).

    AI/ML Overview

    This FDA 510(k) summary describes several updates to existing Siemens Medical Solutions MRI systems (MAGNETOM Vida, Lumina, Aera, Skyra, Prisma, and Prisma fit), primarily focusing on software updates (syngo MR XA60A) and some modified/new hardware components. The document highlights the evaluation of new AI features, specifically "Deep Resolve Boost" and "Deep Resolve Sharp."

    Here's an analysis of the acceptance criteria and the study details for the AI features:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document provides a general overview of the evaluation metrics used but does not explicitly state acceptance criteria in a quantitative format (e.g., "Deep Resolve Boost must achieve a PSNR of X" or "Deep Resolve Sharp must achieve Y SSIM"). Instead, it describes the types of metrics used and qualitative assessments.

    | AI Feature | Acceptance Criteria (Implicit from Evaluation) | Reported Device Performance (Summary) |
    |---|---|---|
    | Deep Resolve Boost | Preservation of image quality (aliasing artifacts, image sharpness, denoising levels) compared to the original; impact characterized by PSNR and SSIM. | The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Most importantly, the performance was evaluated by visual comparisons to evaluate, e.g., aliasing artifacts, image sharpness and denoising levels. |
    | Deep Resolve Sharp | Preservation of image quality (image sharpness) compared to the original; impact characterized by PSNR, SSIM, and perceptual loss; verification and validation by visual rating and evaluation of image sharpness by intensity profile comparisons. | The impact of the network has been characterized by several quality metrics such as PSNR, SSIM, and perceptual loss. In addition, the feature has been verified and validated by in-house tests, including visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp. |
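
    The quantitative metrics named above can be sketched as follows: a minimal NumPy illustration of PSNR and a single-window SSIM. These are assumed textbook definitions, not Siemens' implementation, and the full SSIM index averages local sliding-window statistics rather than using one global window.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(reference, test, data_range=None):
    """Simplified SSIM computed over a single global window
    (the standard index averages local sliding-window statistics)."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM literature
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = reference.mean(), test.mean()
    var_x, var_y = reference.var(), test.var()
    cov_xy = ((reference - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

    In practice such metrics would be computed between the AI reconstruction and the fully sampled reference image, then aggregated over the test slices.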

    2. Sample Size Used for the Test Set and Data Provenance

    • Deep Resolve Boost: The document doesn't explicitly state a separate "test set" size. It mentions the "Training and Validation data" which includes:
      • TSE: more than 25,000 slices
      • HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
      • EPI Diffusion: more than 1,000,000 slices
      • Data Provenance: The data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength. No specific country of origin is mentioned, but the manufacturer (Siemens Healthcare GmbH) is based in Germany, and Siemens Medical Solutions USA, Inc. is the submitter. The data was "retrospectively created from the ground truth by data manipulation and augmentation."
    • Deep Resolve Sharp: The document doesn't explicitly state a separate "test set" size. It mentions "Training and Validation data" of more than 10,000 high-resolution 2D images.
      • Data Provenance: Similar to Deep Resolve Boost, the data covered a broad range of body parts, contrasts, fat suppression techniques, orientations, and field strength. Data was "retrospectively created from the ground truth by data manipulation." No specific country of origin is mentioned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Not specified. The document states that the acquired datasets "represent the ground truth." There is no mention of expert involvement in establishing ground truth for the test sets. The focus is on technical metrics (PSNR, SSIM) and "visual comparisons" or "visual rating" which implies expert review, but the number and qualifications are not provided.

    4. Adjudication Method for the Test Set

    Not explicitly stated. The document mentions "visual comparisons" for Deep Resolve Boost and "visual rating" for Deep Resolve Sharp. This suggests subjective human review, but no specific adjudication method (like 2+1 or 3+1 consensus) is detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance

    No MRMC comparative effectiveness study is described for the AI features. The studies mentioned (sections 8 and 9) focus on evaluating the technical performance and image quality of the AI algorithms themselves, not on their impact on human reader performance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone performance evaluation of the algorithms was conducted. The "Test Statistics and Test Results Summary" for both Deep Resolve Boost and Deep Resolve Sharp detail the evaluation of the network's impact using quantitative metrics (PSNR, SSIM, perceptual loss) and qualitative assessments ("visual comparisons," "visual rating," "intensity profile comparisons"). This represents the algorithm's performance independent of a human reader's diagnostic accuracy.
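
    The intensity profile comparison mentioned above can be illustrated with a simple sharpness proxy: the distance over which a 1-D profile across an edge rises from 10% to 90% of its range. This is a hypothetical sketch; the submission does not specify the actual profile metric.

```python
import numpy as np

def edge_rise_distance(profile, lo=0.1, hi=0.9):
    """Sharpness proxy for a 1-D intensity profile across a rising edge:
    number of samples needed to climb from `lo` to `hi` of the full range.
    A smaller value means a sharper edge."""
    p = np.asarray(profile, dtype=np.float64)
    p = (p - p.min()) / (p.max() - p.min())  # normalize to [0, 1]
    lo_idx = int(np.argmax(p >= lo))  # first sample at or above `lo`
    hi_idx = int(np.argmax(p >= hi))  # first sample at or above `hi`
    return hi_idx - lo_idx
```

    Comparing this value for profiles taken at the same location in reconstructions with and without Deep Resolve Sharp would quantify the sharpening effect.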

    7. The Type of Ground Truth Used

    The ground truth used for both Deep Resolve Boost and Deep Resolve Sharp was the acquired datasets themselves, representing the original high-quality or reference images/slices.

    • For Deep Resolve Boost, input data was "retrospectively created from the ground truth by data manipulation and augmentation," including undersampling k-space lines, lowering SNR, and mirroring k-space data. The original acquired data serves as the target "ground truth" for the AI to reconstruct/denoise.
    • For Deep Resolve Sharp, input data was "retrospectively created from the ground truth by data manipulation," specifically by cropping k-space data to create low-resolution input, with the original high-resolution data serving as the "output / ground truth" for training and validation.
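
    The retrospective degradation described in both bullets can be sketched in NumPy. The code below illustrates the general technique (regular undersampling of phase-encode lines, and central k-space cropping) under assumed parameters; the actual sampling schemes, SNR manipulation, and augmentation pipeline are not disclosed in the summary.

```python
import numpy as np

def undersample_kspace(image, acceleration=2, center_fraction=0.08):
    """Simulate an accelerated scan: keep every `acceleration`-th
    phase-encode line plus a fully sampled central block of k-space."""
    k = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros(k.shape[0], dtype=bool)
    mask[::acceleration] = True
    n_center = int(k.shape[0] * center_fraction)
    start = (k.shape[0] - n_center) // 2
    mask[start:start + n_center] = True
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask[:, None])))

def crop_kspace(image, factor=2):
    """Simulate a low-resolution scan by keeping only the central
    1/factor of k-space in each dimension (zero-filling the rest)."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = k.shape
    cy, cx = ny // (2 * factor), nx // (2 * factor)
    k_low = np.zeros_like(k)
    k_low[ny // 2 - cy:ny // 2 + cy, nx // 2 - cx:nx // 2 + cx] = \
        k[ny // 2 - cy:ny // 2 + cy, nx // 2 - cx:nx // 2 + cx]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_low)))
```

    In this scheme the original image is the training target, and the degraded output of either function plays the role of the network input.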

    8. The Sample Size for the Training Set

    • Deep Resolve Boost:
      • TSE: more than 25,000 slices
      • HASTE: pre-trained on the TSE dataset and refined with more than 10,000 HASTE slices
      • EPI Diffusion: more than 1,000,000 slices
    • Deep Resolve Sharp: more than 10,000 high resolution 2D images.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established as the acquired, unaltered (or minimally altered, e.g., removal of k-space lines to simulate lower quality input from high quality ground truth) raw imaging data.

    • For Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation and augmentation." This implies that the original, high-quality scans were considered the ground truth, and the AI was trained to restore manipulated, lower-quality versions to this original quality.
    • For Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation. Input data was retrospectively created from the ground truth by data manipulation. k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation." Similar to Boost, the original, higher-resolution scans served as the ground truth.

    K Number: K153343
    Date Cleared: 2016-04-15 (148 days)
    Product Code
    Regulation Number: 892.1000

    Device Name: MAGNETOM Aera, MAGNETOM Skyra, MAGNETOM Prisma, MAGNETOM Prisma fit

    Intended Use

    The MAGNETOM systems are indicated for use as magnetic resonance diagnostic devices (MRDD) that produce transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that display the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician, yield information that may assist in diagnosis.

    The MAGNETOM systems described above may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room display and MR-Safe biopsy needles.

    Device Description

    The subject device, syngo MR E11C system software, is being made available for the following MAGNETOM MR Systems:

    • MAGNETOM Aera,
    • MAGNETOM Skyra,
    • MAGNETOM Prisma and
    • MAGNETOM Prisma fit

    The syngo MR E11C SW includes new sequences, new features and minor modifications of already existing features.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for new software (syngo MR E11C) for Siemens MAGNETOM MR systems. However, it does not contain the detailed information required to answer all aspects of your request regarding acceptance criteria and a study proving device performance as typically expected for AI/ML device submissions.

    This submission is for a software update to existing Magnetic Resonance Diagnostic Devices (MRDDs), and the focus is on demonstrating substantial equivalence to previously cleared predicate devices. The "study" mentioned is primarily non-clinical performance testing and software verification/validation, rather than a clinical study with acceptance criteria for specific diagnostic outcomes.

    Here's an attempt to extract and infer information based on the provided text, highlighting what is present and what is missing:


    1. Table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria for diagnostic performance or specific metrics. Instead, it relies on demonstrating that the new software's features perform "as intended" and maintain "equivalent safety and performance profile" compared to predicate devices.

    | Acceptance Criterion | Reported Device Performance |
    |---|---|
    | Qualitative Image Quality Assessment | New/modified sequences and algorithms underwent image quality assessments, and the results "demonstrate that the device performs as intended." |
    | Acoustic Noise Reduction (for qDWI) | Acoustic noise measurements were performed for quiet sequences, implying that the qDWI sequence met its objective of being "noise reduced." |
    | Functionality as Intended | "Results from each set of tests demonstrate that the device performs as intended and is thus substantially equivalent to the predicate devices..." |
    | Software Verification and Validation | Completed in accordance with FDA guidance, implying the software meets specified requirements. |
    | Safety and Effectiveness Equivalence | "The features with different technological characteristics from the predicate devices bear an equivalent safety and performance profile as that of the predicate and secondary predicate devices." |

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: "Sample clinical images were taken for particular new and modified sequences." The specific number or characteristics of these images (sample size) is not provided.
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only mentions "sample clinical images," suggesting clinical data was used for assessment.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not provided. The document states "Image quality assessments... were completed," but does not detail who performed these assessments or how ground truth was established for them. For a diagnostic device, interpretation by a "trained physician" is mentioned in the Indications for Use, but this is a general statement about the device's usage, not specific to the assessment of the new software.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • This information is not provided.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "No clinical tests were conducted to support the subject device and the substantial equivalence argument..."
    • This submission is not for an AI-enhanced diagnostic tool in the sense of providing automated interpretations or assisting human readers in a measurable way with specific diagnostic outcomes. It's an update to MR imaging acquisition software. Therefore, the concept of "how much human readers improve with AI vs without AI assistance" does not apply in this context.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • The device is a Magnetic Resonance Diagnostic Device (MRDD) software update. Its output is images and/or spectra that are "interpreted by a trained physician" to "assist in diagnosis." As such, it is inherently a human-in-the-loop system. The non-clinical tests involved "Image quality assessments" and "Acoustic noise measurements," which are performance evaluations of the acquisition capabilities, not a standalone diagnostic interpretation by the algorithm.
    • Therefore, a standalone diagnostic performance evaluation (algorithm only) in the context of providing a diagnosis was not performed or described.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • For "Image quality assessments," the type of ground truth is not explicitly stated. It can be inferred that it would likely involve visual assessment by experts against what is considered normal or expected for an MR image, potentially comparing to images acquired with predicate software or known anatomical/pathological features. However, specific ground truth methods like pathology or long-term outcomes data are not mentioned.

    8. The sample size for the training set

    • The document does not mention a separate training set or details about its size. This submission focuses on software changes and their verification, not on the development of a new AI model that requires a distinct training phase.

    9. How the ground truth for the training set was established

    • Since a separate training set is not mentioned, the method for establishing its ground truth is also not provided.

    Summary of what's present and what's missing:

    This 510(k) submission primarily focuses on demonstrating that new software features (like quiet diffusion imaging, improved fast TSE, simultaneous multi-slice imaging, and a short acquisition time brain examination protocol) for existing MR systems maintain the fundamental technological characteristics, safety, and effectiveness of predicate devices. The "study" here is a series of non-clinical tests (image quality review, acoustic noise measurements, software V&V) rather than a clinical trial measuring diagnostic accuracy or reader performance. The level of detail you're asking for, especially concerning clinical study design elements like sample size, expert reader qualifications, adjudication methods, and ground truth establishment for diagnostic output, is typically found in submissions for AI/ML diagnostic tools that directly interpret images or provide diagnostic assistance, which is not the primary claim of this particular device update.


    Device Name: MAGNETOM Aera (24-channel), MAGNETOM Avanto fit, MAGNETOM Skyra fit, MAGNETOM Prisma, MAGNETOM Prisma fit

    Intended Use

    The MAGNETOM systems are indicated for use as magnetic resonance diagnostic devices (MRDD) that produce transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that display the internal structure and/or function of the head, body or extremities.

    Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images and/or spectra when interpreted by a trained physician, yield information that may assist in diagnosis.

    The MAGNETOM systems described above may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room display and MR-Safe biopsy needles.

    Device Description

    The subject device, syngo MR E11B system software, is being made available for the following MAGNETOM MR Systems:

    • MAGNETOM Aera (24-channel configuration),
    • MAGNETOM Avanto fit,
    • MAGNETOM Skyra fit,
    • MAGNETOM Prisma and
    • MAGNETOM Prisma fit

    Two new coils, Body 30/60 and Body 6 long, will be available for the subject device systems. The feature FREEZEit will be extended to other body regions: in addition to the abdomen, it will cover regions such as the head, head and neck, pelvis, and chest. The syngo MR E11B SW also includes new sequences as well as minor modifications of already existing features. A high-level summary of the new sequences can be viewed below:

    DSI
    With software version syngo MR E11B Siemens offers DSI for MAGNETOM Prisma, Prisma fit and Skyra fit systems. The DSI option allows diffusion-weighted images to be acquired according to a DSI-compatible q-space sampling scheme.

    QISS evaluation
    QISS (Quiescent-Interval Single-Shot) MR Angiography is a technique for non-contrast-enhanced MR Angiography (non-CEMRA) that is particularly suited for examinations of patients with PAD. Since patients with PAD may also suffer from additional impairments such as renal dysfunction, the administration of contrast agent may often be unadvisable in this patient group. Siemens provides a manageable and optimized QISS workflow for imaging peripheral arteries, which can be easily adapted by the customer based on the patient's needs.

    A new "Dot Engine" is provided to ease MRI acquisitions in Radiation Therapy.

    RT Dot Engine
    RT Dot Engine is a new Dot Engine for aiding in Radiation Therapy planning. The RT Dot Engine does not provide new functionality, but collects and displays existing system information for the user. The RT Dot Engine comprises existing protocols, enhanced with the RT Planning Dot Add-in and the "MPR Planning" interaction step. The RT (Radiation Therapy) Dot Engine is used to ease MRI acquisitions of the head and the head/neck region with stereotactic frames or mask-based fixation techniques. RT Dot Engine is a workflow solution for acquiring MR images intended to aid in Radiation Therapy Planning. RT Dot engine helps streamline acquisition of MR images to be used along with any RT planning software that uses MR images in addition to CT images.

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device and does not contain the level of detail typically found in a clinical study report regarding acceptance criteria and performance studies for an AI-powered device.

    This document describes a Magnetic Resonance Diagnostic Device (MRDD) software upgrade (syngo MR E11B) for existing Siemens MAGNETOM MR systems. The submission is a 510(k) premarket notification, which seeks to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving performance against specific acceptance criteria for a novel AI algorithm.

    Therefore, many of the requested details about acceptance criteria, clinical study design, sample sizes, ground truth establishment, and expert adjudication are not present in this type of regulatory document.

    However, I can extract the information that is available and clarify what is missing based on the context of a 510(k) submission for an MRI system software upgrade:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance (Summary) |
    |---|---|
    | Safety and Effectiveness | The device performs as intended and is substantially equivalent to predicate devices. Risk management followed ISO 14971:2007. Adherence to IEC 60601-1 series to minimize electrical and mechanical risk. Conforms to applicable FDA recognized and international IEC, ISO, and NEMA standards. |
    | Technological Characteristics | Same technological characteristics as predicate device systems (K141977). Substantially equivalent in MR image acquisition steps/features, operational environment, programming language, operating system, and performance. Conforms to IEC 62304:2006 for software medical devices and IEC/NEMA standards. |
    | New Coils (Body 30/60, Body 6 long) | Coils tested for SNR, image uniformity, and heating. Clinical images provided to support new coils. |
    | New/Modified Sequences & Algorithms | Dedicated phantom testing conducted for particular new sequences (e.g., DSI, QISS, RT Dot Engine). Acoustic noise measurements performed for quiet sequences. Image quality assessments completed; comparisons made to predicate features where applicable. Clinical images provided to support new software features. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not explicitly stated as a formal "test set" in the context of an algorithm evaluation. The document mentions "clinical images were provided to support the new coils as well as the new software features," but the number of images or patients is not specified.
    • Data Provenance: Not specified. Given the nature of a 510(k) for a software upgrade to an MRI machine, the "clinical images" likely came from internal testing or routine clinical acquisitions.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • The document states "These images and the physical parameters derived from the images and/or spectra when interpreted by a trained physician, yield information that may assist in diagnosis." However, it does not specify the number or qualifications of experts used to establish a formal ground truth for testing the software's performance, as this is an MRI system software upgrade, not a diagnostic AI algorithm.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • No adjudication method is mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No MRMC study was conducted or reported. This device is a software upgrade for an MRI system, not an AI diagnostic assistant tool.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Not applicable in the context of this device. The software "produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and or spectra," which are then "interpreted by a trained physician." It is not a standalone diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • For the nonclinical tests (SNR, uniformity, heating, acoustic noise), the "ground truth" would be established by technical specifications and phantom measurements.
    • For image quality assessments, a "ground truth" (e.g., against specific diagnostic findings) is not detailed. The assessment likely involved expert review of image quality (e.g., resolution, artifact reduction, diagnostic clarity) rather than a comparison to a definitive clinical ground truth established by pathology or long-term outcomes. The primary focus is on demonstrating that the images produced are diagnostically acceptable and equivalent to the predicate.

    8. The sample size for the training set

    • Not applicable. This document describes a software upgrade for an MRI system, which includes new sequences and features (e.g., DSI, QISS, RT Dot Engine). It is not an AI algorithm that would typically have a "training set" in the machine learning sense. The software development follows traditional engineering and quality assurance practices.

    9. How the ground truth for the training set was established

    • Not applicable, as no training set (in the AI/ML context) is mentioned for this device.

    K Number: K132119
    Date Cleared: 2013-11-22 (136 days)
    Product Code
    Regulation Number: 892.1000

    Device Name: MAGNETOM PRISMA, MAGNETOM PRISMA FIT

    Intended Use

    The MAGNETOM Prisma and MAGNETOM Prisma fit systems are indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities.

    Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM Prisma and MAGNETOM Prisma fit systems may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room display and MR Safe biopsy needles.

    Device Description

    MAGNETOM Prisma (3 T) and MAGNETOM Prisma fit (3 T) are similar to the previously cleared MAGNETOM Skyra (3 T) and MAGNETOM Trio a Tim System (TaTS) (3 T) systems utilizing a superconducting magnet design. The open bore, whole body scanners are designed for increased patient comfort. They focus on ergonomics and usability to reduce complexity of the MR workflow.

    The MAGNETOM Prisma will be offered ex-factory (new production) and the MAGNETOM Prisma fit will be offered as an upgrade to currently installed MAGNETOM Trio a Tim System (TaTS) systems.

    AI/ML Overview

    The provided text is a 510(k) summary for the Siemens MAGNETOM Prisma and MAGNETOM Prisma™ MR systems. It asserts substantial equivalence to predicate devices and describes safety and performance testing. However, it does not contain specific acceptance criteria values, reported device performance metrics against those criteria, or the detailed study design (like sample sizes for test sets, data provenance, expert qualifications, or ground truth establishment) typically associated with such criteria for a medical device's performance.

    The document states that the new devices conform to "measurements of safety parameters to the international IEC, ISO and NEMA standards for safety issues with Magnetic Resonance Imaging Diagnostic Devices" and that "performance testing has been completed to show that the performance... is equivalent with respect to the predicate devices." It lists areas of safety and performance evaluated but does not provide quantitative results or explicit acceptance thresholds.

    Summary of Missing Information:

    The input document does not contain the level of detail required to complete the requested table and study information. Specifically, it lacks:

    • Quantitative acceptance criteria values.
    • Reported device performance metrics.
    • Sample sizes for test sets.
    • Data provenance for test sets.
    • Number and qualifications of experts for ground truth.
    • Adjudication method for test sets.
    • Information on MRMC comparative effectiveness studies (effect size of AI improvement).
    • Information on standalone algorithm performance.
    • Type of ground truth used (beyond implying "equivalence" to predicate devices).
    • Sample size and ground truth establishment for the training set.

    Based on the provided text, here is what can be extracted:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric (as mentioned or implied) | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | Safety | Maximum Static Field | Conforms to IEC, ISO, NEMA standards | Shown to conform to standards and be equivalent to predicate devices |
    | Safety | Rate of Change of Magnetic Field | Conforms to IEC, ISO, NEMA standards | Shown to conform to standards and be equivalent to predicate devices |
    | Safety | RF Power Deposition | Conforms to IEC, ISO, NEMA standards | Shown to conform to standards and be equivalent to predicate devices |
    | Safety | Acoustic Noise Levels | Conforms to IEC, ISO, NEMA standards | Shown to conform to standards and be equivalent to predicate devices |
    | Performance | Specification Volume | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
    | Performance | Signal to Noise | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
    | Performance | Image Uniformity | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
    | Performance | Geometric Distortion | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
    | Performance | Slice Profile, Thickness and Gap | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
    | Performance | High Contrast Spatial Resolution | Equivalent to predicate devices | Shown to be equivalent to predicate devices |
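
    For the signal-to-noise row, a common phantom procedure is the NEMA-style two-acquisition difference method, which separates signal from noise by subtracting two identically acquired images. This is assumed here for illustration; the submission does not describe its measurement protocol.

```python
import numpy as np

def nema_snr(image_a, image_b, roi):
    """SNR from two identically acquired phantom images: signal is the
    mean over a uniform ROI, noise is the standard deviation of the
    pixel-wise difference divided by sqrt(2) (subtraction doubles the
    noise variance while cancelling the static signal)."""
    a = np.asarray(image_a, dtype=np.float64)[roi]
    b = np.asarray(image_b, dtype=np.float64)[roi]
    signal = 0.5 * (a.mean() + b.mean())
    noise = (a - b).std(ddof=1) / np.sqrt(2.0)
    return signal / noise
```

    Here `roi` can be a tuple of slices selecting a uniform region well inside the phantom.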

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • The document states that "performance testing has been completed," but it does not specify the sample size, data provenance, or whether the study was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • The document does not provide any information regarding the number or qualifications of experts used to establish ground truth for test sets.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • The document does not specify any adjudication method for test sets.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • The document describes the device as a Magnetic Resonance Diagnostic Device (MRDD), implying it's an imaging system, not an AI-assisted diagnostic tool for human readers. Therefore, an MRMC comparative effectiveness study regarding "human readers improve with AI vs without AI assistance" is not applicable or mentioned in this context. The focus is on the equivalence of the MR system's performance itself.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • This refers to the performance of the MR system itself. The document confirms that performance testing was completed to show equivalence to predicate devices, which implies standalone performance evaluation of the imaging system. However, specific metrics and methodologies for a "standalone algorithm" performance in the sense of AI are not described, as this device is a hardware imaging system.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The document states that performance testing was done "following NEMA or equivalent IEC and ISO standards." This suggests that physical phantoms and standardized measurement techniques, which serve as the "ground truth" for technical imaging specifications, were likely used. However, it does not explicitly state "expert consensus, pathology, or outcomes data" as ground truth for clinical cases. The ground truth for proving substantial equivalence appears to be established by demonstrating that the new system's technical specifications and image quality measurements are comparable to the predicate devices and conform to industry standards.

    8. The sample size for the training set

    • This document describes a medical imaging device (hardware and core software), not an AI algorithm that undergoes "training." Therefore, the concept of a "training set sample size" is not applicable in this context.

    9. How the ground truth for the training set was established

    • As per point 8, the concept of a "training set" for AI is not applicable to this device description.
