510(k) Data Aggregation

    K Number: K160152
    Date Cleared: 2016-05-20 (119 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices
    Reference Devices: K151015

    Intended Use

    The TRILLIUM Oval MRI System is an imaging device intended to provide the physician with physiological and clinical information, obtained non-invasively and without the use of ionizing radiation. The MR system produces transverse, coronal, sagittal, oblique, and curved cross-sectional images that display the internal structure of the head, body, or extremities. The images produced by the MR system reflect the spatial distribution of protons (hydrogen nuclei) exhibiting magnetic resonance. The NMR properties that determine image appearance are proton density, spin-lattice relaxation time (T1), spin-spin relaxation time (T2), and flow. When interpreted by a trained physician, these images provide information that can be useful in determining a diagnosis.

    Anatomical Region: Head, Body, Spine, Extremities
    Nucleus excited: Proton
    Diagnostic uses:
    • T1, T2, and proton density weighted imaging
    • Diffusion weighted imaging
    • MR Angiography
    • Image processing
    • Spectroscopy
    • Whole Body
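
    As background for the contrast mechanisms listed above (proton density, T1, T2), the standard textbook approximation for the spin-echo signal (general MR physics, not taken from the submission) shows how the timing parameters TR and TE weight the image toward one tissue property or another:

    $$ S \;\propto\; \rho \left(1 - e^{-TR/T_1}\right) e^{-TE/T_2} $$

    Here $\rho$ is the proton density; short TR with short TE emphasizes T1 contrast, long TR with long TE emphasizes T2 contrast, and long TR with short TE yields proton density weighting.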

    Device Description

    The TRILLIUM OVAL is a Magnetic Resonance Imaging system that uses a 2.9 Tesla superconducting magnet in a gantry design. The TRILLIUM OVAL has been designed to enhance clinical utility relative to the ECHELON Oval by taking advantage of a stronger magnetic field, a stronger gradient field, and a higher slew rate. There is no change in system composition from the predicate device.
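
    As a rough, textbook-level illustration (not stated in the submission) of why the stronger static field and faster gradients matter: SNR scales approximately linearly with the static field strength, and the gradient rise time falls as the slew rate increases, which permits shorter echo spacing and faster sequences:

    $$ \mathrm{SNR} \;\propto\; B_0, \qquad t_{\mathrm{rise}} \;=\; \frac{G_{\max}}{\mathrm{SR}} $$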

    AI/ML Overview

    The document you provided is a 510(k) premarket notification for the Hitachi TRILLIUM Oval V5.1 MRI System. It describes a software update to an existing MRI system (TRILLIUM OVAL, K142734) and therefore focuses on demonstrating substantial equivalence to the predicate device, rather than proving that the device meets specific acceptance criteria through a standalone study with defined performance thresholds.

    The "acceptance criteria" discussed in this document are primarily compliance with international and national standards for medical electrical equipment and MRI devices, and the demonstration that new software features perform as intended without compromising safety or effectiveness.

    Here's an analysis based on the information provided, framed to address your questions as closely as possible, while noting where specific answers are not available due to the nature of a 510(k) for a software update:


    1. A table of acceptance criteria and the reported device performance

    For a software update to an existing MRI system, the "acceptance criteria" are generally about maintaining compliance with safety and performance standards established for the predicate device and confirming the intended function of new features.

    | Acceptance Criteria (Standards Compliance & Functional Performance) | Reported Device Performance (Summary of Non-Clinical Testing) |
    | --- | --- |
    | General Safety and Essential Performance: adherence to IEC 60601-1 (general medical electrical equipment safety) and IEC 60601-2-33 (specific MRI safety). | The device conforms to AAMI/ANSI ES60601-1:2005/(R)2012 and IEC 60601-2-33 Edition 3.1 2013-04. SAR and dB/dt management methods comply with these standards. |
    | Electromagnetic Compatibility: compliance with IEC 60601-1-2. | Conforms to IEC 60601-1-2 Edition 3:2007-03. |
    | Software Life Cycle Processes: compliance with IEC 62304. | Conforms to IEC 62304 First edition 2006-05. |
    | Image Quality (specific NEMA standards): signal-to-noise ratio (SNR), geometric distortion, image uniformity, slice thickness. | The software revisions have "no effect on the standards tests which were conducted on the TRILLIUM OVAL MRI System (K142734)," implying that the predicate device's performance is maintained. |
    | Acoustic Noise: compliance with NEMA MS 4. | Audible noise (MCAN) values changed slightly (e.g., LAeq from 118.8 dBA to 124.3 dBA) due to parameter changes made to improve image quality. The measurement method (NEMA MS 4:2010) and risk analysis remain the same as for the predicate. Considered safe. |
    | SAR Characterization: compliance with NEMA MS 8. | SAR management was modified to improve accuracy (coil loss coefficient derived from measured Q values) and the SAR monitor was upgraded for precision. SAR is limited per IEC 60601-2-33. Conforms to NEMA MS 8-2008. |
    | Multi-b and DKI: ability to acquire multi-b DKI images in one scan; diffusion kurtosis imaging (DKI) performs as expected. | Test results from phantom simulations and volunteer studies confirm that multi-b DKI images can be acquired using Tensor 15 and Tensor 30. |
    | k-Space Parallel Imaging: accelerate scan time by acquiring k-space data with skipped phase-encoding positions and filling in estimated data; reduces wrap artifacts. | Test results from phantom simulations and volunteer studies indicate that the k-space parallel imaging technique accelerates the scan and reduces wrap artifacts. |
    | T2* RelaxMap: ability to map T2* relaxation time in color on morphological images, using multi-echo images and T2* analysis. | Test results from phantom simulations and volunteer studies confirm that T2* relaxation time can be mapped. |
    | Vivid Image: enhancement of overall SNR in 2D processing tasks. | Test results from phantom simulations and volunteer studies confirm improvement of overall SNR. |
    | RADAR-GE/TOF: RADAR motion correction feature functions with GE and TOF sequences. | Test results from phantom simulations and volunteer studies confirm that the RADAR measurement feature is functioning. |
    | ASL-Perfusion: acquire non-contrast brain perfusion images using labeled blood. | Test results confirm that ASL-Perfusion acquires perfusion images in both phantom simulations and volunteer studies. |
    | Breast MRS: acquire the MR signal of in vivo metabolites (e.g., choline) in the breast area. | Test results from phantom simulations and volunteer studies confirm that MRS can detect choline as a metabolite in the breast area. |
    | Enhanced PC: reduce scan time of the phase contrast (PC) sequence in 2D and 3D by shortening TR. | Test results from phantom simulations and volunteer studies indicate a reduction in scan time. |
    | PBSG: improve mitigation of the dark-band artifact unique to the BASG sequence, allowing BASG images with less artifact under an inhomogeneous magnetic field. | Test results from phantom simulations and volunteer studies confirm that PBSG improves mitigation of the dark-band artifact. |
    | Volume RF Shimming: improve B1 uniformity. | Test results from phantom and volunteer studies confirm that Volume RF Shimming improves B1 uniformity. |
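
    For the "Multi-b and DKI" row above, the standard diffusional kurtosis signal model (shown here as general background, not as a description of Hitachi's specific implementation) illustrates why multiple b-values must be acquired in one scan:

    $$ \ln S(b) \;=\; \ln S_0 \;-\; b\,D_{\mathrm{app}} \;+\; \tfrac{1}{6}\, b^{2} D_{\mathrm{app}}^{2} K_{\mathrm{app}} $$

    Estimating both the apparent diffusivity $D_{\mathrm{app}}$ and the apparent kurtosis $K_{\mathrm{app}}$ along a given direction requires at least three distinct b-values, hence the multi-b acquisition.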

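    Similarly, for the "T2* RelaxMap" row, the sketch below shows how a T2* map is typically computed from multi-echo magnitude images via a mono-exponential fit. This is generic illustration code, not Hitachi's implementation; the function name and array shapes are assumptions.

```python
# Minimal sketch (not Hitachi's implementation): voxel-wise T2* estimation from
# multi-echo gradient-echo magnitude images by fitting ln S(TE) = ln S0 - TE/T2*.
import numpy as np

def t2star_map(echoes: np.ndarray, tes_ms: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """echoes: (n_echoes, ny, nx) magnitude images; tes_ms: (n_echoes,) echo times in ms."""
    n, ny, nx = echoes.shape
    y = np.log(np.maximum(echoes, eps)).reshape(n, -1)   # log-signal, one column per voxel
    A = np.stack([np.ones(n), -tes_ms], axis=1)          # design matrix for [ln S0, R2*]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)         # least-squares fit for all voxels at once
    r2star = np.maximum(coef[1], eps)                    # R2* = 1/T2*, clamped to stay positive
    return (1.0 / r2star).reshape(ny, nx)                # T2* map in ms

# Synthetic check: a uniform phantom with true T2* = 40 ms, sampled at six echo times.
tes = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
signal = 1000.0 * np.exp(-tes[:, None, None] / 40.0) * np.ones((6, 4, 4))
print(t2star_map(signal, tes)[0, 0])  # ~40.0
```
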
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document primarily describes non-clinical testing involving phantom simulations and volunteer studies. It does not specify a quantitative "test set" in terms of patient data or detailed sample sizes.

    • Sample Size for Test Set: Not explicitly stated as a number of cases/patients for each feature. The studies involved "phantom simulations and volunteer studies." This suggests a small, controlled set of healthy volunteers rather than a large patient cohort.
    • Data Provenance: Not specified. "Volunteer studies" typically implies prospective data collection. Country of origin is not mentioned. Given Hitachi Medical Systems America, Inc. is the submitter, the studies could be internal or conducted in an associated facility.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. For non-clinical performance evaluations of MRI system features, "ground truth" might be established through:

    • Physical measurements on phantoms.
    • Physiological parameters measured during volunteer studies.
    • Comparison to established imaging techniques, or qualitative assessment by experienced MR physicists or radiologists; specifics are not provided here.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided. Given the nature of a software update for an MRI system demonstrating substantial equivalence, formal adjudication methods (common in AI/CADe studies) are generally not performed or required for these types of submissions. The evaluations are more focused on technical performance and image quality by MR physicists and system engineers, potentially with clinical input from radiologists for qualitative assessment of images.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not performed and is not described for this submission. This is because the device is an MRI system itself and its software, not an AI/CADe product intended to assist human readers. The document focuses on the technical performance of the MRI system's new sequences and processing capabilities.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The performance evaluation described is essentially "standalone" in the sense that it directly assesses the technical capabilities of the MRI system and its new software features (e.g., confirming image acquisition, parameter mapping, or scan-time reduction). It is not "algorithm-only" performance as one would describe an AI model, but rather the performance of the full integrated imaging system. The test results from phantom simulations and volunteer studies for the new features (Multi-b and DKI, k-space parallel imaging, T2* RelaxMap, etc.) represent the demonstrated performance of the system itself, without quantifying human-in-the-loop diagnostic performance metrics.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the new features, the "ground truth" likely refers to:

    • Physical measurements/known properties of phantoms: for features like SNR, geometric distortion, T2* relaxation times in phantoms, or artifact reduction (one common SNR measurement approach is sketched below).
    • Physiological measurements/expected outcomes in volunteers: For features like ASL-Perfusion (labeled blood flow), or the detection of metabolites in Breast MRS.
    • Qualitative assessment of image quality and feature functionality: By qualified personnel (e.g., MR physicists, radiologists) for aspects like "improvement of overall SNR," "reduction of scan time," or "mitigation of dark band artifact."

    No mention of pathology or outcomes data is made, which is typical for demonstrating substantial equivalence of an imaging system software update.
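
    For phantom-based SNR measurements specifically, one common approach (an assumption about typical practice, not a detail given in this submission) is the two-acquisition difference method of NEMA MS 1, sketched below with illustrative names:

```python
# Minimal sketch (one common approach, not necessarily Hitachi's protocol): phantom SNR
# via the two-acquisition difference method (cf. NEMA MS 1), where
# SNR = mean ROI signal / (std of the difference image in the ROI / sqrt(2)).
import numpy as np

def snr_two_acquisition(img1: np.ndarray, img2: np.ndarray, roi: np.ndarray) -> float:
    """img1, img2: identically acquired magnitude images; roi: boolean mask of a uniform region."""
    signal = 0.5 * (img1[roi].mean() + img2[roi].mean())    # mean signal across both acquisitions
    noise = (img1 - img2)[roi].std(ddof=1) / np.sqrt(2.0)   # noise level from the difference image
    return float(signal / noise)

# Synthetic check: uniform phantom at 100 with Gaussian noise (sigma = 2) -> SNR ~ 50.
rng = np.random.default_rng(0)
base = np.full((128, 128), 100.0)
a, b = base + rng.normal(0, 2, base.shape), base + rng.normal(0, 2, base.shape)
print(round(snr_two_acquisition(a, b, np.ones_like(base, dtype=bool)), 1))
```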

    8. The sample size for the training set

    The document describes software updates and performance validation for an MRI system. It does not mention "training sets" in the context of machine learning or AI models. The software features are likely based on established MR physics principles and algorithms, rather than being data-driven machine learning models that require training sets. Therefore, this question is not applicable in the context of this submission.

    9. How the ground truth for the training set was established

    As there is no mention of a "training set" for machine learning, this question is not applicable.
