
510(k) Data Aggregation

    K Number: K211356
    Device Name: GIQuant
    Manufacturer:
    Date Cleared: 2021-11-08 (189 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    GIQuant is post-processing software that integrates into existing medical imaging workflows and is intended to derive motion-related parameters from abdominal data obtained during magnetic resonance imaging (MRI).

    GIQuant is designed to aid trained physicians in advanced image assessment, treatment consideration, and monitoring of therapeutic response. The information provided by GIQuant should not be used in isolation when making patient management decisions.

    Device Description

    GIQuant is a standalone software medical imaging post-processing application that runs on standard computer hardware. GIQuant performs numerical analysis and generates image parameter maps based on DICOM images captured via magnetic resonance imaging.

    These actions include:

    • Receipt of MR DICOM image studies from DICOM storage and communication devices.
    • Registration of images generated at different time points.
    • Comparison of registered images.
    • Computation of motility parameter maps based on dynamic abdominal MR imaging data.
    • Output of the above maps in DICOM format for export to PACS.
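
    The document does not describe the underlying computation, so the following is only an illustrative sketch of the last two steps (computing a motion-related parameter map from a registered dynamic series), not GIQuant's actual algorithm. The function name and the use of per-pixel temporal standard deviation as a crude motion surrogate are assumptions for illustration.

```python
import numpy as np

def motility_parameter_map(series: np.ndarray) -> np.ndarray:
    """Hypothetical motion-parameter map: per-pixel temporal standard
    deviation over a registered dynamic MR series of shape
    (time, rows, cols). Pixels whose intensity varies over time score
    higher; static pixels score zero."""
    if series.ndim != 3:
        raise ValueError("expected a (time, rows, cols) array")
    return series.std(axis=0)

# Toy dynamic series: one pixel's intensity alternates over time,
# the rest of the image is static.
series = np.zeros((4, 2, 2))
series[:, 0, 0] = [0.0, 1.0, 0.0, 1.0]  # "moving" pixel
pmap = motility_parameter_map(series)   # pmap[0, 0] == 0.5, elsewhere 0.0
```

    In a real pipeline the resulting map would then be encoded back into DICOM (e.g. as a secondary capture or parametric map object) before export to PACS, as the bullet list above describes.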

    GIQuant can be deployed within its own image storage and communication infrastructure or alternatively it can be "plugged in" and launched from within other FDA cleared applications.

    AI/ML Overview

    The acceptance criteria, and the study demonstrating that the device meets them, are not fully detailed in the provided text. The document refers to "Software verification and validation testing," "Technical Performance testing," and "Validation Testing," but it does not tabulate acceptance criteria or reported device performance, nor does it specify sample sizes, data provenance, the number or qualifications of experts, adjudication methods, MRMC studies, standalone performance details, or ground truth for the testing and training sets.

    However, based on the provided text, here is what can be extracted:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document broadly states that "Technical performance assessment established the performance and suitability of GIQuant's algorithm in the context of shared questions with relation to safety and effectiveness" and that "The tests results demonstrate that GIQuant functioned as intended, is acceptable for clinical use, and is safe and effective as its predicate device, without introducing new questions of safety and efficacy."

    Although a specific table of acceptance criteria and reported device performance is not provided, the text mentions general performance aspects:

    • Acceptance Criteria (Implied):
      • Functional requirements met.
      • Suitability of algorithm for safety and effectiveness.
      • Functioning as intended for clinical use.
      • As safe and effective as its predicate device.
      • Reproducible results.
    • Reported Device Performance (Methods Mentioned):
      • Utilizing summary metrics obtained through region of interest placement on synthetically manipulated 'ground truth' datasets.
      • Measuring target registration error on a fixed point of anatomy when a region of interest propagated through a dataset is corrected by an expert user.
      • Reproducibility established through test-retest image data sets.
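
    The document does not give a formula for the target registration error it mentions. As an illustration only, TRE for a single landmark is conventionally the Euclidean distance between the position predicted by registration and the expert-corrected position; the function name and coordinates below are hypothetical.

```python
import numpy as np

def target_registration_error(propagated, corrected) -> float:
    """Illustrative single-landmark TRE: Euclidean distance between the
    landmark position propagated by the registration and the position
    after expert correction (units follow the input coordinates,
    e.g. pixels or mm)."""
    return float(np.linalg.norm(np.asarray(propagated, dtype=float)
                                - np.asarray(corrected, dtype=float)))

# Hypothetical 2D landmark: registration placed it at (10, 14),
# the expert moved it to (13, 18).
tre = target_registration_error([10.0, 14.0], [13.0, 18.0])  # -> 5.0
```

    A study would typically aggregate such per-landmark distances (mean or percentile TRE) across cases and compare them against a predefined threshold; the document does not report those thresholds.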

    2. Sample Size and Data Provenance for Test Set

    This information is not explicitly provided in the given document.

    3. Number of Experts and Qualifications for Ground Truth (Test Set)

    The document mentions "corrected by an expert user" in the context of measuring target registration error. However, the number of experts and their qualifications are not specified.

    4. Adjudication Method for Test Set

    This information is not explicitly provided in the given document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it describe any effect size of human readers improving with AI vs. without AI assistance. The focus described is on the software's technical performance and validation rather than human-in-the-loop performance improvement.

    6. Standalone Performance

    Yes, a standalone performance assessment was done. The device is described as "a standalone software medical imaging post processing application," and the "Technical Performance testing," "Software verification testing," and "Validation Testing" described all assess the standalone performance of the algorithm.

    7. Type of Ground Truth Used (Test Set)

    The document mentions "synthetically manipulated 'ground truth' datasets" and "fixed point of anatomy" for measuring target registration error. This indicates a form of expert-defined or synthetically generated ground truth for technical performance. It doesn't explicitly state pathology or outcomes data as ground truth for the test set.

    8. Sample Size for Training Set

    This information is not explicitly provided in the given document.

    9. How Ground Truth for Training Set was Established

    This information is not explicitly provided in the given document. The document refers to "synthetically manipulated 'ground truth' datasets" in the context of performance testing, but not specifically for training data.
