
510(k) Data Aggregation

    K Number: K231929
    Device Name: iQ-solutions
    Date Cleared: 2023-12-18 (171 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    iQ-solutions™ is a software medical device intended for automatic annotation, visualization, and quantification of segmentable brain structures from a set of brain MRI scans. It is intended to accelerate and improve the quantification of brain structures that would otherwise require a manual process of identifying, annotating, and measuring regions of interest in brain MRI scans.

    iQ-solutions™ consists of both cross-sectional and longitudinal analysis pipelines.

    • The cross-sectional pipeline is intended to conduct structure segmentation and volume analysis on brain MRI scans at a single time point.
    • The longitudinal pipeline is intended to conduct volume change analysis on a single patient's MRI scans acquired on the same scanner, with a consistent image acquisition protocol and consistent contrast, at two different time points.

    The results of the cross-sectional pipeline cannot be compared with the results of the longitudinal pipeline.
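    To make the two outputs concrete, here is a minimal Python sketch (not from the submission; the function names, voxel spacing, and masks are hypothetical) of the kind of quantity each pipeline reports: an absolute structure volume from a single binary segmentation mask versus a percentage volume change between two time points.

```python
import numpy as np

def structure_volume_ml(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Cross-sectional-style output: volume of one segmented structure, in millilitres."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

def percent_volume_change(volume_t1_ml: float, volume_t2_ml: float) -> float:
    """Longitudinal-style output: percentage change between two time points."""
    return 100.0 * (volume_t2_ml - volume_t1_ml) / volume_t1_ml

# Illustrative call with a synthetic 1 mm isotropic mask (all values made up).
mask = np.zeros((180, 220, 180), dtype=bool)
mask[60:120, 80:140, 60:120] = True
v1 = structure_volume_ml(mask, (1.0, 1.0, 1.0))
v2 = v1 * 0.99  # pretend 1% volume loss at the second time point
print(f"volume: {v1:.1f} mL, change: {percent_volume_change(v1, v2):+.2f}%")
```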

    Device Description

    iQ-solutions™ is a standalone software device that processes brain MRI scans to outline and quantify the brain structures described in the intended use. The iQ-solutions™ software interacts with the user's picture archiving and communication system (PACS) to receive scans and returns the results to the same destination or to another user-specified radiology software system or database.

    The iQ-solutions™ analysis module consists of pre-trained convolutional neural networks (CNNs) that have been verified and validated to segment the specified brain structures from the incoming head MRI scans and to create the corresponding binary masks. Each convolutional neural network is coupled with a pre-processing component that transforms the MRI scans to a standard position and contrast, and a post-processing component that prepares the output annotations, the statistical calculations, and the input for the next analysis component in either analysis pipeline.
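    The submission does not describe the software architecture beyond this component structure, but the following sketch (every class, function, and variable name is hypothetical) illustrates the implied pattern of a pre-processing step, a pre-trained CNN, and a post-processing step chained per analysis component.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class AnalysisComponent:
    """Hypothetical wrapper: one segmentation step with its own pre- and post-processing."""
    preprocess: Callable[[np.ndarray], np.ndarray]   # e.g. reorient, resample, normalise contrast
    segment: Callable[[np.ndarray], np.ndarray]      # pre-trained CNN returning a probability map
    postprocess: Callable[[np.ndarray], np.ndarray]  # e.g. threshold to a binary mask

    def run(self, image: np.ndarray) -> np.ndarray:
        return self.postprocess(self.segment(self.preprocess(image)))

def run_pipeline(scan: np.ndarray, components: List[AnalysisComponent]) -> List[np.ndarray]:
    """Apply components in order; each component's mask also conditions the next input,
    e.g. a brain-extraction mask restricting later tissue or lesion segmentation."""
    outputs, current = [], scan
    for component in components:
        mask = component.run(current)
        outputs.append(mask)
        current = current * mask  # hypothetical hand-off between pipeline stages
    return outputs
```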

    iQ-solutions™ operates together with medical image routing software or an integration platform that can connect with iQ-solutions™, and a PACS system that can fulfill the requirements of iQ-solutions™.

    MRI scans are sent to iQ-solutions™ by means of transmission functions within the user's PACS system. Upon completion of processing, iQ-solutions™ returns results to the user's PACS or other user-specified radiology software system or database.
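    The submission does not specify the transport details beyond PACS transmission. As one common way such a hand-off is implemented in practice, the sketch below pushes a DICOM object to a configured destination over a standard DICOM C-STORE using pynetdicom; the host, port, AE title, and file name are placeholders, not details from the source.

```python
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

# Hypothetical sending application entity; the real routing software/PACS
# configuration is not described in the submission.
ae = AE(ae_title="ROUTER_SCU")
ae.requested_contexts = StoragePresentationContexts

ds = dcmread("t1_slice_0001.dcm")                     # placeholder DICOM file

assoc = ae.associate("analysis.example.local", 104)   # placeholder destination
if assoc.is_established:
    status = assoc.send_c_store(ds)                   # returns a status Dataset (0x0000 = success)
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association rejected or aborted")
```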

    The results from iQ-solutions™ are not intended to be used as a diagnostic tool, and the interpretation of measurements contained in the iQ-solutions™ report remains entirely the responsibility of the qualified radiologists or physicians managing the individual patient's care.

    AI/ML Overview

    Below is a summary of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance Study for iQ-solutions™

    1. Acceptance Criteria and Reported Device Performance

    Analysis Module | Acceptance Metric & Criteria | Reported Performance | Pass/Fail
    Sequence Classification | Accuracy > 0.9 | Accuracy = 100% | Pass
    Brain Extraction | DICE > 0.9 | DICE = 0.982 | Pass
    Scaling Factor Estimation | Metrics better than previously accepted results | STD = 0.0096 | Pass
    White matter hyperintensity Segmentation | DICE > 0.6 | DICE = 0.789 | Pass
    Contrast-Enhancing Lesion Segmentation | DICE > 0.6 | DICE = 0.790 | Pass
    Lesion Inpainting | Metrics better than previously accepted results | PSNR = 30.79 dB | Pass
    Brain Tissue Segmentation | DICE > 0.9 | DICE = 0.972 | Pass
    Brain Volume Change Estimation | R² > 0.7 | R² = 0.869 | Pass
    WMH Lesion Activity | R² > 0.7 | R² = 0.833 | Pass
    Cortical Lobar and Subcortical Structure Segmentation | R² > 0.7 | R² = 0.833 | Pass
    Substructure Volume Change Estimation | Metrics better than previously accepted results | STD = 0.0102 | Pass
    Manual Acceptable Rate (for all models) | Higher than 80% | Ranges from 92.1% to 100% | Pass
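    The metrics in the table are standard ones. For reference only (this is not the sponsor's test code, and the arrays below are placeholders), the sketch shows how Dice, R², and PSNR are conventionally computed when comparing an algorithm's output with ground truth.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def r_squared(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Coefficient of determination between predicted and reference measurements."""
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def psnr(image: np.ndarray, reference: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB, e.g. for an inpainted region vs. the original."""
    mse = np.mean((image.astype(float) - reference.astype(float)) ** 2)
    return 20.0 * np.log10(data_range) - 10.0 * np.log10(mse)

# Toy example: a segmentation that overlaps its ground truth except for one voxel.
pred = np.array([[1, 1], [1, 0]])
truth = np.array([[1, 1], [1, 1]])
print(f"Dice = {dice(pred, truth):.3f}")  # 2*3 / (3+4) ≈ 0.857
```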

    2. Sample Size for Test Set and Data Provenance

    The test set included various numbers of cases and subjects depending on the specific analysis module being evaluated. Here's a breakdown:

    • Sequence Classification: 1207 cases
    • Brain Extraction: 458 cases
    • Scaling Factor Estimation: 527 cases
    • White matter hyperintensity Segmentation: 464 cases
    • Contrast-Enhancing Lesion Segmentation: 737 cases
    • Lesion Inpainting: 80 cases
    • Brain Tissue Segmentation: 500 cases
    • Brain Volume Change Estimation: 166 cases
    • WMH Lesion Activity: 64 cases
    • Cortical Lobar and Subcortical Structure Segmentation: 1504 cases
    • Test-ReTest Dataset: 120 cases
    • Comprehensive Test Set (for all analysis components): 81 patients (81 cases)

    Data Provenance:

    The data used for both development and testing was a mix of:

    • Private Data (in-house datasets): Collected from Sydney Neuroimaging Analysis Centre Pty Ltd (SNAC) over ten years (commencing July 2012). This included over 1500 patients with Multiple Sclerosis, acquired from clinical scanners (1.5T and 3T) from Siemens, GE, and Philips.
    • Publicly Available MRI Datasets: From healthy individuals, including 2629 scans obtained from 2159 subjects.

    The text does not explicitly state the specific countries of origin for the public datasets, but the in-house data is from SNAC in Australia. The data is retrospective, as it was collected over a prior ten-year period.

    3. Number of Experts used to Establish Ground Truth and Qualifications

    The ground truth for the test set (and training set) was established through a combination of widely used software (FSL, FreeSurfer) and manual review:

    • Initial Generation: Annotations for skull-stripping, brain tissues, cortical gray matter lobes, thalamus, hippocampus, and brain volume changes were initially generated using FSL and FreeSurfer.
    • Manual Review/Annotation:
      • These initial annotations were manually reviewed and accepted by trained neuroimaging analysts.
      • Annotations for white matter hyperintensities (pre-contrast, contrast-enhancing, and lesion changes) were manually annotated by trained neuroimaging analysts.
    • Further Review: All annotations were further reviewed by senior neuroimaging analysts according to SNAC SOP.
    • Expert Radiologists: The test dataset was "further verified by trained neuroimaging analysts and expert radiologists."

    The specific number of neuroimaging analysts or expert radiologists is not explicitly stated, nor are their exact years of experience, beyond being "trained" and "senior" neuroimaging analysts and "expert" radiologists.
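    The submission names FSL and FreeSurfer as the source of the initial annotations but gives no commands. Purely as an illustration of that first automated step (the paths, output prefixes, and subject IDs below are placeholders, and the actual SNAC workflow may differ), a thin Python wrapper around the two tools' standard entry points might look like this, with the outputs then going to the analysts' review described above.

```python
import subprocess
from pathlib import Path

def initial_brain_mask(t1_path: Path, out_prefix: Path) -> Path:
    """First-pass skull-stripping with FSL's BET; analysts review/correct the mask afterwards."""
    # `bet <input> <output> -m` also writes a binary brain mask named <output>_mask.
    subprocess.run(["bet", str(t1_path), str(out_prefix), "-m"], check=True)
    return out_prefix.with_name(out_prefix.name + "_mask.nii.gz")

def initial_freesurfer_labels(t1_path: Path, subject_id: str) -> None:
    """Full FreeSurfer reconstruction to obtain initial cortical and subcortical labels."""
    subprocess.run(["recon-all", "-s", subject_id, "-i", str(t1_path), "-all"], check=True)

if __name__ == "__main__":
    mask = initial_brain_mask(Path("sub-001_T1w.nii.gz"), Path("sub-001_brain"))
    print(f"Initial mask for review: {mask}")
```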

    4. Adjudication Method for the Test Set

    The adjudication method involved a multi-stage process:

    1. Initial generation by FSL/FreeSurfer (for some structures).
    2. Manual review and acceptance by trained neuroimaging analysts.
    3. Manual annotation by trained neuroimaging analysts (for WMH).
    4. Further review by senior neuroimaging analysts.
    5. Verification by expert radiologists.

    This implies a form of consensus-based adjudication, likely involving multiple individuals, but a specific "2+1" or "3+1" methodology is not detailed. The "manual acceptable rate" criterion suggests a qualitative evaluation by these human experts.
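    The "manual acceptable rate" criterion reduces this qualitative review to a simple proportion. A trivial sketch of that check follows; the counts are made up and not taken from the submission.

```python
def manual_acceptable_rate(accepted: int, reviewed: int) -> float:
    """Fraction of algorithm outputs that human reviewers accepted without correction."""
    return accepted / reviewed

# Hypothetical counts checked against the >80% criterion from the table above.
rate = manual_acceptable_rate(accepted=93, reviewed=100)
print(f"Acceptable rate: {rate:.1%} -> {'Pass' if rate > 0.80 else 'Fail'}")
```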

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done. The document states: "No clinical tests were conducted to test the performance and functionality of the subject device as verification and validation of the subject device have been performed through non-clinical bench testing." The acceptance criteria focus on the algorithm's performance against established ground truth, not on its assistance to human readers.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone (algorithm-only) performance study was done. The entire "Performance Data" section (Sections 5.8 to 5.14) details the results of software "bench" testing, and the metrics presented in Table 5 ("Summary of Performances of all the Analysis Modules") are based purely on the algorithm's output compared to the ground truth.

    7. Type of Ground Truth Used

    The ground truth used was a combination of:

    • Expert Consensus / Expert-Reviewed Software Output: For many structures (skull-stripping, brain tissues, cortical gray matter lobes, thalamus, hippocampus, brain volume changes), the ground truth was "originally generated using widely used software items such as FSL and FreeSurfer, and the annotations were manually reviewed and accepted by trained neuroimaging analysts."
    • Manual Annotation by Trained Experts: For white matter hyperintensities (including pre-contrast ones, contrast-enhancing ones, and lesion changes), the annotations were "manually annotated by trained neuroimaging analysts."
    • Senior Expert Review: All annotations (across both categories) underwent further review by senior neuroimaging analysts.

    8. Sample Size for the Training Set

    The sample sizes for the training set (including validation data, which is part of the development phase before the final test set) varied by analysis module:

    • Sequence Classification: 4862 cases
    • Brain Extraction: 1843 cases
    • Scaling Factor Estimation: 2986 cases
    • White matter hyperintensity Segmentation: 1858 cases
    • Contrast-Enhancing Lesion Segmentation: 79 cases
    • Lesion Inpainting: 449 cases
    • Brain Tissue Segmentation: 4179 cases
    • Brain Volume Change Estimation: 1487 cases
    • WMH Lesion Activity: 116 cases
    • Cortical Lobar and Subcortical Structure Segmentation: Not applicable (likely handled by other modules or a different training approach).
    • Test-ReTest Dataset: Not applicable.
    • Comprehensive Test Set: Not applicable.

    The total number of patients/scans used in development (training and validation) drew from a pool of:

    • 2159 subjects / 2629 scans (Public Data)
    • 1570 subjects / 5664 scans (Private Data)

    The total number of subjects for training and validation combined across all modules is not explicitly summed, but overall it drew from "more than 1500 patients" of private data and 2159 subjects of public data.

    9. How the Ground Truth for the Training Set was Established

    The ground truth for the training set was established in the exact same manner as described for the test set (see point 7). It involved:

    • Initial generation by widely used software (FSL, FreeSurfer), followed by manual review and acceptance by trained neuroimaging analysts.
    • Direct manual annotation by trained neuroimaging analysts for certain structures like white matter hyperintensities.
    • Further review and acceptance by senior neuroimaging analysts according to SNAC SOP.

    This robust process of combining automated tools with multiple levels of expert human review ensured the quality of the ground truth used for both training and evaluating the algorithms.
