
510(k) Data Aggregation

    K Number: K213706
    Date Cleared: 2022-04-15 (142 days)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    AI-Rad Companion Brain MR is a post-processing image analysis software that assists clinicians in viewing, analyzing, and evaluating MR brain images.

    AI-Rad Companion Brain MR provides the following functionalities:

    • Automated segmentation and quantitative analysis of individual brain structures and white matter hyperintensities
    • Quantitative comparison of brain structures with normative data from a healthy population
    • Presentation of results in a report that includes all numerical values as well as visualizations of these results
    Device Description

    AI-Rad Companion Brain MR VA40 is an enhancement to the predicate, AI-Rad Companion Brain MR VA20 (K193290). Just as in the predicate, AI-Rad Companion Brain MR addresses the automatic quantification and visual assessment of the volumetric properties of various brain structures based on T1 MPRAGE datasets. In AI-Rad Companion Brain MR VA40, the quantification and visual assessment extend to white matter hyperintensities on the basis of T1 MPRAGE and T2-weighted FLAIR datasets. These datasets are acquired as part of a typical head MR acquisition. The results are archived directly in PACS, as this is the standard location for reading by radiologists. From a predefined list of 30 structures (e.g., hippocampus, left frontal grey matter), volumetric properties are calculated as absolute volumes and as volumes normalized to the total intracranial volume. The normalized values for a given patient are compared against age-matched means and standard deviations obtained from a population of healthy reference subjects.
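The normalization and normative comparison described above can be sketched as follows. This is an illustrative sketch only: the volumes and normative statistics below are invented for the example, not taken from the device's reference database.

```python
def normalized_volume(structure_ml: float, total_intracranial_ml: float) -> float:
    """Structure volume as a percentage of total intracranial volume."""
    return 100.0 * structure_ml / total_intracranial_ml

def z_score(value: float, norm_mean: float, norm_sd: float) -> float:
    """Deviation from the age-matched healthy-population mean, in SD units."""
    return (value - norm_mean) / norm_sd

# Hypothetical example: a 3.4 mL hippocampus in a 1400 mL intracranial volume,
# compared against a made-up normative mean of 0.26% with SD 0.02%.
nv = normalized_volume(3.4, 1400.0)
print(round(nv, 3))                      # 0.243 (percent of intracranial volume)
print(round(z_score(nv, 0.26, 0.02), 2)) # -0.86 (below the normative mean)
```

A patient value more than about two standard deviations below the age-matched mean would stand out against the healthy reference population.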

    The white matter hyperintensities can be visualized as a 3D overlay map, and their quantification as counts and volumes across four brain regions is included in the report.

    As an update to the previously cleared device, the following modifications have been made:

      1. Modified intended use statement
      2. Addition of a white matter hyperintensities overlay map, with counts and volumes across four brain regions
      3. Enhanced DICOM Structured Report (DICOM SR)
      4. Updated deployment structure
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    1. A table of acceptance criteria and the reported device performance:

    Validation Type | Acceptance Criteria | Reported Performance (AVG) | 95% CI
    Volumetric Segmentation Accuracy (PCC) | 95% CI of PCC includes 0.91 | 0.98 | [0.97, 0.99]
    Volumetric Segmentation Accuracy (ICC) | 95% CI of ICC includes 0.95 | 0.97 | [0.96, 0.98]
    Voxel-wise Segmentation Accuracy | Mean Dice score >= 0.58 | 0.60 | [0.53, 0.63]
    WMH Lesion-wise Segmentation Accuracy | Mean F1-score >= 0.57 | 0.60 | [0.57, 0.64]
    Reproducibility | Lower bound of 95% bootstrap CI of Dice >= 0.63 | 0.79 | [0.77, 0.81]

    All reported device performance metrics meet or exceed the specified acceptance criteria, as their 95% Confidence Intervals either include the criterion or are entirely above it (for metrics requiring a minimum value).
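For context, the voxel-wise Dice score and lesion-wise F1-score used above can be sketched as follows. This is an illustrative implementation, not the manufacturer's code: masks are modeled as sets of voxel coordinates rather than 3D arrays, and the lesion-matching rule (any overlap between a predicted and a ground-truth lesion counts as a match) is an assumption.

```python
def dice(pred: set, truth: set) -> float:
    """Voxel-wise Dice score: 2|A ∩ B| / (|A| + |B|)."""
    if not pred and not truth:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

def lesion_f1(pred_lesions: list, truth_lesions: list) -> float:
    """Lesion-wise F1: each lesion is a set of voxels; a predicted lesion
    counts as a true positive if it overlaps any ground-truth lesion
    (a simplified matching rule, assumed for illustration)."""
    tp = sum(1 for p in pred_lesions if any(p & t for t in truth_lesions))
    fp = len(pred_lesions) - tp
    fn = sum(1 for t in truth_lesions if not any(t & p for p in pred_lesions))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example with 1D "voxel" indices:
truth = {1, 2, 3, 4, 10, 11}
pred = {2, 3, 4, 5, 11, 20}
print(round(dice(pred, truth), 3))  # 0.667
```

The two metrics are complementary: Dice rewards voxel-level overlap on large lesions, while lesion-wise F1 penalizes missed or spurious lesions regardless of their size.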

    Study Details

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: 64 subjects for the main testing cohort, and 25 subjects for the reproducibility cohort.
      • Total Subjects: 89 subjects (64 + 25)
      • Total Studies: 164 studies (64 for testing cohort, and 100 for reproducibility cohort)
    • Data Provenance (Country of Origin): United States (Cleveland, Baltimore, New York, ADNI), Switzerland (Lausanne, CLEMENS), France (Montpellier).
    • Retrospective or Prospective: The text does not explicitly state whether the data was retrospective or prospective, but the description of "test data" and "training data" suggests retrospective data collection from existing datasets.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Three roles were involved in establishing ground truth for each dataset: an annotator, a reviewer, and a clinical expert. The text implies three different individuals per case, forming a disjoint group (i.e., separate people performing annotation, review, and expert correction).
    • Qualifications of Experts: The text refers to them as "in-house annotators," an "in-house reviewer," and a "referred clinical expert." Specific qualifications (e.g., years of experience, board certification) are not detailed beyond the "clinical expert" designation.

    4. Adjudication method for the test set:

    • Adjudication Method: A multi-step process: "For each dataset, three sets of white matter hyperintensity ground truth are annotated manually. Each set is annotated by a disjoint group of annotator, reviewer, and clinical expert with the expert randomly assigned per case to minimize annotation bias. For each test dataset, the three initial annotations are annotated by three different in-house annotators. Then, each initial annotation is reviewed by the in-house reviewer. Afterwards, each initial annotation is reviewed by the referred clinical expert. The clinical expert reviews and corrects the initial annotation of the WMH according to the annotation protocol."
      • This is a form of cascading/sequential review and consensus, rather than a direct voting or "X+Y" adjudication, with the clinical expert performing the final review and correction.
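The text does not state how the three expert-corrected annotation sets are ultimately merged into a single reference mask. One common approach, shown here purely as an assumption and not attributed to the submission, is voxel-wise majority voting:

```python
from collections import Counter

def majority_vote(annotations: list) -> set:
    """Merge several annotation masks (sets of voxel coordinates) by
    keeping each voxel marked by more than half of the annotators."""
    counts = Counter(v for mask in annotations for v in mask)
    threshold = len(annotations) / 2
    return {v for v, c in counts.items() if c > threshold}

# Three annotators' WMH masks (toy 1D voxel indices):
a1, a2, a3 = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
print(sorted(majority_vote([a1, a2, a3])))  # [2, 3, 4]
```

With three annotation sets, majority voting keeps voxels marked by at least two of the three, which damps individual annotator bias; other schemes (e.g., STAPLE) weight annotators by estimated reliability instead.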

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No, an MRMC comparative effectiveness study was not done. The study focuses on the standalone performance of the AI algorithm against expert-established ground truth, not on how human readers' performance improves with or without AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop performance) study was done:

    • Yes, a standalone performance study was done. The testing validated the "AI-Rad Companion Brain MR WMH" algorithm's performance by comparing its outputs directly to manually annotated ground truth. The results table explicitly presents the "Volumetric Segmentation Accuracy," "Voxel-wise Segmentation Accuracy," "WMH Lesion-wise Segmentation Accuracy," and "Reproducibility" of the device.
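The reproducibility criterion in the table (lower bound of the 95% bootstrap CI of Dice >= 0.63) rests on a percentile bootstrap of per-case Dice scores. A minimal sketch of that construction follows; the scores below are invented for illustration and are not the study's data.

```python
import random

def bootstrap_ci(scores, n_resamples=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for the
    mean of `scores`: resample with replacement, take each resample's
    mean, and read off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    means = sorted(
        sum(rng.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Invented per-case Dice scores from a hypothetical scan-rescan cohort:
scores = [0.78, 0.81, 0.76, 0.80, 0.79, 0.82, 0.77, 0.80]
lo, hi = bootstrap_ci(scores)
# The acceptance check would be: lo >= 0.63
```

Using the lower bound of the interval, rather than the mean, makes the criterion conservative: the device passes only if reproducibility holds up under sampling uncertainty.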

    7. The type of ground truth used:

    • Expert Consensus / Expert-Annotated Ground Truth. The ground truth was established through a multi-step manual annotation, review, and correction process by "in-house annotators," "in-house reviewer," and a "clinical expert."

    8. The sample size for the training set:

    • The sample size for the training set is not explicitly stated. The text only mentions: "The training data used for the training of the White matter hyperintensity algorithm is independent of the data used to test the white matter hyperintensity algorithm."

    9. How the ground truth for the training set was established:

    • The text does not provide details on how the ground truth for the training set was established. It only ensures that the training data and testing data are independent.