
510(k) Data Aggregation

    K Number: K230534
    Date Cleared: 2023-11-08 (254 days)
    Regulation Number: 892.2050
    Intended Use

    BriefCase-Quantification is a radiological image management and processing system software indicated for use in the analysis of CT exams with contrast that include the abdominal aorta, in adults or transitional adolescents aged 18 and older.

    The device is intended to assist appropriately trained medical specialists by providing the user with the maximum abdominal aortic axial diameter measurement of cases that include the abdominal aorta (M-AbdAo). BriefCase-Quantification is indicated to evaluate normal and aneurysmal abdominal aortas and is not intended to evaluate postoperative aortas.

    The BriefCase-Quantification results are not intended to be used on a stand-alone basis for clinical decision-making or otherwise preclude clinical assessment of cases. These measurements are unofficial, are not final, and are subject to change after review by a radiologist. For final clinically approved measurements, please refer to the official radiology report. Clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Quantification is a radiological medical image management and processing device. The software consists of a single module based on an algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Quantification receives filtered DICOM images and processes them chronologically, running the algorithm on the relevant series to measure the maximum abdominal aortic diameter. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application) and forwarded for user review in the PACS.

    The BriefCase-Quantification produces a preview image annotated with the maximum axial diameter measurement. The diameter marking is not intended to be a final output, but serves the purpose of visualization and measurement. The original, unmarked series remains available in the PACS as well. The preview image presents an unofficial and not final measurement, and the user is instructed to review the full image and any other clinical information before making a clinical decision. The image includes a disclaimer: "Not for diagnostic use. The measurement is unofficial, not final, and must be reviewed by a radiologist."

    BriefCase-Quantification is not intended to evaluate post-operative aortas.

    AI/ML Overview

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria | Reported Device Performance
    Mean absolute error between ground truth measurement and algorithm | 1.95 mm (95% CI: 1.59 mm, 2.32 mm)
    Performance goal | Mean absolute error estimate below the prespecified performance goal (specific numerical goal not explicitly stated, but met)
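The primary endpoint is a mean absolute error (MAE) with a 95% confidence interval. A minimal sketch of how such an estimate can be produced, assuming hypothetical per-case diameter measurements and using a percentile bootstrap for the CI (the submission does not state which CI method was actually used):

```python
import random

def mae(preds, truths):
    """Mean absolute error between algorithm and ground-truth diameters (mm)."""
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(preds)

def bootstrap_ci(preds, truths, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for the MAE (one common choice,
    shown purely for illustration)."""
    rng = random.Random(seed)
    n = len(preds)
    stats = []
    for _ in range(n_boot):
        # Resample cases with replacement and recompute the MAE.
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(mae([preds[i] for i in idx], [truths[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical example data (not from the submission):
preds = [30.0, 31.5, 28.0, 33.0]
truths = [29.0, 32.0, 28.5, 31.0]
point = mae(preds, truths)
lo, hi = bootstrap_ci(preds, truths)
```

The acceptance decision then reduces to checking that the MAE point estimate falls below the prespecified performance goal.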

    2. Sample size used for the test set and the data provenance

    • Test set sample size: 160 cases
    • Data provenance: Retrospective, from 6 US-based clinical sites (both academic and community centers). The cases were distinct in time and/or center from the training data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: 3
    • Qualifications of experts: US board-certified radiologists.

    4. Adjudication method for the test set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that the ground truth was "determined by three US board-certified radiologists." This implies a consensus-based approach, but the specific decision rule is not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance versus without AI assistance was not done. This study focused on the standalone performance of the algorithm against a ground truth established by experts.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance study of the algorithm was done. The "Primary Endpoint" section details the algorithm's performance (mean absolute error) compared to the ground truth.

    7. The type of ground truth used

    Expert consensus was used as the ground truth. The ground truth measurements were "determined by three US board-certified radiologists."
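The summary states only that the ground truth was determined by three US board-certified radiologists; the rule for combining their readings is not given. Purely as an illustration, one common convention for pooling three readers' continuous measurements is the median, which is robust to a single outlying reading:

```python
import statistics

def consensus_diameter(readings_mm):
    """Combine three radiologists' diameter measurements into one
    reference value. The 510(k) summary does not state the actual
    decision rule; the median is shown here only as a plausible
    convention for continuous measurements."""
    return statistics.median(readings_mm)

# Hypothetical readings (mm) from three radiologists for one case:
reference = consensus_diameter([30.0, 35.0, 31.0])
```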

    8. The sample size for the training set

    The document does not specify the sample size for the training set. It only states that the cases collected for the pivotal dataset (test set) were "distinct in time and/or center from the cases used to train the algorithm."

    9. How the ground truth for the training set was established

    The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the cases were "distinct in time and/or center from the cases used to train the algorithm," implying that training data also had a ground truth, likely established by similar expert review, but this is not detailed.
