
510(k) Data Aggregation

    K Number
    K243548
    Device Name
    BriefCase-Triage
    Date Cleared
    2024-12-11

    (26 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Predicate For
    N/A
    Attributes: AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | Is PCCP Authorized | Third Party | Expedited Review
    Intended Use

    BriefCase-Triage is radiological computer-aided triage and notification software indicated for use in the analysis of CT images, with or without contrast, that include the ribs, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected cases of three or more acute rib fracture (RibFx) pathologies.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected RibFx findings. Notifications include compressed preview images that are meant for informational purposes only, and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used by clinicians in conjunction with other patient information and their professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is built around an algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting users with worklist prioritization facilitates efficient triage by prompting them to assess the relevant original images in the PACS. Thus, the suspected case receives attention earlier than it would under standard-of-care practice alone.
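    The worklist-prioritization pattern described above can be sketched as a simple priority queue over the reading list: AI-flagged studies surface first, and ties fall back to chronological order. This is an illustrative sketch only, not the vendor's implementation; all names and data structures are hypothetical.

    ```python
    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class WorklistEntry:
        priority: int            # 0 = flagged by triage, 1 = routine
        arrival: int             # tie-break: earlier studies read first
        study_id: str = field(compare=False)

    def build_worklist(studies, flagged_ids):
        """Reorder a chronological worklist so AI-flagged studies surface first."""
        heap = []
        for arrival, study_id in enumerate(studies):
            priority = 0 if study_id in flagged_ids else 1
            heapq.heappush(heap, WorklistEntry(priority, arrival, study_id))
        return [heapq.heappop(heap).study_id for _ in range(len(heap))]

    # Chronological arrivals; the triage algorithm flags one suspected RibFx case.
    studies = ["CT-001", "CT-002", "CT-003", "CT-004"]
    print(build_worklist(studies, {"CT-003"}))  # -> ['CT-003', 'CT-001', 'CT-002', 'CT-004']
    ```

    The flagged study jumps the queue while routine cases keep their chronological order, which matches the document's description of the suspected case "receiving attention earlier."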

    AI/ML Overview

    Acceptance Criteria and Device Performance for BriefCase-Triage

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric                        | Acceptance Criteria (Performance Goal)    | Reported Device Performance
    AUC                           | > 0.95                                    | 97.2% (95% CI: 95.5%-99.0%)
    Sensitivity                   | > 80%                                     | 95.2% (95% CI: 89.1%-98.4%)
    Specificity                   | > 80%                                     | 95.1% (95% CI: 91.2%-97.6%)
    Time-to-notification (mean)   | Comparable with predicate (70.1 seconds)  | 41.4 seconds (95% CI: 40.4-42.5)

    Note: The acceptance criteria for sensitivity and specificity are extrapolated from the statement "As the AUC exceeded 0.95 and sensitivity and specificity both exceeded 80%, the study's primary endpoints were met."
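    For context, sensitivity, specificity, and their confidence intervals are derived from a 2x2 table of counts. The sketch below uses hypothetical counts (the document does not report the study's actual 2x2 table) and the Wilson score interval; the CI method actually used in the study is not stated.

    ```python
    import math

    def wilson_ci(successes, n, z=1.96):
        """95% Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # Hypothetical 2x2 counts -- NOT the study's actual table.
    tp, fn = 120, 6      # positive cases: sensitivity = TP / (TP + FN)
    tn, fp = 173, 9      # negative cases: specificity = TN / (TN + FP)

    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    print(f"sensitivity = {sens:.3f}, 95% CI = {wilson_ci(tp, tp + fn)}")
    print(f"specificity = {spec:.3f}, 95% CI = {wilson_ci(tn, tn + fp)}")
    ```

    Checking whether both lower confidence bounds clear the 80% goal is how a "primary endpoints were met" claim like the one quoted above is typically verified.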

    2. Sample Size and Data Provenance for Test Set

    • Sample Size for Test Set: 308 cases
    • Data Provenance: Retrospective, multicenter study from 5 US-based clinical sites. The cases collected for the pivotal dataset were distinct in time or center from the cases used to train the algorithm.
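    A dataset that is "distinct in time or center" from the training data implies a grouped split: every case from a held-out site (or time period) goes to the test set, so no site contributes to both sets. A minimal sketch with hypothetical case records:

    ```python
    def split_by_site(cases, test_sites):
        """Grouped split: all cases from a held-out site go to the test set,
        guaranteeing no site overlap between training and test data."""
        train = [c for c in cases if c["site"] not in test_sites]
        test = [c for c in cases if c["site"] in test_sites]
        return train, test

    cases = [
        {"id": "A1", "site": "site-1"},
        {"id": "B1", "site": "site-2"},
        {"id": "C1", "site": "site-3"},
        {"id": "A2", "site": "site-1"},
    ]
    train, test = split_by_site(cases, test_sites={"site-3"})
    # No site appears on both sides of the split.
    assert not {c["site"] for c in train} & {c["site"] for c in test}
    ```

    Splitting by site (or by time) rather than by individual case prevents leakage of site-specific imaging characteristics into the performance estimate.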

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Three senior board-certified radiologists.
    • Qualifications: Senior, board-certified; specific years of experience are not provided in the document.

    4. Adjudication Method for Test Set

    The adjudication method is not explicitly stated. The document mentions "ground truth, as determined by three senior board-certified radiologists," but does not detail how disagreements among these radiologists were resolved (e.g., 2+1, 3+1, or simple consensus).
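    For illustration, one common adjudication scheme, majority vote with a senior adjudicator breaking exact ties, can be sketched as follows. This is a generic example only; as noted above, the method actually used in the study is not stated.

    ```python
    from collections import Counter

    def adjudicate(reads, tiebreaker=None):
        """Majority vote over reader labels (e.g., 2-of-3 agreement); an
        optional adjudicator label resolves an exact tie."""
        counts = Counter(reads)
        label, votes = counts.most_common(1)[0]
        if votes > len(reads) / 2:
            return label
        return tiebreaker  # no strict majority: defer to a senior adjudicator

    # Three readers agree 2-to-1 that the case is positive for RibFx.
    print(adjudicate(["positive", "positive", "negative"]))  # -> positive
    ```

    With three readers and a binary label, a strict majority always exists, which is why 2+1 panels are a popular ground-truthing design.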

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done to assess the effect of AI assistance on human readers' performance. The study focused on the standalone performance of the AI algorithm and its time-to-notification compared to a predicate device.

    6. Standalone Performance Study

    Yes, a standalone performance study was done. The "Pivotal Study Summary" section explicitly details the evaluation of the software's performance (AUC, sensitivity, specificity, PPV, NPV, PLR, NLR) in identifying RibFx without human intervention, comparing it to the established ground truth.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus, determined by three senior board-certified radiologists.

    8. Sample Size for Training Set

    The sample size for the training set is not explicitly provided in the document. The document states, "The algorithm was trained during software development on images of the pathology. As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images." It also mentions, "The cases collected for the pivotal dataset were all distinct in time or center from the cases used to train the algorithm, as was used for the most recent clearance (K230020)."

    9. How Ground Truth for Training Set Was Established

    The ground truth for the training set was established by labeling ("tagging") images based on the presence of the critical finding (three or more acute Rib fractures). This process is described as "each image in the training dataset was tagged based on the presence of the critical finding." The document does not specify who performed this tagging or the exact methodology for establishing the ground truth for the training set (e.g., expert consensus, pathology).
