
510(k) Data Aggregation

    K Number
    K242837
    Device Name
    BriefCase-Triage
    Date Cleared
    2024-10-18

    (29 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Attribute Flags
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-Party Review · Expedited Review
    Intended Use

    BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of CT scans that include the cervical spine, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of linear lucencies in the cervical spine bone in patterns compatible with fractures.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device.

    The software is based on an algorithm-programmed component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithm on each series to detect suspected cases. Following AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, a suspected case receives attention earlier than it would under the standard-of-care practice alone.
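    The worklist-prioritization behavior described above can be sketched in a few lines of Python. Everything here is illustrative: the `Study` type, the `triage` function, and the stub classifier are hypothetical stand-ins for Aidoc's proprietary pipeline, not its actual interface.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Study:
    """Hypothetical stand-in for a received DICOM study."""
    study_id: str
    series: List[str] = field(default_factory=list)

def triage(worklist: List[Study], is_suspected: Callable[[Study], bool]) -> List[Study]:
    """Move suspected cases to the front, preserving arrival order within each group."""
    flagged = [s for s in worklist if is_suspected(s)]
    routine = [s for s in worklist if not is_suspected(s)]
    return flagged + routine

# Stub classifier standing in for the proprietary detection algorithm.
suspected = {"ct-002"}
worklist = [Study("ct-001"), Study("ct-002"), Study("ct-003")]
prioritized = triage(worklist, lambda s: s.study_id in suspected)
print([s.study_id for s in prioritized])  # ['ct-002', 'ct-001', 'ct-003']
```

    The reordering is intentionally stable: among suspected cases, and among routine cases, the original chronological order is preserved.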

    The algorithm was trained during software development on images of the pathology. As is customary in the field of machine learning, algorithm development consisted of training on manually labeled ("tagged") images. In that process, critical findings were tagged in all CTs in the training data set.

    AI/ML Overview

    The Aidoc BriefCase-Triage device, intended for triaging cervical spine CT scans for fractures, underwent a retrospective, blinded, multicenter study to evaluate its performance.

    Here's a breakdown of the acceptance criteria and study details:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Sensitivity (performance goal ≥ 80%) | 92.1% (95% CI: 87.5%, 95.4%) |
    | Specificity (performance goal ≥ 80%) | 92.6% (95% CI: 89.0%, 95.4%) |
    | Time-to-Notification (comparable to predicate) | 15.1 seconds (95% CI: 14.1–16.2) |
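    For context, confidence intervals like those reported are typically computed from per-case counts with a binomial interval. The counts below are hypothetical (the summary gives only the 487-case total, not the positive/negative split, and the chosen splits merely reproduce the point estimates), and the sponsor's exact CI method is not stated; this sketch uses the Wilson score interval, which lands close to, but not exactly on, the reported bounds.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

# Hypothetical counts chosen to reproduce the reported point estimates
# (190 positives + 297 negatives = 487 cases total).
tp, positives = 175, 190   # sensitivity 175/190 = 92.1%
tn, negatives = 275, 297   # specificity 275/297 = 92.6%

lo, hi = wilson_ci(tp, positives)
print(f"sensitivity {tp / positives:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
lo, hi = wilson_ci(tn, negatives)
print(f"specificity {tn / negatives:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```
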

    2. Sample size used for the test set and the data provenance

    • Sample Size: 487 cases
    • Data Provenance: Retrospective, multicenter study from 6 US-based clinical sites. The cases collected for the pivotal dataset were all distinct in time or center from the cases used to train the algorithm.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Three
    • Qualifications: Senior board-certified radiologists

    4. Adjudication method for the test set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). However, it implies that the ground truth was "as determined by three senior board-certified radiologists," suggesting a consensus-based approach among these experts.
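    If the consensus was a simple majority vote among the three readers (an assumption for illustration; the document does not state the rule), it would amount to:

```python
from collections import Counter

def majority_label(reads):
    """Ground truth by simple majority among an odd number of readers."""
    label, _votes = Counter(reads).most_common(1)[0]
    return label

# Three hypothetical radiologist reads per case (True = fracture present).
cases = {
    "ct-101": [True, True, False],
    "ct-102": [False, False, False],
}
truth = {cid: majority_label(reads) for cid, reads in cases.items()}
print(truth)  # {'ct-101': True, 'ct-102': False}
```

    A 2+1 scheme (two primary readers plus a tiebreaker) would reach the same labels here; the two designs differ only in how many reads are collected when the first two readers agree.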

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    There is no MRMC comparative effectiveness study presented in this document that evaluates human readers' improvement with AI assistance versus without AI assistance. The study focuses on the standalone performance of the AI device and its time-to-notification compared to a predicate device.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone algorithm-only performance study was done. The primary endpoints (sensitivity and specificity) and secondary endpoints (time-to-notification, PPV, NPV, PLR, NLR) directly measure the performance of the BriefCase-Triage software itself.
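    The secondary endpoints named above (PPV, NPV, PLR, NLR) follow arithmetically from sensitivity, specificity, and disease prevalence. The sketch below uses the reported point estimates with an assumed prevalence of 190/487; the summary does not state the actual positive/negative split or the reported secondary-endpoint values, so the printed numbers are illustrative only.

```python
def secondary_endpoints(sens: float, spec: float, prevalence: float):
    """Derive PPV/NPV (prevalence-dependent) and likelihood ratios (prevalence-free)."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    plr = sens / (1 - spec)        # positive likelihood ratio
    nlr = (1 - sens) / spec        # negative likelihood ratio
    return ppv, npv, plr, nlr

# Reported point estimates; prevalence is an assumption (190 of 487 cases positive).
ppv, npv, plr, nlr = secondary_endpoints(0.921, 0.926, 190 / 487)
print(f"PPV={ppv:.3f} NPV={npv:.3f} PLR={plr:.1f} NLR={nlr:.3f}")
```

    Note that PPV and NPV would shift in routine clinical use, where fracture prevalence is far lower than in an enriched retrospective test set; the likelihood ratios would not.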

    7. The type of ground truth used

    The ground truth was established by expert consensus (determined by three senior board-certified radiologists).

    8. The sample size for the training set

    The document states, "The algorithm was trained during software development on images of the pathology." However, it does not specify the sample size for the training set. It only mentions that the test pivotal study data was sequestered from algorithm development activities.

    9. How the ground truth for the training set was established

    The ground truth for the training set was established by manually labeled ("tagged") images. The document states, "critical findings were tagged in all CTs in the training data set." This implies expert annotation of the training data.
