
510(k) Data Aggregation

    K Number: K221314
    Device Name: BriefCase
    Date Cleared: 2022-06-03 (29 days)
    Product Code
    Regulation Number: 892.2080
    Intended Use

    BriefCase is a radiological computer-aided triage and notification software indicated for use in the analysis of head CTA images in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating M1 Large Vessel Occlusion (M1-LVO) pathologies.

    BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase is a radiological computer-assisted triage and notification software system based on an algorithm-programmed component. It consists of a standard off-the-shelf operating system (Microsoft Windows Server 2012, 64-bit) and additional applications, which include PostgreSQL, a DICOM module, and the BriefCase Image Processing Application. The device comprises three modules: (1) the Aidoc Hospital Server (AHS/Orchestrator) for image acquisition; (2) the Aidoc Cloud Server (ACS) for image processing; and (3) the Aidoc Desktop Application for workflow integration.

    DICOM images are received, saved, filtered and de-identified before processing. Filtration matches metadata fields with keywords. Series are processed chronologically by running the algorithms on each series to detect suspected cases. The software then flags suspect cases by sending notifications to the desktop application, thereby facilitating triage and prioritization by the user. As the BriefCase software platform harbors several triage algorithms, the user may opt to filter out notifications by pathology, e.g., a chest radiologist may choose to filter out alerts on ICH cases, and a neuro-radiologist would opt to divert PE alerts. Where several medical centers are linked to a shared PACS, a user may read cases for a certain center but not for another, and thus may opt to filter out alerts by center. Activating the filter does not impact the order in which notifications are presented in the Aidoc desktop application.
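    The notification-filtering behavior described above can be sketched as follows. This is a minimal illustrative model, not Aidoc's actual software: the `Notification` class, its fields, and `filter_feed` are hypothetical names invented for this sketch.

    ```python
    from dataclasses import dataclass

    # Hypothetical model of a desktop-feed notification; field names
    # are illustrative, not taken from the actual BriefCase software.
    @dataclass
    class Notification:
        pathology: str    # e.g. "M1-LVO", "ICH", "PE"
        center: str       # medical center that produced the study
        received_at: float  # arrival timestamp (seconds)

    def filter_feed(feed, excluded_pathologies=(), excluded_centers=()):
        """Drop notifications the user opted out of, keeping arrival order.

        Filtering removes items but never reorders the feed, mirroring the
        statement that activating a filter does not impact the order in
        which notifications are presented.
        """
        return [
            n for n in feed
            if n.pathology not in excluded_pathologies
            and n.center not in excluded_centers
        ]

    feed = [
        Notification("ICH", "Center A", 1.0),
        Notification("M1-LVO", "Center B", 2.0),
        Notification("PE", "Center A", 3.0),
    ]
    # A neuro-radiologist opting to divert PE alerts:
    visible = filter_feed(feed, excluded_pathologies={"PE"})
    # visible contains the ICH and M1-LVO notifications, in arrival order
    ```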

    The desktop application feed displays all incoming suspect cases, one notified case per line. Hovering over a line in the feed pops up a compressed, low-quality, grayscale, unannotated preview image captioned "not for diagnostic use". This compressed preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting users with worklist prioritization facilitates earlier triage by prompting the user to assess the relevant original images in the PACS. Thus, a suspect case receives attention earlier than it would under standard-of-care practice alone.

    AI/ML Overview

    Here is a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria (Performance Goal) | Reported Device Performance (BriefCase M1-LVO)
    Sensitivity | Exceed 80% | 88.8% (95% CI: 81.9%, 93.8%)
    Specificity | Exceed 80% | 87.2% (95% CI: 82.5%, 91.1%)
    Time-to-Notification | Comparable to predicate device | 3.8 minutes (95% CI: 3.6, 4.0)
    NPV | Not explicitly stated as a goal | 99.5% (95% CI: 99.3%, 99.8%)
    PPV | Not explicitly stated as a goal | 20.1% (95% CI: 13.2%, 24.2%)
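    The gap between the high NPV and the low PPV is a standard consequence of low disease prevalence. A short sketch shows how PPV and NPV follow from sensitivity, specificity, and prevalence; the prevalence value (~3.5%) is an assumption chosen here for illustration because it approximately reproduces the reported PPV and NPV, and is not stated in the 510(k) summary.

    ```python
    # Derive predictive values from sensitivity, specificity, and an
    # ASSUMED prevalence (0.035 is illustrative, not from the summary).
    def predictive_values(sensitivity, specificity, prevalence):
        tp = sensitivity * prevalence            # true-positive rate mass
        fp = (1 - specificity) * (1 - prevalence)  # false-positive mass
        tn = specificity * (1 - prevalence)      # true-negative mass
        fn = (1 - sensitivity) * prevalence      # false-negative mass
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return ppv, npv

    ppv, npv = predictive_values(0.888, 0.872, 0.035)
    # At ~3.5% prevalence, false positives from the 95.5% of negative
    # cases swamp the true positives, so PPV lands near 20% even though
    # specificity is 87.2%, while NPV stays near 99.5%.
    ```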

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 383 cases (implied by age, gender, and scanner distribution tables).
    • Data Provenance: Cases were collected from medical centers in the US (United States). The data was retrospective. It's explicitly stated that these datasets are "distinct datasets from the ones used to train the algorithm."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document states that the ground truth was "identified as positive both by the reviewers," implying multiple reviewers. However, the exact number of experts and their specific qualifications (e.g., number of years of experience, specific board certifications) are not explicitly stated in the provided text.

    4. Adjudication Method for the Test Set

    The adjudication method used to establish the ground truth ("identified as positive both by the reviewers") is not explicitly detailed beyond this general statement. It's unclear if a 2+1, 3+1, or other consensus method was used, or if it was a simpler agreement model.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study involving human readers improving with AI vs. without AI assistance was not described in this document. The study primarily focuses on the standalone performance of the AI algorithm for triage. The time-to-notification metric compares the AI's speed to the predicate device's speed, not to human performance with or without AI.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation of the algorithm was done. The sensitivity and specificity results are for the "BriefCase software's performance in identifying head CTA images containing M1 Large Vessel Occlusion (M1-LVO)," which demonstrates the algorithm's performance independent of human interaction for diagnostic purposes. The device is intended to assist in workflow triage by flagging and communicating findings, not for primary diagnosis, and operates "in parallel to the ongoing standard of care image interpretation."
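    A standalone evaluation of this kind reduces to comparing algorithm outputs against ground-truth labels with no human in the loop. The sketch below shows the computation; the labels and predictions are invented toy data, not the study's dataset.

    ```python
    # Minimal standalone-performance computation: sensitivity and
    # specificity from per-case ground truth and binary predictions.
    def sensitivity_specificity(ground_truth, predictions):
        tp = sum(1 for g, p in zip(ground_truth, predictions) if g and p)
        tn = sum(1 for g, p in zip(ground_truth, predictions) if not g and not p)
        fn = sum(1 for g, p in zip(ground_truth, predictions) if g and not p)
        fp = sum(1 for g, p in zip(ground_truth, predictions) if not g and p)
        return tp / (tp + fn), tn / (tn + fp)

    # Toy data: 3 positive cases, 5 negative cases.
    truth = [True, True, True, False, False, False, False, False]
    preds = [True, True, False, False, False, False, True, False]
    sens, spec = sensitivity_specificity(truth, preds)
    # sens = 2/3 (one missed positive), spec = 4/5 (one false alarm)
    ```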

    7. The Type of Ground Truth Used

    The ground truth was established by "reviewers" identifying positive cases. This most closely aligns with expert consensus (clinical interpretation) of the CTA images. The document does not mention pathology or outcomes data as the definitive ground truth.

    8. The Sample Size for the Training Set

    The sample size for the training set is not explicitly stated in the provided text. It only mentions that the test set datasets are "distinct datasets from the ones used to train the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly detail how the ground truth for the training set was established. It only broadly mentions that the algorithm was "trained on medical images." Given the context, it's highly probable that the training set's ground truth was also established via similar expert review/consensus, but this is not confirmed in the provided text.
