
510(k) Data Aggregation

    K Number: K251195
    Device Name: BriefCase-Triage
    Date Cleared: 2026-01-27 (285 days)
    Product Code:
    Regulation Number: 892.2080
    Age Range: 18 - 120
    Predicate For: N/A
    Reference Devices: K250248, K230074, K213319
    Tags: AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Pediatric · Diagnostic · PCCP Authorized · Third-party · Expedited review
    Intended Use

    BriefCase-Triage is a radiological computer-aided triage and notification software indicated for use in the analysis of contrast-enhanced CT images that include the brain, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of Brain Aneurysm (BA) findings that are 3.0 mm or larger.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and flag suspect cases in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for suspect cases. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device.

    The software is based on a programmed algorithm component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
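    The processing flow described above — score each series in chronological order, notify on suspected cases, never modify the original images — can be sketched in outline. This is an illustrative sketch only, not the vendor's implementation; the names `triage_series`, `score_fn`, and the dictionary fields are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Notification:
    """A triage notification; the caption mirrors the one the device displays."""
    series_uid: str
    score: float
    preview_caption: str = "not for diagnostic use, for prioritization only"

def triage_series(series_list: Iterable[dict],
                  score_fn: Callable[[dict], float],
                  threshold: float = 0.5) -> List[Notification]:
    """Score each DICOM series in chronological order and queue a
    notification for suspected cases. The source series are read-only:
    nothing here alters the original images."""
    notifications = []
    for series in sorted(series_list, key=lambda s: s["acquired_at"]):
        score = score_fn(series)      # AI model inference on this series
        if score >= threshold:        # suspected positive -> notify
            notifications.append(Notification(series["uid"], score))
    return notifications
```

    In a real deployment the scoring function would be the AI model and the notifications would be pushed to the desktop review application; here it is just a callable so the control flow is self-contained.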

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the BriefCase-Triage device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    Metric                            | Acceptance Criteria (Performance Goal) | Reported Device Performance
    Primary Endpoints
    Sensitivity                       | 80%                                    | 87.8% (95% CI: 83.1%-91.6%)
    Specificity                       | 80%                                    | 91.6% (95% CI: 87.9%-94.5%)
    Secondary Endpoints
    Time-to-Notification (mean)       | Comparable to predicate device         | 44.8 seconds (95% CI: 41.4-48.2)
    Negative Predictive Value (NPV)   | N/A                                    | 98.9% (95% CI: 98.4%-99.2%)
    Positive Predictive Value (PPV)   | N/A                                    | 47.6% (95% CI: 38.4%-57.1%)
    Positive Likelihood Ratio (PLR)   | N/A                                    | 10.5 (95% CI: 7.2-15.3)
    Negative Likelihood Ratio (NLR)   | N/A                                    | 0.13 (95% CI: 0.1-0.19)

    Note on Additional Operating Points (AOPs): The device also met performance goals (80% sensitivity and specificity) for three additional operating points (AOP1, AOP2, AOP3) with slightly varying sensitivity/specificity trade-offs (e.g., AOP3: Sensitivity 86.2%, Specificity 93.6%).
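    As a sanity check, the reported likelihood ratios follow directly from the reported sensitivity and specificity, while PPV and NPV additionally depend on prevalence. A minimal sketch of those relationships (the function names are my own, not from the submission):

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple:
    """PLR = sens / (1 - spec); NLR = (1 - sens) / spec."""
    plr = sensitivity / (1.0 - specificity)
    nlr = (1.0 - sensitivity) / specificity
    return plr, nlr

def predictive_values(sensitivity: float, specificity: float,
                      prevalence: float) -> tuple:
    """PPV and NPV at a given prevalence, via Bayes' theorem."""
    tp = sensitivity * prevalence                  # true positive rate mass
    fp = (1.0 - specificity) * (1.0 - prevalence)  # false positive mass
    tn = specificity * (1.0 - prevalence)          # true negative mass
    fn = (1.0 - sensitivity) * prevalence          # false negative mass
    return tp / (tp + fp), tn / (tn + fn)

plr, nlr = likelihood_ratios(0.878, 0.916)
# plr ≈ 10.45 and nlr ≈ 0.13, consistent with the reported 10.5 and 0.13
```

    Because PPV and NPV vary with prevalence, the reported 47.6% PPV is specific to the prevalence at which it was computed and will differ in populations with a different rate of brain aneurysms.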

    Study Details

    1. Sample size used for the test set and the data provenance:

    • Sample Size: 544 cases
    • Data Provenance: Retrospective, blinded, multicenter study from 6 US-based clinical sites. The cases were distinct in time or center from those used for algorithm training.
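    The summary reports 95% confidence intervals but not the underlying counts, so the exact interval method cannot be confirmed; intervals like these are typically binomial proportion intervals. A sketch of the Wilson score interval, using hypothetical counts (478/544 is illustrative only, not the study's actual breakdown):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical counts for illustration of the interval width at n = 544
lo, hi = wilson_ci(478, 544)
```

    The interval narrows as the number of cases grows, which is why the per-metric CIs in the table have widths on the order of a few percentage points.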

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Three (3) senior board-certified radiologists.
    • Qualifications: "Senior board-certified radiologists." (Specific number of years of experience not detailed in the provided text).

    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The text states the ground truth was "determined by three senior board-certified radiologists" but does not explicitly describe an adjudication method such as "2+1" or "3+1." This suggests either full consensus among the three radiologists or majority rule, but the mechanism for resolving discrepancies (if any) is not specified.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improved with AI assistance versus without:

    • No, an MRMC comparative effectiveness study was NOT done. The study's primary objective was to evaluate the standalone performance of the BriefCase-Triage software. The secondary endpoint compared the device's time-to-notification to that of the predicate device, but not its impact on human reader performance.

    5. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

    • Yes, a standalone performance study was done. The primary endpoints (sensitivity and specificity) measure the algorithm's performance in identifying Brain Aneurysm (BA) findings.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Expert Consensus: The ground truth was "determined by three senior board-certified radiologists."

    7. The sample size for the training set:

    • Not explicitly stated. The document mentions the algorithm was "trained during software development on images of the pathology" and that "critical findings were tagged in all CTs in the training data set." However, the specific sample size for this training data is not provided.

    8. How the ground truth for the training set was established:

    • Manually labeled ("tagged") images: The text states, "As is customary in the field of machine learning, deep learning algorithm development consisted of training on manually labeled ('tagged') images. In that process, critical findings were tagged in all CTs in the training data set." It does not specify who performed the tagging or their qualifications, nor the method of consensus if multiple taggers were involved.