Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K253578

    Date Cleared
    2026-02-26

    (101 days)

    Product Code
    Regulation Number
    892.2080
    Age Range
    18 - 120
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    Predicate Device (K251406)

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Pediatric | Diagnostic | PCCP Authorized | Third-party | Expedited review
    Intended Use

    BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, or chest/abdomen, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:

    • Pneumothorax;
    • Pericardial effusion;
    • Large aortic aneurysm
    • Shoulder Fracture or Dislocation

    The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.

    BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on their professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage: CARE Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.

    The BriefCase-Triage device receives images whose metadata match its predefined set of parameters. BriefCase-Triage then processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. The output consists of a suspected-positive flag/notification for each finding in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to users as compressed, low-quality, grayscale preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only" per the device requirement from the Image Communication Platform (ICP).
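The metadata-based filtering step described above can be sketched as a simple predicate over DICOM-style tag/value pairs. This is only an illustration: the tag names and accepted values below are assumptions, not the device's actual (undisclosed) predefined parameter set.

```python
# Minimal sketch of metadata-based study filtering. Tag names and
# criteria values are illustrative assumptions, not the device's
# actual parameter set.

def matches_criteria(metadata: dict, criteria: dict) -> bool:
    """Return True if every required tag is present with an accepted value."""
    return all(metadata.get(tag) in accepted for tag, accepted in criteria.items())

criteria = {
    "Modality": {"CT"},
    "BodyPartExamined": {"CHEST", "ABDOMEN", "CHEST/ABDOMEN"},
}

study = {"Modality": "CT", "BodyPartExamined": "CHEST", "PatientAge": "064Y"}
print(matches_criteria(study, criteria))  # True: both required tags match
```

In a real deployment the metadata would come from DICOM headers (e.g., via a toolkit such as pydicom), but the matching logic is the same shape.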

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.
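The worklist-prioritization effect described above can be sketched as a reordering of the reading queue: flagged studies move ahead of unflagged ones while arrival order is preserved within each group. The record structure and field names here are illustrative assumptions.

```python
# Hypothetical sketch of flag-based worklist prioritization: flagged
# studies sort to the front; arrival order is kept within each group.
# Study records and field names are illustrative assumptions.

def prioritize(worklist):
    # (not flagged) sorts False (0) before True (1), so flagged cases lead
    return sorted(worklist, key=lambda s: (not s["flagged"], s["arrival"]))

worklist = [
    {"id": "study-1", "arrival": 1, "flagged": False},
    {"id": "study-2", "arrival": 2, "flagged": True},
    {"id": "study-3", "arrival": 3, "flagged": False},
]
print([s["id"] for s in prioritize(worklist)])  # ['study-2', 'study-1', 'study-3']
```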

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving device performance, based on the provided FDA 510(k) clearance letter:


    1. Acceptance Criteria and Reported Device Performance

    The core acceptance criteria are based on standalone performance metrics for each of the four clinical indications.

    Acceptance criteria (default operating point), identical for all four indications: AUC > 0.95 (lower bound of 95% CI); Sensitivity > 80%; Specificity > 80%.

    Reported device performance (default operating point):

    * Pneumothorax: AUC 98.9 (95% CI: 97.8-99.7); Sensitivity 94.8% (95% CI: 89.5%-97.9%); Specificity 95.9% (95% CI: 91.3%-98.5%)
    * Pericardial effusion: AUC 99.1 (95% CI: 98.0-99.8); Sensitivity 96.4% (95% CI: 91.7%-98.8%); Specificity 96.5% (95% CI: 92.0%-98.8%)
    * Large aortic aneurysm: AUC 99.5 (95% CI: 98.9-99.9); Sensitivity 97.1% (95% CI: 92.7%-99.2%); Specificity 97.2% (95% CI: 92.9%-99.2%)
    * Shoulder fracture or dislocation: AUC 99.9 (95% CI: 99.7-100); Sensitivity 97.8% (95% CI: 93.7%-99.5%); Specificity 99.3% (95% CI: 96.2%-100.0%)

    Time-to-notification (acceptance criterion: comparability with the predicate device in time savings relative to standard of care): subject device mean 49.9 seconds (95% CI: 46.4-53.5); predicate device mean 10.7 seconds (95% CI: 10.5-10.9). Note: while the subject device's time is longer, the conclusion states comparability regarding time savings to standard-of-care review, implying it still offers a significant benefit.
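The confidence intervals quoted above are binomial proportion intervals; one common way to compute such intervals is the Wilson score method, sketched below. The case counts in the example are hypothetical: the letter reports only the resulting percentages and intervals, not the underlying counts.

```python
import math

# Wilson score interval for a binomial proportion, one standard way to
# compute 95% CIs like those quoted for sensitivity and specificity.
# The counts below are hypothetical illustrations.

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(133, 140)  # e.g. 133 true positives out of 140 positives
print(f"sensitivity 95% CI: {lo:.3f}-{hi:.3f}")
```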

    Study Proving Device Meets Acceptance Criteria

    The study conducted was a retrospective, blinded, multicenter standalone performance analysis.

    2. Sample size used for the test set and the data provenance:
    * Sample Size: N = 280 per indication for each of the 4 clinical indications, with 772 unique scans in total across all indications (implying that some scans contributed to more than one indication).
    * Data Provenance: The cases were collected from 6 US-based clinical sites, representing diverse geographic locations and site types. The data was "distinct in time or center from the cases used to train the algorithm," and "sequestered from algorithm development activities." This indicates a high level of independence for the test set. The data is retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
    * Number of Experts: Three (3)
    * Qualifications: Senior board-certified radiologists.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
    * The document states ground truth was "determined by three senior board-certified radiologists." It does not explicitly describe an adjudication method such as 2+1 or 3+1, but the plural "radiologists" and the phrase "determined by" suggest a consensus or majority opinion among the three, rather than independent individual reads.
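One plausible rule consistent with that reading is a majority vote over the three expert reads. The letter does not specify the actual adjudication rule, so the sketch below is an illustrative assumption; note that with three readers and binary labels, a tie is impossible.

```python
from collections import Counter

# Illustrative adjudication rule (assumption, not stated in the letter):
# majority vote among three expert reads of each case.

def majority_read(reads):
    (label, _count), = Counter(reads).most_common(1)
    return label

print(majority_read(["positive", "positive", "negative"]))  # positive
```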

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
    * No MRMC comparative effectiveness study was explicitly described. The study was a "standalone performance analysis" of the software itself. The comparison of "time-to-notification" with the predicate device implies a comparison of software performance characteristics related to triage, not a study of human readers with and without AI assistance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
    * Yes, a standalone performance study was done. The document explicitly refers to it as a "standalone performance analysis" to "evaluate the software's performance."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
    * Expert Consensus: The ground truth was established by "three senior board-certified radiologists."

    8. The sample size for the training set:
    * The document does not specify the exact sample size for the training set. It only mentions that the "algorithm was trained during software development on images of the pathology."

    9. How the ground truth for the training set was established:
    * The ground truth for the training set was established by "labeled ('tagged') images. In that process, each image in the training dataset was tagged based on the presence of the critical finding." The method or type of tagging (e.g., by experts, automated, etc.) is not detailed, but it's implied that there was a process of assigning labels/tags to the images to indicate the presence or absence of the target pathologies.
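The image-level tagging described in the letter amounts to a binary label per training image. A minimal sketch of that data structure, with placeholder image identifiers, might look like:

```python
# Minimal sketch of the image-level tagging described above: each
# training image carries a binary label for presence of the critical
# finding. Image identifiers are placeholders.

training_set = [
    ("img_0001", True),   # finding present
    ("img_0002", False),  # finding absent
    ("img_0003", True),
]

positives = sum(1 for _img, present in training_set if present)
print(f"{positives}/{len(training_set)} tagged positive")  # 2/3 tagged positive
```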


    K Number
    K253265

    Device Name
    BriefCase-Triage
    Date Cleared
    2025-11-06

    (38 days)

    Product Code
    Regulation Number
    892.2080
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    K251406

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Pediatric | Diagnostic | PCCP Authorized | Third-party | Expedited review
    Intended Use

    BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of abdominal CT images in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of suspected positive findings of Intra-abdominal free gas (IFG) pathologies.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with the detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on their professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device.

    The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only." This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    The algorithm was trained during software development on images of the pathology. As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for BriefCase-Triage:

    Acceptance Criteria and Reported Device Performance

    Primary endpoints (performance goal: 80% for each):

    * Sensitivity: 94.2% (95% CI: 89.6%, 97.2%)
    * Specificity: 94.6% (95% CI: 90.7%, 97.2%)

    Secondary endpoint (time-to-notification; acceptance criterion: comparability with the predicate in time savings to standard-of-care review):

    * Subject device: 10.4 seconds (95% CI: 10.1-10.8)
    * Predicate device: 264.4 seconds (95% CI: 222-300)

    Note: The document explicitly states that the primary endpoints were "sensitivity and specificity with an 80% performance goal." The reported performance for both sensitivity and specificity (94.2% and 94.6% respectively) significantly exceeds this 80% goal. The time-to-notification for the subject device is significantly faster than the predicate, demonstrating improved "time savings to the standard of care review."
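The pass/fail logic described in that note can be checked mechanically: an endpoint clears the 80% goal when even the lower bound of its 95% CI exceeds the goal. The figures below are taken directly from the reported results for K253265.

```python
# Checking the 80% performance goal against the reported point
# estimates and 95% CI lower bounds for K253265 (figures from the
# results above).

goal = 0.80
endpoints = {
    "sensitivity": {"estimate": 0.942, "ci_low": 0.896},
    "specificity": {"estimate": 0.946, "ci_low": 0.907},
}

meets = {name: ep["ci_low"] > goal for name, ep in endpoints.items()}
print(meets)  # {'sensitivity': True, 'specificity': True}
```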

    Study Information

    1. Sample Size Used for the Test Set and Data Provenance:
    * Sample Size: 394 cases
    * Data Provenance:
    * Country of Origin: US (6 clinical sites)
    * Retrospective/Prospective: Retrospective
    * Additional Detail: Cases were distinct in time or center from the training data.

    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
    * Number of Experts: 3
    * Qualifications: Senior board-certified radiologists

    3. Adjudication Method for the Test Set:
    * The document states "as determined by three senior board-certified radiologists." While it doesn't explicitly state "2+1" or "3+1," this implies a consensus-based approach among the three experts. Without further detail, it's reasonable to infer a consensus was reached, or a specific rule for disagreement (e.g., majority) was applied.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
    * No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted to assess how much human readers improve with AI vs. without AI assistance. The study focuses purely on the standalone performance of the AI algorithm.

    5. Standalone Performance Study (Algorithm Only):
    * Yes, a standalone study was performed. The "Pivotal Study Summary" describes evaluating "the software's performance to the ground truth," indicating a standalone performance assessment of the algorithm without human-in-the-loop performance measurement.

    6. Type of Ground Truth Used:
    * Expert consensus (as determined by three senior board-certified radiologists).

    7. Sample Size for the Training Set:
    * The document states, "The algorithm was trained during software development on images of the pathology." However, it does not provide a specific sample size for the training set.

    8. How the Ground Truth for the Training Set Was Established:
    * "each image in the training dataset was tagged based on the presence of the critical finding." This indicates that human experts (or a similar method to the test set ground truth) labeled the images in the training set for the presence of the pathology. However, the specific number and qualifications of these experts are not explicitly stated for the training set.

