
Found 2 results

510(k) Data Aggregation

    K Number
    K241727
    Device Name
    BriefCase-Triage
    Date Cleared
    2024-07-12

    (28 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Reference Devices:

    K222277

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third Party, Expedited Review
    Intended Use

    BriefCase-Triage is a radiological computer-aided triage and notification software indicated for use in the analysis of CTPA images, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of Pulmonary Embolism (PE) pathologies.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected PE findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale image captioned "not for diagnostic use, for prioritization only", displayed as a preview. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Aidoc BriefCase-Triage device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    Device Name: BriefCase-Triage (for Pulmonary Embolism - PE)

    • Sensitivity: performance goal ≥ 80%; reported (default operating point) 94.39% (95% CI: 90.41%, 97.07%)
    • Specificity: performance goal ≥ 80%; reported (default operating point) 94.39% (95% CI: 91.04%, 96.67%)
    • Time-to-notification (vs. predicate): goal of a comparable time-saving benefit; reported mean 26.42 seconds (95% CI: 25.30, 27.54) vs. the predicate's 78.0 seconds (95% CI: 73.6, 82.3)

    Additional Operating Points (AOPs) Performance:

    • AOP1: sensitivity 99.53% (95% CI: 97.42%, 99.99%); specificity 86.67% (95% CI: 82.16%, 90.39%)
    • AOP2: sensitivity 97.66% (95% CI: 94.63%, 99.24%); specificity 91.93% (95% CI: 88.14%, 94.82%)
    • AOP3: sensitivity 91.59% (95% CI: 87.03%, 94.94%); specificity 96.49% (95% CI: 93.64%, 98.30%)
    • AOP4: sensitivity 85.98% (95% CI: 80.60%, 90.34%); specificity 98.25% (95% CI: 95.95%, 99.43%)
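    The four additional operating points trace the usual sensitivity/specificity trade-off: a lower score threshold flags more true positives at the cost of more false alarms. A minimal sketch of how such operating points are computed from algorithm scores (the scores, labels, and thresholds below are illustrative toy values, not the device's data):

```python
def sens_spec_at(scores, labels, threshold):
    """Sensitivity and specificity of the rule `score >= threshold` vs. binary labels."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: algorithm scores with ground-truth labels (1 = PE present).
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    1,    1,    0,    1,    0,    0]

for t in (0.25, 0.50, 0.75):  # candidate operating points
    sens, spec = sens_spec_at(scores, labels, t)
    print(f"threshold {t:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

    Each threshold yields one (sensitivity, specificity) pair; a vendor picks a default operating point plus alternatives like the AOPs above to let sites trade off alert volume against missed cases.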

    Other reported secondary endpoints for the default operating point:

    • NPV: 98.96% (95% CI: 98.21%, 99.40%)
    • PPV: 74.79% (95% CI: 64.80%, 82.70%)
    • PLR: 16.81 (95% CI: 10.43, 27.09)
    • NLR: 0.059 (95% CI: 0.034, 0.103)
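    The likelihood ratios follow directly from the primary endpoints (PLR = sensitivity / (1 − specificity), NLR = (1 − sensitivity) / specificity), so the reported values can be sanity-checked from the reported sensitivity and specificity:

```python
sensitivity = 0.9439  # reported default-operating-point sensitivity
specificity = 0.9439  # reported default-operating-point specificity

plr = sensitivity / (1 - specificity)  # positive likelihood ratio
nlr = (1 - sensitivity) / specificity  # negative likelihood ratio

print(f"PLR ≈ {plr:.2f}")   # document reports 16.81
print(f"NLR ≈ {nlr:.3f}")   # document reports 0.059
```

    The small difference from the reported PLR of 16.81 is consistent with the document's figures having been computed from unrounded estimates. PPV and NPV, by contrast, also depend on disease prevalence in the test set, so they cannot be recovered from sensitivity and specificity alone.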

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: 499 cases
    • Data Provenance: Retrospective, multicenter study from 6 US-based clinical sites. The cases were distinct in time or center from the cases used to train the algorithm.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three (3)
    • Qualifications: Senior board-certified radiologists.

    4. Adjudication Method

    • The document states "the ground truth as determined by three senior board-certified radiologists." It does not explicitly state the adjudication method (e.g., 2+1, consensus, majority vote). However, "determined by" implies that their collective judgment established the ground truth for each case.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described. The study focuses on the standalone performance of the AI algorithm against a ground truth established by experts, and a comparison of its notification time with a predicate device. It does not evaluate human reader performance with and without AI assistance.

    6. Standalone Performance Study

    • Yes, a standalone study (algorithm only without human-in-the-loop performance) was conducted. The primary endpoints (sensitivity and specificity) evaluated the device's ability to identify PE cases independently.

    7. Type of Ground Truth Used

    • Expert consensus: the ground truth was "determined by three senior board-certified radiologists."

    8. Sample Size for the Training Set

    • The document states that the algorithm was "trained during software development on images of the pathology" and that the subject device was trained on a "larger data set" compared to the predicate. However, it does not specify the exact sample size of the training set.

    9. How Ground Truth for the Training Set was Established

    • The document states that "critical findings were tagged in all CTs in the training data set," implying manual labeling (annotation) of findings on the training images. While not explicitly stated, such tagging is commonly performed by medical professionals.

    K Number
    K232751
    Device Name
    BriefCase-Triage
    Date Cleared
    2023-10-30

    (52 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Reference Devices:

    BriefCase of Pulmonary Embolism (PE) (K222277)

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, is PCCP Authorized, Third Party, Expedited Review
    Intended Use

    BriefCase-Triage is a radiological computer-aided triage and notification software indicated for use in the analysis of CTPA images in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of Central Pulmonary Embolism (Central PE).

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected Central PE findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale image captioned "not for diagnostic use, for prioritization only", displayed as a preview. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    The algorithm was trained during software development on images of the pathology. As is customary in the field of machine learning, deep learning algorithm development consisted of training on manually labeled ("tagged") images. In that process, critical findings were tagged in all CTs in the training data set.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Aidoc BriefCase-Triage device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA 510(k) summary explicitly states the primary performance goals and the achieved results.

    • Sensitivity: performance goal ≥ 80%; reported 89.2% (95% CI: 82.5%, 93.9%)
    • Specificity: performance goal ≥ 80%; reported 94.5% (95% CI: 90.3%, 97.2%)
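    The document reports two-sided 95% confidence intervals but does not name the interval method. As an illustration only, a Wilson score interval for a hypothetical 116 correct detections out of 130 positive cases (counts chosen merely to land near the reported 89.2% sensitivity; the actual per-class counts are not given in the text):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Two-sided Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_ci(116, 130)  # hypothetical counts, ~89.2% sensitivity
print(f"{116 / 130:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
```

    Exact (Clopper-Pearson) intervals, another common choice in 510(k) summaries, are slightly wider than Wilson intervals at the same counts, which may explain small differences from any reported bounds.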

    Note on Secondary Endpoints: While time-to-notification, PPV, NPV, PLR, and NLR were assessed as secondary endpoints, the document does not state explicit acceptance criteria for them, but rather presents them as comparative data or additional performance metrics.

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 328 cases from unique patients.
    • Data Provenance: Retrospective, multi-center study with data from 6 US-based clinical sites. The cases collected for the pivotal dataset were distinct in time or center from the cases used to train the algorithm.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: 3 senior board-certified radiologists.
    • Qualifications of Experts: Senior board-certified radiologists. (No specific years of experience are detailed, but "senior" implies extensive experience).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Majority voting among the three senior board-certified radiologists was used to establish the ground truth.
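    Majority voting over three independent binary reads can be sketched as follows (the case IDs and reader labels below are illustrative, not the study's data):

```python
from collections import Counter

def majority_vote(labels):
    """Return the label given by most readers (no ties with 3 binary reads)."""
    return Counter(labels).most_common(1)[0][0]

# Three radiologists' reads per case: 1 = Central PE present, 0 = absent.
case_reads = {
    "case_001": [1, 1, 0],
    "case_002": [0, 0, 0],
    "case_003": [1, 0, 1],
}
ground_truth = {case: majority_vote(reads) for case, reads in case_reads.items()}
print(ground_truth)  # {'case_001': 1, 'case_002': 0, 'case_003': 1}
```

    With an odd number of readers and a binary label, majority voting always yields a unique ground-truth label per case, which is why 3-reader panels are a common design.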

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a comparative effectiveness study involving human readers with and without AI assistance (MRMC) was not performed as the primary evaluation for this device. The study compared the algorithm's performance to ground truth, and the device is intended for workflow triage/notification, not as a diagnostic tool replacing human interpretation. The time-to-notification comparison was done between the device and a predicate device (PETN), not human readers.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes. The pivotal study directly evaluated the BriefCase-Triage software's performance (sensitivity and specificity) in identifying Central PE by comparing its output against the established ground truth. This is a standalone performance evaluation.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (majority voting by three senior board-certified radiologists).

    8. Sample Size for the Training Set

    • The document states that the algorithm was "trained during software development on images of the pathology." However, the exact sample size for the training set is not specified in the provided text.

    9. How Ground Truth for the Training Set Was Established

    • Method: "Critical findings were tagged in all CTs in the training data set." This implies manual labeling/tagging of findings by experts. The specific number or qualifications of these "tagging" experts are not detailed, but it's consistent with a machine learning development process.
