510(k) Data Aggregation
BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, or chest/abdomen, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:
- Pneumothorax;
- Pericardial effusion;
- Large aortic aneurysm;
- Shoulder fracture or dislocation.
The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.
BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used by clinicians in conjunction with other patient information and their professional judgment to assist with triage/prioritization of medical images. Notified clinicians remain responsible for reviewing the full images per the standard of care.
BriefCase-Triage: CARE Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithm component and is intended to run on a Linux-based server in a cloud environment.
The BriefCase-Triage device receives images that match metadata criteria according to a predefined set of parameters. BriefCase-Triage then processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. The output consists of a suspected-positive flag/notification for each finding in the analyzed study, and each finding includes a Representative Key Slice. The Key Slice(s) may be presented to users as compressed, low-quality, grayscale preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only," as required for display through the Image Communication Platform (ICP).
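The metadata-gated, chronological triage flow described above can be sketched roughly as follows. This is a hypothetical illustration, not the vendor's actual implementation: the class names, the `classify` callback, and the metadata criteria are all assumptions made for the example.

```python
# Hypothetical sketch of a metadata-gated triage pipeline. All names,
# criteria, and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Series:
    modality: str          # e.g. "CT"
    body_part: str         # e.g. "CHEST"
    acquired_at: float     # acquisition timestamp (epoch seconds)
    slices: list = field(default_factory=list)

def matches_metadata(series: Series) -> bool:
    """Predefined parameter set: only CT series of the covered body parts."""
    return series.modality == "CT" and series.body_part in {
        "CHEST", "ABDOMEN", "CHEST/ABDOMEN"}

def triage(worklist: list, classify) -> list:
    """Process eligible series chronologically; record a flag and a
    representative key-slice index for each suspected finding."""
    notifications = []
    for series in sorted(worklist, key=lambda s: s.acquired_at):
        if not matches_metadata(series):
            continue  # does not match the predefined metadata criteria
        for finding, key_slice in classify(series):  # model inference stub
            notifications.append({
                "finding": finding,
                "key_slice": key_slice,
                "disclaimer": "Not for diagnostic use, for prioritization only",
            })
    return notifications
```

In this sketch, `classify` stands in for the AI model and returns `(finding_name, key_slice_index)` pairs; the real device additionally generates compressed preview images and communicates through the ICP.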
Presenting users with a prioritized worklist facilitates efficient triage by prompting them to assess the relevant original images in the PACS. A suspected case thus receives attention earlier than it would under standard-of-care practice alone.
Here's a breakdown of the acceptance criteria and the study proving device performance, based on the provided FDA 510(k) clearance letter:
1. Acceptance Criteria and Reported Device Performance
The core acceptance criteria are based on standalone performance metrics for each of the four clinical indications.
| Indication | Acceptance Criteria (Default Operating Point) | Reported Device Performance (Default Operating Point) |
|---|---|---|
| Pneumothorax | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.989 (95% CI: 0.978-0.997); Sensitivity: 94.8% (95% CI: 89.5%-97.9%); Specificity: 95.9% (95% CI: 91.3%-98.5%) |
| Pericardial effusion | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.991 (95% CI: 0.980-0.998); Sensitivity: 96.4% (95% CI: 91.7%-98.8%); Specificity: 96.5% (95% CI: 92.0%-98.8%) |
| Large aortic aneurysm | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.995 (95% CI: 0.989-0.999); Sensitivity: 97.1% (95% CI: 92.7%-99.2%); Specificity: 97.2% (95% CI: 92.9%-99.2%) |
| Shoulder fracture or dislocation | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.999 (95% CI: 0.997-1.000); Sensitivity: 97.8% (95% CI: 93.7%-99.5%); Specificity: 99.3% (95% CI: 96.2%-100.0%) |
| Time-to-notification | Comparability with predicate device in time savings to standard of care. | Subject Device Mean: 49.9 seconds (95% CI: 46.4-53.5) Predicate Device Mean: 10.7 seconds (95% CI: 10.5-10.9) Note: While the subject device's time is longer, the conclusion states comparability regarding time savings to standard of care review, implying it still offers significant benefit. |
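Point estimates and 95% confidence intervals like those in the table can be computed from standard 2x2 counts. The sketch below uses the Wilson score interval as one common choice (the clearance letter does not state which interval method was used), and the counts are illustrative, not the study's actual data.

```python
# Sensitivity/specificity with Wilson score 95% CIs.
# Counts below are illustrative only, not the study's data.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP), each with a CI."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))
```

For example, 133 true positives out of 140 positive cases gives a sensitivity of 95.0% with a Wilson 95% CI of roughly 90%-98%, the same order of precision as the intervals reported above.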
Study Proving Device Meets Acceptance Criteria
The study conducted was a retrospective, blinded, multicenter standalone performance analysis.
2. Sample size used for the test set and the data provenance:
* Sample Size: N = 280 for each of the 4 clinical indications, totaling 772 unique scans across all indications. (Since 4 × 280 = 1,120 exceeds 772, some scans evidently contributed to more than one indication's cohort.)
* Data Provenance: The cases were collected from 6 US-based clinical sites, representing diverse geographic locations and site types. The data was "distinct in time or center from the cases used to train the algorithm," and "sequestered from algorithm development activities." This indicates a high level of independence for the test set. The data is retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Number of Experts: Three (3)
* Qualifications: Senior board-certified radiologists.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
* The document states ground truth was "determined by three senior board-certified radiologists." It does not explicitly mention an adjudication method such as 2+1 or 3+1, but the use of three readers and the phrase "determined by" suggest a consensus or majority opinion among them rather than independent individual reads.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with vs. without AI assistance:
* No MRMC comparative effectiveness study was explicitly described. The study was a "standalone performance analysis" of the software itself. The comparison of "time-to-notification" with the predicate device implies a comparison of software performance characteristics related to triage, not a study of human readers with and without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
* Yes, a standalone performance study was done. The document explicitly refers to it as a "standalone performance analysis" to "evaluate the software's performance."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Expert Consensus: The ground truth was established by "three senior board-certified radiologists."
8. The sample size for the training set:
* The document does not specify the exact sample size for the training set. It only mentions that the "algorithm was trained during software development on images of the pathology."
9. How the ground truth for the training set was established:
* The ground truth for the training set was established through labeled ("tagged") images: each image in the training dataset was tagged based on the presence of the critical finding. The method of tagging (e.g., by experts or automated tools) is not detailed, but the description implies a process of assigning labels to indicate the presence or absence of the target pathologies.