510(k) Data Aggregation (112 days)
BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT Body is a radiological computer aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, and/or pelvis, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:
- Diverticulitis;
- Abdominal-pelvic abscess;
- Appendicitis;
- Intestinal ischemia and/or pneumatosis;
- Obstructive renal stone;
- Small bowel obstruction;
- Large bowel obstruction;
- Spleen injury;
- Liver injury;
- Kidney injury;
- Pelvic fracture.
The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.
BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used by clinicians in conjunction with other patient information and their professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for reviewing the full images per the standard of care.
BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithmic component and is intended to run on a Linux-based server in a cloud environment.
BriefCase-Triage receives images whose metadata match the predefined set of parameters for BriefCase-Triage: CARE Multi-Triage CT Body. It then processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. The output consists of a suspected-positive flag/notification for each finding detected in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to users as compressed, low-quality, grayscale preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only," in accordance with the device requirement for the Image Communication Platform (ICP).
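The intake-to-notification flow described above can be sketched as follows. All names, data structures, and metadata criteria here are hypothetical illustrations; Aidoc's actual implementation is not public.

```python
from dataclasses import dataclass

# All names here are hypothetical; the real BriefCase-Triage internals are not public.

@dataclass
class Series:
    modality: str
    body_part: str
    slices: list          # image slices, ordered chronologically

@dataclass
class Notification:
    finding: str
    key_slice_index: int  # representative key slice used for the preview image
    disclaimer: str = "Not for diagnostic use, for prioritization only"

# Predefined metadata criteria an incoming study must match (illustrative values).
SUPPORTED = {"modality": "CT", "body_parts": {"CHEST", "ABDOMEN", "PELVIS"}}

def matches_metadata(series: Series) -> bool:
    """Gate incoming series on DICOM-style metadata before any analysis."""
    return (series.modality == SUPPORTED["modality"]
            and series.body_part in SUPPORTED["body_parts"])

def triage(series: Series, detect) -> list:
    """Analyze an eligible series and emit one notification, with a
    representative key slice, per suspected positive finding.

    `detect` stands in for the AI model: it takes the slice list and
    yields (finding_name, key_slice_index) pairs.
    """
    if not matches_metadata(series):
        return []  # series falls outside the device's predefined parameters
    return [Notification(finding=name, key_slice_index=idx)
            for name, idx in detect(series.slices)]
```

A toy detector that flags appendicitis on slice 12, for example, would be invoked as `triage(Series("CT", "ABDOMEN", list(range(40))), lambda s: [("appendicitis", 12)])` and yield a single notification carrying the disclaimer text.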
Acceptance Criteria and Study Details for BriefCase-Triage: CARE Multi-triage CT Body
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the BriefCase-Triage: CARE Multi-triage CT Body device were primarily defined by performance goals for Area Under the Curve (AUC), Sensitivity (Se), and Specificity (Sp). The study demonstrated that the device met or exceeded these criteria for all 11 indications.
| Indication | Performance Goal (Acceptance Criteria) | Reported Device Performance (Mean) | 95% Confidence Interval (Reported) |
|---|---|---|---|
| Primary Endpoints | |||
| Finding-level AUC | > 0.95 | 0.974 - 1.00 | 0.952 - 1.00 |
| Sensitivity (Se) | > 80% | 94.0% - 99.3% | 88.9% - 100% |
| Specificity (Sp) | > 80% | 95.7% - 99.3% | 91% - 100% |
| Secondary Endpoints (Comparable to Predicate) | |||
| BriefCase time-to-notification | Comparable to predicate | 45 seconds | 43.4 - 46.5 seconds |
Note: The reported device performance for AUC, Sensitivity, and Specificity are ranges covering the minimum and maximum values observed across the 11 indications in the pivotal study. Detailed values for each indication are provided in the source text.
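As a sketch of how per-indication sensitivity and specificity might be checked against the >80% performance goals from confusion-matrix counts: the filing does not state whether the point estimate or a confidence bound was compared to the goal, so the use of the Wilson score interval's lower bound below is an assumption, shown as one common convention rather than Aidoc's actual method.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a binomial proportion (e.g., Se or Sp)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

def meets_goal(tp: int, fn: int, tn: int, fp: int, goal: float = 0.80) -> bool:
    """Conservative check: require the lower 95% CI bound of both
    sensitivity (tp / (tp + fn)) and specificity (tn / (tn + fp))
    to exceed the performance goal.  (Assumed convention, not the
    filing's stated acceptance rule.)"""
    se_lo, _ = wilson_ci(tp, tp + fn)
    sp_lo, _ = wilson_ci(tn, tn + fp)
    return se_lo > goal and sp_lo > goal
```

For example, with 95/100 true positives and 96/100 true negatives the lower CI bounds (roughly 0.89 and 0.90) clear the 80% goal, while 82/100 on each side does not, since its lower Se bound falls near 0.73.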
2. Sample Size and Data Provenance for the Test Set
- Sample Size: N = 280 for each of the 11 clinical indications, resulting in 1769 unique scans included across all device indications (a single scan could be evaluated for more than one indication, so the unique-scan count is lower than 280 × 11 = 3080).
- Data Provenance: The data were retrospective and collected from 6 US-based clinical sites; the cases were distinct in time or center from those used to train the algorithm.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three senior board-certified radiologists.
- Qualifications: The document specifically states "senior board-certified radiologists." No further details on years of experience were provided.
4. Adjudication Method for the Test Set
The ground truth was established by the "consensus" of the three senior board-certified radiologists ("as determined by three senior board-certified radiologists"). This implies consensus-based adjudication, but the specific mechanics (e.g., a 2+1 majority vote, or requiring all three readers to agree) are not explicitly detailed.
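Since the filing does not specify the consensus mechanics, the following is only an illustration of the simplest plausible scheme, a 2-of-3 majority vote over independent expert reads; the actual adjudication rule may have differed.

```python
def adjudicate(reads: list) -> bool:
    """Hypothetical 2-of-3 majority vote over three independent expert
    reads (True = finding present).  The 510(k) summary does not state
    the actual consensus rule used to set ground truth."""
    if len(reads) != 3:
        raise ValueError("expected exactly three expert reads")
    return sum(reads) >= 2
```

Under this rule, a case is labeled positive when at least two of the three radiologists call the finding present.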
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC study comparing human readers with AI assistance versus without AI assistance was reported in this document. The study described is a standalone performance analysis of the algorithm.
6. Standalone Performance Study
Yes, a standalone performance study was done. The document states: "Aidoc conducted a retrospective, blinded, multicenter study with the Briefcase-Triage software to evaluate the standalone performance analysis individually for each of the 11 clinical indications supported by BriefCase-Triage: CARE Multi-triage CT Body device."
7. Type of Ground Truth Used
The ground truth was established by expert consensus of three senior board-certified radiologists.
8. Sample Size for the Training Set
The sample size for the training set is not explicitly provided in the given text. It is only mentioned that "the algorithm was trained during software development on images of the pathology."
9. How the Ground Truth for the Training Set was Established
The ground truth for the training set was established through labeled ("tagged") images. The document states: "As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding." The specific method or expert involvement in this tagging process is not detailed, but it implies human expert labeling.