510(k) Data Aggregation (142 days)
BriefCase is a radiological computer aided triage and notification software indicated for use in the analysis of non-enhanced head CT images.
The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and communication of suspected positive findings of pathologies in head CT images, namely Intracranial Hemorrhage (ICH).
BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected ICH on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected ICH findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase is a radiological computer-assisted triage and notification software device. The software system is based on an algorithm-programmed component and is comprised of a standard off-the-shelf operating system (Microsoft Windows Server 2012, 64-bit) and additional applications, which include PostgreSQL, a DICOM module, and the BriefCase Image Processing Application. The device consists of the following three modules: (1) Aidoc Hospital Server (AHS); (2) Aidoc Cloud Server (ACS); and (3) Aidoc Worklist Application, which is installed on the radiologist's desktop and provides the user interface in which notifications from the BriefCase software are received.
DICOM images are received, saved, filtered, and de-identified before processing. Series are processed chronologically by running the algorithm on each series to detect suspected findings; notifications for flagged series are then sent to the Worklist desktop application, thereby prompting preemptive triage and prioritization.
The Worklist Application displays pop-up notifications of new studies with suspected findings as they come in. Notifications take the form of a small pop-up containing the patient name and accession number. A list of all incoming cases with suspected findings is also displayed. In addition, a compressed, small black-and-white image marked "not for diagnostic use" is displayed as a preview. This compressed preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification. Presenting the radiologist with a notification facilitates earlier triage by allowing them to assess the available images in the PACS. Thus, the suspect case receives attention earlier than it would under the standard-of-care practice alone.
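The triage workflow described above (receive and de-identify DICOM series, run the detector, notify the worklist for flagged cases) can be pictured with a minimal sketch. This is an illustrative outline only, not Aidoc's implementation; `deidentify`, `detect_ich`, and `notify_worklist` are hypothetical placeholders.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Series:
    accession_number: str
    patient_name: str
    acquired_at: datetime
    images: List[bytes]  # DICOM pixel data, already received and saved

def deidentify(series: Series) -> Series:
    """Hypothetical: strip protected health information before processing."""
    return Series(series.accession_number, "ANONYMIZED", series.acquired_at, series.images)

def detect_ich(series: Series) -> bool:
    """Hypothetical stand-in for the deep learning ICH detector."""
    raise NotImplementedError

def notify_worklist(series: Series) -> None:
    """Hypothetical: push a pop-up (patient name, accession number, and a
    compressed 'not for diagnostic use' preview) to the Worklist Application."""
    print(f"Suspected ICH: {series.patient_name} / {series.accession_number}")

def triage(incoming: List[Series]) -> None:
    # Series are processed chronologically; flagged series trigger a
    # notification in parallel to the standard-of-care interpretation.
    for series in sorted(incoming, key=lambda s: s.acquired_at):
        if detect_ich(deidentify(series)):
            notify_worklist(series)
```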
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance Study for Aidoc Medical, Ltd.'s BriefCase (K180647)
Device: BriefCase, a radiological computer-aided triage and notification software for Intracranial Hemorrhage (ICH) detection on non-enhanced head CT scans.
1. Table of Acceptance Criteria and the Reported Device Performance
The document states a primary performance goal for sensitivity and specificity.
| Metric | Acceptance Criterion (Performance Goal) | Reported Device Performance (95% CI) |
|---|---|---|
| Sensitivity | Exceed 80% | 93.6% (86.6%-97.6%) |
| Specificity | Exceed 80% | 92.3% (85.4%-96.6%) |
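Point estimates with two-sided 95% confidence intervals of this kind are commonly computed with an exact (Clopper-Pearson) binomial method. The sketch below is illustrative only: the confusion-matrix counts are hypothetical placeholders, not the study data, and the document does not state which interval method was actually used.

```python
from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    """Exact two-sided (1 - alpha) binomial confidence interval."""
    lower = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lower, upper

# Hypothetical confusion-matrix counts chosen only to illustrate (NOT the K180647 data).
tp, fn = 88, 6     # positive cases: sensitivity = tp / (tp + fn)
tn, fp = 96, 8     # negative cases: specificity = tn / (tn + fp)

sens, sens_ci = tp / (tp + fn), clopper_pearson(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), clopper_pearson(tn, tn + fp)
print(f"Sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"Specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```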
Additionally, a secondary endpoint related to workflow prioritization was assessed. While not explicitly stated as an "acceptance criterion" with a specific threshold, the study aimed to demonstrate a significant reduction in time to notification compared to standard-of-care time to exam open.
| Metric | Reported Device Performance (Mean, 95% CI) | Statistical Significance (P-value) |
|---|---|---|
| Standard of Care Time-to-exam-open | 72.58 minutes (45.02-100.14) | |
| BriefCase Time-to-notification | 4.46 minutes (4.10-4.83) | |
| Mean Difference (Time-to-exam-open - Time-to-notification) | 68.11 minutes (40.50-95.72) | <0.0001 |
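Note that the comparison above is a paired difference of times for the same flagged cases, which is why the reported mean difference (68.11 minutes) is not exactly the difference of the two rounded means. A minimal sketch of that kind of paired analysis is shown below; the timing arrays are hypothetical placeholders, and the document does not specify the exact statistical test that was used.

```python
import numpy as np
from scipy import stats

# Hypothetical paired timings in minutes for flagged cases (NOT the K180647 data).
time_to_exam_open = np.array([65.0, 120.5, 30.2, 88.7, 45.9])
time_to_notification = np.array([4.2, 4.9, 4.1, 4.6, 4.5])

diff = time_to_exam_open - time_to_notification
mean_diff = diff.mean()
# Two-sided 95% t-interval for the mean paired difference.
ci = stats.t.interval(0.95, df=len(diff) - 1, loc=mean_diff, scale=stats.sem(diff))
t_stat, p_value = stats.ttest_rel(time_to_exam_open, time_to_notification)
print(f"Mean difference {mean_diff:.2f} min, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p={p_value:.4g}")
```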
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size for Test Set: 198 cases
- Data Provenance: Retrospective, multicenter, multinational.
- Countries of Origin: 3 clinical sites, 2 in the US and 1 outside the US (OUS). For the 59 true-positive cases analyzed for time-to-notification, the document specifies two sites, "one in Israel and one in the US."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not explicitly state the number of experts or their specific qualifications (e.g., years of experience) used to establish the ground truth for the test set. It only refers to cases being "identified as positive by both the BriefCase and the ground truth," suggesting an expert consensus or review process but providing no details about it.
4. Adjudication Method for the Test Set
The document does not explicitly state the full adjudication method (e.g., 2+1, 3+1). It only mentions "ground truth" without detailing how consensus was reached if multiple readers were involved.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No. A multi-reader, multi-case (MRMC) comparative effectiveness study quantifying how much human readers improve with AI assistance versus without it was not described. The study focused on the standalone performance of the AI algorithm (sensitivity and specificity) and on workflow prioritization (time-to-notification vs. time-to-exam-open). While the latter implies a benefit for radiologists by providing earlier notifications, it is not a direct human reader performance study (e.g., ROC analysis with and without AI assistance).
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes. The primary performance metrics (sensitivity of 93.6% and specificity of 92.3%) were obtained from the algorithm's standalone performance in identifying ICH findings on the 198 test cases, independent of human interpretation. The time-to-notification metric likewise reflects the algorithm's speed in processing and flagging cases.
7. The Type of Ground Truth Used
The document refers to the ground truth as "identified as positive by both the BriefCase and the ground truth" for the 59 "true positive" cases, implying expert consensus (radiological interpretation) was used to establish the presence or absence of ICH. It doesn't mention pathology or outcomes data.
8. The Sample Size for the Training Set
The document does not specify the sample size used for the training set. It only mentions that the device utilizes a "deep learning algorithm trained on medical images."
9. How the Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for the training set was established. It generally states that the deep learning algorithm was "trained on medical images," but doesn't detail the process for labeling these images for training purposes.