510(k) Data Aggregation (29 days)
BriefCase is a radiological computer-aided triage and notification software indicated for use in the analysis of chest CT images (with or without contrast) in adults and transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspect cases of three or more acute rib fracture (RibFx) pathologies.
BriefCase uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected RibFx findings. Notifications include compressed preview images that are meant for informational purposes only, and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase are intended to be used by clinicians in conjunction with other patient information and their professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing the full images per the standard of care.
BriefCase is radiological computer-assisted triage and notification software based on an algorithmic programmed component, and is intended to run on a Linux-based server in a cloud environment.
BriefCase receives filtered DICOM images and processes them chronologically, running the algorithm on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
Presenting users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than it would under standard-of-care practice alone.
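The chronological detect-and-notify flow described above can be sketched in a few lines. This is a minimal illustration, not Aidoc's implementation: the `Study` container, the `analyze_series` stand-in for the AI algorithm, and the notification dictionary are all hypothetical names invented for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Study:
    """Hypothetical container for a filtered DICOM series (illustrative only)."""
    study_id: str
    series: list = field(default_factory=list)

def analyze_series(study):
    """Stand-in for the AI algorithm. Here, any study whose id starts with
    'SUSPECT' is treated as a detected finding (purely illustrative)."""
    return study.study_id.startswith("SUSPECT")

def triage_queue(studies):
    """Process studies in chronological order and collect pop-up
    notifications for suspected cases, mirroring the described workflow."""
    notifications = []
    for study in studies:
        if analyze_series(study):
            notifications.append({
                "study_id": study.study_id,
                "caption": "not for diagnostic use, for prioritization only",
            })
    return notifications
```

In the real device the notification also carries the compressed grayscale preview; the sketch keeps only the caption to show the control flow.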
Here's a breakdown of the acceptance criteria and study details for the Aidoc BriefCase device, based on the provided text:
Acceptance Criteria and Device Performance
1. A table of acceptance criteria and the reported device performance
| Performance Metric | Acceptance Criteria (Pre-specified Performance Goal) | Reported Device Performance (95% Confidence Interval) |
|---|---|---|
| Area Under the Curve (AUC) | > 0.95 | 0.98 (0.966, 0.994) |
| Sensitivity | > 80% | 98.08% (93.23%, 99.77%) |
| Specificity | > 80% | 93.14% (88.75%, 96.20%) |
| Time-to-Notification (mean) | Comparable to predicate device (K202992) | 70.1 seconds (64.9-75.4) |
Note: The text also mentions two additional operating points (AOP1 and AOP2) that met specific sensitivity and specificity criteria.
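For readers less familiar with these metrics, sensitivity and specificity are simple ratios over the confusion matrix. The counts below are hypothetical, chosen only to be arithmetically consistent with the reported rates on a 308-case test set; the actual counts are not given in the text.

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts (not stated in the document), summing to 308 cases:
tp, fn = 102, 2      # 104 RibFx-positive cases
tn, fp = 190, 14     # 204 RibFx-negative cases

print(round(100 * sensitivity(tp, fn), 2))  # 98.08
print(round(100 * specificity(tn, fp), 2))  # 93.14
```

Both acceptance criteria (> 80%) are comfortably cleared at these rates, consistent with the table above.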
Study Details
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: 308 cases
- Data Provenance: Retrospective, multicenter study from 5 US-based clinical sites. The cases were distinct in time or center from the training data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Three (3) senior board-certified radiologists.
- Qualifications: "Senior board-certified radiologists" (no further detail, such as years of experience, is provided).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Majority voting (i.e., a 3-reader scheme in which at least 2 of 3 readers had to agree).
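The majority-vote adjudication is easy to make concrete. A minimal sketch, assuming each radiologist contributes one binary read per case (the function name is hypothetical):

```python
def adjudicate(reads):
    """2-of-3 majority vote: a case is ground-truth positive when at
    least two of the three radiologist reads are positive (True)."""
    assert len(reads) == 3, "expects exactly three reader labels"
    return sum(reads) >= 2

# Readers disagree 2-to-1 in favor of a fracture -> positive ground truth:
adjudicate([True, True, False])   # True
# A single positive read is outvoted -> negative ground truth:
adjudicate([False, True, False])  # False
```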
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement of human readers with AI vs. without AI assistance?
- The provided text does not describe an MRMC comparative effectiveness study evaluating the improvement of human readers with AI assistance. The study described is a standalone performance validation of the AI algorithm and a comparison of its Time-to-Notification with a predicate device.
6. If a standalone study (i.e. algorithm-only performance, without a human in the loop) was done
- Yes, a standalone performance study was done. The primary endpoint explicitly describes the performance of the algorithm (AUC, Sensitivity, Specificity) compared to the ground truth.
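The standalone AUC endpoint has a useful probabilistic reading: it is the probability that a randomly chosen positive case receives a higher algorithm score than a randomly chosen negative case. A pure-Python sketch of that empirical definition, on toy scores rather than the device's data:

```python
def auc(scores, labels):
    """Empirical AUC via pairwise comparison: the fraction of
    (positive, negative) pairs in which the positive case scores
    higher, with ties counting half. Illustration only."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example (hypothetical scores, not the device's output):
auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])  # 0.75
```

The reported 0.98 AUC means the algorithm ranked positive cases above negative ones in roughly 98% of such pairs on the 308-case test set.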
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: Expert consensus, determined by three senior board-certified radiologists using majority voting.
8. The sample size for the training set
- The exact sample size for the training set is not explicitly stated. However, the document mentions that the subject device was trained on "a larger data set" compared to the predicate device, and that the pivotal dataset cases were "distinct in time or center from the cases used to train the algorithm."
9. How the ground truth for the training set was established
- How the ground truth for the training set was established is not explicitly detailed in the provided text. It only states that the algorithm was "trained on a larger data set." This would typically involve annotated data, but the method of annotation (e.g., expert review, semi-automated) is not described.