510(k) Data Aggregation
Result 1: BriefCase-Triage for suspected Aortic Dissection (AD) (24 days)
BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of CT chest, abdomen, or chest/abdomen exams with contrast (CTA and CT with contrast) in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of suspected positive findings of Aortic Dissection (AD) pathology.
BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings on a standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the reviewing clinician's professional judgment, to assist with triage/prioritization.
BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithm component and is intended to run on a Linux-based server in a cloud environment.
BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithm on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
Presenting users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. As a result, a suspect case receives attention earlier than it would under the standard-of-care workflow alone.
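As a rough illustration of the workflow described above, the sketch below wires together the same steps: read a filtered DICOM series, score it with an algorithm, and, when the score crosses an operating point, raise a notification with a low-quality preview. It is a minimal sketch only; the model call is a placeholder, the helper names and threshold are assumptions, and nothing here reflects the vendor's actual implementation (pydicom and numpy are assumed to be available).

```python
"""Illustrative triage-and-notify loop (not the vendor's implementation).

Assumptions: pydicom and numpy are installed, the series directory holds one
DICOM file per slice, and run_model() stands in for the proprietary algorithm.
"""
from pathlib import Path

import numpy as np
import pydicom

SUSPECT_THRESHOLD = 0.5  # assumed operating point, for illustration only


def run_model(volume: np.ndarray) -> float:
    """Placeholder for the AI algorithm; would return a suspicion score in [0, 1]."""
    return 0.0


def make_preview(volume: np.ndarray) -> np.ndarray:
    """Compressed, low-resolution grayscale preview (middle slice, crudely downsampled)."""
    mid_slice = volume[volume.shape[0] // 2]
    return mid_slice[::4, ::4]


def triage_series(series_dir: Path) -> None:
    # Load every slice in the series, ordered chronologically by InstanceNumber.
    slices = sorted(
        (pydicom.dcmread(p) for p in series_dir.glob("*.dcm")),
        key=lambda ds: int(ds.InstanceNumber),
    )
    volume = np.stack([ds.pixel_array for ds in slices])
    if run_model(volume) >= SUSPECT_THRESHOLD:
        preview = make_preview(volume)
        # Simulated pop-up notification; the original DICOM images are never modified.
        print(
            f"Suspected finding in study {slices[0].StudyInstanceUID}: "
            f"preview {preview.shape} - not for diagnostic use, for prioritization only"
        )
```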
Here's a detailed breakdown of the acceptance criteria and study findings for BriefCase-Triage, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
Parameter | Acceptance Criteria | Reported Device Performance |
---|---|---|
Primary Endpoints (Sensitivity and Specificity) | Lower bound of the 95% Confidence Interval (CI) of at least 80% for sensitivity and specificity at the default operating point. | Default operating point: Sensitivity 92.7% (95% CI: 88.2%, 95.8%), lower bound 88.2% > 80%; Specificity 92.8% (95% CI: 89.2%, 95.4%), lower bound 89.2% > 80%. |
Secondary Endpoint (Comparability with Predicate) | The time-to-notification metric for BriefCase-Triage should demonstrate comparability with the predicate device. | BriefCase-Triage (subject device): mean time-to-notification 10.7 seconds (95% CI: 10.5, 10.9). Predicate AD device: mean time-to-notification 38.0 seconds (95% CI: 35.5, 40.4). The subject device's time-to-notification is faster than the predicate's, demonstrating comparability and an improvement in time savings. |

Additional operating points (AOPs) meeting the acceptance criteria:
- AOP1: Sensitivity 95.6% (95% CI: 91.8%-98.0%), Specificity 88.2% (95% CI: 84.0%-91.6%)
- AOP2: Sensitivity 94.1% (95% CI: 90.0%-96.9%), Specificity 89.8% (95% CI: 85.8%-93.0%)
- AOP3: Sensitivity 89.3% (95% CI: 84.2%-93.2%), Specificity 94.7% (95% CI: 91.6%-97.0%)
- AOP4: Sensitivity 86.3% (95% CI: 80.9%-90.7%), Specificity 97.7% (95% CI: 95.3%-99.1%)
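For context on how a performance goal of this form can be verified: given a confusion matrix, sensitivity and specificity are point estimates whose 95% CI lower bounds must clear 80%. The sketch below uses a Wilson score interval and hypothetical counts; the clearance summary does not report the underlying confusion matrix or the interval method actually used.

```python
import math


def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided 95% Wilson score confidence interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half


# Hypothetical confusion-matrix counts; the 510(k) summary does not report them.
tp, fn = 90, 10   # true positives, false negatives
tn, fp = 85, 15   # true negatives, false positives

sens_lo, sens_hi = wilson_ci(tp, tp + fn)
spec_lo, spec_hi = wilson_ci(tn, tn + fp)

print(f"Sensitivity {tp / (tp + fn):.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%}); "
      f"lower bound >= 80%: {sens_lo >= 0.80}")
print(f"Specificity {tn / (tn + fp):.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%}); "
      f"lower bound >= 80%: {spec_lo >= 0.80}")
```

With these made-up counts the sensitivity goal passes but the specificity goal fails, which is exactly the kind of check the acceptance criterion encodes.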
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 509 cases.
- Data Provenance:
- Country of origin: United States (cases collected from 5 US-based clinical sites).
- Retrospective or Prospective: Retrospective.
- Data Sequestration: Cases collected for the pivotal dataset were "all distinct in time or center from the cases used to train the algorithm," and "Test pivotal study data was sequestered from algorithm development activities."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three (3) senior board-certified radiologists.
- Qualifications: "Senior board-certified radiologists." (Specific years of experience are not provided.)
4. Adjudication Method for the Test Set
- The text states "the ground truth, as determined by three senior board-certified radiologists." This implies a consensus-based adjudication, likely 3-0 or 2-1 (majority vote), but the exact method (e.g., 2+1, 3+1) is not explicitly detailed.
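Because the adjudication rule is not spelled out, the following is only a sketch of what a simple 2-of-3 majority vote across the three readers could look like; the case IDs and reads are hypothetical.

```python
from collections import Counter


def majority_vote(reads: list[bool]) -> bool:
    """Label chosen by at least two of the three readers."""
    return Counter(reads).most_common(1)[0][0]


# Hypothetical per-case reads from three radiologists (True = finding present).
reads_by_case = {
    "case_001": [True, True, False],    # 2-1 split -> adjudicated positive
    "case_002": [False, False, False],  # unanimous -> negative
}
ground_truth = {case: majority_vote(reads) for case, reads in reads_by_case.items()}
print(ground_truth)  # {'case_001': True, 'case_002': False}
```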
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No.
- The study primarily focused on the standalone performance of the AI algorithm compared to ground truth and a secondary comparison of time-to-notification with a predicate device. It did not evaluate human reader performance with and without AI assistance.
6. Standalone Performance Study
- Was it done? Yes.
- The study evaluated the algorithm's performance (sensitivity, specificity, PPV, NPV, PLR, NLR) in identifying AD pathology without human intervention as the primary and secondary endpoints. The device's output is "flagging and communication of suspected positive findings," and its "notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification," confirming a standalone triage function.
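For reference, all of the standalone metrics listed (sensitivity, specificity, PPV, NPV, PLR, NLR) follow directly from a single confusion matrix. The sketch below uses hypothetical counts, since the summary does not publish the matrix; note that PPV and NPV computed this way reflect the enriched study prevalence rather than clinical prevalence.

```python
# Hypothetical confusion-matrix counts; not taken from the 510(k) summary.
tp, fn, fp, tn = 90, 10, 15, 85

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                   # positive predictive value (at study prevalence)
npv = tn / (tn + fn)                   # negative predictive value (at study prevalence)
plr = sensitivity / (1 - specificity)  # positive likelihood ratio
nlr = (1 - sensitivity) / specificity  # negative likelihood ratio

print(f"Sens {sensitivity:.3f}  Spec {specificity:.3f}  PPV {ppv:.3f}  "
      f"NPV {npv:.3f}  PLR {plr:.2f}  NLR {nlr:.3f}")
```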
7. Type of Ground Truth Used
- Ground Truth: Expert Consensus, specifically "as determined by three senior board-certified radiologists."
8. Sample Size for the Training Set
- The document states, "The algorithm was trained during software development on images of the pathology." However, it does not specify the sample size for the training set. It only mentions that the pivotal test data was "distinct in time or center" from the training data.
9. How the Ground Truth for the Training Set Was Established
- "As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding."
- While it indicates images were "labeled ("tagged")" based on the "presence of the critical finding," it does not explicitly state who established this ground truth for the training set (e.g., experts, pathology, etc.). It's implied that medical professionals were involved in the labeling process, but no specific number or qualification is provided for the training set.
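As a purely illustrative example of what study-level binary "tagging" might look like in practice (the summary does not describe the labeling tooling, format, or who applied the tags), a training manifest could be as simple as a CSV of study identifiers and finding labels:

```python
import csv

# Hypothetical training manifest: one row per study, label 1 if the critical
# finding is present, 0 otherwise. Column names and UIDs are made up.
rows = [
    {"study_uid": "1.2.826.0.1.0001", "label": 1},  # finding present
    {"study_uid": "1.2.826.0.1.0002", "label": 0},  # finding absent
]
with open("train_labels.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["study_uid", "label"])
    writer.writeheader()
    writer.writerows(rows)
```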
Result 2: BriefCase-Triage for incidental Pulmonary Embolism (iPE) (18 days)
BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of contrast-enhanced images that include the lungs in adults or transitional adolescents age 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of suspect cases of incidental Pulmonary Embolism (iPE) pathologies.
BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for suspect cases. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the reviewing clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithm component and is intended to run on a Linux-based server in a cloud environment.
BriefCase-Triage receives filtered DICOM images and processes them chronologically, running the algorithm on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only". This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
Presenting users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. As a result, a suspect case receives attention earlier than it would under the standard-of-care workflow alone.
Here's a breakdown of the acceptance criteria and the study details for the BriefCase-Triage device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria (Performance Goal) | Reported Device Performance (Default Operating Point) | 95% Confidence Interval |
---|---|---|---|
Sensitivity | ≥ 80% | 91.7% | [87.5%, 94.9%] |
Specificity | ≥ 80% | 91.4% | [87.3%, 94.6%] |
NPV | Not specified | 99.8% | [99.6%, 99.8%] |
PPV | Not specified | 22.2% | [16.1%, 29.9%] |
PLR | Not specified | 10.7 | [7.2, 16.0] |
NLR | Not specified | 0.09 | [0.06, 0.14] |
Time-to-Notification | Comparability to the predicate (predicate mean: 282.63 s) | 40.2 seconds (subject device) | [36.9, 43.5] |
Note on Additional Operating Points (AOPs): The document also describes four additional operating points (AOPs 1-4) designed to maximize specificity while maintaining a lower bound 95% CI of 80% for sensitivity. These AOPs achieved similar performance metrics, further demonstrating the device's flexibility.
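One way to read the AOP description is as a constrained threshold search: among candidate operating points, keep only those whose sensitivity 95% CI lower bound stays at or above 80%, then pick the one with the highest specificity. The sketch below implements that reading with a Wilson score interval and synthetic scores and labels; the function names, data, and interval method are assumptions, not details from the submission.

```python
import math
import random


def wilson_lower(successes: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the two-sided 95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half


def pick_operating_point(scores, labels, thresholds):
    """Among thresholds whose sensitivity CI lower bound is >= 80%,
    return (threshold, specificity) for the most specific one."""
    best = None
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        if wilson_lower(tp, tp + fn) < 0.80:
            continue  # fails the sensitivity performance goal at this threshold
        specificity = tn / (tn + fp)
        if best is None or specificity > best[1]:
            best = (t, specificity)
    return best


# Synthetic, well-separated scores for 40 positive and 40 negative cases.
random.seed(0)
labels = [1] * 40 + [0] * 40
scores = [random.uniform(0.6, 1.0) if y else random.uniform(0.0, 0.5) for y in labels]
print(pick_operating_point(scores, labels, thresholds=[0.3, 0.5, 0.55, 0.6]))
```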
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 498 cases
- Data Provenance: Retrospective, blinded, multicenter study. Cases were collected from 6 US-based clinical sites. The test data was distinct in time or center from the cases used for algorithm training and was sequestered from algorithm development activities.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3)
- Qualifications of Experts: Senior board-certified radiologists.
4. Adjudication Method for the Test Set
The specific adjudication method (e.g., 2+1, 3+1) is not explicitly stated in the provided text. It only mentions that the ground truth was "determined by three senior board-certified radiologists."
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing AI assistance versus without AI assistance for human readers was not described for this device. The study primarily evaluated the standalone performance of the AI algorithm and its time-to-notification compared to the predicate device, not the improvement in human reader performance with AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-loop Performance) was done
Yes, a standalone performance study was done. The "Pivotal Study Summary" sections describe the device's sensitivity, specificity, and other metrics based on its algorithm's performance in identifying iPE cases compared to the ground truth established by radiologists. The device is intended to provide notifications and not for diagnostic use, suggesting its performance is evaluated in its autonomous triage function.
7. The Type of Ground Truth Used
The ground truth used was expert consensus, specifically "determined by three senior board-certified radiologists."
8. The Sample Size for the Training Set
The sample size for the training set is not explicitly stated in the provided text. It only mentions that the algorithm was "trained during software development on images of the pathology."
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established through manual labeling ("tagging") by experts. The text states: "deep learning algorithm development consisted of training on manually labeled ("tagged") images. In that process, critical findings were tagged in all CTs in the training data set."