510(k) Data Aggregation
(101 days)
BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, or chest/abdomen, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:
- Pneumothorax;
- Pericardial effusion;
- Large aortic aneurysm;
- Shoulder fracture or dislocation.
The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.
BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used in conjunction with other patient information and the clinician's professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase-Triage: CARE Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation device is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.
The BriefCase-Triage device receives images that match meta-data criteria according to the BriefCase-Triage predefined set of parameters. Then, the BriefCase-Triage processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. BriefCase-Triage output consists of suspected positive flag/notification regarding the existence of each finding in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to the users as compressed, low-quality, grayscale, preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only" according to the device requirement from the Image Communication Platform (ICP).
Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.
Here's a breakdown of the acceptance criteria and the study proving device performance, based on the provided FDA 510(k) clearance letter:
1. Acceptance Criteria and Reported Device Performance
The core acceptance criteria are based on standalone performance metrics for each of the four clinical indications.
| Indication | Acceptance Criteria (Default Operating Point) | Reported Device Performance (Default Operating Point) |
|---|---|---|
| Pneumothorax | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.989 (95% CI: 0.978-0.997) Sensitivity: 94.8% (95% CI: 89.5%-97.9%) Specificity: 95.9% (95% CI: 91.3%-98.5%) |
| Pericardial effusion | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.991 (95% CI: 0.980-0.998) Sensitivity: 96.4% (95% CI: 91.7%-98.8%) Specificity: 96.5% (95% CI: 92.0%-98.8%) |
| Large aortic aneurysm | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.995 (95% CI: 0.989-0.999) Sensitivity: 97.1% (95% CI: 92.7%-99.2%) Specificity: 97.2% (95% CI: 92.9%-99.2%) |
| Shoulder fracture or dislocation | AUC > 0.95 (lower bound 95% CI); Sensitivity > 80%; Specificity > 80% | AUC: 0.999 (95% CI: 0.997-1.00) Sensitivity: 97.8% (95% CI: 93.7%-99.5%) Specificity: 99.3% (95% CI: 96.2%-100.0%) |
| Time-to-notification | Comparability with predicate device in time savings relative to standard of care. | Subject Device Mean: 49.9 seconds (95% CI: 46.4-53.5) Predicate Device Mean: 10.7 seconds (95% CI: 10.5-10.9) Note: Although the subject device's mean time-to-notification is longer than the predicate's, the submission concludes that the two devices are comparable in the time savings they provide relative to standard-of-care review. |
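The sensitivity and specificity confidence intervals above are two-sided 95% intervals on binomial proportions. As a rough illustration of how such intervals arise from case counts (the counts below are hypothetical, chosen only to land near the reported pneumothorax sensitivity; the submission does not disclose per-arm counts, and the sponsor's exact interval method is not stated), a Wilson score interval can be computed as:

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% CI for a binomial proportion k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical example: 127 of 134 positive cases flagged (sensitivity ~94.8%)
lo, hi = wilson_ci(127, 134)
print(f"sensitivity {127/134:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

The Wilson interval is asymmetric around the point estimate, which matches the shape of the reported intervals (e.g., 94.8% with CI 89.5%-97.9%): proportions near 1 leave more room below than above.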
Study Proving Device Meets Acceptance Criteria
The study conducted was a retrospective, blinded, multicenter standalone performance analysis.
2. Sample size used for the test set and the data provenance:
* Sample Size: N = 280 for each of the 4 clinical indications. Because a single scan could contribute to more than one indication, this amounted to 772 unique scans across all indications (rather than 4 × 280 = 1,120).
* Data Provenance: The cases were collected from 6 US-based clinical sites, representing diverse geographic locations and site types. The data was "distinct in time or center from the cases used to train the algorithm," and "sequestered from algorithm development activities." This indicates a high level of independence for the test set. The data is retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* Number of Experts: Three (3)
* Qualifications: Senior board-certified radiologists.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
* The document states ground truth was "determined by three senior board-certified radiologists." It does not explicitly mention an adjudication method such as 2+1 or 3+1, but the plural "radiologists" and the phrase "determined by" suggest a consensus or majority opinion among the three, rather than independent readings without interaction.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
* No MRMC comparative effectiveness study was explicitly described. The study was a "standalone performance analysis" of the software itself. The comparison of "time-to-notification" with the predicate device implies a comparison of software performance characteristics related to triage, not a study of human readers with and without AI assistance.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
* Yes, a standalone performance study was done. The document explicitly refers to it as a "standalone performance analysis" to "evaluate the software's performance."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Expert Consensus: The ground truth was established by "three senior board-certified radiologists."
8. The sample size for the training set:
* The document does not specify the exact sample size for the training set. It only mentions that the "algorithm was trained during software development on images of the pathology."
9. How the ground truth for the training set was established:
* The ground truth for the training set was established via labeled ("tagged") images: "each image in the training dataset was tagged based on the presence of the critical finding." The method or type of tagging (e.g., by experts or automated tools) is not detailed, but a process of assigning labels to indicate the presence or absence of the target pathologies is implied.
(285 days)
BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of contrast-enhanced CT images that include the brain, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of suspected positive cases of Brain Aneurysm (BA) findings that are 3.0 mm or larger.
BriefCase-Triage uses an artificial intelligence algorithm to analyze images and flag suspect cases in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for suspect cases. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase-Triage is a radiological computer-assisted triage and notification software device.
The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.
The BriefCase-Triage receives filtered DICOM Images, and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale image that is captioned "not for diagnostic use, for prioritization only" which is displayed as a preview function. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.
Here's a breakdown of the acceptance criteria and study details for the BriefCase-Triage device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Performance Goal) | Reported Device Performance |
|---|---|---|
| Primary Endpoints | ||
| Sensitivity | > 80% | 87.8% (95% CI: 83.1%-91.6%) |
| Specificity | > 80% | 91.6% (95% CI: 87.9%-94.5%) |
| Secondary Endpoints | ||
| Time-to-Notification (mean) | Comparable to predicate device | 44.8 seconds (95% CI: 41.4-48.2) |
| Negative Predictive Value (NPV) | N/A | 98.9% (95% CI: 98.4%-99.2%) |
| Positive Predictive Value (PPV) | N/A | 47.6% (95% CI: 38.4%-57.1%) |
| Positive Likelihood Ratio (PLR) | N/A | 10.5 (95% CI: 7.2-15.3) |
| Negative Likelihood Ratio (NLR) | N/A | 0.13 (95% CI: 0.1-0.19) |
Note on Additional Operating Points (AOPs): The device also met performance goals (80% sensitivity and specificity) for three additional operating points (AOP1, AOP2, AOP3) with slightly varying sensitivity/specificity trade-offs (e.g., AOP3: Sensitivity 86.2%, Specificity 93.6%).
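The secondary metrics in the table are algebraically linked to sensitivity, specificity, and disease prevalence. As a sanity check (the ~8% prevalence below is back-calculated for illustration and is not stated in the clearance letter), applying the standard Bayes identities to the reported operating point closely reproduces the reported PPV, NPV, PLR, and NLR:

```python
se, sp = 0.878, 0.916  # reported sensitivity / specificity at the default operating point
prev = 0.08            # illustrative prevalence (assumed, not from the letter)

plr = se / (1 - sp)                                    # positive likelihood ratio
nlr = (1 - se) / sp                                    # negative likelihood ratio
ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))  # positive predictive value
npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)

print(f"PLR {plr:.1f}, NLR {nlr:.2f}, PPV {ppv:.1%}, NPV {npv:.1%}")
# With these inputs: PLR ≈ 10.5, NLR ≈ 0.13, PPV ≈ 47.6%, NPV ≈ 98.9%
```

This also explains why the PPV is modest (47.6%) despite strong sensitivity and specificity: at low prevalence, even a small false-positive rate generates many false alarms relative to true positives, which is acceptable for a triage device whose notifications are confirmed by full-image review.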
Study Details
1. Sample size used for the test set and the data provenance:
- Sample Size: 544 cases
- Data Provenance: Retrospective, blinded, multicenter study from 6 US-based clinical sites. The cases were distinct in time or center from those used for algorithm training.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three (3) senior board-certified radiologists.
- Qualifications: "Senior board-certified radiologists." (Specific number of years of experience not detailed in the provided text).
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The text states the ground truth was "determined by three senior board-certified radiologists." It doesn't explicitly describe an adjudication method like "2+1" or "3+1." This implies a consensus approach where all three radiologists agreed, or a majority rule, but the exact mechanism for resolving discrepancies (if any) is not specified.
4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC comparative effectiveness study was NOT done. The study's primary objective was to evaluate the standalone performance of the BriefCase-Triage software. The secondary endpoint compared the device's time-to-notification to that of the predicate device, but not its impact on human reader performance.
5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The primary endpoints (sensitivity and specificity) measure the algorithm's performance in identifying Brain Aneurysm (BA) findings.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus: The ground truth was "determined by three senior board-certified radiologists."
7. The sample size for the training set:
- Not explicitly stated. The document mentions the algorithm was "trained during software development on images of the pathology" and that "critical findings were tagged in all CTs in the training data set." However, the specific sample size for this training data is not provided.
8. How the ground truth for the training set was established:
- Manually labeled ("tagged") images: The text states, "As is customary in the field of machine learning, deep learning algorithm development consisted of training on manually labeled ('tagged') images. In that process, critical findings were tagged in all CTs in the training data set." It does not specify who performed the tagging or their qualifications, nor the method of consensus if multiple taggers were involved.
(112 days)
BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT Body is a radiological computer aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, and/or pelvis, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:
- Diverticulitis;
- Abdominal-pelvic abscess;
- Appendicitis;
- Intestinal ischemia and/or pneumatosis;
- Obstructive renal stone;
- Small bowel obstruction;
- Large bowel obstruction;
- Spleen injury;
- Liver injury;
- Kidney injury;
- Pelvic fracture.
The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.
BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.
The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on their professional judgment to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.
BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on an algorithm programmed component and is intended to run on a Linux-based server in a cloud environment.
The BriefCase-Triage receives images that match meta-data criteria according to BriefCase-Triage: CARE Multi-Triage CT Body's predefined set of parameters. Then, the BriefCase-Triage processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. BriefCase-Triage output consists of suspected positive flag/notification regarding the existence of each finding in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to the users as compressed, low-quality, grayscale, preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only" according to the device requirement from the Image Communication Platform (ICP).
Acceptance Criteria and Study Details for BriefCase-Triage: CARE Multi-triage CT Body
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the BriefCase-Triage: CARE Multi-triage CT Body device were primarily defined by performance goals for Area Under the Curve (AUC), Sensitivity (Se), and Specificity (Sp). The study demonstrated that the device met and exceeded these criteria for all 11 indications.
| Indication | Performance Goal (Acceptance Criteria) | Reported Device Performance (Mean) | 95% Confidence Interval (Reported) |
|---|---|---|---|
| Primary Endpoints | |||
| Finding-level AUC | > 0.95 | 0.974 - 1.00 | 0.952 - 1.00 |
| Sensitivity (Se) | > 80% | 94.0% - 99.3% | 88.9% - 100% |
| Specificity (Sp) | > 80% | 95.7% - 99.3% | 91% - 100% |
| Secondary Endpoints (Comparable to Predicate) | |||
| BriefCase time-to-notification | Comparable to predicate | 45 seconds | 43.4 - 46.5 seconds |
Note: The reported device performance for AUC, Sensitivity, and Specificity are ranges covering the minimum and maximum values observed across the 11 indications in the pivotal study. Detailed values for each indication are provided in the source text.
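Finding-level AUC has a simple probabilistic reading: it is the probability that a randomly chosen positive scan receives a higher algorithm score than a randomly chosen negative scan, with ties counted as half. A minimal sketch of this pairwise (Mann-Whitney) computation, using made-up scores rather than anything from the submission:

```python
def auc(pos_scores: list[float], neg_scores: list[float]) -> float:
    """Empirical AUC via pairwise comparison: Mann-Whitney U / (n_pos * n_neg)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical algorithm scores for positive and negative scans
print(auc([0.9, 0.8, 0.5], [0.5, 0.2, 0.1]))  # prints 0.9444444444444444
```

Because the AUC aggregates over all score thresholds, it is independent of the operating point: the per-indication sensitivity/specificity figures above correspond to one chosen threshold on these same scores.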
2. Sample Size and Data Provenance for the Test Set
- Sample Size: N = 280 for each of the 11 clinical indications. Because a single scan could contribute to more than one indication, 1,769 unique scans were included across all device indications (rather than 11 × 280 = 3,080).
- Data Provenance: The data was collected from 6 US-based clinical sites. It was retrospective and the cases were distinct in time or center from the cases used to train the algorithm.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three senior board-certified radiologists.
- Qualifications: The document specifically states "senior board-certified radiologists." No further details on years of experience were provided.
4. Adjudication Method for the Test Set
The adjudication method used to establish ground truth was based on the "consensus" of the three senior board-certified radiologists ("as determined by three senior board-certified radiologists"). This implies a consensus-based adjudication, but the specific mechanics (e.g., majority vote like 2+1, or requiring all three to agree) are not explicitly detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC study comparing human readers with AI assistance versus without AI assistance was reported in this document. The study described is a standalone performance analysis of the algorithm.
6. Standalone Performance Study
Yes, a standalone performance study was done. The document states: "Aidoc conducted a retrospective, blinded, multicenter study with the Briefcase-Triage software to evaluate the standalone performance analysis individually for each of the 11 clinical indications supported by BriefCase-Triage: CARE Multi-triage CT Body device."
7. Type of Ground Truth Used
The ground truth was established by expert consensus of three senior board-certified radiologists.
8. Sample Size for the Training Set
The sample size for the training set is not explicitly provided in the given text. It is only mentioned that "the algorithm was trained during software development on images of the pathology."
9. How the Ground Truth for the Training Set was Established
The ground truth for the training set was established through labeled ("tagged") images. The document states: "As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding." The specific method or expert involvement in this tagging process is not detailed, but it implies human expert labeling.