510(k) Data Aggregation
Bering Ltd (329 days)
BraveCX is a radiological computer-assisted triage and notification software that analyzes adult (≥18 years old) chest X-ray images for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). BraveCX uses an artificial intelligence algorithm to analyze images for features suggestive of critical findings and provides case-level output available in the PACS/workstation for worklist prioritization or triage. As a passive notification for prioritization-only software tool within the standard of care workflow, BraveCX does not send a proactive alert directly to the appropriately trained medical specialists. BraveCX is not intended to direct attention to specific portions of an image or to anomalies other than pleural effusion and/or pneumothorax. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
BraveCX is a Deep Learning Artificial Intelligence (AI) software that analyzes adult (≥18 years old) chest X-ray images for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). It uses deep learning to analyze each image and identify features suggestive of pleural effusion and/or pneumothorax. Upon image acquisition from other radiological imaging equipment (e.g., X-ray systems), anteroposterior (AP) and posteroanterior (PA) chest X-rays are received and processed by BraveCX. Following receipt of an image, BraveCX de-identifies a copy of each DICOM file and analyzes it for features suggestive of pleural effusion and/or pneumothorax. Based on the analysis result, the software notifies the PACS/workstation of the presence of the critical findings, indicated by "flag" or "(blank)". This allows the appropriately trained medical specialists to group suspicious exams together with potential for prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care.

The intended user of the BraveCX software is a health care professional such as a radiologist or another appropriately trained clinician. The software does not alter the order of, or remove cases from, the reading queue. The software output to the user is a label of "flag" or "(blank)" that relates to the likelihood of the presence of pneumothorax and/or pleural effusion. The BraveCX platform ingests prediction requests with either attached DICOM images or DICOM UIDs referencing images already uploaded to DICOM storage. The results are made available via a newly generated DICOM object stored in DICOM storage, or as a JSON file. The DICOM storage component may be a Picture Archiving and Communication System (PACS) or some other local storage platform.

BraveCX works in parallel to and in conjunction with the standard of care workflow to enable prioritized review by the appropriately trained medical specialists who are qualified to interpret chest radiographs. As a passive notification for prioritization-only software tool within the standard of care workflow, BraveCX does not send a proactive alert directly to those specialists. BraveCX is not intended to direct attention to specific portions or anomalies of an image, and it should not be used on a standalone basis for clinical decision-making. BraveCX automatically runs after image acquisition and prioritizes and displays the analysis results through the worklist interface of the PACS/workstation. An on-device technologist notification is generated within 15 minutes after interpretation by the user, indicating which cases were prioritized by BraveCX in PACS. The technologist notification is contextual and does not provide any diagnostic information; it is not intended to inform any clinical decision, prioritization, or action.
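The flow just described (receive a DICOM, analyze a de-identified copy, post a passive "flag"/"(blank)" label for the worklist) can be sketched in a few lines. This is purely illustrative: pydicom is an assumed dependency, and run_model(), the identifier tag list, and the 0.5 threshold are hypothetical stand-ins, not part of the actual BraveCX software.

```python
# Hypothetical sketch of the triage flow described above; not the real BraveCX API.
import json
from copy import deepcopy

import pydicom

IDENTIFIER_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]  # illustrative subset

def run_model(ds: pydicom.Dataset) -> dict:
    """Placeholder for the proprietary deep-learning classifier (assumption)."""
    return {"pleural_effusion": 0.91, "pneumothorax": 0.08}

def deidentify(ds: pydicom.Dataset) -> pydicom.Dataset:
    """Blank direct identifiers on a *copy*, as the device description states."""
    clone = deepcopy(ds)
    for tag in IDENTIFIER_TAGS:
        if tag in clone:
            setattr(clone, tag, "")
    return clone

def triage(path: str, threshold: float = 0.5) -> dict:
    """Return a passive worklist label: 'flag' or '(blank)'."""
    ds = pydicom.dcmread(path)
    scores = run_model(deidentify(ds))
    suspicious = any(score >= threshold for score in scores.values())
    return {"sop_uid": str(ds.SOPInstanceUID),
            "label": "flag" if suspicious else "(blank)"}

# Output could be serialized as JSON or written into a new DICOM object
# stored alongside the original in PACS, per the description above.
print(json.dumps(triage("chest_ap.dcm")))
```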
The provided text describes the BraveCX device, a radiological computer-assisted triage and notification software that analyzes adult chest X-ray images for the presence of suspected critical findings (pleural effusion and/or pneumothorax).
Here's a breakdown of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the BraveCX device are not explicitly listed in a separate table. However, based on the "Summary of results" in the "9. Non-Clinical Performance Data" section and the comparison to the predicate device, the implied acceptance criteria are as follows (a sketch after this list shows how they might be encoded as pass/fail checks):
- For Pleural Effusion and for Pneumothorax (the criteria are identical for both findings):
  - ROC AUC > 0.95
  - Sensitivity: 95% CI lower bound above 0.85 (implied by "lower bounds of both sensitivity and specificity are above 0.85")
  - Specificity: 95% CI lower bound above 0.85 (same basis)
- Device Performance Time (Time-to-notification): Comparable to the predicate device.
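A minimal sketch of how these implied criteria might be encoded as pass/fail checks; the thresholds come from the summary above, while the function name and input names are illustrative:

```python
# Implied acceptance checks: AUC point estimate above 0.95, and the 95% CI
# lower bounds of sensitivity and specificity both above 0.85.
def meets_implied_criteria(auc: float, sens_lo: float, spec_lo: float) -> bool:
    return auc > 0.95 and sens_lo > 0.85 and spec_lo > 0.85

# Values from the external-testing table below (proportions, not percent):
print(meets_implied_criteria(auc=0.988, sens_lo=0.9067, spec_lo=0.9733))  # pleural effusion -> True
print(meets_implied_criteria(auc=0.972, sens_lo=0.9223, spec_lo=0.9649))  # pneumothorax -> True
```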
Reported Device Performance (External Independent Testing):
Metric | Pleural Effusion (BraveCX) | Pneumothorax (BraveCX) |
---|---|---|
ROC AUC | 0.988 (95% CI: 0.9885-0.9887) | 0.972 (95% CI: 0.9727-0.9729) |
Sensitivity | 92.62% (95% CI: 90.67%-94.27%) | 93.38% (95% CI: 92.23%-94.40%) |
Specificity | 98.11% (95% CI: 97.33%-98.71%) | 97.27% (95% CI: 96.49%-97.92%) |
Time-to-notification | 4.8-10.4 seconds (95% CI: 4.2-10.41s) for simultaneous prediction | 4.8-10.4 seconds (95% CI: 4.2-10.41s) for simultaneous prediction |
Predicate Device Performance (Lunit INSIGHT CXR Triage, K211733):
Metric | Pleural Effusion (Predicate) | Pneumothorax (Predicate) |
---|---|---|
ROC AUC | 0.9686 (95% CI: 0.9547 - 0.9824) | 0.9630 (95% CI: 0.9521 - 0.9739) |
Sensitivity | 89.86% (95% CI: 86.72 - 93.00) | 88.92% (95% CI: 85.60 - 92.24) |
Specificity | 93.48% (95% CI: 91.06 - 95.91) | 90.51% (95% CI: 88.18 - 92.83) |
Time-to-notification | 20.76 seconds (95% CI: 20.23 - 21.28) | 20.45 seconds (95% CI: 19.99 - 20.92) |
The BraveCX device's reported ROC AUC, sensitivity, and specificity all exceed the implied acceptance criteria (AUC > 0.95; sensitivity and specificity lower bounds > 0.85). Its time-to-notification (roughly 5-10 seconds) is not merely comparable to the predicate device's (roughly 20-21 seconds) but substantially faster.
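For context, standalone metrics of this kind (point estimates with 95% CIs) are typically computed from per-image scores. The sketch below shows one generic way to do it with scikit-learn and a percentile bootstrap; the submission does not say which CI method was actually used, and the data here are synthetic placeholders, not study data.

```python
# Illustrative metric computation with bootstrap 95% CIs; scikit-learn and
# NumPy assumed. y_true / y_score are synthetic, NOT the study data.
import numpy as np
from sklearn.metrics import roc_auc_score

def sensitivity_specificity(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1)); fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0)); fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def bootstrap_ci(metric, y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for any metric(y_true, y_score).
    Assumes both classes appear in each resample (true for balanced data)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample cases with replacement
        stats.append(metric(y_true[idx], y_score[idx]))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 500)                                     # synthetic labels
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, 500), 0, 1)   # synthetic scores
y_pred = (y_score >= 0.5).astype(int)

print("AUC:", roc_auc_score(y_true, y_score), bootstrap_ci(roc_auc_score, y_true, y_score))
print("Sens/Spec:", sensitivity_specificity(y_true, y_pred))
```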
2. Sample Sizes and Data Provenance for the Test Set
- Sample Size:
- Pleural Effusion: n=2,509 images (with n=867 positive cases)
- Pneumothorax: n=3,245 images (with n=2,114 positive cases)
- Data Provenance: The study used the MIMIC Chest X-ray (MIMIC-CXR) Database v2.0.0, the NIH Chest X-Ray dataset (NIH-CXR), and the CheXpert dataset (Stanford Hospital). These datasets represent the US population; the specific institutions are Beth Israel Deaconess Medical Center (Boston, MA), the NIH Clinical Center, and Stanford Hospital. The data are retrospective.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three.
- Qualifications: Board-certified Radiologists with at least 10 years of experience in specialty radiology training.
4. Adjudication Method for the Test Set
The document states, "All images were manually labelled by three board-certified Radiologists with at least 10 years of experience in specialty radiology training." It does not explicitly specify an adjudication method like 2+1 or 3+1, but implies that the agreement among these three experts established the ground truth. There is no mention of a specific tie-breaking rule or consensus process beyond all three being involved in labeling.
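Since no adjudication rule is named, the following sketch shows only one common convention, simple majority vote across the three readers, purely as an illustration of what a consensus label could look like; it is not a description of the actual process used for BraveCX.

```python
# Illustrative only: majority vote across three binary reads (1 = finding present).
from collections import Counter

def majority_label(reads: list[int]) -> int:
    """Consensus of an odd number of binary reads; ties cannot occur."""
    return Counter(reads).most_common(1)[0][0]

print(majority_label([1, 1, 0]))  # -> 1 (two of three readers flag the finding)
print(majority_label([0, 0, 1]))  # -> 0
```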
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI vs. without AI assistance was not explicitly mentioned or performed. The study described is a standalone performance evaluation of the AI algorithm.
6. Standalone Performance (Algorithm Only) Study
Yes, a standalone study was performed. The "Non-Clinical Performance Data" section describes an "external independent testing to assess the performance of BraveCX." This is a standalone evaluation of the algorithm's performance without a human in the loop. The results (ROC AUC, sensitivity, specificity, and time-to-notification) are reported for the algorithm itself.
7. Type of Ground Truth Used
The ground truth used for the test set was expert consensus (manual labeling by three board-certified radiologists).
8. Sample Size for the Training Set
The document mentions "Model training, validation, and testing sets were generated by stratified random partitions of 80%, 10%, and 10% respectively." While the exact total number of images used for training across all datasets is not explicitly stated, it implies that the training set constituted 80% of the total dataset used for development.
For the internal independent testing set, there were n=1,209 cases for pleural effusion and n=1,387 cases for pneumothorax. Assuming these counts correspond to the 10% test partition of the dataset from NHS Greater Glasgow and Clyde, the 80% training partition for that dataset would be roughly eight times as large (about 9,700 cases for pleural effusion).
The external test sets (MIMIC-CXR, NIH-CXR, CheXpert) comprised 2,509 images for pleural effusion and 3,245 for pneumothorax, but the absolute size of the training set behind the model that produced these results is not stated; only the 80% proportion is given.
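The stated 80/10/10 stratified random partition can be reproduced generically with two chained splits. The sketch below uses scikit-learn on synthetic placeholder data; the dataset size, variable names, and random seeds are illustrative, not taken from the submission.

```python
# Two-step stratified 80/10/10 split; scikit-learn assumed, data synthetic.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.arange(10_000).reshape(-1, 1)   # stand-ins for image identifiers
y = rng.integers(0, 2, 10_000)         # stand-ins for finding labels

# First peel off the 80% training partition, stratified by label...
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)
# ...then halve the remainder into 10% validation and 10% test.
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 8000 1000 1000
```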
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established through manual curation by three board-certified Radiologists with at least 10 years in specialist radiology training. This is explicitly stated: "Images used in the training, validation, and testing of the subject device were all manually-curated ground truths provided by three board-certified Radiologists with at least 10 years in specialist radiology training."