HealthCXR
The Zebra HealthCXR device is a software workflow tool designed to aid the clinical assessment of adult Chest X-Ray cases with features suggestive of pleural effusion in the medical care environment. HealthCXR analyzes cases using an artificial intelligence algorithm to identify suspected findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage. HealthCXR is not intended to direct attention to specific portions or anomalies of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making, nor is it intended to rule out pleural effusion or otherwise preclude clinical assessment of X-Ray cases.
The HealthCXR solution is a software product that automatically identifies suspected findings on chest x-rays (e.g., pleural effusion) and notifies the PACS/workstation of the presence of this critical finding in the scan. This notification allows for worklist prioritization of the identified scan and assists clinicians in viewing the prioritized scan before others. The device's aim is to aid in the prioritization and triage of radiological medical images only.
The software is automatic and is capable of analyzing PA or AP chest x-rays. If a suspected finding is detected in a scan, an alert is automatically sent to the PACS/workstation used by the radiologist or to a standalone desktop application, in parallel with the ongoing standard of care. The PACS/workstation prioritizes and displays the study through its worklist interface. The standalone desktop application, Zebra Worklist, includes a compressed preview image meant for informational purposes only; it is not intended for diagnostic use.
The HealthCXR device works in parallel to and in conjunction with the standard-of-care workflow. After a chest x-ray has been performed, a copy of the study is automatically retrieved and processed by the HealthCXR device, which performs the analysis of the study and returns a notification about the relevant pathology to the PACS/workstation, which prioritizes it through the worklist interface; alternatively, the Zebra Worklist notifies the user through the standalone desktop application. The clinician is then able to review the study earlier than in the standard-of-care workflow.
The software does not recommend treatment or provide a diagnosis. It is meant as a tool to assist in improved workload prioritization of critical cases. The final diagnosis is provided by a radiologist after reviewing the scan itself.
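The parallel triage loop described above can be sketched in a few lines. This is an illustrative sketch only, not Zebra's implementation; the function and field names are assumptions.

```python
# Hypothetical sketch of a parallel-triage workflow: the AI analyzes a
# copy of the study and, if a suspected finding is present, asks the
# PACS/workstation to prioritize the case. No diagnosis, no localization.

def triage_study(study, analyze, notify_pacs):
    """Analyze a copy of the study; on a suspected finding, request
    worklist prioritization. The original study proceeds through the
    standard-of-care worklist regardless of the result."""
    result = analyze(study)  # case-level output only
    if result["suspected_finding"]:
        notify_pacs(study["id"], priority="high")  # worklist prioritization
    return result
```

The key design point from the summary is that the notification path is advisory: the clinician's standard workflow continues unchanged whether or not the tool fires.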
The following modules compose the HealthCXR software for Pleural Effusion:
Data input and validation: Following retrieval of a study, the validation feature assesses the input data (i.e., age, modality, view) to ensure compatibility for processing by the algorithm.
Pleural Effusion algorithm: Once a study has been validated, the algorithm analyzes the frontal chest x-ray for detection of suspected findings suggestive of pleural effusion.
IMA Integration feature: The results of a successful study analysis are provided to IMA, which then sends them to the PACS/workstation for prioritization.
Error codes feature: In the case of a study failure during data validation or the analysis by the algorithm, an error is provided to the system.
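The data-validation and error-code modules above amount to a gating check before the algorithm runs. A minimal sketch follows; the specific field names, thresholds, and error codes are assumptions, not values from the submission.

```python
# Hypothetical pre-analysis validation gate (age, modality, view) with
# error codes, mirroring the modules described in the summary.
ERR_INVALID_AGE = "E001"       # outside the adult indication (assumed >= 18)
ERR_INVALID_MODALITY = "E002"  # not a chest x-ray modality (DICOM CR/DX)
ERR_INVALID_VIEW = "E003"      # not a frontal (PA or AP) view

def validate_study(meta):
    """Check study metadata before analysis; return (ok, error_code)."""
    if meta.get("age", 0) < 18:
        return False, ERR_INVALID_AGE
    if meta.get("modality") not in ("CR", "DX"):
        return False, ERR_INVALID_MODALITY
    if meta.get("view") not in ("PA", "AP"):
        return False, ERR_INVALID_VIEW
    return True, None
```

On failure, the error code is reported to the system and the study simply continues through the standard worklist without a triage notification.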
HealthCXR Acceptance Criteria and Performance Study
This response details the acceptance criteria and the study conducted to prove the HealthCXR device meets these criteria, based on the provided FDA 510(k) summary.
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria (implicit from "exceeds the required technical method" and "substantially equivalent to the predicate," or explicit goals) | Reported Device Performance |
| --- | --- | --- |
| Area Under the Curve (AUC) | > 0.95 (explicitly stated for effective triage) | 0.9885 (95% CI: [0.9815, 0.9956]) |
| Operating Point 1: Equal Sensitivity & Specificity | Substantially equivalent to predicate device performance | Sensitivity: 96.74% (95% CI: [92.79; 96.48]); Specificity: 93.17% (95% CI: [89.57; 95.58]) |
| Operating Point 2: High Specificity | Substantially equivalent to predicate device performance | Sensitivity: 93.84% (95% CI: [90.36; 96.12]); Specificity: 97.12% (95% CI: [94.43; 98.53]) |
| Mean Processing Time | Substantially equivalent to predicate device performance | Operating Point 1: 27.76 seconds; Operating Point 2: 20.18 seconds |
Note: The acceptance criteria for sensitivity and specificity are implicitly derived from the statement that the device met the performance goal and was found to be "substantially equivalent to the predicate device, HealthPNX (K190362)." The document does not explicitly state numerical thresholds for the predicate's performance for these specific metrics, but rather that the HealthCXR results were comparable. The AUC goal of >0.95 for effective triage is explicitly stated.
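For readers unfamiliar with these metrics, the sketch below shows how sensitivity, specificity, and AUC are conventionally computed for a binary triage score. The data and threshold are illustrative, not the submission's actual scores.

```python
# Standard definitions used for the metrics in the table above.

def sensitivity_specificity(labels, scores, threshold):
    """labels: 1 = effusion present, 0 = absent; scores: algorithm output."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Probability that a random positive outscores a random negative
    (Mann-Whitney formulation; ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The two operating points in the table correspond to two different choices of `threshold` on the same underlying score, which is why a single AUC covers both.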
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 554 anonymized Chest X-ray cases.
- Data Provenance: Retrospective cohort from the USA and Israel.
- 276 cases positive for Pleural Effusion.
- 278 cases negative for Pleural Effusion (including confounding imaging factors).
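Combining this test-set breakdown with the Operating Point 1 rates gives a rough sense of the case counts involved. This is back-of-the-envelope arithmetic from the reported figures, not counts stated in the document.

```python
# Approximate confusion-matrix counts at Operating Point 1, derived from
# the reported test-set sizes and sensitivity/specificity (illustrative).
positives, negatives = 276, 278
sensitivity, specificity = 0.9674, 0.9317

true_positives = round(positives * sensitivity)   # effusion cases flagged
true_negatives = round(negatives * specificity)   # negative cases passed
false_negatives = positives - true_positives      # effusion cases missed
false_positives = negatives - true_negatives      # false alarms
```

At these rates, roughly 9 of 276 effusion cases would go unflagged and about 19 of 278 negatives would be prioritized unnecessarily, which is consistent with a triage tool that does not replace the radiologist's read.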
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3)
- Qualifications: US Board-Certified Radiologists.
4. Adjudication Method for the Test Set
The document states the data was "truthed (ground truth) by three US Board-Certified Radiologists." It does not explicitly state the adjudication method (e.g., 2+1, 3+1). It can be inferred that a consensus or majority opinion was used to establish the ground truth among the three, but the specific process isn't detailed.
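Since the adjudication scheme is not specified, a simple majority vote among the three readers is one plausible interpretation; the sketch below is that assumption made explicit, not the method actually used.

```python
# Hypothetical majority-vote adjudication for a three-reader panel.

def majority_label(reads):
    """Ground-truth label from an odd number of binary reads
    (1 = pleural effusion present, 0 = absent)."""
    assert len(reads) % 2 == 1, "odd panel size avoids ties"
    return int(sum(reads) > len(reads) / 2)
```

An odd panel size guarantees a decision on every case; schemes like 2+1 or 3+1 instead bring in an additional adjudicator only when the initial readers disagree.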
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No MRMC comparative effectiveness study involving human readers and AI assistance was reported in this document. The study focused on the standalone performance of the AI algorithm.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
Yes, a standalone performance study was done. The document explicitly states: "The stand-alone detection accuracy was measured on this cohort respective to the ground truth."
7. The Type of Ground Truth Used
Expert consensus (established by three US Board-Certified Radiologists).
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only describes the validation data set.
9. How the Ground Truth for the Training Set was Established
The document does not provide information on how the ground truth for the training set was established. It only details the establishment of ground truth for the test/validation set.