510(k) Data Aggregation (266 days)
Critical Care Suite is a computer-aided triage and notification device that analyzes frontal chest X-ray images for the presence of pre-specified critical findings (pneumothorax). Critical Care Suite identifies images with critical findings to enable case prioritization or triage in the PACS/workstation.
Critical Care Suite is intended for notification only and does not provide diagnostic information beyond the notification. Critical Care Suite should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis. It is not intended to replace review of the X-ray image by a qualified physician.
Critical Care Suite is indicated for adult-size patients.
Critical Care Suite is a software module that employs AI-based image analysis algorithms to identify pre-specified critical findings (pneumothorax) in frontal chest X-ray images and flag the images in the PACS/workstation to enable prioritized review by the radiologist.
Critical Care Suite employs a sequence of vendor and system agnostic AI algorithms to ensure that the input images are suitable for the pneumothorax detection algorithm and to detect the presence of pneumothorax in frontal chest X-rays:
- The Quality Care Suite algorithms conduct an automated check to confirm that the input image is compatible with the pneumothorax detection algorithm and that the lung field coverage is adequate;
- the PTX Classifier determines whether a pneumothorax is present in the image.
If a pneumothorax is detected, Critical Care Suite enables case prioritization or triage through direct communication of the Critical Care Suite notification during image transfer to the PACS. It can also produce a Secondary Capture DICOM image that presents the AI results to the radiologist.
When deployed on a Digital Projection Radiographic System such as the Optima XR240amx, Critical Care Suite runs automatically after image acquisition. The Quality Care Suite algorithms produce an on-device notification if the lung field has atypical positioning, giving the technologist the opportunity to make a correction before sending the image to the PACS. The Critical Care Suite output is then sent directly to the PACS upon exam closure, where images with a suspicious finding are flagged for prioritized review by the radiologist.
In parallel, an on-device technologist notification is generated 15 minutes after exam closure, indicating which cases Critical Care Suite prioritized in the PACS. The technologist notification is contextual and does not provide any diagnostic information; it is not intended to inform any clinical decision, prioritization, or action.
The Digital Projection Radiographic System intended use remains unchanged in that the system is used for general purpose diagnostic radiographic imaging.
Here's a breakdown of the acceptance criteria and the study that demonstrates the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance:
| Metric | Acceptance Criteria (Predicate Device HealthPNX - K190362) | Critical Care Suite Reported Performance |
|---|---|---|
| ROC AUC | > 0.95 | 0.9607 (95% CI [0.9491, 0.9724]) |
| Specificity | 93% | 93.5% (95% CI [91.1%, 95.8%]) |
| Sensitivity | 93% | 84.3% (95% CI [80.6%, 88.0%]) |
| AUC on large pneumothorax | Not assessed | 0.9888 (95% CI [0.9810, 0.9965]) |
| Sensitivity on large pneumothorax | Not assessed | 96.3% (95% CI [93.3%, 99.2%]) |
| AUC on small pneumothorax | Not assessed | 0.9389 (95% CI [0.9209, 0.9570]) |
| Sensitivity on small pneumothorax | Not assessed | 75% (95% CI [69.2%, 80.8%]) |
| Timing of notification (delay in PACS worklist) | 22 seconds (HealthPNX) | No delay (immediately on PACS receipt) |
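The confidence intervals in the table can be reproduced from the stated sample sizes with a normal-approximation (Wald) binomial interval. The sketch below is illustrative only: the true-positive and true-negative counts (317 and 400) are back-calculated from the reported percentages and sample sizes, and are an assumption rather than figures quoted in the 510(k) summary.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% confidence interval for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Counts back-calculated from the reported rates (assumption, for illustration):
# sensitivity 84.3% of 376 pneumothorax-present cases -> ~317 true positives
# specificity 93.5% of 428 pneumothorax-absent cases  -> ~400 true negatives
sens_lo, sens_hi = wald_ci(317, 376)
spec_lo, spec_hi = wald_ci(400, 428)

print(f"sensitivity 95% CI: [{sens_lo:.1%}, {sens_hi:.1%}]")  # [80.6%, 88.0%]
print(f"specificity 95% CI: [{spec_lo:.1%}, {spec_hi:.1%}]")  # [91.1%, 95.8%]
```

The recovered intervals match the table to one decimal place, which suggests (but does not confirm) that a simple normal approximation was used in the submission.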
2. Sample size and Data Provenance for the Test Set:
- Sample Size: 804 frontal chest X-ray images (N=376 for pneumothorax present; N=428 for pneumothorax absent).
- Data Provenance: Collected in North America, representative of the intended population. The text does not explicitly state whether the study was retrospective or prospective, though a "collected dataset" assembled for evaluation typically implies retrospective analysis of existing images.
3. Number of Experts and Qualifications for Ground Truth of the Test Set:
- Number of Experts: 3 independent US-board certified radiologists.
- Qualifications: "US-board certified radiologists." No specific years of experience or subspecialty are mentioned beyond board certification.
4. Adjudication Method for the Test Set:
- The text states the ground truth was "established by 3 independent US-board certified radiologists" but does not detail a specific adjudication method such as 2+1 or 3+1. This suggests a consensus-based approach in which the radiologists reviewed images independently, likely resolving discrepancies through discussion to reach a final determination.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study directly comparing human readers with AI assistance vs. without AI assistance was not reported in this summary. The study focused on the standalone diagnostic performance of the AI algorithm.
6. Standalone (Algorithm Only) Performance Study:
- Yes, a standalone performance study of the algorithm without human-in-the-loop was done. The reported metrics (ROC AUC, Sensitivity, Specificity) are direct measurements of the algorithm's performance against the established ground truth.
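A standalone ROC AUC of this kind has a simple interpretation: it is the probability that a randomly chosen pneumothorax-present image receives a higher algorithm score than a randomly chosen pneumothorax-absent image (the Mann-Whitney U statistic, normalized). A minimal sketch with made-up scores, assuming nothing about the device's actual outputs:

```python
def roc_auc(pos_scores: list[float], neg_scores: list[float]) -> float:
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half-correct (Mann-Whitney U / (n_pos * n_neg))."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical algorithm scores, for illustration only:
pos = [0.9, 0.6, 0.8]  # pneumothorax present
neg = [0.7, 0.5]       # pneumothorax absent
print(roc_auc(pos, neg))  # 5 of 6 pairs correctly ranked -> ~0.833
```

This pairwise definition is what makes the standalone metric independent of any single operating threshold, unlike the sensitivity/specificity pair reported at one cutoff.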
7. Type of Ground Truth Used:
- Expert Consensus: The ground truth was established by "3 independent US-board certified radiologists." This indicates an expert consensus approach.
8. Sample Size for the Training Set:
- The document does not explicitly state the sample size for the training set. It mentions the algorithm was "trained on annotated medical images" but provides no further details on the quantity of images used for training.
9. How the Ground Truth for the Training Set Was Established:
- The document states the device utilizes a "deep learning algorithm trained on annotated medical images." While it doesn't explicitly describe the method for establishing ground truth for the training set, it is implied that these "annotated medical images" had pre-existing labels or were labeled by experts for the purpose of training the AI.