510(k) Data Aggregation (100 days)
Rayvolve PTX-PE
Rayvolve PTX-PE is a radiological computer-assisted triage and notification software that analyzes chest x-ray images (Postero-Anterior (PA) or Antero-Posterior (AP)) of patients 18 years of age or older for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax).
Rayvolve PTX-PE uses an artificial intelligence algorithm to analyze the images for features suggestive of critical findings and provides study-level output available in DICOM node servers for worklist prioritization or triage.
As a passive notification for prioritization-only software tool within the standard of care workflow, Rayvolve PTX-PE does not send a proactive alert directly to a trained medical specialist.
Rayvolve PTX-PE is not intended to direct attention to specific portions of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
Rayvolve PTX-PE is a software-only device designed to help healthcare professionals. It is a radiological computer-assisted triage and notification software that analyzes chest x-ray images (Postero-Anterior (PA) or Antero-Posterior (AP)) of patients 18 years of age or older for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). It is intended to work in combination with DICOM node servers.
Rayvolve PTX-PE has been developed to use the current edition of the DICOM image standard. DICOM is the international standard for transmitting, storing, retrieving, printing, processing, and displaying medical imaging.
Using the DICOM standard allows Rayvolve PTX-PE to interact with existing DICOM node servers (e.g., PACS) and clinical-grade image viewers. The device is designed to run on a cloud platform and connect to the radiology center's local network, through which it interacts with the DICOM Node server.
When remotely connected to a medical center's DICOM Node server, the software utilizes AI-based analysis algorithms to analyze chest X-rays for features suggestive of critical findings and provides study-level outputs to the DICOM node server for worklist prioritization. Following receipt of chest X-rays, the software device automatically analyzes each image to detect features suggestive of pneumothorax and/or pleural effusion.
Rayvolve PTX-PE filters and downloads from the DICOM Node server only those X-rays whose imaged anatomy matches its intended use.
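The filter-then-prioritize flow described above can be sketched in plain Python. All names here (`Study`, `eligible`, `prioritize_worklist`, the `flagged` field) are illustrative stand-ins, not the vendor's actual API; the attribute names mirror DICOM concepts such as BodyPartExamined and ViewPosition:

```python
from dataclasses import dataclass

@dataclass
class Study:
    """Illustrative stand-in for a chest X-ray study pulled from a DICOM node."""
    accession: str
    body_part: str          # cf. DICOM BodyPartExamined, e.g. "CHEST"
    view: str               # cf. DICOM ViewPosition: "PA" or "AP"
    flagged: bool = False   # True if the AI flags a suspected critical finding

def eligible(study: Study) -> bool:
    """Mimic the filtering step: only chest PA/AP images are analyzed."""
    return study.body_part == "CHEST" and study.view in ("PA", "AP")

def prioritize_worklist(studies: list[Study]) -> list[Study]:
    """Move flagged studies to the front; stable sort keeps arrival order otherwise."""
    return sorted(studies, key=lambda s: not s.flagged)

worklist = [
    Study("A1", "CHEST", "PA"),
    Study("A2", "CHEST", "AP", flagged=True),   # suspected pneumothorax
    Study("A3", "HAND", "PA"),                  # filtered out, never analyzed
]
analyzed = [s for s in worklist if eligible(s)]
ordered = prioritize_worklist(analyzed)
print([s.accession for s in ordered])  # → ['A2', 'A1']
```

Note the sketch only reorders a worklist; consistent with the passive-notification design, nothing here pushes an alert to a clinician.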
As a passive notification for prioritization-only software tool within the standard of care workflow, Rayvolve PTX-PE does not send a proactive alert directly to a trained health professional. Rayvolve PTX-PE is not intended to direct attention to a specific portion of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
Rayvolve PTX-PE is not intended to replace medical doctors. The instructions for use are systematically provided to each user and are used to train them in the use of Rayvolve.
AZmed's Rayvolve PTX-PE is a radiological computer-assisted triage and notification software designed to analyze chest x-ray images for the presence of suspected pleural effusion and/or pneumothorax. The device's performance was evaluated through a standalone study to demonstrate its effectiveness and substantial equivalence to a predicate device (Lunit INSIGHT CXR Triage, K211733).
Here's a breakdown of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for Rayvolve PTX-PE are implicitly derived from demonstrating performance comparable to or better than the predicate device, especially regarding AUC, sensitivity, and specificity for detecting pleural effusion and pneumothorax, as well as notification time. The predicate's performance metrics are used as a benchmark.
| Metric (Disease) | Acceptance Criteria (Implicit, based on Predicate K211733) | Reported Device Performance (Rayvolve PTX-PE) |
|---|---|---|
| Pleural Effusion | | |
| ROC AUC | > 0.95 (Predicate: 0.9686) | 0.9830 (95% CI: [0.9778, 0.9880]) |
| Sensitivity | 89.86% (Predicate) | 0.9134 (95% CI: [0.8874, 0.9339]) |
| Specificity | 93.48% (Predicate) | 0.9448 (95% CI: [0.9239, 0.9339]) |
| Performance Time | 20.76 seconds (Predicate) | 19.56 seconds (95% CI: [19.49, 19.58]) |
| Pneumothorax | | |
| ROC AUC | > 0.95 (Predicate: 0.9630) | 0.9857 (95% CI: [0.9809, 0.9901]) |
| Sensitivity | 88.92% (Predicate) | 0.9379 (95% CI: [0.9127, 0.9561]) |
| Specificity | 90.51% (Predicate) | 0.9178 (95% CI: [0.8911, 0.9561]) |
| Performance Time | 20.45 seconds (Predicate) | 19.43 seconds (95% CI: [19.42, 19.45]) |
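The sensitivity and specificity figures in the table are standard binomial proportions, and the confidence intervals are of the kind routinely computed with a Wilson score interval. The sketch below shows the arithmetic; the counts are invented for illustration and are not the study's actual data:

```python
import math

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity = TP / (TP + FN): fraction of diseased cases detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Specificity = TN / (TN + FP): fraction of healthy cases correctly cleared."""
    return tn / (tn + fp)

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Invented counts (NOT the study's data): 500 positives, 500 negatives.
tp, fn, tn, fp = 457, 43, 472, 28
print(sensitivity(tp, fn))        # → 0.914
print(specificity(tn, fp))        # → 0.944
print(wilson_ci(tp, tp + fn))     # 95% CI around the sensitivity estimate
```

The 510(k) summary does not state which interval method was actually used; the Wilson interval is shown only as one common choice for proportions at this sample size.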
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The test set for the standalone study consisted of 1000 radiographs for the Pneumothorax group and 1000 radiographs for the Pleural Effusion group. For each group, positive and negative images represented approximately 50%.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not provide details on the number of experts or their specific qualifications (e.g., years of experience as a radiologist) used to establish the ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not describe the adjudication method used for the test set (e.g., 2+1, 3+1, none).
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not conducted. The performance assessment was a standalone study evaluating the algorithm's performance only. The document explicitly states: "AZmed conducted a standalone performance assessment for Pneumothorax and Pleural Effusion in worklist prioritization and triage." Therefore, there is no effect size of how much human readers improve with AI vs. without AI assistance reported in this document.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance assessment (algorithm only without human-in-the-loop) was performed. The results presented in the table above and in the "Bench Testing" section are from this standalone evaluation.
7. The Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for a diagnostic AI device, it is standard practice to establish ground truth through a panel of qualified medical experts (e.g., radiologists) providing consensus reads, often with access to additional clinical information or follow-up. Given the nature of the findings (pleural effusion and pneumothorax on X-ray), it is highly likely that expert interpretations served as the ground truth.
8. The Sample Size for the Training Set
The document does not specify the sample size used for the training set of the AI model. The provided information focuses on the performance evaluation using an independent test set.
9. How the Ground Truth for the Training Set Was Established
The document does not detail how the ground truth for the training set was established. This information is typically proprietary to the developer's internal development process and is not always fully disclosed in 510(k) summaries.