Search Results
Found 4 results
510(k) Data Aggregation
(103 days)
AZmed
Rayvolve LN is a computer-aided detection software device to identify and mark regions in relation to suspected pulmonary nodules from 6 to 30 mm in size. It is designed to aid radiologists in reviewing the frontal (AP/PA) chest radiographs of patients 18 years of age or older acquired on digital radiographic systems as a second reader, and to be used with any DICOM Node server. Rayvolve LN provides adjunctive information only and is not a substitute for the original chest radiographic image.
The medical device is called Rayvolve LN. Rayvolve LN is one of the verticals of the Rayvolve product line. It is standalone software that uses deep learning techniques to detect and localize pulmonary nodules on chest X-rays. Rayvolve LN is intended to be used as an aided-diagnosis device and does not operate autonomously.
Rayvolve LN has been developed to use the current edition of the DICOM image standard. DICOM is the international standard for transmitting, storing, retrieving, printing, processing, and displaying medical imaging.
Using the DICOM standard allows Rayvolve LN to interact with existing DICOM Node servers (e.g., PACS) and clinical-grade image viewers. The device is designed to run on-premise or on a cloud platform, connected to the radiology center's local network, where it can interact with the DICOM Node server.
When remotely connected to a medical center's DICOM Node server, Rayvolve LN directly interacts with the DICOM files to output its prediction (the potential presence of pulmonary nodules): the original image appears first, followed by the image processed by Rayvolve LN.
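To make the DICOM interaction concrete, here is a minimal sketch of how a CAD tool can query a DICOM Node server using the open-source pynetdicom library. The host, port, AE title, and query attributes are hypothetical; the summary does not disclose Rayvolve LN's actual integration code.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

# Hypothetical Application Entity; real deployments configure their own AE titles.
ae = AE(ae_title="CAD_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# C-FIND query for computed-radiography (CR) studies on the DICOM Node (e.g., a PACS).
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.ModalitiesInStudy = "CR"
query.StudyInstanceUID = ""  # empty value asks the server to return this attribute

assoc = ae.associate("pacs.example.local", 104)  # hypothetical host and port
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
        query, StudyRootQueryRetrieveInformationModelFind
    ):
        # 0xFF00 / 0xFF01 are the DICOM "Pending" status codes carrying a match.
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```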
Rayvolve LN is not intended to replace medical doctors. The instructions for use are strictly and systematically provided to each user and are used to train them in Rayvolve LN's use.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The document doesn't explicitly list a table of acceptance criteria with predefined performance thresholds. Instead, it describes a comparative study whose implicit acceptance criteria are superiority to unaided readers and comparability to a predicate device. The reported device performance is then presented as the outcome of these studies.
However, we can infer the performance metrics used for evaluation.
Inferred Acceptance Criteria & Reported Device Performance:
Performance Metric | Acceptance Criteria (Implied) | Reader Performance (Unaided) | Reader Performance (Aided by Rayvolve LN) | Standalone Rayvolve LN Performance |
---|---|---|---|---|
Reader AUC (Diagnostic Accuracy) | Superior to unaided reader performance; comparable to predicate. | 0.8071 | 0.8583 | Not directly applicable |
Reader Sensitivity (per image) | Significantly improved from unaided reader. | 0.7975 | 0.8935 | Not directly applicable |
Reader Specificity (per image) | Improved from unaided reader. | 0.8235 | 0.8510 | Not directly applicable |
Standalone Sensitivity | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8847 |
Standalone Specificity | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8294 |
Standalone AUC (ROC) | Demonstrates accurate nodule detection. | Not applicable | Not applicable | 0.8408 |
Note: The direct "acceptance criteria" are implied by the study's primary and secondary objectives (i.e., improvement over unaided reading and comparability to a predicate device). The table above synthesizes the key performance metrics reported.
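For readers unfamiliar with these metrics, the sketch below shows how per-image sensitivity, specificity, and AUC are typically computed from binary labels and model scores. The data and the 0.5 operating threshold are hypothetical, not values from the submission.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical per-image data: 1 = nodule present, scores from a detection model.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.12, 0.78, 0.35, 0.40, 0.08, 0.66, 0.55])
y_pred = (y_score >= 0.5).astype(int)  # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)           # true positive rate
specificity = tn / (tn + fp)           # true negative rate
auc = roc_auc_score(y_true, y_score)   # threshold-free ranking metric
print(f"sensitivity={sensitivity:.4f} specificity={specificity:.4f} auc={auc:.4f}")
```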
Study Details:
1. Sample Sizes and Data Provenance:
- Test Set (Standalone Performance): 2181 radiographs. The data provenance is not explicitly stated in terms of country of origin, nor whether it was retrospective or prospective. It is described as "all the study types and views in the indication for use."
- Test Set (Clinical Data - MRMC Study): 400 cases. These cases were "randomly sampled from the validation dataset used for the standalone performance study," implying they are a subset of the 2181 radiographs mentioned above.
- Training Set: The sample size for the training set is not provided in the document.
2. Number of Experts for Ground Truth & Qualifications:
- Number of Experts: The document does not explicitly state the number of experts used to establish the ground truth for the test set. It mentions "ground truth binary labeling indicating the presence or absence of pulmonary nodules" for the MRMC study but doesn't detail how this ground truth was derived.
- Qualifications of Experts: Not specified.
3. Adjudication Method for the Test Set:
- The adjudication method for establishing ground truth is not explicitly detailed. It merely states "ground truth binary labeling indicating the presence or absence of pulmonary nodules."
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Yes, an MRMC study was done.
- Effect Size of Improvement:
- Reader AUC: Improved from 0.8071 (unaided) to 0.8583 (aided), a difference of 0.0511 (95% CI: 0.0501; 0.0518).
- Reader Sensitivity (per image): Improved from 0.7975 (unaided) to 0.8935 (aided), a difference of 0.096.
- Reader Specificity (per image): Improved from 0.8235 (unaided) to 0.8510 (aided), a difference of 0.0275.
5. Standalone Performance (Algorithm Only):
- Yes, a standalone performance assessment was done.
- Reported Metrics:
- Sensitivity: 0.8847 (95% CI: 0.8638; 0.9028)
- Specificity: 0.8294 (95% CI: 0.8066; 0.9028)
- AUC: 0.8408 (95% Bootstrap CI: 0.8272; 0.8548)
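The AUC above is reported with a bootstrap CI. A minimal percentile-bootstrap sketch follows; the resampling scheme and replicate count are assumptions, since the submission does not describe its exact procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI for AUC, resampling cases with replacement."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # AUC is undefined when a resample contains only one class
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)
```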
6. Type of Ground Truth Used:
- The ground truth for both the standalone and MRMC studies was described as "ground truth binary labeling indicating the presence or absence of pulmonary nodules." It does not specify if this was expert consensus, pathology, or outcomes data. However, the context of detecting nodules on chest radiographs for radiologists implies expert consensus as the most probable method.
7. Sample Size for the Training Set:
- Not provided in the document.
8. How Ground Truth for the Training Set was Established:
- Not provided in the document. The document only mentions that the device uses "deep learning techniques" and "supervised Deep learning," which implies labeled training data was used, but details on its establishment are absent.
(100 days)
AZmed
Rayvolve PTX-PE is a radiological computer-assisted triage and notification software that analyzes chest x-ray images (Postero-Anterior (PA) or Antero-Posterior (AP)) of patients 18 years of age or older for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax).
Rayvolve PTX-PE uses an artificial intelligence algorithm to analyze the images for features suggestive of critical findings and provides study-level output available in DICOM node servers for worklist prioritization or triage.
As a passive, prioritization-only notification software tool within the standard-of-care workflow, Rayvolve PTX-PE does not send a proactive alert directly to a trained medical specialist.
Rayvolve PTX-PE is not intended to direct attention to specific portions of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
Rayvolve PTX-PE is a software-only device designed to help healthcare professionals. It is a radiological computer-assisted triage and notification software that analyzes chest X-ray images (postero-anterior (PA) or antero-posterior (AP)) of patients 18 years of age or older for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). It is intended to work in combination with DICOM node servers.
Rayvolve PTX-PE has been developed to use the current edition of the DICOM image standard. DICOM is the international standard for transmitting, storing, retrieving, printing, processing, and displaying medical imaging.
Using the DICOM standard allows Rayvolve PTX-PE to interact with existing DICOM node servers (e.g., PACS) and clinical-grade image viewers. The device is designed to run on a cloud platform and be connected to the radiology center's local network. It can also interact with the DICOM Node server.
When remotely connected to a medical center's DICOM Node server, the software utilizes AI-based analysis algorithms to analyze chest X-rays for features suggestive of critical findings and provides study-level outputs to the DICOM node server for worklist prioritization. Following receipt of chest X-rays, the software device automatically analyzes each image to detect features suggestive of pneumothorax and/or pleural effusion.
Rayvolve PTX-PE filters the studies available on the DICOM Node server and downloads only those X-rays whose imaged anatomy matches its intended use.
As a passive, prioritization-only notification software tool within the standard-of-care workflow, Rayvolve PTX-PE does not send a proactive alert directly to a trained health professional. Rayvolve PTX-PE is not intended to direct attention to a specific portion of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.
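Conceptually, a triage output of this kind collapses per-finding scores into a single study-level prioritization flag. The sketch below is schematic only; the finding names, thresholds, and data structures are assumptions, not AZmed's implementation.

```python
from dataclasses import dataclass

# Hypothetical operating thresholds; a real device fixes these during validation.
THRESHOLDS = {"pneumothorax": 0.5, "pleural_effusion": 0.5}

@dataclass
class TriageOutput:
    study_uid: str
    suspected_findings: list  # findings whose score met the threshold
    prioritize: bool          # single study-level flag for the worklist

def triage_study(study_uid, scores):
    """Collapse per-finding scores into one study-level prioritization flag."""
    suspected = [name for name, s in scores.items() if s >= THRESHOLDS[name]]
    return TriageOutput(study_uid, suspected, prioritize=bool(suspected))

# Example: a study flagged for suspected pneumothorax only (fake study UID).
print(triage_study("1.2.3.4.5.6789", {"pneumothorax": 0.87, "pleural_effusion": 0.21}))
```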
Rayvolve PTX-PE is not intended to replace medical doctors. The instructions for use are strictly and systematically provided to each user and are used to train them in Rayvolve's use.
AZmed's Rayvolve PTX-PE is a radiological computer-assisted triage and notification software designed to analyze chest x-ray images for the presence of suspected pleural effusion and/or pneumothorax. The device's performance was evaluated through a standalone study to demonstrate its effectiveness and substantial equivalence to a predicate device (Lunit INSIGHT CXR Triage, K211733).
Here's a breakdown of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for Rayvolve PTX-PE are implicitly derived from demonstrating performance comparable to or better than the predicate device, especially regarding AUC, sensitivity, and specificity for detecting pleural effusion and pneumothorax, as well as notification time. The predicate's performance metrics are used as a benchmark.
Metric (Disease) | Acceptance Criteria (Implicit, based on Predicate K211733) | Reported Device Performance (Rayvolve PTX-PE) |
---|---|---|
Pleural Effusion | | |
ROC AUC | > 0.95 (Predicate: 0.9686) | 0.9830 (95% CI: [0.9778, 0.9880]) |
Sensitivity | 0.8986 (Predicate) | 0.9134 (95% CI: [0.8874, 0.9339]) |
Specificity | 0.9348 (Predicate) | 0.9448 (95% CI: [0.9239, 0.9339]) |
Performance Time | 20.76 seconds (Predicate) | 19.56 seconds (95% CI: [19.49, 19.58]) |
Pneumothorax | | |
ROC AUC | > 0.95 (Predicate: 0.9630) | 0.9857 (95% CI: [0.9809, 0.9901]) |
Sensitivity | 0.8892 (Predicate) | 0.9379 (95% CI: [0.9127, 0.9561]) |
Specificity | 0.9051 (Predicate) | 0.9178 (95% CI: [0.8911, 0.9561]) |
Performance Time | 20.45 seconds (Predicate) | 19.43 seconds (95% CI: [19.42, 19.45]) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The test set for the standalone study consisted of 1000 radiographs for the Pneumothorax group and 1000 radiographs for the Pleural Effusion group. In each group, positive and negative images each represented approximately 50%.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not provide details on the number of experts or their specific qualifications (e.g., years of experience as a radiologist) used to establish the ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not describe the adjudication method used for the test set (e.g., 2+1, 3+1, none).
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not conducted. The performance assessment was a standalone study evaluating the algorithm's performance only. The document explicitly states: "AZmed conducted a standalone performance assessment for Pneumothorax and Pleural Effusion in worklist prioritization and triage." Therefore, there is no effect size of how much human readers improve with AI vs. without AI assistance reported in this document.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance assessment (algorithm only without human-in-the-loop) was performed. The results presented in the table above and in the "Bench Testing" section are from this standalone evaluation.
7. The Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for a diagnostic AI device, it is standard practice to establish ground truth through a panel of qualified medical experts (e.g., radiologists) providing consensus reads, often with access to additional clinical information or follow-up. Given the nature of the findings (pleural effusion and pneumothorax on X-ray), it is highly likely that expert interpretations served as the ground truth.
8. The Sample Size for the Training Set
The document does not specify the sample size used for the training set of the AI model. The provided information focuses on the performance evaluation using an independent test set.
9. How the Ground Truth for the Training Set Was Established
The document does not detail how the ground truth for the training set was established. This information is typically proprietary to the developer's internal development process and is not always fully disclosed in 510(k) summaries.
(112 days)
AZmed SAS
Rayvolve is a computer-assisted detection and diagnosis (CAD) software device to assist radiologists and emergency physicians in detecting fractures during the review of radiographs of the musculoskeletal system. Rayvolve is indicated for adult and pediatric populations (≥ 2 years).
Rayvolve is indicated for radiographs of the following industry-standard radiographic views and study types.
Study Type (Anatomic Area of Interest) | Radiographic Views* Supported |
---|---|
Ankle | AP, Lateral, Oblique |
Clavicle | AP, AP Angulated View |
Elbow | AP, Lateral |
Forearm | AP, Lateral |
Hip | AP, Frog-leg Lateral |
Humerus | AP, Lateral |
Knee | AP, Lateral |
Pelvis | AP |
Shoulder | AP, Lateral, Axillary |
Tibia/fibula | AP, Lateral |
Wrist | PA, Lateral, Oblique |
Hand | PA, Lateral, Oblique |
Foot | AP, Lateral, Oblique |

*Definitions of anatomic area of interest and radiographic views are consistent with the ACR-SPR-SSR Practice Parameter for the Performance of Radiography of the Extremities guideline.
The medical device is called Rayvolve. It is standalone software that uses deep learning techniques to detect and localize fractures on osteoarticular X-rays. Rayvolve is intended to be used as an aided-diagnosis device and does not operate autonomously.
Rayvolve has been developed to use the current edition of the DICOM image standard. DICOM is the international standard for transmitting, storing, printing, processing, and displaying medical imaging.
Using the DICOM standard allows Rayvolve to interact with existing DICOM Node servers (e.g., PACS) and clinical-grade image viewers. The device is designed to run on-premise or on a cloud platform, connected to the radiology center's local network, where it can interact with the DICOM Node server.
When remotely connected to a medical center's DICOM Node server, Rayvolve directly interacts with the DICOM files to output its prediction (the potential presence or absence of fracture): the initial image appears first, followed by the image processed by Rayvolve.
Rayvolve is not intended to replace medical doctors. The instructions for use are strictly and systematically provided to each user and are used to train them in Rayvolve's use.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Rayvolve:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly listed in a single table with defined thresholds. However, based on the performance data presented, the implicit acceptance criteria for standalone performance appear to be:
- High Sensitivity, Specificity, and AUC for fracture detection.
- Non-inferiority of the retrained algorithm (including pediatric population) compared to the predicate device, specifically by ensuring the lower bound of the difference in AUCs (Retrained - Predicate) for each anatomical area is greater than -0.05.
- Superior diagnostic accuracy of readers when aided by Rayvolve compared to unaided readers, as measured by AUC in an MRMC study.
- Improved sensitivity and specificity for readers when aided by Rayvolve.
Table: Acceptance Criteria (Implicit) and Reported Device Performance
Acceptance Criterion (Implicit) | Reported Device Performance (Standalone & MRMC Studies) |
---|---|
Standalone Performance (Pediatric Population Inclusion) | |
High Sensitivity for fracture detection in pediatric population (implicitly > 0.90 based on predicate). | 0.9611 (95% CI: 0.9480; 0.9710) |
High Specificity for fracture detection in pediatric population (implicitly > 0.80 based on predicate). | 0.8597 (95% CI: 0.8434; 0.8745) |
High AUC for fracture detection in pediatric population (implicitly > 0.90 based on predicate). | 0.9399 (95% Bootstrap CI: 0.9330; 0.9470) |
Non-inferiority of Retrained Algorithm (compared to Predicate for adult & pediatric) | |
Lower bound of difference in AUCs (Retrained - Predicate) > -0.05 for all anatomical areas. | "The lower bounds of the differences in AUCs for the Retrained model compared to the Predicate model are all greater than -0.05, indicating that the Retrained model's performance is not inferior to the Predicate model across all organs." (Specific values for each organ are not provided, only the conclusion that they meet the criterion.) The Total AUC for Retrained is 0.98781 (0.98247; 0.99048) compared to Predicate 0.98607 (0.98104; 0.99058). Overlapping CIs and the non-inferiority statement support this. This suggests the inclusion of pediatric data did not degrade performance in adult data. |
MRMC Clinical Reader Study | |
Diagnostic accuracy (AUC) of readers aided by Rayvolve is superior to unaided readers. | Reader AUC improved from 0.84602 to 0.89327, a difference of 0.04725 (95% CI: 0.03376; 0.061542) (p=0.0041). This demonstrates statistically significant superiority. |
Reader sensitivity is improved with Rayvolve assistance. | Reader sensitivity improved from 0.86561 (95% Wilson's CI: 0.84859, 0.88099) to 0.9554 (95% Wilson's CI: 0.94453, 0.96422). |
Reader specificity is improved with Rayvolve assistance. | Reader specificity improved from 0.82645 (95% Wilson's CI: 0.81187, 0.84012) to 0.83116 (95% Wilson's CI: 0.81673, 0.84467). |
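The reader sensitivities and specificities above are reported with Wilson score intervals, which can be computed directly from counts. The counts in the example below are hypothetical.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (e.g., sensitivity)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Hypothetical: 900 correctly detected fractures out of 1000 positive readings.
print(wilson_ci(900, 1000))  # -> approximately (0.8798, 0.9171)
```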
2. Sample Sizes and Data Provenance
- Test Set (Pediatric Standalone Study):
  - Sample Size: 3016 radiographs.
  - Data Provenance: Not explicitly stated regarding country of origin. The study was retrospective.
- Test Set (Adult Predicate Standalone Study, for comparison):
  - Sample Size: 2626 radiographs.
  - Data Provenance: Not explicitly stated regarding country of origin.
- Test Set (MRMC Clinical Reader Study):
  - Sample Size: 186 cases.
  - Data Provenance: Not explicitly stated regarding country of origin. The study was retrospective.
- Training Set:
  - Sample Size: 150,000 osteoarticular radiographs (expanded from 115,000 for the predicate device).
  - Data Provenance: Not explicitly stated regarding country of origin.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Number of Experts: A panel of three (3) US board-certified MSK radiologists.
- Qualifications of Experts: US board-certified MSK (Musculoskeletal) radiologists. Years of experience are not specified, but board certification implies a certain level of expertise.
4. Adjudication Method for the Test Set (Ground Truth Establishment)
- Method: "Each case had been previously evaluated by a panel of three US board-certified MSK radiologists to provide ground truth binary labeling the presence or absence of fracture and the localization information for fractures." This implies a consensus-based ground truth, likely achieved through discussion and agreement among the three radiologists. The term "panel" suggests a collaborative review. No specific "2+1" or "3+1" rule is mentioned, but "panel of three" indicates a rigorous approach to consensus.
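The summary does not spell out the panel's decision rule; one common convention for a three-reader panel is a simple majority vote, sketched below as an assumption rather than AZmed's documented method.

```python
def panel_consensus(reader_labels):
    """Majority vote over an odd-sized panel (1 = fracture present, 0 = absent)."""
    return int(sum(reader_labels) > len(reader_labels) / 2)

print(panel_consensus([1, 1, 0]))  # two of three readers call a fracture -> 1
```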
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: Yes, a fully crossed multi-reader, multi-case (MRMC) retrospective reader study was done.
- Effect Size of Improvement:
- AUC Improvement: Reader AUC was significantly improved from 0.84602 (unaided) to 0.89327 (aided), resulting in a difference (effect size) of 0.04725 (95% CI: 0.03376; 0.061542) (p=0.0041).
- Sensitivity Improvement: Reader sensitivity improved from 0.86561 (unaided) to 0.9554 (aided).
- Specificity Improvement: Reader specificity improved from 0.82645 (unaided) to 0.83116 (aided).
6. Standalone (Algorithm Only) Performance Study
- Was it done?: Yes, standalone performance assessments were conducted for both the pediatric population inclusion and the retrained algorithm.
- Pediatric Standalone Study: Sensitivity (0.9611), Specificity (0.8597), and AUC (0.9399) were reported.
- Retrained Algorithm Standalone Study: Non-inferiority was assessed by comparing AUCs against the predicate device's standalone performance, showing improvements or non-inferiority across body parts (e.g., Total AUC for retrained was 0.98781 vs. predicate 0.98607).
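As a concrete illustration of the non-inferiority criterion described above (lower bound of the AUC difference, Retrained - Predicate, greater than -0.05), here is a paired-bootstrap sketch. The submission does not state which statistical procedure was actually used, so the method shown is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_noninferior(y_true, score_new, score_old, margin=-0.05, n_boot=2000):
    """Paired bootstrap: lower 95% CI bound of AUC(new) - AUC(old) vs. a margin."""
    y = np.asarray(y_true)
    s_new, s_old = np.asarray(score_new), np.asarray(score_old)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))
        if y[idx].min() == y[idx].max():
            continue  # AUC is undefined on single-class resamples
        diffs.append(roc_auc_score(y[idx], s_new[idx]) -
                     roc_auc_score(y[idx], s_old[idx]))
    lower = np.quantile(diffs, 0.025)
    return lower, lower > margin  # non-inferior if the lower bound exceeds the margin
```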
7. Type of Ground Truth Used
- For Test Sets (Standalone & MRMC): Expert consensus by a panel of three US board-certified MSK radiologists. They provided binary labeling (presence/absence of fracture) and localization information (bounding boxes) for fractures. This is a form of expert consensus.
8. Sample Size for the Training Set
- Sample Size: 150,000 osteoarticular radiographs.
9. How Ground Truth for the Training Set was Established
The document states that the "training dataset for the subject device was expanded to include 150,000 osteoarticular radiographs". While it confirms the size and composition (mixed adult/pediatric, osteoarticular radiographs), it does not explicitly describe how the ground truth for this training set was established. It mentions that the "previous truthed predicate test dataset was strictly walled off and not included in the new training dataset," implying that the training data was "truthed," but the method (e.g., expert review, automated labeling, etc.) is not detailed. Given the large training set size, it is common for such datasets to be curated through a combination of established clinical reports, expert review, or semi-automated processes, but the specific methodology is not provided in this summary.
(133 days)
AZmed SAS
Rayvolve is a computer-assisted detection and diagnosis (CAD) software device to assist radiologists and emergency physicians in detecting fractures during the review of radiographs of the musculoskeletal system. Rayvolve is indicated for adults only (≥ 22 years old). Rayvolve is indicated for radiographs of the following industry-standard radiographic views and study types.
Study Type (Anatomic Area of Interest) | Radiographic Views Supported |
---|---|
Ankle | Frontal, Lateral, Oblique |
Clavicle | Frontal |
Elbow | Frontal, Lateral |
Forearm | Frontal, Lateral |
Hip | Frontal, Frog-leg Lateral |
Humerus | Frontal, Lateral |
Knee | Frontal, Lateral |
Pelvis | Frontal |
Shoulder | Frontal, Lateral, Axillary |
Tibia/fibula | Frontal, Lateral |
Wrist | Frontal, Lateral, Oblique |
Hand | Frontal, Lateral |
Foot | Frontal, Lateral |
*For the purposes of this table, "Frontal" is considered inclusive of both posteroanterior (PA) and anteroposterior (AP) views.
+Definitions of anatomic area of interest and radiographic views are consistent with the American College of Radiology (ACR) standards and guidelines.
The medical device is called Rayvolve. It is standalone software that uses deep learning techniques to detect and localize fractures on osteoarticular X-rays. Rayvolve is intended to be used as an aided-diagnosis device and does not operate autonomously. It is intended to work in combination with Picture Archiving and Communication System (PACS) servers. When remotely connected to a medical center's PACS server, Rayvolve directly interacts with the DICOM files to output its prediction (potential presence of fracture). Rayvolve is not intended to replace medical doctors. The instructions for use are strictly and systematically provided to each user and are used to train them in Rayvolve's use.
Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion (Primary Endpoint) | Reported Device Performance | Study Type |
---|---|---|
Standalone Study: Characterize the detection accuracy of Rayvolve for detecting adult patient fractures (AUC, Sensitivity, Specificity) | AUC: 0.98607 (95% CI: 0.98104; 0.99058); Sensitivity: 0.98763 (95% CI: 0.97559; 0.99421); Specificity: 0.88558 (95% CI: 0.87119; 0.89882) | Standalone Bench Testing |
MRMC Study: Diagnostic accuracy of readers aided by Rayvolve is superior to unaided readers (comparison of ROC AUCs). H0: no statistically significant difference (t-test p > 0.05); H1: statistically significant difference (t-test p ≤ 0.05) | | |
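The stated hypotheses compare aided and unaided reader AUCs with a t-test. A simplified per-reader paired t-test is sketched below with hypothetical AUCs; full MRMC analyses (e.g., the Obuchowski-Rockette or Dorfman-Berbaum-Metz methods) additionally model case-level correlation, which this sketch ignores.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-reader AUCs: unaided vs. aided (one pair per reader).
auc_unaided = np.array([0.83, 0.85, 0.84, 0.86, 0.82, 0.85])
auc_aided = np.array([0.88, 0.89, 0.90, 0.92, 0.86, 0.91])

t_stat, p_value = ttest_rel(auc_aided, auc_unaided)
print(f"mean difference = {np.mean(auc_aided - auc_unaided):.4f}, p = {p_value:.4g}")
```

A significant p-value under this simplified test would correspond to rejecting H0 in favor of the superiority hypothesis H1.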