
510(k) Data Aggregation

    K Number: K220164
    Device Name: Rayvolve
    Manufacturer:
    Date Cleared: 2022-06-02 (133 days)
    Product Code:
    Regulation Number: 892.2090
    Intended Use

    Rayvolve is a computer-assisted detection and diagnosis (CAD) software device to assist radiologists and emergency physicians in detecting fractures during the review of radiographs of the musculoskeletal system. Rayvolve is indicated for adults only (≥ 22 years old). Rayvolve is indicated for radiographs of the following industry-standard radiographic views and study types.

    Study Type (Anatomic Area of Interest) / Radiographic Views Supported:

    • Ankle: Frontal, Lateral, Oblique
    • Clavicle: Frontal
    • Elbow: Frontal, Lateral
    • Forearm: Frontal, Lateral
    • Hip: Frontal, Frog Leg Lateral
    • Humerus: Frontal, Lateral
    • Knee: Frontal, Lateral
    • Pelvis: Frontal
    • Shoulder: Frontal, Lateral, Axillary
    • Tibia/fibula: Frontal, Lateral
    • Wrist: Frontal, Lateral, Oblique
    • Hand: Frontal, Lateral
    • Foot: Frontal, Lateral

    *For the purposes of this table, "Frontal" is considered inclusive of both posteroanterior (PA) and anteroposterior (AP) views.

    +Definitions of anatomic area of interest and radiographic views are consistent with the American College of Radiology (ACR) standards and guidelines.

    Device Description

    The medical device is called Rayvolve. It is standalone software that uses deep learning techniques to detect and localize fractures on osteoarticular X-rays. Rayvolve is intended to be used as an aid to diagnosis and does not operate autonomously. It is intended to work in combination with Picture Archiving and Communication System (PACS) servers: when remotely connected to a medical center's PACS server, Rayvolve interacts directly with the DICOM files to output its prediction (the potential presence of a fracture). Rayvolve is not intended to replace physicians. The instructions for use are systematically provided to each user and are used to train them on Rayvolve's use.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Standalone Study (Standalone Bench Testing)
    • Acceptance criterion (primary endpoint): Characterize the detection accuracy of Rayvolve for detecting fractures in adult patients (AUC, sensitivity, specificity).
    • Reported device performance: AUC 0.98607 (95% CI: 0.98104, 0.99058); sensitivity 0.98763 (95% CI: 0.97559, 0.99421); specificity 0.88558 (95% CI: 0.87119, 0.89882).

    MRMC Study (Clinical Reader Study)
    • Acceptance criterion (primary endpoint): Diagnostic accuracy of readers aided by Rayvolve is superior to that of unaided readers (comparison of ROC AUCs). H0 (no statistical difference): p > 0.05; H1 (statistical difference): p < 0.05.
    • Reported device performance: Reader AUC significantly improved from 0.84602 (unaided) to 0.89327 (aided), a difference of 0.04725 (95% CI: 0.03376, 0.061542), p = 0.0041, indicating superiority of aided reads.

    Secondary Endpoints (Standalone Study): Demonstrate Rayvolve's ability to perform across different subgroup variables (gender, age, anatomic region, machine acquisition, machine view, weight-bearing, complex & uncommon cases).
    Reported Performance: Rayvolve performs with high accuracy across study types (including anatomic areas of interest, views, patient age and sex, and machine) and across potential confounders such as different X-ray manufacturers. Specific AUCs for various subgroups are provided in the document and demonstrate high performance.
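
The subgroup figures above are standard ROC AUCs. As a minimal illustrative sketch (not from the submission; the `auc` helper and the scores below are hypothetical), the AUC can be computed directly from model scores via the Mann–Whitney statistic:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen fracture-present image scores higher than a randomly chosen
    fracture-absent image (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7]  # hypothetical model scores on fracture-present images
neg = [0.2, 0.4, 0.8]  # hypothetical model scores on fracture-absent images
print(auc(pos, neg))
```

Computing a per-subgroup AUC is then just a matter of restricting `pos` and `neg` to the images in that subgroup (e.g., one anatomic region or one X-ray manufacturer).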

    Secondary Endpoints (MRMC Study): Report the sensitivity and specificity of Rayvolve-aided and unaided reads.
    Reported Performance:

    • Unaided Sensitivity: 0.86561 (95% CI: 0.84859, 0.88099)
    • Aided Sensitivity: 0.9554 (95% CI: 0.94453, 0.96422)
    • Unaided Specificity: 0.82645 (95% CI: 0.81187, 0.84012)
    • Aided Specificity: 0.83116 (95% CI: 0.81673, 0.84467)
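
For context, sensitivity and specificity are the usual confusion-matrix ratios. A small sketch with hypothetical counts (chosen only to demonstrate the formulas, not the study's actual counts):

```python
# Illustrative only: how sensitivity and specificity are derived from a
# binary confusion matrix. The counts below are hypothetical.
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of fracture-present images flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of fracture-absent images cleared."""
    return tn / (tn + fp)

tp, fn, tn, fp = 191, 9, 166, 34  # hypothetical counts for demonstration
print(f"sensitivity = {sensitivity(tp, fn):.3f}")
print(f"specificity = {specificity(tn, fp):.3f}")
```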

    2. Sample Size Used for the Test Set and Data Provenance

    • Standalone Test Set Size: 2626 radiographs
      • Provenance: Data were acquired from 4 sites in the US.
    • MRMC Test Set Size: 186 cases (equivalent to 186 patients)
      • Provenance: Data were acquired from 4 sites in the US. All radiographs in the validation study were independent of the training data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three US board-certified MSK radiologists.
    • Qualifications: "US board-certified MSK radiologists." The document does not specify their years of experience.

    4. Adjudication Method for the Test Set

    • MRMC Study Ground Truth Adjudication: A panel of three US board-certified MSK radiologists reviewed each case to provide the ground truth binary label (presence or absence of fracture) and localization information. The document does not explicitly describe a "2+1" or "3+1" scheme, but the use of a three-expert panel strongly suggests consensus-based adjudication, in which agreement by at least two readers determines the accepted ground truth.
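
The exact adjudication rule is not stated in the summary; a simple majority vote over a three-reader panel, as the text suggests, could be sketched like this (the `adjudicate` helper is hypothetical, not from the submission):

```python
# Hypothetical sketch of majority-vote adjudication among an odd-sized
# panel of readers. The 510(k) summary does not specify the actual rule.
from collections import Counter

def adjudicate(labels: list[bool]) -> bool:
    """Return the majority label (fracture present/absent) from a panel."""
    return Counter(labels).most_common(1)[0][0]

case_labels = [True, True, False]  # two of three readers see a fracture
print(adjudicate(case_labels))     # majority label is True
```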

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Reader Improvement with vs. Without AI Assistance

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Improvement with AI Assistance:
      • Reader AUC significantly improved by 0.04725 (from 0.84602 to 0.89327).
      • Reader sensitivity improved by 0.08979 (from 0.86561 to 0.9554).
      • Reader specificity improved by 0.00471 (from 0.82645 to 0.83116).
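
The effect sizes above are simple differences between the aided and unaided figures reported earlier in the summary; a quick arithmetic check (using only numbers taken directly from the text, no new data):

```python
# Verify the aided-vs-unaided deltas reported in the MRMC study summary.
unaided = {"auc": 0.84602, "sens": 0.86561, "spec": 0.82645}
aided   = {"auc": 0.89327, "sens": 0.95540, "spec": 0.83116}

for metric in unaided:
    delta = aided[metric] - unaided[metric]
    print(f"{metric}: {unaided[metric]} -> {aided[metric]} (delta {delta:+.5f})")
```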

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    • Yes, a standalone performance assessment was conducted.
      • The results are detailed in the table above: AUC of 0.98607, sensitivity of 0.98763, and specificity of 0.88558.

    7. The Type of Ground Truth Used

    • Expert Consensus: For both standalone (bench testing) and MRMC (clinical data), the ground truth was established by human experts, specifically "a panel of three US board-certified MSK radiologists" for the MRMC study, who provided binary labeling indicating the presence or absence of fracture and localization information.

    8. The Sample Size for the Training Set

    • The document states: "The dataset used to develop the Rayvolve deep learning algorithm is composed of labeled osteoarticular radiographs." and "Rayvolve training set has been established before the collection of the standalone and MRMC studies data."
    • However, the exact sample size for the training set is not explicitly provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    • The document states: "The dataset used to develop the Rayvolve deep learning algorithm is composed of labeled osteoarticular radiographs."
    • Similar to the test sets, it is implied that the ground truth for the training set was established through expert labeling, given the nature of a "labeled" dataset for deep learning in medical image analysis. However, the specific process or number/qualifications of experts for the training set ground truth are not explicitly detailed in the provided text.