
510(k) Data Aggregation

    K Number
    K193417
    Date Cleared
    2020-07-30

    (234 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Predicate For
    Intended Use

    FractureDetect (FX) is a computer-assisted detection and diagnosis (CAD) software device to assist clinicians in detecting fractures during the review of radiographs of the musculoskeletal system. FX is indicated for adults only.

    FX is indicated for radiographs of the following industry-standard radiographic views and study types.

    Study Type (Anatomic Area of Interest⁺)   Radiographic View(s) Supported*
    Ankle                                     Frontal, Lateral, Oblique
    Clavicle                                  Frontal
    Elbow                                     Frontal, Lateral
    Femur                                     Frontal, Lateral
    Forearm                                   Frontal, Lateral
    Hip                                       Frontal, Frog Leg Lateral
    Humerus                                   Frontal, Lateral
    Knee                                      Frontal, Lateral
    Pelvis                                    Frontal
    Shoulder                                  Frontal, Lateral, Axillary
    Tibia / Fibula                            Frontal, Lateral
    Wrist                                     Frontal, Lateral, Oblique

    *For the purposes of this table, "Frontal" is considered inclusive of both posteroanterior (PA) and anteroposterior (AP) views.

    ⁺Definitions of anatomic area of interest and radiographic views are consistent with the American College of Radiology (ACR) standards and guidelines.

    Device Description

    FractureDetect (FX) is a computer-assisted detection and diagnosis (CAD) software device designed to assist clinicians in detecting fractures during the review of commonly acquired adult radiographs. FX does this by analyzing radiographs and providing relevant annotations, assisting clinicians in the detection of fractures within their diagnostic process at the point of care. FX was developed using robust scientific principles and industry-standard deep learning algorithms for computer vision.

    FX creates, as its output, a DICOM overlay with annotations indicating the presence or absence of fractures. If any fracture is detected by FX, the output overlay is composed to include the text annotation "Fracture: DETECTED" and to include one or more bounding boxes surrounding any fracture site(s). If no fracture is detected by FX, the output overlay is composed to include the text annotation "Fracture: NOT DETECTED" and no bounding box is included. Whether or not a fracture is detected, the overlay includes a text annotation identifying the radiograph as analyzed by FX and instructions for users to access labeling. The FX overlay can be toggled on or off by the clinicians within their PACS viewer, allowing for uninhibited concurrent review of the original radiograph.
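The overlay-composition rules described above (detection text, optional bounding boxes, always-present labeling footer) can be sketched in a few lines. This is a hypothetical illustration, not the actual FX implementation; the type names, fields, and footer wording are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OverlayAnnotation:
    """Hypothetical container for one radiograph's overlay content."""
    text: str                                                   # detection banner
    boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)
    footer: str = "Analyzed by FX. See device labeling."        # assumed wording

def compose_overlay(detected_boxes: List[Tuple[int, int, int, int]]) -> OverlayAnnotation:
    """Apply the composition rules described in the text.

    Any detected fracture -> "Fracture: DETECTED" plus one bounding box
    per site; none detected -> "Fracture: NOT DETECTED" with no boxes.
    The identifying footer annotation is included either way.
    """
    if detected_boxes:
        return OverlayAnnotation(text="Fracture: DETECTED", boxes=list(detected_boxes))
    return OverlayAnnotation(text="Fracture: NOT DETECTED")
```

In a real deployment this structure would then be serialized into a DICOM overlay plane that the clinician can toggle on or off in the PACS viewer.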

    AI/ML Overview

    The following is a detailed breakdown of the acceptance criteria and of the study evidence showing the device meets them, based on the provided text:

    1. Acceptance Criteria and Device Performance

    Acceptance Criteria                                   Reported Device Performance

    Standalone Performance
    Overall Sensitivity                                   0.951 (95% Wilson's CI: 0.940, 0.960)
    Overall Specificity                                   0.893 (95% Wilson's CI: 0.886, 0.898)
    Overall Area Under the Curve (AUC)                    0.982 (95% Bootstrap CI: 0.9790, 0.9850)
    AUC per Study Type: Ankle                             0.983 (0.972, 0.991)
    AUC per Study Type: Clavicle                          0.962 (0.948, 0.975)
    AUC per Study Type: Elbow                             0.964 (0.940, 0.982)
    AUC per Study Type: Femur                             0.989 (0.983, 0.994)
    AUC per Study Type: Forearm                           0.987 (0.977, 0.995)
    AUC per Study Type: Hip                               0.982 (0.962, 0.995)
    AUC per Study Type: Humerus                           0.983 (0.974, 0.991)
    AUC per Study Type: Knee                              0.996 (0.993, 0.998)
    AUC per Study Type: Pelvis                            0.982 (0.973, 0.989)
    AUC per Study Type: Shoulder                          0.962 (0.938, 0.982)
    AUC per Study Type: Tibia / Fibula                    0.994 (0.991, 0.997)
    AUC per Study Type: Wrist                             0.992 (0.988, 0.996)

    MRMC Comparative Effectiveness (Reader Performance with AI vs. without AI)
    Reader AUC (FX-Aided vs. FX-Unaided)                  Improved from 0.912 to 0.952, a difference of 0.0406 (95% CI: 0.0127, 0.0685; p = .0043)
    Reader Sensitivity (FX-Aided vs. FX-Unaided)          Improved from 0.819 (95% Wilson's CI: 0.794, 0.842) to 0.900 (95% Wilson's CI: 0.880, 0.917)
    Reader Specificity (FX-Aided vs. FX-Unaided)          Improved from 0.890 (95% Wilson's CI: 0.879, 0.900) to 0.918 (95% Wilson's CI: 0.908, 0.927)
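The Wilson score intervals quoted for the sensitivity and specificity figures above can be reproduced with a short function. The summary does not give the underlying case counts, so the counts in the usage example are illustrative only.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.959964) -> tuple:
    """95% Wilson score interval for a binomial proportion (z ~ 1.96).

    This is the interval type reported for the sensitivity and
    specificity estimates; it behaves better than the normal
    approximation for proportions near 0 or 1.
    """
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (center - half, center + half)
```

For example, `wilson_ci(951, 1000)` (a hypothetical 951 true positives out of 1,000 fracture cases) gives roughly (0.936, 0.963); the published interval is tighter because the actual test set was far larger.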

    Study Details

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Standalone Study: 11,970 radiographs.
      • MRMC Reader Study: 175 cases.
    • Data Provenance: Not explicitly stated, but the experts establishing ground truth are specified as U.S. board-certified, suggesting the data is likely from the U.S. There is no indication whether the data was retrospective or prospective, but for an FDA submission of this nature, historical retrospective data is common.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: A panel of three experts was used for the MRMC study's ground truth.
    • Qualifications: "U.S. board-certified orthopedic surgeons or U.S. board-certified radiologists." Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set

    • Adjudication Method: A "panel of three" experts assigned a binary ground-truth label (presence or absence of fracture), implying consensus-based adjudication. The exact mechanism is not stated (e.g., 2-out-of-3 majority, or further adjudication in case of disagreement), but the phrasing suggests the panel reached a collective decision, analogous to a 3-expert majority vote.
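A 2-out-of-3 (or more generally, majority) adjudication scheme of the kind discussed above can be sketched as follows; this is an illustration of the concept, not the study's documented procedure.

```python
from collections import Counter

def majority_label(labels):
    """Majority adjudication of binary expert labels.

    Each expert supplies True (fracture present) or False (absent);
    the label endorsed by a strict majority becomes the ground truth.
    With no strict majority (possible only for even panel sizes),
    further adjudication would be required.
    """
    counts = Counter(labels)
    label, count = counts.most_common(1)[0]
    if count <= len(labels) // 2:
        raise ValueError("no majority; further adjudication needed")
    return label
```

With a three-expert panel, `majority_label([True, True, False])` yields `True`, i.e., fracture present by 2-out-of-3 agreement.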

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes.
    • Effect Size (Improvement with AI vs. without AI assistance):
      • Readers' AUC significantly improved by 0.0406 (from 0.912 to 0.952).
      • Readers' sensitivity improved by 0.081 (from 0.819 to 0.900).
      • Readers' specificity improved by 0.028 (from 0.890 to 0.918).

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes.
    • Performance:
      • Sensitivity: 0.951
      • Specificity: 0.893
      • Overall AUC: 0.982
      • High accuracy across study types and potential confounders (image brightness, x-ray manufacturers).

    7. Type of Ground Truth Used

    • Standalone Study: The ground truth for the standalone study is not explicitly detailed, but given the MRMC study design, it most likely also relied on expert consensus for fracture detection.
    • MRMC Study: Expert Consensus by a panel of three U.S. board-certified orthopedic surgeons or U.S. board-certified radiologists.

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions "robust scientific principles and industry-standard deep learning algorithms for computer vision" were used for development.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only mentions "Supervised Deep Learning" as the methodology, which implies labeled data was used for training, but the process of obtaining these labels is not detailed.

    K Number
    DEN180005
    Device Name
    OsteoDetect
    Date Cleared
    2018-05-24

    (108 days)

    Product Code
    Regulation Number
    892.2090
    Type
    Direct
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    OsteoDetect analyzes wrist radiographs using machine learning techniques to identify and highlight distal radius fractures during the review of posterior-anterior (PA) and lateral (LAT) radiographs of adult wrists.

    Device Description

    OsteoDetect is a software device designed to assist clinicians in detecting distal radius fractures during the review of posterior-anterior (PA) and lateral (LAT) radiographs of adult wrists. The software uses deep learning techniques to analyze wrist radiographs (PA and LAT views) for distal radius fracture in adult patients.

    AI/ML Overview

    1. Table of Acceptance Criteria and Reported Device Performance

    Standalone Performance

    Performance Metric                       Acceptance Criteria (Implicit)   Reported Performance (Estimate)             95% Confidence Interval
    AUC of ROC                               High                             0.965                                       (0.953, 0.976)
    Sensitivity                              High                             0.921                                       (0.886, 0.946)
    Specificity                              High                             0.902                                       (0.877, 0.922)
    PPV                                      High                             0.813                                       (0.769, 0.850)
    NPV                                      High                             0.961                                       (0.943, 0.973)
    Localization Accuracy                    Small average pixel distance     33.52 pixels (SD: 30.03 pixels)             Not provided
    Generalizability (AUC, all subgroups)    High                             ≥ 0.926 (lowest: post-surgical radiographs) Individual subgroup CIs reported in text

    MRMC (Reader Study) Performance - Aided vs. Unaided Reads

    The implicit acceptance criterion was superiority of OsteoDetect-aided over unaided reads on each metric:

    • AUC of ROC: improved from 0.840 (unaided) to 0.889 (aided); difference 95% CI: (0.019, 0.080); p = 0.0056.
    • Sensitivity: improved from 0.747 (95% CI: 0.728, 0.765) to 0.803 (95% CI: 0.785, 0.819); the non-overlapping CIs imply significance.
    • Specificity: improved from 0.889 (95% CI: 0.876, 0.900) to 0.914 (95% CI: 0.903, 0.924); the non-overlapping CIs imply significance.
    • PPV: improved from 0.844 (95% CI: 0.826, 0.859) to 0.883 (95% CI: 0.868, 0.896); the non-overlapping CIs imply significance.
    • NPV: improved from 0.814 (95% CI: 0.800, 0.828) to 0.853 (95% CI: 0.839, 0.865); the non-overlapping CIs imply significance.
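The four operating-point metrics reported above (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix. A minimal sketch, with illustrative counts since the summary reports only the resulting proportions:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.

    tp/fn are fracture cases called positive/negative by the reader or
    algorithm; tn/fp are non-fracture cases called negative/positive.
    """
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the fracture prevalence in the test set, which is why they are reported separately.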

    2. Sample Size and Data Provenance for Test Set

    Standalone Performance Test Set:

    • Sample Size: 1000 images (500 PA, 500 LAT)
    • Data Provenance: Retrospective. Randomly sampled from an existing validation database of consecutively collected images from patients receiving wrist radiographs at the (b) (4) from November 1, 2016 to April 30, 2017. The study population included images from the US.

    MRMC (Reader Study) Test Set:

    • Sample Size: 200 cases.
    • Data Provenance: Retrospective. Randomly sampled from the same validation database used for the standalone performance study. The data includes cases from the US.

    3. Number of Experts and Qualifications for Ground Truth

    Standalone Performance Test Set and MRMC (Reader Study) Test Set:

    • Number of Experts: Three.
    • Qualifications: U.S. board-certified orthopedic hand surgeons.

    4. Adjudication Method for Test Set

    Standalone Performance Test Set:

    • Adjudication Method (Binary Fracture Presence/Absence): Majority opinion of at least 2 of the 3 clinicians.
    • Adjudication Method (Localization - Bounding Box): The union of the bounding box of each clinician identifying the fracture.
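One plausible reading of the "union of the bounding box of each clinician" rule above is the smallest axis-aligned box enclosing every clinician's annotation. The sketch below assumes that reading and an (x_min, y_min, x_max, y_max) coordinate convention, neither of which is confirmed by the summary.

```python
from typing import Iterable, Tuple

Box = Tuple[int, int, int, int]  # assumed convention: (x_min, y_min, x_max, y_max)

def union_box(boxes: Iterable[Box]) -> Box:
    """Smallest axis-aligned box enclosing every clinician's box.

    Used here to illustrate merging three clinicians' fracture-site
    annotations into a single ground-truth localization region.
    """
    boxes = list(boxes)
    if not boxes:
        raise ValueError("at least one box required")
    return (
        min(b[0] for b in boxes),
        min(b[1] for b in boxes),
        max(b[2] for b in boxes),
        max(b[3] for b in boxes),
    )
```

For two annotations (10, 10, 50, 50) and (30, 20, 60, 70), the merged ground-truth region is (10, 10, 60, 70).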

    MRMC (Reader Study) Test Set:

    • Adjudication Method: Majority opinion of three U.S. board-certified orthopedic hand surgeons. (Note: this was defined on a per-case basis, considering PA, LAT, and oblique images if available).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes.
    • Effect Size (Improvement of Human Readers with AI vs. without AI assistance):
      • The least squares mean difference between the AUC for OsteoDetect-aided and OsteoDetect-unaided reads is 0.049 (95% CI: 0.019, 0.080), a statistically significant improvement of 0.049 in AUC when readers were aided by OsteoDetect.
      • Sensitivity: Improved from 0.747 (unaided) to 0.803 (aided), an improvement of 0.056.
      • Specificity: Improved from 0.889 (unaided) to 0.914 (aided), an improvement of 0.025.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes.

    7. Type of Ground Truth Used

    Standalone Performance Test Set:

    • Type of Ground Truth: Expert consensus (majority opinion of three U.S. board-certified orthopedic hand surgeons).

    MRMC (Reader Study) Test Set:

    • Type of Ground Truth: Expert consensus (majority opinion of three U.S. board-certified orthopedic hand surgeons).

    8. Sample Size for Training Set

    The document does not explicitly state the sample size for the training set. It mentions "randomly withheld subset of the model's training data" for setting the operating point, implying a training set existed, but its size is not provided.

    9. How Ground Truth for Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. It only refers to a "randomly withheld subset of the model's training data" during the operating point setting.

