
510(k) Data Aggregation

    K Number: K193271
    Date Cleared: 2021-01-15 (416 days)
    Product Code: (not listed)
    Regulation Number: 892.2080
    Reference & Predicate Devices: (not listed)
    Device Name: uAI EasyTriage-Rib

    Intended Use

    uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing trauma studies with suspected positive findings of multiple (3 or more) acute rib fractures.

    Device Description

    uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device indicated for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected positive findings of multiple (3 or more) acute rib fractures. The device consists of two modules: (1) the uAI EasyTriage-Rib Server; and (2) the uAI EasyTriage-Rib Studylist Application, which provides the user interface in which notifications from the application are received.

    AI/ML Overview

    The information provided describes the uAI EasyTriage-Rib device and the performance study conducted to show that it meets acceptance criteria for identifying multiple (3 or more) acute rib fractures in CT chest images.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance metrics. The specific "acceptance criteria" are not explicitly stated as numerical targets in the provided text. However, a general statement is made: "The results show that it can detect rib fractures and reach the preset standard." Given the context of a 510(k) summary, the reported sensitivity, specificity, and AUC, along with a comparable time-to-notification to the predicate device, are the performance benchmarks that demonstrate achievement of that standard.

    | Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance | Comments |
    | --- | --- | --- | --- |
    | Sensitivity | High | 92.7% (95% CI: 84.8%-97.3%) | High sensitivity is a crucial consideration for a time-critical condition. |
    | Specificity | Adequate | 84.7% (95% CI: 77.0%-90.7%) | Affected by the difficulty of distinguishing acute from chronic fractures, but considered acceptable given the clinical relevance of reviewing chronic fractures. |
    | AUC | High | 0.939 (95% CI: 0.906-0.972) | Indicates high discriminative power. |
    | Time-to-notification (average) | Comparable to predicate device | 69.56 seconds | Comparable to the predicate device (HealthVCF: 61.36 seconds), suggesting timely notifications. |
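The sensitivity and specificity figures above are binomial proportions, so their confidence intervals can be reproduced from raw 2x2 counts. A minimal sketch in Python, using hypothetical counts chosen only to illustrate the arithmetic (the summary reports rates, not the underlying table), with a Wilson score interval standing in for whatever interval method the sponsor actually used:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# Hypothetical 2x2 counts for a 200-case test set (illustrative only):
tp, fn = 76, 6     # flagged / missed among 82 fracture-positive cases
tn, fp = 100, 18   # correctly passed / falsely flagged among 118 negatives

sensitivity = tp / (tp + fn)   # 76/82  ~ 0.927
specificity = tn / (tn + fp)   # 100/118 ~ 0.847
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```

The Wilson interval is used here because it behaves well at proportions near 1.0; the submission may have used a different method (e.g., Clopper-Pearson), so the endpoints will not match the reported CI exactly.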

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 200 cases
    • Data Provenance:
      • Country of Origin: Multiple US clinical sites (explicitly stated).
      • Retrospective or Prospective: Retrospective (explicitly stated).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not explicitly stated in the provided text. The document mentions "trained radiologists" being involved in clinical decision-making but does not specify the number or qualifications of experts used to establish the ground truth for the test set.

    4. Adjudication Method for the Test Set

    This information is not explicitly stated in the provided text.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done with human readers. The study focused on the standalone performance of the AI algorithm and a comparison of its "time-to-notification" with a predicate device, not on how human readers improve with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone study was done. The reported sensitivity, specificity, and AUC are measures of the algorithm's performance in identifying the target condition without human intervention in the analysis. The device "uses an artificial intelligence algorithm to analyze images and highlight studies with suspected multiple (3 or more) acute rib fractures in a standalone application for study list prioritization or triage in parallel to ongoing standard of care."
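The standalone triage behavior described above amounts to reordering a reading worklist so AI-flagged studies surface first, while unflagged studies keep their arrival order. A minimal sketch of that prioritization logic, with invented field names and a `prioritize` helper that are not from the submission:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Study:
    accession: str
    received: datetime
    ai_flagged: bool = False  # suspected 3+ acute rib fractures

def prioritize(worklist):
    """Flagged studies first; first-in-first-out within each group."""
    return sorted(worklist, key=lambda s: (not s.ai_flagged, s.received))

worklist = [
    Study("A100", datetime(2021, 1, 15, 9, 0)),
    Study("A101", datetime(2021, 1, 15, 9, 5), ai_flagged=True),
    Study("A102", datetime(2021, 1, 15, 9, 10)),
]
ordered = prioritize(worklist)  # A101 moves to the front
```

This runs in parallel to the standard of care: the worklist order changes, but every study is still read.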

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the method used to establish the ground truth for the 200 test cases. It is often implied to be expert consensus by radiologists in such studies, but this is not confirmed in the text.

    8. The Sample Size for the Training Set

    The sample size for the training set is not provided in the text. The document only mentions that the deep learning algorithm was "trained on medical images."

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the text.
