K Number
K193271
Date Cleared
2021-01-15

(416 days)

Product Code
Regulation Number
892.2080
Panel
RA
Reference & Predicate Devices
Intended Use

uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing trauma studies with suspected positive findings of multiple (3 or more) acute rib fractures.

Device Description

uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device indicated for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected positive findings of multiple (3 or more) acute rib fractures. The device consists of two modules: (1) the uAI EasyTriage-Rib Server; and (2) the uAI EasyTriage-Rib Studylist Application, which provides the user interface in which notifications from the application are received.

AI/ML Overview

The information provided describes the uAI EasyTriage-Rib device and its performance study to meet acceptance criteria for identifying multiple (3 or more) acute rib fractures in CT chest images.

Here's a breakdown of the requested information:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are implied by the reported performance metrics; specific numerical targets are not stated in the provided text. However, a general statement is made: "The results show that it can detect rib fractures and reach the preset standard." Given the context of a 510(k) summary, the reported sensitivity, specificity, and AUC, along with a time-to-notification comparable to that of the predicate device, are the performance benchmarks that demonstrate achievement of that standard.

| Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance | Comments |
|---|---|---|---|
| Sensitivity | High | 92.7% (95% CI: 84.8%-97.3%) | High sensitivity achieved, a crucial consideration for a time-critical condition. |
| Specificity | Adequate | 84.7% (95% CI: 77.0%-90.7%) | Specificity was affected by the difficulty of distinguishing acute from chronic fractures, but was considered acceptable given the clinical relevance of reviewing chronic fractures. |
| AUC | High | 0.939 (95% CI: 0.906-0.972) | Indicates high discriminative power. |
| Time-to-notification (average) | Comparable to predicate device | 69.56 seconds | Comparable to the predicate device (HealthVCF: 61.36 seconds), suggesting timely notifications. |
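For readers less familiar with these metrics, sensitivity and specificity follow directly from confusion-matrix counts, and confidence intervals of the kind reported above can be computed with a binomial interval such as the Wilson score interval. The sketch below uses hypothetical counts chosen only for illustration; the 510(k) summary reports the rounded percentages, not the underlying case-level confusion matrix, and may have used a different interval method.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical confusion-matrix counts, for illustration only.
tp, fn = 76, 6    # hypothetical positive cases (multiple acute rib fractures)
tn, fp = 100, 18  # hypothetical negative cases

sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
print(f"sensitivity = {sensitivity:.1%}, 95% CI = {wilson_ci(tp, tp + fn)}")
print(f"specificity = {specificity:.1%}, 95% CI = {wilson_ci(tn, tn + fp)}")
```

The Wilson interval is a common choice for proportions at moderate sample sizes; exact (Clopper-Pearson) intervals are another option and give slightly wider bounds.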

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size for Test Set: 200 cases
  • Data Provenance:
    • Country of Origin: Multiple US clinical sites (explicitly stated).
    • Retrospective or Prospective: Retrospective (explicitly stated).

3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

This information is not explicitly stated in the provided text. The document mentions "trained radiologists" being involved in clinical decision-making but does not specify the number or qualifications of experts used to establish the ground truth for the test set.

4. Adjudication Method for the Test Set

This information is not explicitly stated in the provided text.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done with human readers. The study focused on the standalone performance of the AI algorithm and a comparison of its "time-to-notification" with a predicate device, not on how human readers improve with AI assistance.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

Yes, a standalone study was done. The reported sensitivity, specificity, and AUC are measures of the algorithm's performance in identifying the target condition without human intervention in the analysis. The device "uses an artificial intelligence algorithm to analyze images and highlight studies with suspected multiple (3 or more) acute rib fractures in a standalone application for study list prioritization or triage in parallel to ongoing standard of care."
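A standalone AUC of the kind reported here is typically the empirical area under the ROC curve, which equals the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher algorithm score than a randomly chosen negative case. A minimal sketch, with made-up scores (the submission does not publish per-case outputs):

```python
def auc_mann_whitney(scores, labels):
    """Empirical AUC: P(random positive scores above random negative),
    counting ties as half. Equivalent to the Mann-Whitney U statistic
    normalized by n_pos * n_neg."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores for illustration; 1 = multiple acute rib fractures present.
scores = [0.95, 0.80, 0.70, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]
print(auc_mann_whitney(scores, labels))
```

An AUC of 1.0 means the scores perfectly separate positives from negatives; 0.5 is chance-level ranking.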

7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

The document does not explicitly state the method used to establish the ground truth for the 200 test cases. In such studies, ground truth is often established by expert radiologist consensus, but this is not confirmed in the text.

8. The Sample Size for the Training Set

The sample size for the training set is not provided in the text. The document only mentions that the deep learning algorithm was "trained on medical images."

9. How the Ground Truth for the Training Set Was Established

This information is not provided in the text.

§ 892.2080 Radiological computer aided triage and notification software.

(a) Identification. Radiological computer aided triage and notification software is an image processing prescription device intended to aid in prioritization and triage of radiological medical images. The device notifies a designated list of clinicians of the availability of time sensitive radiological medical images for review based on computer aided image analysis of those images performed by the device. The device does not mark, highlight, or direct users' attention to a specific location in the original image. The device does not remove cases from a reading queue. The device operates in parallel with the standard of care, which remains the default option for all cases.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the notification and triage algorithms and all underlying image analysis algorithms including, but not limited to, a detailed description of the algorithm inputs and outputs, each major component or block, how the algorithm affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide effective triage (e.g., improved time to review of prioritized images for pre-specified clinicians).

(iii) Results from performance testing that demonstrate that the device will provide effective triage. The performance assessment must be based on an appropriate measure to estimate the clinical effectiveness. The test dataset must contain sufficient numbers of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, associated diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals for these individual subsets can be characterized with the device for the intended use population and imaging equipment.

(iv) Stand-alone performance testing protocols and results of the device.

(v) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use;

(ii) A detailed description of the intended user and user training that addresses appropriate use protocols for the device;

(iii) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality for certain subpopulations), as applicable;

(iv) A detailed description of compatible imaging hardware, imaging protocols, and requirements for input images;

(v) Device operating instructions; and

(vi) A detailed summary of the performance testing, including: test methods, dataset characteristics, triage effectiveness (e.g., improved time to review of prioritized images for pre-specified clinicians), diagnostic accuracy of algorithms informing triage decision, and results with associated statistical uncertainty (e.g., confidence intervals), including a summary of subanalyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.