Found 2 results

510(k) Data Aggregation

    K Number
    K222268
    Date Cleared
    2023-03-28 (243 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Reference Devices:

    K192901

    Intended Use

    Annalise Enterprise CXR Triage Trauma is a software workflow tool designed to aid the clinical assessment of adult chest X-ray cases with features suggestive of vertebral compression fracture in the medical care environment.

    The device analyzes cases using an artificial intelligence algorithm to identify findings. It makes case-level output available to a PACS or RIS for worklist prioritization or triage intended for clinicians in Bone Health and Fracture Liaison Service programs.

    The device is intended to be used by trained clinicians who are qualified to interpret chest X-rays as part of their scope of practice.

    The device is not intended to direct attention to specific portions of an image or to anomalies other than vertebral compression fracture.

    Its results are not intended to be used on a standalone basis for clinical decision making, nor is the device intended to rule out specific critical findings or otherwise preclude clinical assessment of X-ray cases.

    Standalone performance evaluation of the device was performed on a dataset that included only erect positioning. Use of this device with supine positioning may result in differences in performance.

    Device Description

    Annalise Enterprise CXR Triage Trauma is a software workflow tool which uses an artificial intelligence (AI) algorithm to identify suspected findings on chest X-ray (CXR) studies in the medical care environment. The findings identified by the device include vertebral compression fractures.

    Radiological findings are identified by the device using an AI algorithm - a convolutional neural network trained using deep-learning techniques. Images used to train the algorithm were sourced from datasets across three continents, including a range of equipment manufacturers and models. The performance of the device's AI algorithm was validated in a standalone performance evaluation, in which the case-level output from the device was compared with a reference standard ('ground truth'). This was determined by two ground truthers, with a third truther used in the event of disagreement. All truthers were US board-certified radiologists.

    The device interfaces with image and order management systems (such as PACS/RIS) to obtain CXR studies for processing by the AI algorithm. Following processing, if the clinical finding of interest is identified in a CXR study, the device provides a notification to the image and order management system so that the study can be prioritized in the worklist. This enables users to review studies containing features suggestive of the clinical finding earlier than in the standard clinical workflow. Note that the device never decreases a study's existing priority in the worklist; items are never downgraded based on AI results.
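The "never decrease priority" behavior described above amounts to a monotone update rule. A minimal sketch, assuming a hypothetical integer priority scale where higher means more urgent (this is not the vendor's actual interface):

```python
def apply_ai_priority(current_priority: int, ai_suggested_priority: int) -> int:
    """Return the updated worklist priority after an AI notification.

    The AI suggestion can only move a study *up* the worklist; an
    existing higher priority is always preserved.
    """
    return max(current_priority, ai_suggested_priority)

# A study already flagged urgent (priority 5) keeps that priority even if
# the AI finding maps to a lower tier (priority 3).
assert apply_ai_priority(5, 3) == 5
# A routine study (priority 1) is promoted when the AI flags a finding.
assert apply_ai_priority(1, 3) == 3
```

The `max` form makes the guarantee easy to audit: no code path can lower a study's priority based on AI output.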

    The device workflow is performed parallel to and in conjunction with the standard clinical workflow for interpretation of CXRs. The device is intended to aid in prioritization and triage of radiological medical images only.

    AI/ML Overview

    The provided text describes the Annalise Enterprise CXR Triage Trauma device, an AI-powered software tool designed to aid in the clinical assessment and triage of adult chest X-ray cases for vertebral compression fracture. Here's a breakdown of its acceptance criteria and the study proving its performance:

    Acceptance Criteria and Reported Device Performance

    | Finding | Acceptance Criteria (Metric) | Reported Device Performance |
    | --- | --- | --- |
    | Vertebral compression fracture | AUC (Area Under the Curve) | 0.954 (95% CI: 0.939-0.968) |
    | Vertebral compression fracture | Sensitivity (Se) at specific operating point | 89.3% (85.7-93.0%) at 0.3849 operating point |
    | Vertebral compression fracture | Specificity (Sp) at specific operating point | 89.0% (85.8-92.1%) at 0.3849 operating point |
    | Vertebral compression fracture | Sensitivity (Se) at specific operating point | 85.3% (80.9-89.3%) at 0.4834 operating point |
    | Vertebral compression fracture | Specificity (Sp) at specific operating point | 90.9% (87.7-94.0%) at 0.4834 operating point |
    | Triage turn-around time | Demonstrates effective triage (implicitly compared to predicate device) | Average 30.0 seconds |

    Study Details

    1. Sample Size and Data Provenance:

      • Test Set Sample Size: 589 CXR cases (272 positive for vertebral compression fracture, 317 negative).
      • Data Provenance: Retrospective, anonymized study; cases were collected consecutively from four U.S. hospital network sites, drawing on multiple data sources across a variety of geographical locations.
    2. Number of Experts and Qualifications for Ground Truth:

      • Number of Experts: At least two ABR-certified radiologists for initial annotation, with a third radiologist for disagreement resolution.
      • Qualifications: All truthers were U.S. board-certified radiologists who were protocol-trained.
    3. Adjudication Method for Test Set:

      • Consensus was determined by two ground truthers. A third ground truther was used in the event of disagreement. This is commonly referred to as a 2+1 adjudication method.
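The 2+1 adjudication rule described above can be sketched as follows (hypothetical boolean labels; the summary does not describe the actual annotation tooling):

```python
def adjudicate(reader1: bool, reader2: bool, tiebreaker=None) -> bool:
    """2+1 adjudication: two primary readers, a third resolves disagreement.

    tiebreaker is a callable consulted only when the first two readers
    disagree, modeling the third ground truther.
    """
    if reader1 == reader2:
        return reader1
    if tiebreaker is None:
        raise ValueError("disagreement requires a third reader")
    return tiebreaker()

# Agreement: the third reader is never consulted.
assert adjudicate(True, True) is True
# Disagreement: the third reader's call decides the label.
assert adjudicate(True, False, lambda: False) is False
```

Passing the third reader as a callable mirrors the protocol: the adjudicator is only involved when needed, which keeps most cases at two reads.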
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No MRMC comparative effectiveness study was explicitly mentioned or detailed in the provided text regarding how human readers improve with AI vs without AI assistance. The study described is primarily a standalone performance evaluation of the AI algorithm. The device's role is described as a "workflow tool" for worklist prioritization or triage, implying indirect assistance, rather than direct human-AI collaborative interpretation.
    5. Standalone Performance:

      • Yes, a standalone (algorithm only without human-in-the-loop) performance evaluation was done. The performance results (AUC, Sensitivity, Specificity) listed in the table above pertain to this standalone evaluation.
    6. Type of Ground Truth Used:

      • Expert consensus (blinded annotations by ABR-certified radiologists with adjudication).
    7. Training Set Sample Size:

      • The exact sample size for the training set is not explicitly stated. However, it is mentioned that "Images used to train the algorithm were sourced from datasets across three continents, including a range of equipment manufacturers and models." The test dataset was "newly acquired and independent from the training dataset used in model development."
    8. How Ground Truth for Training Set Was Established:

      • The document states that the AI algorithm was a "convolutional neural network trained using deep-learning techniques." While it mentions the source of the training data (datasets across three continents), it does not explicitly detail the method for establishing ground truth for the training set (e.g., expert review, pathology, or other means). The focus of the provided text is on the validation of the test set performance.

    K Number
    K193271
    Date Cleared
    2021-01-15 (416 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Reference Devices:

    K192901

    Intended Use

    uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing trauma studies with suspected positive findings of multiple (3 or more) acute rib fractures.

    Device Description

    uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device indicated for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected positive findings of multiple (3 or more) acute rib fractures. The device consists of two modules: (1) the uAI EasyTriage-Rib Server; and (2) the uAI EasyTriage-Rib Studylist Application, which provides the user interface in which notifications from the application are received.

    AI/ML Overview

    The information provided describes the uAI EasyTriage-Rib device and its performance study to meet acceptance criteria for identifying multiple (3 or more) acute rib fractures in CT chest images.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance metrics. The specific "acceptance criteria" are not explicitly stated as numerical targets in the provided text. However, a general statement is made: "The results show that it can detect rib fractures and reach the preset standard." Given the context of a 510(k) summary, the reported sensitivity, specificity, and AUC, along with a comparable time-to-notification to the predicate device, are the performance benchmarks that demonstrate achievement of that standard.

    | Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance | Comments |
    | --- | --- | --- | --- |
    | Sensitivity | High | 92.7% (95% CI: 84.8%-97.3%) | High sensitivity is a crucial consideration for a time-critical condition. |
    | Specificity | Adequate | 84.7% (95% CI: 77.0%-90.7%) | Specificity was affected by the difficulty of distinguishing acute from chronic fractures, but was considered acceptable given the clinical relevance of reviewing chronic fractures. |
    | AUC | High | 0.939 (95% CI: 0.906-0.972) | Indicates high discriminative power. |
    | Time-to-notification (average) | Comparable to predicate device | 69.56 seconds | Comparable to the predicate device (HealthVCF: 61.36 seconds), suggesting timely notifications. |
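The AUC values reported for both devices have a direct probabilistic reading: the empirical AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counted as half). A minimal pairwise estimator, using toy data rather than either study's:

```python
def empirical_auc(scores, labels):
    """Empirical AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) case pairs in which the positive case scores
    higher, counting tied scores as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of toy cases gives AUC 1.0.
assert empirical_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]) == 1.0
# One misordered pair out of four gives AUC 0.75.
assert empirical_auc([0.9, 0.1, 0.8, 0.3], [1, 0, 0, 1]) == 0.75
```

This O(n²) pairwise form is only a teaching sketch; production evaluations typically use a rank-based equivalent such as `sklearn.metrics.roc_auc_score`.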

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 200 cases
    • Data Provenance:
      • Country of Origin: Multiple US clinical sites (explicitly stated).
      • Retrospective or Prospective: Retrospective (explicitly stated).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not explicitly stated in the provided text. The document mentions "trained radiologists" being involved in clinical decision-making but does not specify the number or qualifications of experts used to establish the ground truth for the test set.

    4. Adjudication Method for the Test Set

    This information is not explicitly stated in the provided text.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done with human readers. The study focused on the standalone performance of the AI algorithm and a comparison of its "time-to-notification" with a predicate device, not on how human readers improve with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone study was done. The reported sensitivity, specificity, and AUC are measures of the algorithm's performance in identifying the target condition without human intervention in the analysis. The device "uses an artificial intelligence algorithm to analyze images and highlight studies with suspected multiple (3 or more) acute rib fractures in a standalone application for study list prioritization or triage in parallel to ongoing standard of care."

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the method used to establish the ground truth for the 200 test cases. It is often implied to be expert consensus by radiologists in such studies, but this is not confirmed in the text.

    8. The Sample Size for the Training Set

    The sample size for the training set is not provided in the text. The document only mentions that the deep learning algorithm was "trained on medical images."

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the text.

