
510(k) Data Aggregation

    K Number: K193351
    Device Name: NinesAI
    Manufacturer:
    Date Cleared: 2020-04-21 (140 days)
    Product Code:
    Regulation Number: 892.2080
    Reference & Predicate Devices: N/A
    Intended Use

    NinesAI is a parallel workflow tool indicated for use by hospital networks and trained clinicians to identify images of specific patients to a radiologist, independent of standard of care workflow, to aid in prioritizing and performing the radiological review. NinesAI uses artificial intelligence algorithms to analyze head CT images for findings suggestive of a pre-specified emergent clinical condition.

    The software automatically analyzes Digital Imaging and Communications in Medicine (DICOM) images as they arrive in the Picture Archive and Communication System (PACS) using machine learning algorithms. Identification of suspected findings is not for diagnostic use beyond notification. Specifically, the software analyzes head CT images of the brain to assess the suspected presence of intracranial hemorrhage and/or mass effect and identifies images with potential emergent findings in a radiologist's worklist.

    NinesAI is intended to be used as a triage tool limited to analysis of imaging data and should not be used in lieu of full patient evaluation or relied upon to make or confirm a diagnosis. Additionally, preview images displayed to the radiologist outside of the DICOM viewer are of non-diagnostic quality and should only be used for informational purposes.
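    The intended use above reduces to a simple idea: studies the algorithms flag as containing a suspected emergent finding are surfaced ahead of unflagged studies in the radiologist's worklist, while every study is still read. The sketch below is a minimal illustration of that prioritization step only; the `Study` fields, the `prioritize_worklist` function, and the ordering rule are assumptions made for this example and are not taken from the 510(k) submission.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Study:
    accession: str                    # hypothetical identifier for a head CT study
    received_at: datetime             # when the DICOM series arrived in PACS
    suspected_finding: bool = False   # set by the AI triage step (ICH and/or mass effect)

def prioritize_worklist(studies: List[Study]) -> List[Study]:
    """Place studies with suspected emergent findings ahead of the rest.

    Within each group, keep first-in-first-out ordering by arrival time.
    The radiologist still reviews every study; only the order changes.
    """
    return sorted(studies, key=lambda s: (not s.suspected_finding, s.received_at))

# Example: two routine studies, then a flagged study arriving later
worklist = [
    Study("A001", datetime(2020, 4, 21, 9, 0)),
    Study("A002", datetime(2020, 4, 21, 9, 5)),
    Study("A003", datetime(2020, 4, 21, 9, 10), suspected_finding=True),
]
print([s.accession for s in prioritize_worklist(worklist)])  # ['A003', 'A001', 'A002']
```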

    Device Description

    NinesAI notifies a radiologist of the presence of a suspected critical abnormality in a radiological image. The software system is a complete package composed of image analysis software and a workstation module used to alert the radiologist. The image analysis can also be configured to send HL7 messages and DICOM secondary series.

    The image analysis uses artificial intelligence (AI) technology to analyze non-contrast head CT scans for the presence of Intracranial Hemorrhage and/or Mass Effect. More specifically, the device uses two machine learning (ML) algorithms, one to detect each finding.

    NinesAI is a software device and does not come into contact with patients. All radiological studies are still reviewed by trained radiologists. NinesAI is meant to be used as an aid for case prioritization.
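    The description above implies a straightforward structure: two independent finding-specific models score each non-contrast head CT, and any positive result drives a notification (worklist alert, HL7 message, or DICOM secondary series). The sketch below illustrates that structure under stated assumptions; the function names, thresholds, and stand-in detectors are hypothetical and not drawn from the submission.

```python
from typing import Any, Callable, Dict, Sequence

# Each detector is assumed to map a stack of CT slices (pixel arrays) to a
# probability that its finding is present. Names and thresholds are illustrative.
Detector = Callable[[Sequence[Any]], float]

def triage_head_ct(slices: Sequence[Any],
                   detectors: Dict[str, Detector],
                   thresholds: Dict[str, float]) -> Dict[str, bool]:
    """Run each finding-specific model independently and flag positives."""
    return {
        finding: detector(slices) >= thresholds[finding]
        for finding, detector in detectors.items()
    }

def build_notification(flags: Dict[str, bool]) -> str:
    """Summarize positive findings for an alert; empty string when nothing fires."""
    positives = [name for name, hit in flags.items() if hit]
    return "Suspected: " + ", ".join(positives) if positives else ""

# Usage with stand-in models (real ML inference would replace the lambdas):
flags = triage_head_ct(
    slices=[],  # DICOM pixel data would go here
    detectors={"intracranial hemorrhage": lambda s: 0.91,
               "mass effect": lambda s: 0.12},
    thresholds={"intracranial hemorrhage": 0.5, "mass effect": 0.5},
)
print(build_notification(flags))  # Suspected: intracranial hemorrhage
```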

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the NinesAI device, based on the provided text:

    Acceptance Criteria and Device Performance

    The acceptance criteria are derived from the observed performance of the predicate device (Aidoc's BriefCase) and a baseline of 0.80 for both sensitivity and specificity for general emergent findings.

    Finding                 | Acceptance Criteria (Sensitivity) | Reported Sensitivity [95% CI] | Acceptance Criteria (Specificity) | Reported Specificity [95% CI]
    Intracranial Hemorrhage | >= 0.80                           | 0.899 [0.837, 0.940]          | >= 0.80                           | 0.974 [0.974, 0.992]
    Mass Effect             | >= 0.80                           | 0.964 [0.916, 0.987]          | >= 0.80                           | 0.911 [0.856, 0.948]
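    For reference, sensitivity is TP / (TP + FN) and specificity is TN / (TN + FP), each reported with a 95% confidence interval and compared against the 0.80 acceptance criterion. The submission does not state which interval method was used; the sketch below is a minimal example using the Wilson score interval, with hypothetical confusion-matrix counts chosen only to approximate the reported intracranial hemorrhage point estimates.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score 95% confidence interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (round(center - half, 3), round(center + half, 3))

def evaluate(tp: int, fn: int, tn: int, fp: int, criterion: float = 0.80):
    """Point estimates, 95% CIs, and pass/fail against the 0.80 criterion."""
    sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
    return {
        "sensitivity": (round(sens, 3), sens_ci, sens >= criterion),
        "specificity": (round(spec, 3), spec_ci, spec >= criterion),
    }

# Hypothetical counts for one finding (not from the submission):
print(evaluate(tp=125, fn=14, tn=262, fp=7))
```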

    Time Benefit Analysis:

    Finding                 | NinesAI Time-to-Notification, Mean / Median (min) | Standard of Care Time-to-Open-Dictation, Mean / Median (min)
    Intracranial Hemorrhage | 0.23 [0.23, 0.24] / 0.24                          | 159.4 [67.07, 251.7] / 6.0
    Mass Effect             | 0.23 [0.23, 0.24] / 0.24                          | 28.5 [14.1, 42.8] / 7.5
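    The large gap between the mean (159.4 min) and median (6.0 min) standard-of-care times for intracranial hemorrhage indicates a right-skewed distribution in which a minority of studies wait a very long time before dictation is opened. A minimal sketch of how such a time-benefit comparison could be computed from paired per-study times is shown below; the example values are hypothetical and are not the submission's data.

```python
from statistics import mean, median

# Hypothetical per-study times in minutes (not from the submission): for each
# flagged study, time from image arrival to AI notification, and time from
# arrival to a radiologist opening dictation under standard of care.
ai_notification = [0.22, 0.23, 0.24, 0.23, 0.25]
open_dictation = [4.8, 6.0, 7.5, 150.0, 260.0]

savings = [sc - ai for ai, sc in zip(ai_notification, open_dictation)]
print(f"AI notification: mean {mean(ai_notification):.2f} min, median {median(ai_notification):.2f} min")
print(f"Standard of care: mean {mean(open_dictation):.1f} min, median {median(open_dictation):.1f} min")
print(f"Per-study time savings: mean {mean(savings):.1f} min, median {median(savings):.1f} min")
```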

    Study Details

    1. Sample Size and Data Provenance (Test Set):

      • Sample Size: Not explicitly stated as a single number, but the text mentions "Head CT studies included in each of the test datasets were obtained from over 20 clinical sites."
      • Data Provenance: Retrospective. The studies were obtained from "over 20 clinical sites" and included "a minimum of 3 scanner manufacturers and over 20 scanner models, and also reflected broad patient demographics," suggesting a diverse dataset. The country of origin for the data is not specified.
    2. Number of Experts and Qualifications (Ground Truth for Test Set):

      • Number of Experts: Not explicitly stated. The text mentions "agreement rate between labelers who determined ground truth for the test dataset studies." This implies multiple experts were involved in establishing the ground truth.
      • Qualifications of Experts: Not explicitly stated, but the term "labelers" typically refers to trained medical professionals who are qualified to interpret medical images, such as radiologists.
    3. Adjudication Method (Test Set):

      • Not explicitly stated. The mention of "agreement rate between labelers who determined ground truth" suggests some form of consensus or agreement process, but the specific method (e.g., 2+1, 3+1) is not detailed; an illustrative sketch of a common 2+1 scheme appears after this list.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a specific MRMC comparative effectiveness study is not explicitly mentioned. The study focuses on the standalone performance of the AI algorithm and a time benefit analysis, which compares AI notification time to standard-of-care time-to-open-dictation, rather than comparing human reader performance with and without AI assistance.
    5. Standalone Performance Study:

      • Yes, a standalone (algorithm only) performance study was conducted. The algorithms were evaluated independently, and primary endpoints like sensitivity and specificity were measured for each algorithm.
    6. Type of Ground Truth Used (Test Set):

      • Expert Consensus: The text states, "agreement rate between labelers who determined ground truth for the test dataset studies." This indicates that human expert consensus was used to establish the ground truth.
    7. Sample Size for Training Set:

      • Not explicitly stated in the provided text. The text mentions, "The algorithms are trained using a database of radiological images," but does not give a specific number for the training set size.
    8. How Ground Truth for Training Set was Established:

      • Not explicitly stated in the provided text. It is generally inferred that similar expert labeling methods would be used for training data, but the document does not detail this.
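    Although the adjudication method is not described, a common scheme in imaging ground-truthing is "2+1": two readers label each case independently, and a third reader resolves disagreements. The sketch below is purely illustrative of that rule and is an assumption, not a description of how the NinesAI ground truth was actually established.

```python
from typing import Optional

def adjudicate_2_plus_1(reader_a: bool, reader_b: bool,
                        adjudicator: Optional[bool] = None) -> bool:
    """Illustrative 2+1 rule: two readers label independently; if they agree,
    their label is the ground truth, otherwise a third reader decides."""
    if reader_a == reader_b:
        return reader_a
    if adjudicator is None:
        raise ValueError("Readers disagree; an adjudicating read is required.")
    return adjudicator

# Three example cases: agreement, disagreement resolved by a third reader, agreement
cases = [(True, True), (True, False, True), (False, False)]
labels = [adjudicate_2_plus_1(*c) for c in cases]
agreement = sum(a == b for a, b, *_ in cases) / len(cases)
print(labels, f"primary-reader agreement: {agreement:.2f}")
```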