
510(k) Data Aggregation

    K Number: K252482
    Device Name: CogNet AI-MT+
    Manufacturer: MedCognetics
    Date Cleared: 2025-12-11 (126 days)
    Product Code: (not listed)
    Regulation Number: 892.2080
    Age Range: 22 - 120 years

    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    The MedCognetics CogNet AI-MT+ software is a passive-notification, prioritization-only, parallel-workflow software tool used by MQSA-qualified interpreting physicians to prioritize the viewing of patients with suspicious findings in the medical care environment. CogNet AI-MT+ uses an artificial intelligence algorithm to analyze DBT screening mammograms and flags those that are suggestive of the presence of at least one suspicious finding at the exam level.

    CogNet AI-MT+ produces an exam-level output to a PACS/workstation for flagging the suspicious study and allows for worklist prioritization. MQSA-qualified interpreting physicians are responsible for reviewing each exam on a display approved for use in mammography, according to the current standard of care. The CogNet AI-MT+ device is limited to the categorization of exams; it does not provide any diagnostic information beyond triage and prioritization, does not remove images from the interpreting physician's worklist, and should not be used in lieu of full patient evaluation or relied upon to make or confirm a diagnosis.

    The CogNet AI-MT+ device is intended for use with DBT mammography exams acquired using validated DBT equipment only.

    Device Description

    The MedCognetics CogNet AI-MT+ is a non-invasive, computer-assisted triage and notification software as a medical device (SaMD) that analyzes DBT screening mammograms using a machine learning algorithm and notifies a PACS/workstation of the presence of findings suspicious of cancer in a study. The passive notification enables interpreting physicians to prioritize their worklist and assists them in viewing prioritized studies using the standard PACS or workstation viewing software. The device's aim is to aid in the prioritization and triage of radiological medical images only. It is a software tool for MQSA interpreting physicians reading mammograms and does not replace complete evaluation according to the standard of care.

    The software modules that compose the CogNet AI-MT+ Deep Learning software are:

    The Qualification Module – Acceptance into CogNet AI-MT+ analysis requires a completed mammogram DICOM image. In the Qualification Module, the image arriving from the mammography modality is "read" to determine whether it qualifies.

    The Mammogram Pre-Processing Module – After the DICOM image has been qualified, the Pre-Processing Module verifies that the image comes from a mammography device, validates that the DICOM file is properly formed and consists of "For Presentation" image pixel data, and then adjusts the DBT pixel brightness, image size, and shape for consistency.

    The Mammogram Learning Module – This module accepts the normalized image data from the Pre-Processing Module and uses deep learning techniques to extract features and determine whether any lesions suspicious for cancer exist in the mammogram study.

    Failures in any of the above modules will generate error messages that are recorded in an accessible log file and, if user-specific issues are encountered, sent to the user in a secondary capture report.
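    To make the module flow concrete, here is a minimal Python sketch of the three-stage pipeline described above. All names (ModuleError, qualify, preprocess, analyze, the stub model, and the threshold) are illustrative assumptions, not MedCognetics APIs; the error handling mirrors the logging behavior described in the text.

```python
import logging

# A minimal sketch of the three-module flow (all names are illustrative).
logging.basicConfig(filename="cognet_pipeline.log", level=logging.INFO)
log = logging.getLogger("cognet-sketch")

THRESHOLD = 0.5  # assumed operating point, not a published value

class ModuleError(Exception):
    """Raised when a stage rejects a study (hypothetical)."""

def normalize(pixels):
    # Placeholder for the brightness/size/shape normalization step.
    return pixels

class StubModel:
    def predict(self, pixels) -> float:
        return 0.0  # placeholder suspicion score

model = StubModel()

def qualify(image: dict) -> dict:
    # Qualification Module: accept only completed mammogram DICOM images.
    if image.get("Modality") != "MG":  # MG = mammography in DICOM
        raise ModuleError("not a mammography DICOM image")
    return image

def preprocess(image: dict) -> dict:
    # Pre-Processing Module: require well-formed "For Presentation" pixel
    # data, then normalize brightness, size, and shape for consistency.
    if image.get("PresentationIntentType") != "FOR PRESENTATION":
        raise ModuleError("pixel data is not 'For Presentation'")
    image["pixels"] = normalize(image.get("pixels"))
    return image

def analyze(image: dict) -> str:
    # Mammogram Learning Module: flag exams suspicious for cancer.
    score = model.predict(image["pixels"])
    return "Suspicious" if score >= THRESHOLD else "Processed"

def run_pipeline(image: dict):
    try:
        return analyze(preprocess(qualify(image)))
    except ModuleError as err:
        # Failures are logged; user-facing issues would additionally be
        # reported in a secondary capture report (not sketched here).
        log.error("study rejected: %s", err)
        return None
```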

    CogNet AI-MT+ has no viewing capability; instead, the results data are sent via a secure network function to the PACS/workstation, which "reads" the necessary DICOM tags and matches the results with the original mammogram study images as a normal PACS/workstation function.

    When the study data is fed into the configured reading worklist, the results are merged as part of the mammogram study. This process allows an AI Result to be ready for prioritization of the study prior to the interpreting physician's review.
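    As an illustration of the tag-based matching described above, the following sketch pairs each AI result with its originating study by Study Instance UID. The dicts are simplified stand-ins for DICOM objects; a real integration would read these tags with a DICOM toolkit such as pydicom.

```python
# Hypothetical sketch: merge AI results into studies by StudyInstanceUID.
# Plain dicts stand in for DICOM objects for brevity.

def merge_ai_results(studies: list[dict], ai_results: list[dict]) -> None:
    """Attach each AI result to the study sharing its StudyInstanceUID."""
    by_uid = {s["StudyInstanceUID"]: s for s in studies}
    for result in ai_results:
        study = by_uid.get(result["StudyInstanceUID"])
        if study is not None:
            # The result becomes part of the study before the interpreting
            # physician's review, enabling worklist prioritization.
            study["AIResult"] = result["flag"]  # "Suspicious" or "Processed"

studies = [{"StudyInstanceUID": "1.2.840.1", "PatientName": "DOE^JANE"}]
ai_results = [{"StudyInstanceUID": "1.2.840.1", "flag": "Suspicious"}]
merge_ai_results(studies, ai_results)
print(studies[0]["AIResult"])  # -> Suspicious
```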

    A reading worklist is a listing of studies available for reading and diagnosis. The worklist is populated by parsing the DICOM file of a completed mammogram study, using the demographic and study fields to fill in the designated columns of the worklist; the columns are sortable by study, based on the column headings.

    CogNet AI-MT+ provides an API for adding an AI Results column, with zero or one response per study. If no analysis was performed on a study, the AI Results indication is 0; if an analysis was performed, the AI Results column indicates either Suspicious (red diamond icon) or Processed (blue circle icon). Interpreting physicians may sort the AI Results column by clicking the up or down arrow next to the column heading, bringing the studies that contain suspicious findings to the top of the viewing list.
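    A short sketch of that sorting behavior, assuming the worklist is held as a list of records: the ranking below places Suspicious studies first, then Processed, then un-analyzed studies, mirroring the column sort described above. The field names are assumptions, not the actual API.

```python
# Illustrative worklist sort (field names are assumptions, not the real API).
SORT_RANK = {"Suspicious": 0, "Processed": 1, 0: 2}  # 0 = not analyzed

worklist = [
    {"patient": "DOE^JANE", "ai_result": "Processed"},
    {"patient": "ROE^RICH", "ai_result": 0},
    {"patient": "POE^ANNA", "ai_result": "Suspicious"},
]

# Clicking the AI Results column's sort arrow would correspond to:
worklist.sort(key=lambda study: SORT_RANK[study["ai_result"]])

for study in worklist:
    print(study["patient"], study["ai_result"])
# POE^ANNA Suspicious
# DOE^JANE Processed
# ROE^RICH 0
```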

    AI/ML Overview

    Below is a detailed description of the acceptance criteria and the study demonstrating that the device meets them, based on the FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    Primary objective:
    • Acceptance criterion: AUROC ≥ 0.95, with a 95% confidence interval of ±0.02
    • Reported device performance (CogNet AI-MT+): AUROC = 0.9548 (95% CI: 0.9364 - 0.9699)

    Secondary objectives (compared to the BCSC study):
    • Acceptance criterion: Sensitivity comparable to BCSC (0.869); reported performance: Sensitivity = 0.8809 (95% CI: 0.8511 - 0.9032)
    • Acceptance criterion: Specificity comparable to BCSC (0.889); reported performance: Specificity = 0.9156 (95% CI: 0.8933 - 0.9380)

    The reported device performance for CogNet AI-MT+ met or exceeded both the primary AUROC objective and the secondary sensitivity and specificity objectives.
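    For readers who want to reproduce this style of evaluation on their own labeled scores, here is a sketch using scikit-learn and a percentile bootstrap for the AUROC confidence interval. The toy data, variable names, 0.5 operating threshold, and 2,000-resample bootstrap are all assumptions; the 510(k) summary does not state how the confidence intervals were computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# y_true: 1 = malignant (biopsy proven), 0 = benign; y_score: model output.
# Toy data stands in for the actual study scores.
y_true = rng.integers(0, 2, size=806)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=806), 0, 1)

auroc = roc_auc_score(y_true, y_score)

# Percentile bootstrap CI for AUROC (method assumed, not from the letter).
boot = []
n = len(y_true)
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    if len(np.unique(y_true[idx])) < 2:
        continue  # each resample must contain both classes
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

# Sensitivity/specificity at an illustrative operating threshold of 0.5.
y_pred = y_score >= 0.5
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
sensitivity = tp / np.sum(y_true == 1)
specificity = tn / np.sum(y_true == 0)

print(f"AUROC {auroc:.4f} (95% CI {lo:.4f}-{hi:.4f})")
print(f"Sensitivity {sensitivity:.4f}  Specificity {specificity:.4f}")
```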


    Study Details Proving Device Meets Acceptance Criteria

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: 806 women (patients).
      • This consisted of 403 cases labeled "benign" (negative diagnosis with 2-year follow-up) and 403 cases labeled "malignant" (positive biopsy result).
      • The breakdown of these 806 samples by outcome is:
        • Malignant: 403
        • Biopsy Proven Benign: 21
        • Screening Benign: 382
          (the two benign categories sum to the 403 benign cases)
    • Data Provenance: The test set data was obtained from a site or facility that was not used to source the training or development data, to ensure generalizability. The specific country of origin is not explicitly stated, but it is implied to be distinct from the diverse geographical regions listed for training data (Europe, South Asia, South America, Africa, United States). It is a retrospective study.

    3. Number of experts used to establish the ground truth for the test set and qualifications of those experts

    The document does not explicitly state the number of experts used or their qualifications for establishing the ground truth for the test set. However, the ground truth was based on:

    • "negative diagnosis (BI-RADS 1 or 2 assessment) throughout 2-years of follow-up" for benign cases.
    • "positive biopsy result" for malignant cases.
      This implies clinical follow-up and pathology reports, which are inherently established by qualified medical professionals, but not specifically "experts" in the context of reader studies.

    4. Adjudication method for the test set

    The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1) for the test set. The ground truth appears to be based on definitive clinical outcomes such as biopsy results and 2-year follow-up on BI-RADS classifications, which typically do not require an adjudication process among readers for establishing the final ground truth label itself.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    • A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done.
    • The study conducted was a standalone retrospective study of the device performance (algorithm only).

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, a standalone retrospective study of device performance was conducted. The results (AUROC, Sensitivity, Specificity) listed in the table above reflect the algorithm's performance without human-in-the-loop assistance.

    7. The type of ground truth used

    The ground truth used was:

    • Outcomes data / Clinical Follow-up: "negative diagnosis (BI-RADS 1 or 2 assessment) throughout 2-years of follow-up" for benign cases.
    • Pathology: "positive biopsy result" for malignant cases.

    8. The sample size for the training set

    • Total Patients for Training Set: 32,292 patients.
      • This corresponds to approximately 129,168 images (assuming 4 images per patient for bilateral studies with 4 standard views, as mentioned in the inclusion criteria); see the arithmetic check after this list.
      • The breakdown was 10,496 positive cases and 21,796 negative cases.
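    As a quick check on these figures, a worked arithmetic sketch (the 4-views-per-patient factor is the assumption stated above):

```python
# Worked arithmetic for the training-set figures quoted above.
patients_total = 32_292
positives, negatives = 10_496, 21_796
views_per_patient = 4  # assumption stated in the inclusion criteria

assert positives + negatives == patients_total
images_approx = patients_total * views_per_patient
print(images_approx)                         # 129168
print(round(positives / patients_total, 3))  # 0.325 -> ~32.5% positive cases
```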

    9. How the ground truth for the training set was established

    The document states that the algorithm was "trained with samples both suspicious of cancer and not suspicious of cancer." While the precise method for establishing ground truth for each individual training sample is not detailed, it can be inferred that it followed clinical standards similar to those of the test set:

    • "Biopsy proven cancer studies (soft tissues and microcalcifications)" for positive cases.
    • "BIRADS 1 and 2 normal/benign cases with 2-year follow-up of a negative diagnosis" for negative cases.
      These methods would involve pathology reports and clinical follow-up, similar to the test set ground truth.