K Number
K211678
Manufacturer
Date Cleared
2021-11-17

(169 days)

Product Code
Regulation Number
892.2090
Panel
RA
Reference & Predicate Devices
Intended Use

Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software device based on an artificial intelligence algorithm, intended to aid in the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM systems. As an adjunctive tool, the device is intended to be viewed by interpreting physicians after completing their initial read.

It is not intended as a replacement for a complete physician's review or their clinical judgment that takes into account other relevant information from the image or patient history. Lunit INSIGHT MMG is for use with screening mammograms of the female population.

Device Description

Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software for aiding interpreting physicians with the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM (full-field digital mammography) systems. The software applies an artificial intelligence algorithm to recognize suspicious lesions; the algorithm is trained on large databases of biopsy-proven examples of breast cancer, benign lesions, and normal tissue.

Lunit INSIGHT MMG automatically analyzes the mammograms received from the client's image storage system (e.g., Picture Archiving and Communication System (PACS)) or other radiological imaging equipment. Following receipt of mammograms, the software device de-identifies copies of the images in DICOM format (.dcm), then automatically analyzes each image, identifying and characterizing suspicious areas for breast cancer. The analysis result is converted into a DICOM file and saved to the designated storage location (e.g., PACS, X-ray system, etc.).

Lunit INSIGHT MMG processes mammograms, and the output of the device can be viewed by interpreting physicians after completing their initial read. As an analysis result, the software device provides visualization and a quantitative estimate of the presence of a malignant lesion. Suspicious lesions are marked on a visualized map, and an abnormality score, reflecting the general likelihood of malignancy, is presented for each lesion as well as for each breast.
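The flow described above (receive, de-identify, analyze, score per lesion and per breast, save) can be sketched in outline. This is a minimal illustration, not Lunit's implementation: the `deidentify`, `analyze`, and `breast_scores` helpers are hypothetical names, DICOM metadata is modeled as a plain dict rather than a real DICOM dataset, and the model output is hard-coded.

```python
# Illustrative sketch only -- not the vendor's code. DICOM attributes are
# modeled as a plain dict; a real pipeline would use a DICOM toolkit.

PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate"}  # illustrative subset

def deidentify(ds: dict) -> dict:
    """Return a copy of the image metadata with direct identifiers removed."""
    return {k: v for k, v in ds.items() if k not in PHI_TAGS}

def analyze(pixel_data: bytes) -> list:
    """Stand-in for the AI model: one entry per suspicious lesion."""
    # A real model would localize lesions in the pixel data; here the
    # output is hard-coded for illustration.
    return [{"laterality": "L", "abnormality_score": 0.87}]

def breast_scores(findings: list) -> dict:
    """Per-breast score taken as the max lesion score (a common convention)."""
    scores = {}
    for f in findings:
        side = f["laterality"]
        scores[side] = max(scores.get(side, 0.0), f["abnormality_score"])
    return scores

mammogram = {"PatientName": "DOE^JANE", "PatientID": "123",
             "Laterality": "L", "PixelData": b"..."}
clean = deidentify(mammogram)           # de-identified copy
findings = analyze(clean["PixelData"])  # lesion-level output
result = {"findings": findings, "per_breast": breast_scores(findings)}
```

The result dict stands in for the DICOM secondary-capture object that would be written back to PACS.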

AI/ML Overview
{
  "acceptance_criteria_and_study_description": {
    "acceptance_criteria_and_performance_table": [
      {
        "criterion": "Standalone Performance: ROC AUC",
        "acceptance_criteria": "Statistical significance (p < 0.05), difference > 0",
        "reported_performance": "0.051 (95% CI: 0.027-0.075), P=0.0001"
      }
    ],
    "test_set_sample_size_and_provenance": {
      "standalone_performance_study": {
        "sample_size": "2412 mammograms",
        "data_provenance": "Collected using Hologic, GE Healthcare, and Siemens mammography equipment. Data is independent from dataset used for algorithm development and US pivotal reader study.",
        "country_of_origin": "Not explicitly stated, but implies global or general collection from compatible equipment."
      },
      "clinical_testing_reader_study": {
        "sample_size": "240 mammograms",
        "data_provenance": "Collected using Hologic and GE Healthcare mammography equipment in the US. Retrospective study.",
        "country_of_origin": "US"
      }
    },
    "experts_for_ground_truth": {
      "number_of_experts": "Not explicitly stated for establishing ground truth, but for the clinical reader study, 12 MQSA qualified reading panelists performed interpretations.",
      "qualifications_of_experts": "For the clinical reader study, 'MQSA qualified reading panelists' were used. Specific years of experience are not mentioned."
    },
    "adjudication_method_for_test_set": {
      "standalone_performance_study": "Not explicitly described, but implies comparison against 'reference standards'.",
      "clinical_testing_reader_study": "Not explicitly described beyond the use of 12 MQSA qualified reading panelists. The process states that readers re-interpret cases with AI assistance after an initial unaided read, with randomized reading order to minimize bias."
    },
    "mrmc_comparative_effectiveness_study": {
      "conducted": "Yes",
      "effect_size": {
        "roc_auc_improvement": "Average inter-test difference in ROC AUC between Test 2 (with CAD) and Test 1 (without CAD) was 0.051 (95% CI: 0.027-0.075), with statistical significance (P=0.0001). This indicates improved physician interpretation ability.",
        "lcm_lroc_improvement": "0.094 (95% CI: 0.056 - 0.132)",
        "lcm_roc_auc_improvement": "0.052 (95% CI: 0.026 - 0.079)",
        "recall_rate_in_cancer_group_sensitivity_improvement": "5.97 (95% CI: 2.48 - 9.46)",
        "recall_rate_in_non_cancer_group_1_specificity_improvement": "-1.46 (95% CI: -3.41 to 0.05)"
      }
    },
    "standalone_performance_study_conducted": "Yes, two standalone performance analyses were done:\n1. A dedicated standalone performance study with 2412 mammograms, showing ROC AUC of 0.903.\n2. An additional standalone algorithm performance assessment using the 240 cases from the reader study, without reader intervention, yielding an ROC AUC of 0.863. This exceeded the AUC of every human reading panelist in the unaided (Test 1) scenario (unaided AUC = 0.754).",
    "type_of_ground_truth_used": "Mainly implied as breast cancer detection based on 'reference standards' in the standalone study and by expert interpretation/consensus in the reader study. The device is trained with 'biopsy proven examples of breast cancer, benign lesions and normal tissues', suggesting pathology or clinical outcomes data as the ultimate ground truth source for training.",
    "training_set_sample_size": "Not explicitly stated as a specific number, but described as 'large databases of biopsy proven examples of breast cancer, benign lesions and normal tissues'."
  }
}
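The standalone figures reported above (ROC AUC 0.903 and 0.863) are empirical ROC AUCs computed over case-level abnormality scores against the reference standard. As a sketch of what that metric means, the Mann-Whitney formulation below computes an empirical AUC with the standard library; the scores and labels are invented for illustration.

```python
def roc_auc(scores, labels):
    """Empirical ROC AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive case scores higher than a randomly
    chosen negative case, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: 1 = cancer, 0 = non-cancer (values invented for illustration).
scores = [0.9, 0.8, 0.35, 0.7, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
auc = roc_auc(scores, labels)
```

In a reader study the same statistic is computed per reader per session, and the MRMC analysis tests whether the average AUC difference (here, 0.051) is significantly greater than zero.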

§ 892.2090 Radiological computer-assisted detection and diagnosis software.

(a)
Identification. A radiological computer-assisted detection and diagnosis software is an image processing device intended to aid in the detection, localization, and characterization of fractures, lesions, or other disease-specific findings on acquired medical images (e.g., radiography, magnetic resonance, computed tomography). The device detects, identifies, and characterizes findings based on features or information extracted from images, and provides information about the presence, location, and characteristics of the findings to the user. The analysis is intended to inform the primary diagnostic and patient management decisions that are made by the clinical user. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

(b)
Classification. Class II (special controls). The special controls for this device are:
(1) Design verification and validation must include:
(i) A detailed description of the image analysis algorithm, including a description of the algorithm inputs and outputs, each major component or block, how the algorithm and output affects or relates to clinical practice or patient care, and any algorithm limitations.
(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide improved assisted-read detection and diagnostic performance as intended in the indicated user population(s), and to characterize the standalone device performance for labeling. Performance testing includes standalone test(s), side-by-side comparison(s), and/or a reader study, as applicable.
(iii) Results from standalone performance testing used to characterize the independent performance of the device separate from aided user performance. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Devices with localization output must include localization accuracy testing as a component of standalone testing. The test dataset must be representative of the typical patient population with enrichment made only to ensure that the test dataset contains a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant disease, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.
(iv) Results from performance testing that demonstrate that the device provides improved assisted-read detection and/or diagnostic performance as intended in the indicated user population(s) when used in accordance with the instructions for use. The reader population must be comprised of the intended user population in terms of clinical training, certification, and years of experience. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Test datasets must meet the requirements described in paragraph (b)(1)(iii) of this section.
(v) Appropriate software documentation, including device hazard analysis, software requirements specification document, software design specification document, traceability analysis, system level test protocol, pass/fail criteria, testing results, and cybersecurity measures.
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the device instructions for use, including the intended reading protocol and how the user should interpret the device output.
(iii) A detailed description of the intended user, and any user training materials or programs that address appropriate reading protocols for the device, to ensure that the end user is fully aware of how to interpret and apply the device output.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.
(vii) A detailed summary of the performance testing, including test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as anatomical characteristics, patient demographics and medical history, user experience, and imaging equipment.
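The diagnostic accuracy measures named in the special controls (sensitivity, specificity, positive and negative predictive values, diagnostic likelihood ratios) all reduce to ratios over a 2x2 confusion matrix. A minimal stdlib sketch, with invented counts:

```python
def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Diagnostic accuracy measures from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)          # sensitivity (true positive rate)
    spec = tn / (tn + fp)          # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),     # positive predictive value
        "npv": tn / (tn + fn),     # negative predictive value
        "dlr+": sens / (1 - spec), # positive diagnostic likelihood ratio
        "dlr-": (1 - sens) / spec, # negative diagnostic likelihood ratio
    }

# Invented counts for illustration: 100 cancer cases, 200 non-cancer cases.
m = accuracy_measures(tp=80, fp=30, fn=20, tn=170)
```

Unlike sensitivity and specificity, PPV and NPV depend on disease prevalence in the test dataset, which is one reason the special controls require the dataset to be representative of the intended use population.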