510(k) Data Aggregation
(169 days)
Lunit INSIGHT MMG
Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software device based on an artificial intelligence algorithm intended to aid in the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM systems. As an adjunctive tool, the device is intended to be viewed by interpreting physicians after they have completed their initial read.
It is not intended to replace a complete physician's review or clinical judgement, which takes into account other relevant information from the image or the patient's history. Lunit INSIGHT MMG is intended for use with screening mammograms of the female population.
Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software for aiding interpreting physicians in the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM (full-field digital mammography) systems. The software applies an artificial intelligence algorithm for recognition of suspicious lesions, which is trained on large databases of biopsy-proven examples of breast cancer, benign lesions, and normal tissue.
Lunit INSIGHT MMG automatically analyzes the mammograms received from the client's image storage system (e.g., a Picture Archiving and Communication System (PACS)) or other radiological imaging equipment. Following receipt of the mammograms, the software de-identifies copies of the images in DICOM format (.dcm), then automatically analyzes each image, identifying and characterizing suspicious areas for breast cancer. The analysis result is converted into a DICOM file and saved to the designated storage location (e.g., PACS, x-ray system).
Lunit INSIGHT MMG processes mammograms, and the device's output can be viewed by interpreting physicians after they have completed their initial read. As its analysis result, the software provides a visualization and a quantitative estimate of the presence of a malignant lesion. Suspicious lesions are marked on a visualized map, and an abnormality score, reflecting the general likelihood that malignancy is present, is reported for each lesion as well as for each breast.
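The workflow described above (de-identify image copies, score each image, group lesion findings per breast) can be sketched in plain Python. Everything here is a hypothetical illustration: the class names, the `deidentify` and `detect` helpers, and the assumption that the per-breast score is the maximum per-lesion score are not taken from the actual device.

```python
from dataclasses import dataclass, field

@dataclass
class Lesion:
    contour: list             # outline used for the visualized map
    abnormality_score: float  # likelihood of malignancy for this lesion

@dataclass
class BreastResult:
    lesions: list = field(default_factory=list)

    @property
    def abnormality_score(self) -> float:
        # Assumption: the per-breast score aggregates lesion scores by maximum.
        return max((l.abnormality_score for l in self.lesions), default=0.0)

def deidentify(image: dict) -> dict:
    """Stand-in for DICOM de-identification: drop patient-identifying tags
    from a copy of the image's attribute dictionary."""
    return {k: v for k, v in image.items()
            if k not in ("PatientName", "PatientID")}

def analyze_study(images, detect):
    """Score each de-identified image with a detection callable and group
    the resulting lesions by breast laterality ("L" or "R")."""
    results = {}
    for img in images:
        clean = deidentify(img)
        side = clean["Laterality"]
        results.setdefault(side, BreastResult()).lesions.extend(detect(clean))
    return results
```

A caller would pass the study's image dictionaries plus whatever inference function produces `Lesion` objects; the returned mapping then carries one `BreastResult` per side, each exposing its aggregate score.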
{
  "acceptance_criteria_and_study_description": {
    "acceptance_criteria_and_performance_table": [
      {
        "criterion": "Standalone Performance: ROC AUC",
        "acceptance_criteria": "Statistical significance (p < 0.05)",
        "reported_performance": "0.051 (95% CI: 0.027 to 0.075), P=0.0001"
      }
    ],
    "test_set_sample_size_and_provenance": {
      "standalone_performance_study": {
        "sample_size": "2412 mammograms",
        "data_provenance": "Collected using Hologic, GE Healthcare, and Siemens mammography equipment. Data is independent of the dataset used for algorithm development and of the US pivotal reader study.",
        "country_of_origin": "Not explicitly stated, but implies global or general collection from compatible equipment."
      },
      "clinical_testing_reader_study": {
        "sample_size": "240 mammograms",
        "data_provenance": "Collected using Hologic and GE Healthcare mammography equipment in the US. Retrospective study.",
        "country_of_origin": "US"
      }
    },
    "experts_for_ground_truth": {
      "number_of_experts": "Not explicitly stated for establishing ground truth; for the clinical reader study, 12 MQSA-qualified reading panelists performed interpretations.",
      "qualifications_of_experts": "For the clinical reader study, 'MQSA qualified reading panelists' were used. Specific years of experience are not mentioned."
    },
    "adjudication_method_for_test_set": {
      "standalone_performance_study": "Not explicitly described, but implies comparison against 'reference standards'.",
      "clinical_testing_reader_study": "Not explicitly described beyond the use of 12 MQSA-qualified reading panelists. Readers re-interpreted cases with AI assistance after an initial unaided read, with randomized reading order to minimize bias."
    },
    "mrmc_comparative_effectiveness_study": {
      "conducted": "Yes",
      "effect_size": {
        "roc_auc_improvement": "Average inter-test difference in ROC AUC between Test 2 (with CAD) and Test 1 (without CAD) was 0.051 (95% CI: 0.027 to 0.075), statistically significant (P=0.0001). This indicates improved physician interpretation ability.",
        "lcm_lroc_improvement": "0.094 (95% CI: 0.056 to 0.132)",
        "lcm_roc_auc_improvement": "0.052 (95% CI: 0.026 to 0.079)",
        "recall_rate_in_cancer_group_sensitivity_improvement": "5.97 (95% CI: 2.48 to 9.46)",
        "recall_rate_in_non_cancer_group_1_specificity_improvement": "-1.46 (95% CI: -3.41 to 0.05)"
      }
    },
    "standalone_performance_study_conducted": "Yes, two standalone performance analyses were conducted:\n1. A dedicated standalone performance study with 2412 mammograms, showing an ROC AUC of 0.903.\n2. An additional standalone algorithm performance assessment using the 240 cases from the reader study, without reader intervention, yielding an ROC AUC of 0.863. This exceeded the AUC of every human reading panelist in the unaided (Test 1) scenario (unaided AUC = 0.754).",
    "type_of_ground_truth_used": "Mainly implied: 'reference standards' in the standalone study, and expert interpretation/consensus in the reader study. The device is trained with 'biopsy proven examples of breast cancer, benign lesions and normal tissues', suggesting pathology or clinical-outcome data as the ultimate ground-truth source for training.",
    "training_set_sample_size": "Not explicitly stated as a specific number, but described as 'large databases of biopsy proven examples of breast cancer, benign lesions and normal tissues'."
  }
}
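The summary above reports results as ROC AUC (e.g., 0.903 standalone, and a 0.051 reader-study improvement). As a reminder of what that figure measures, the sketch below computes ROC AUC from case labels and scores using its probabilistic definition: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting one half. This is a generic illustration of the metric, not the statistical method used in the submission.

```python
from itertools import product

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney relation: fraction of (positive,
    negative) pairs where the positive case outscores the negative one
    (ties count as 0.5). O(P*N); fine for illustration, not large data."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.1])` gives 0.75: of the four positive/negative pairs, three are ranked correctly. An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect ranking.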