510(k) Data Aggregation (168 days)
Lunit INSIGHT DBT is a computer-assisted detection and diagnosis (CADe/x) software intended to be used concurrently by interpreting physicians to aid in the detection and characterization of suspected lesions for breast cancer in digital breast tomosynthesis (DBT) exams from compatible DBT systems. Through the analysis, the regions of soft tissue lesions and calcifications are marked, each with an abnormality score indicating the likelihood of malignancy. Lunit INSIGHT DBT is intended for use with screening mammograms of the female population.
Lunit INSIGHT DBT is not intended as a replacement for a complete interpreting physician's review or their clinical judgment that takes into account other relevant information from the image or patient history.
Lunit INSIGHT DBT is a computer-assisted detection/diagnosis (CADe/x) software as a medical device that provides information about the presence, location and characteristics of lesions suspicious for breast cancer to assist interpreting physicians in making diagnostic decisions when reading digital breast tomosynthesis (DBT) images. The software automatically analyzes digital breast tomosynthesis slices via artificial intelligence technology that has been trained via deep learning.
For each DBT case, Lunit INSIGHT DBT generates artificial intelligence analysis results that include the lesion type, location, lesion-level and case-level scores, and an outline of the regions suspected of breast cancer. This supplemental information is intended to augment the physician's workflow and better aid in the detection and diagnosis of breast cancer.
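To make the description above concrete, here is a hypothetical illustration (not Lunit's actual output schema) of the kind of per-case result the text describes: a lesion type, a location, lesion-level and case-level abnormality scores, and an outline of each suspected region. All field names and the 0-100 score range are assumptions for illustration only.

```python
# Hypothetical sketch of a per-case CADe/x result structure; field names and
# score range are illustrative assumptions, not the device's actual interface.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LesionFinding:
    lesion_type: str                  # e.g. "soft_tissue" or "calcification"
    slice_index: int                  # DBT slice where the finding is most conspicuous
    outline: List[Tuple[int, int]]    # polygon (x, y) pixel coordinates outlining the region
    abnormality_score: float          # lesion-level likelihood of malignancy (assumed 0-100)

@dataclass
class CaseResult:
    case_score: float                             # case-level abnormality score (assumed 0-100)
    findings: List[LesionFinding] = field(default_factory=list)
```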
Here's a breakdown of the acceptance criteria and the study proving the device, Lunit INSIGHT DBT, meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria | Reported Device Performance | Statistical Significance / Comment |
|---|---|---|---|
| Standalone Performance | | | |
| AUROC | > 0.903 (mean AUROC of predicate device K211678) | 0.928 (95% CI: 0.917 - 0.939) | p < 0.0001 (exceeded acceptance criterion) |
| Clinical Assessment (MRMC Study with CAD Assistance) | | | |
| Patient-level LOS AUROC | CAD-assisted performance superior to CAD-unassisted performance with statistical significance | CAD-unassisted AUROC: 0.897 (95% CI: 0.858 - 0.936); CAD-assisted AUROC: 0.915 (95% CI: 0.874 - 0.955) | Inter-test difference: 0.017 (95% CI: 0.000 - 0.034, p = 0.0498) - met acceptance criterion |
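As a minimal sketch of how the standalone acceptance criterion in the table could be evaluated (not the submission's actual statistical method), the snippet below estimates the case-level AUROC, a percentile bootstrap confidence interval, and a one-sided bootstrap p-value against a fixed benchmark such as the predicate's 0.903. The inputs `y_true` (1 = cancer, 0 = negative/benign) and `y_score` (case-level abnormality scores) are assumed.

```python
# Sketch only: bootstrap check of AUROC against a fixed benchmark.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_test(y_true, y_score, benchmark=0.903,
                         n_boot=2000, alpha=0.05, seed=0):
    """Return point AUROC, a (1 - alpha) percentile CI, and a one-sided
    bootstrap p-value for the hypothesis AUROC > benchmark."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)

    boot = []
    n = len(y_true)
    while len(boot) < n_boot:
        idx = rng.integers(0, n, n)              # resample cases with replacement
        if y_true[idx].min() == y_true[idx].max():
            continue                             # need both classes to compute AUROC
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))
    boot = np.array(boot)

    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    p_one_sided = np.mean(boot <= benchmark)     # fraction of resamples not exceeding the benchmark
    return auc, (lo, hi), p_one_sided
```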
2. Sample Size Used for the Test Set and Data Provenance
- Standalone Performance Test Set: 2,202 DBT exams (1,100 negative/benign, 1,102 cancer cases).
- Data Provenance: Collected consecutively at multiple imaging facilities in the US.
- Clinical Assessment (MRMC) Test Set: 258 DBT exams (65 cancer cases, 193 non-cancer cases, comprising 128 normal and 65 benign cases).
- Data Provenance: Acquired from US clinical centers.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Standalone Performance Test Set: The text states ground truth localization was "derived based on the radiologic review and annotation by multiple MQSA qualified ground truthers." The exact number of experts is not specified.
- Qualifications: "MQSA qualified ground truthers."
- Clinical Assessment (MRMC) Test Set: The ground truth for the cases used in the MRMC study is implicitly established by the case classification (cancer vs. non-cancer). It's not explicitly stated how many experts established the underlying ground truth for these 258 cases, beyond the radiologists participating in the MRMC study itself. The readers for the MRMC study were "a total of 15 MQSA qualified and US board-certified radiologists."
4. Adjudication Method for the Test Set
- Standalone Performance Test Set: Ground truth was established through "binary classification of each case based on clinical supporting data, particularly pathology reports for cancer and biopsy-proven benign cases, followed by localization which was derived based on the radiologic review and annotation by multiple MQSA qualified ground truthers." This suggests an expert consensus/review process for localization, likely involving reconciliation or multiple reads. The specific type of adjudication (e.g., 2+1, 3+1) is not explicitly detailed.
- Clinical Assessment (MRMC) Test Set: The ground truth for the MRMC study seems to rely on the pre-established classification of cases (cancer/non-cancer) based on clinical and pathological data. The radiologists in the MRMC study were evaluating cases against this existing ground truth, not establishing it in an adjudicated reading session.
5. Was a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Done? If so, what was the effect size of how much human readers improve with AI vs. without AI assistance?
- Yes, an MRMC comparative effectiveness study was done.
- Effect Size of Improvement:
- CAD-unassisted AUROC: 0.897
- CAD-assisted AUROC: 0.915
- Inter-test difference (effect size): 0.017 (absolute difference in AUROC). This indicates an improvement in AUROC of 0.017 when radiologists were assisted by Lunit INSIGHT DBT; the p-value of 0.0498 indicates the improvement was statistically significant at the 0.05 level.
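The following is a simplified sketch of the comparison reported above: the difference in reader-averaged AUROC between unaided and CAD-aided reads, with a paired case-level bootstrap confidence interval. The actual MRMC analysis would typically use an Obuchowski-Rockette or DBM model that accounts for reader and case variability jointly; the reader-by-case score matrices `unaided` and `aided` are assumed inputs.

```python
# Sketch only: paired bootstrap for the difference in reader-averaged AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def reader_avg_auc(y_true, scores_by_reader):
    """Mean AUROC across readers; scores_by_reader has shape (readers, cases)."""
    return np.mean([roc_auc_score(y_true, s) for s in scores_by_reader])

def paired_bootstrap_delta(y_true, unaided, aided, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    unaided, aided = np.asarray(unaided), np.asarray(aided)
    delta = reader_avg_auc(y_true, aided) - reader_avg_auc(y_true, unaided)

    deltas = []
    n = y_true.size
    while len(deltas) < n_boot:
        idx = rng.integers(0, n, n)              # resample the same cases for both reading arms
        if y_true[idx].min() == y_true[idx].max():
            continue                             # need both classes in the resample
        deltas.append(reader_avg_auc(y_true[idx], aided[:, idx])
                      - reader_avg_auc(y_true[idx], unaided[:, idx]))
    lo, hi = np.percentile(deltas, [2.5, 97.5])
    return delta, (lo, hi)
```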
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone performance study was done.
- Performance: AUROC of 0.928 (95% CI: 0.917 - 0.939).
7. The Type of Ground Truth Used
- Standalone Performance and Clinical Assessment Test Sets: The ground truth was established primarily through clinical supporting data, specifically pathology reports for cancer and biopsy-proven benign cases, further supplemented by radiologic review and annotation by MQSA qualified ground truthers for localization. This can be categorized as a combination of pathology and expert consensus/review.
8. The Sample Size for the Training Set
- The document states that the "dataset used in the standalone performance test was independent from the dataset used for development of the artificial intelligence algorithm." However, it does not specify the sample size of the training set used for the development of Lunit INSIGHT DBT.
9. How the Ground Truth for the Training Set Was Established
- The document mentions that the training dataset was separate from the test dataset. While it details how the ground truth was established for the test set (pathology, biopsy, radiologic review/annotation), it does not explicitly describe how the ground truth was established for the training set. Because the device is a deep learning model, the training data most likely also relied on verified diagnoses, probably derived from pathology/biopsy reports, as is typical for breast cancer CADe/x devices.