Search Results
Found 3 results
510(k) Data Aggregation
(279 days)
Genius AI Detection 2.0
Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score.
The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Genius AI Detection 2.0 is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection 2.0 analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection 2.0 produces CAD results that include:
- the location of the lesion;
- an outline of the lesion;
- a confidence score for the lesion.

Genius AI Detection 2.0 also produces a case score for the entire breast tomosynthesis exam.
Genius AI Detection 2.0 packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently with the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as the number of marks and case scores may be used on the review workstation to enhance the interpreting physician's workflow by offering better organization of the patient worklist.
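Because the output is a standard DICOM Mammography CAD SR object, any DICOM-aware tool can inspect the findings it carries. The following is a minimal sketch using the pydicom library, assuming a hypothetical file `cad_sr.dcm`; it only walks the SR content tree and prints concept names, whereas a real workstation parser would follow the full Mammography CAD SR template to extract mark locations, outlines, and scores.

```python
# Minimal sketch: walk the content tree of a Mammography CAD SR object with pydicom.
# "cad_sr.dcm" is a hypothetical example file, not a Hologic-provided sample.
import pydicom

MAMMO_CAD_SR_STORAGE = "1.2.840.10008.5.1.4.1.1.88.50"  # Mammography CAD SR Storage SOP Class

def walk(items, depth=0):
    """Recursively print the value type and concept name of each SR content item."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item and item.ConceptNameCodeSequence:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        print("  " * depth + f"{item.ValueType}: {name}")
        if "ContentSequence" in item:
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("cad_sr.dcm")
if ds.SOPClassUID == MAMMO_CAD_SR_STORAGE:
    walk(ds.ContentSequence)
```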
Here's a breakdown of the acceptance criteria and study details for Genius AI Detection 2.0, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for Genius AI Detection 2.0
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes a non-inferiority study to demonstrate that the performance of Genius AI Detection 2.0 on Envision (ENV) images is equivalent to its performance on the predicate's Standard of Care (SOC) images (Hologic's Selenia Dimensions systems). The primary acceptance criterion was non-inferiority of the Area Under the Curve (AUC) of the ROC curve, with a 5% margin. Secondary metrics included sensitivity, specificity, and false marker rate per view.
| Acceptance Criteria Category | Specific Metric | Predicate Device Performance (SOC Images) | Subject Device Performance (ENV Images) | Acceptance Criteria Met? |
|---|---|---|---|---|
| Primary Endpoint (Non-Inferiority) | AUC of ROC Curve (ENV-SOC) | N/A (Comparison study) | -0.0017 (95% CI: -0.023 to 0.020) | Yes (p-value for difference = 0.87, indicating no significant difference, and within the 5% non-inferiority margin) |
| Secondary Metrics | Sensitivity | N/A (Comparison study) | No significant difference reported between modalities | Yes |
| | Specificity | N/A (Comparison study) | No significant difference reported between modalities | Yes |
| | False Marker Rate per View | N/A (Comparison study) | No significant difference reported between modalities | Yes |
| CC-MLO Correlation | Accuracy on Malignant Lesions | N/A | 90% | Yes (considered accurate) |
| | Accuracy on Negative Cases (correlated pairs) | N/A | 73% | Yes (considered accurate) |
| Implant Cases | Location-specific cancer detection sensitivity | N/A | 76% (CI: 68%-84%) | Yes (considered acceptable based on confidence intervals) |
| | Specificity | N/A | 67% (CI: 62%-72%) | Yes (considered acceptable based on confidence intervals) |
(Note: The document focuses on demonstrating equivalence to the predicate's performance on a new platform rather than absolute performance against a fixed threshold for all metrics, except for the implant cases, where specific confidence intervals are given and deemed acceptable. A sketch of how the primary non-inferiority margin check works is given below.)
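To make the primary endpoint concrete: with a 5% non-inferiority margin, performance on ENV images is considered non-inferior when the lower bound of the confidence interval for the AUC difference (ENV-SOC) stays above -0.05. The sketch below simply restates that check using the reported point estimate and interval; how the interval itself was derived (e.g., a paired bootstrap or DeLong-type analysis) is not detailed in the summary.

```python
# Non-inferiority margin check on the AUC difference (ENV - SOC).
# The estimate and CI are the values reported in the summary; the margin is 5% (0.05).
auc_diff = -0.0017                    # point estimate of AUC(ENV) - AUC(SOC)
ci_lower, ci_upper = -0.023, 0.020    # reported 95% confidence interval
margin = 0.05                         # non-inferiority margin

non_inferior = ci_lower > -margin
print(f"AUC difference: {auc_diff:+.4f} (95% CI {ci_lower:+.3f} to {ci_upper:+.3f})")
print(f"Non-inferior (CI lower bound > {-margin}): {non_inferior}")  # -> True
```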
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Main Comparison Study): 1475 subjects
- 200 biopsy-proven cancer subjects
- 275 biopsy-proven benign subjects
- 78 BI-RADS 3 subjects (considered BI-RADS 1 or 2 upon diagnostic workup)
- 922 BI-RADS 1 and 2 subjects (at screening)
- Implant Case Test Set: 480 subjects
- 132 biopsy-proven cancer subjects
- 348 negative subjects (119 biopsy-proven benign, 229 screening negative)
- Data Provenance:
- Country of Origin: Not explicitly stated, but the data were collected from a "national multi-center breast imaging network" within the U.S., implying U.S. origin.
- Retrospective or Prospective: The main comparison study data were collected under an IRB-approved protocol to evaluate the safety and effectiveness of the Envision platform, which suggests a retrospective design in which existing images were gathered for evaluation. The implant cases were collected between 2015 and 2022, also indicating a retrospective approach.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two
- Qualifications: Both were MQSA-certified radiologists with over 20 years of experience.
4. Adjudication Method for the Test Set
The document explicitly states that the "ground truthing to evaluate performance metrics including the locations of cancer lesions was done by two MQSA-certified radiologists with over 20 years of experience."
- Adjudication Method: Not specified (e.g., 2+1 or 3+1). The document only states that ground truthing was performed by two experts, which implies either consensus between the two, an unstated arbitration process for disagreements, or use of their individual findings in the analysis. Given the phrasing, expert consensus is the most likely implied method, but it is not explicitly detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was NOT done. The study described is a standalone performance comparison of the AI algorithm on images from different modalities (Envision vs. Standard of Care), not a study involving human readers with and without AI assistance to measure effect size.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone study WAS done. The document explicitly states, "A standalone study was conducted to compare the detection performance of FDA cleared Genius AI Detection 2.0 (K221449) using Standard of Care (SOC) images acquired on the Dimensions systems against images acquired on the FDA approved Envision Mammography Platform (P080003/S009)." This study evaluated the algorithm's performance (fROC, ROC, sensitivity, specificity, false marker rate) directly against the ground truth without human intervention.
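As a rough illustration of what a standalone (algorithm-only) evaluation computes, the sketch below derives case-level ROC AUC plus sensitivity and specificity at an operating point from case scores and ground-truth labels. The arrays and threshold are invented for illustration and are not the study's data; the actual analysis also included lesion-level fROC and false-marker-rate metrics, which a case-level sketch does not capture.

```python
# Sketch of case-level standalone metrics; scores, labels, and threshold are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1, 0, 0])        # 1 = cancer case, 0 = negative case
case_score = np.array([0.91, 0.45, 0.12, 0.30, 0.77,
                       0.05, 0.52, 0.66, 0.20, 0.09])     # algorithm case scores
threshold = 0.5                                            # hypothetical operating point

auc = roc_auc_score(y_true, case_score)
pred = case_score >= threshold
sensitivity = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
specificity = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```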
7. The Type of Ground Truth Used
- Ground Truth Type: A combination of biopsy-proven cancer and biopsy-proven benign cases, along with BI-RADS diagnostic outcomes (for negative cases). For the cancer cases, the "locations of cancer lesions" were part of the ground truth.
8. The Sample Size for the Training Set
- Not provided. The document states that the test dataset was "sequestered from any training datasets by isolating it on a secured server with controlled access permissions" and that the data for implant cases was "sequestered from the training datasets for Genius AI Detection." However, the actual sample size of the training set is not mentioned.
9. How the Ground Truth for the Training Set Was Established
- Not provided. Since the training set sample size and details are not disclosed, the method for establishing its ground truth is also not mentioned in this document. It is generally assumed that similar rigorous methods (e.g., biopsy-proven truth, expert review) would have been used for training data, but this specific filing does not detail it.
(130 days)
Genius AI Detection 2.0 with CC-MLO Correlation
Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Genius AI Detection is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection produces CAD results that include the location of the lesion, an outline of the lesion and a confidence score for that lesion. Genius AI Detection also produces a case score for the entire tomosynthesis exam.
Genius AI Detection packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently with the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as the number of marks and case scores may be used on the review workstation to enhance the interpreting physician's workflow by offering better organization of the patient worklist.
Genius AI Detection 2.0 now adds the CC-MLO Correlation feature. The added feature provides the ability to correlate a suspected lesion in one view with a like finding in the other view, and additionally provides a workflow and navigation feature for the interpreting physician.
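The summary does not describe how the CC-MLO Correlation feature pairs findings across views. Purely as a hypothetical sketch of the general idea, and not the device's actual algorithm, detections in the CC and MLO views of the same breast could be paired by comparing a roughly view-invariant quantity such as each detection's distance from the nipple, keeping only pairs whose distances agree within a tolerance:

```python
# Hypothetical sketch of pairing CC and MLO detections by nipple distance.
# Illustrates the general concept only; the device's actual correlation method is not described.
from dataclasses import dataclass

@dataclass
class Detection:
    lesion_id: str
    nipple_distance_mm: float   # distance of the detection from the nipple in that view
    confidence: float

def correlate(cc: list[Detection], mlo: list[Detection], tol_mm: float = 10.0):
    """Return (CC, MLO) detection pairs whose nipple distances agree within tol_mm."""
    pairs = []
    for d_cc in cc:
        # Pick the MLO detection with the closest nipple distance, if within tolerance.
        best = min(mlo, key=lambda d: abs(d.nipple_distance_mm - d_cc.nipple_distance_mm),
                   default=None)
        if best and abs(best.nipple_distance_mm - d_cc.nipple_distance_mm) <= tol_mm:
            pairs.append((d_cc, best))
    return pairs

cc_dets = [Detection("cc-1", 42.0, 0.88), Detection("cc-2", 71.5, 0.40)]
mlo_dets = [Detection("mlo-1", 45.0, 0.83)]
for a, b in correlate(cc_dets, mlo_dets):
    print(f"{a.lesion_id} <-> {b.lesion_id}")   # prints: cc-1 <-> mlo-1
```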
The provided text describes the regulatory clearance of a medical device, "Genius AI Detection 2.0 with CC-MLO Correlation." While it mentions "acceptance criteria" through the lens of safety and effectiveness, it does not explicitly list quantitative acceptance criteria for the device's performance (e.g., a specific sensitivity or specificity threshold). Instead, it describes internal validation and a standalone evaluation study to demonstrate that the device is "safe and effective."
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
As mentioned, explicit quantitative acceptance criteria are not provided in the document. The document states that the "verification testing showed that the software application satisfied the software requirements." For the standalone evaluation of the CC-MLO Correlation feature, the performance was "estimated in both groups by scoring the detection pairs against the truth pairs and by evaluating the expert radiologist's response, respectively." However, specific performance metrics (e.g., accuracy percentages, sensitivity, specificity for the CC-MLO correlation) are not reported in this summary.
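As a hedged illustration of what "scoring the detection pairs against the truth pairs" might look like (the summary does not define the matching rule), a pair-level evaluation could count a detection pair as a true positive when it matches an expert truth pair, for example:

```python
# Hypothetical pair-level scoring against expert truth pairs; the actual matching rule
# used in the study is not described in the summary.
def score_pairs(truth_pairs: set[frozenset], detected_pairs: set[frozenset]):
    tp = len(truth_pairs & detected_pairs)     # detected pairs that match a truth pair
    fn = len(truth_pairs - detected_pairs)     # truth pairs the algorithm missed
    fp = len(detected_pairs - truth_pairs)     # detected pairs with no matching truth pair
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return tp, fp, fn, sensitivity

truth = {frozenset({"cc-1", "mlo-1"})}                                   # expert truth pair
detected = {frozenset({"cc-1", "mlo-1"}), frozenset({"cc-2", "mlo-3"})}  # algorithm output
print(score_pairs(truth, detected))   # -> (1, 1, 0, 1.0)
```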
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size:
- For the standalone evaluation of the CC-MLO Correlation feature, the dataset included:
- 106 biopsy-proven malignant cases.
- 561 screening negative cases.
- Additionally, the detection pairs generated by the CC-MLO correlation feature were reviewed on 658 screening negative and biopsied benign cases. (It's unclear if this "658 cases" is a subset or superset of the "561 screening negative cases" mentioned earlier, or an entirely separate review of negative/benign cases for correlation specifically.)
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Number of Experts: An "expert radiologist" is mentioned in the singular ("an expert radiologist" and "the expert radiologist's response"). This suggests that one expert radiologist was primarily responsible for establishing ground truth for the malignant cases and reviewing detection pairs.
- Qualifications of Experts: The document specifies "expert radiologist" but does not provide details on their specific qualifications, such as years of experience.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- The text describes ground truth for malignant cases being established by "an expert radiologist by generating ground truth marks and truth pairs." For the CC-MLO correlation feature, generated detection pairs were "reviewed by an expert radiologist."
- This suggests single-reader ground truth establishment and review, without an explicit multi-reader adjudication method (such as 2+1 or 3+1); effectively "none" in terms of multi-reader consensus for the test set ground truth.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- The document states, "Standalone evaluation testing was also conducted." It focuses on the performance of the algorithm itself and its ability to correlate findings.
- There is no mention of an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. Therefore, no effect size for human reader improvement is provided. The device "intends to aid in the interpretation... in a concurrent fashion, where the interpreting physician confirms or dismisses the findings," implying human-in-the-loop, but no study of this combined performance is detailed here.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone evaluation was done. The document explicitly states: "Standalone evaluation testing was also conducted." The performance of the CC-MLO Correlation feature was "estimated... by scoring the detection pairs against the truth pairs."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the malignant cases (106 cases): Ground truth was established by "an expert radiologist by generating ground truth marks and truth pairs." It also mentions these were "biopsy proven malignant cases," indicating pathology was also part of the ground truth for these malignant cases. The "truth pairs" were essentially expert annotations of pathologically confirmed lesions on both orthogonal views.
- For the screening negative cases (561 and 658 cases reviewed): The ground truth was presumably based on their screening negativity, validated by clinical follow-up or expert review. The review of detection pairs was against "the expert radiologist's response," implying expert judgment as ground truth for these negative/benign cases.
8. The sample size for the training set
- The sample size for the training set is not provided in this document. The description focuses solely on the "standalone evaluation of the CC-MLO Correlation feature" which used a specific test dataset.
9. How the ground truth for the training set was established
- As the training set sample size is not provided, how its ground truth was established is also not detailed in this document. It only mentions the use of "deep learning networks" which implies a trained model, but the specifics of its training data and ground truth establishment are absent from this summary.
(141 days)
Genius AI Detection 2.0
Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Genius AI Detection is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection produces CAD results that include the location of the lesion, an outline of the lesion and a confidence score for that lesion. Genius AI Detection also produces a case score for the entire tomosynthesis exam.
Genius AI Detection packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently with the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as the number of marks and case scores may be used on the review workstation to enhance the interpreting physician's workflow by offering better organization of the patient worklist.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
The document states that the new device, Genius AI Detection 2.0, aims to improve performance, mainly in terms of improved specificity, particularly for micro-calcification cancer detection, while maintaining sensitivity. The study confirms this improvement.
| Acceptance Criteria Category | Specific Criterion | Reported Device Performance |
|---|---|---|
| Detection Performance | Improved specificity over the predicate device, especially for micro-calcification cancer detection. | The specificity measured at the operating point of Genius AI Detection 2.0 demonstrated a significant increase of 12% compared to the original Genius AI Detection predicate device (McNemar's p |
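The summary cites McNemar's test for this paired specificity comparison (the predicate and Genius AI Detection 2.0 evaluated on the same cases), but does not report the underlying counts. As a generic illustration only, with all counts invented, an exact McNemar test depends on the discordant pairs, i.e., the negative cases that one version of the algorithm handles correctly and the other does not:

```python
# Generic exact McNemar test on invented discordant-pair counts; not the study's data.
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from the two discordant-pair counts."""
    n = b + c
    k = min(b, c)
    one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n   # binomial tail with p = 0.5
    return min(1.0, 2 * one_tail)

# b: negatives correctly dismissed by 2.0 but falsely marked by the predicate (hypothetical)
# c: negatives falsely marked by 2.0 but correctly dismissed by the predicate (hypothetical)
b, c = 60, 18
print(f"exact McNemar p = {mcnemar_exact_p(b, c):.4f}")
```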