Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score.
The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Genius AI Detection 2.0 is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection 2.0 analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection 2.0 produces CAD results that include:
- the location of the lesion;
- an outline of the lesion;
- a confidence score for the lesion.
Genius AI Detection 2.0 also produces a case score for the entire breast tomosynthesis exam.
Genius AI Detection 2.0 packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently to the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as number of marks and case scores may be used on the review workstation to enhance the interpreting physician's workflow by offering a better organization of the patient worklist.
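Because the findings are conveyed as a standard DICOM Mammography CAD SR object, any DICOM-aware tooling can inspect them. The sketch below (Python with pydicom) shows one way to walk the SR content tree of such an object; the file path and traversal are illustrative only and do not reflect Hologic's internal implementation.

```python
# Hypothetical sketch: inspecting a DICOM Mammography CAD SR object with pydicom.
# The file name is a placeholder; the traversal simply prints the SR content tree.
import pydicom

MAMMO_CAD_SR_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.88.50"  # Mammography CAD SR Storage

def walk(items, depth=0):
    """Recursively print each content item's value type and concept name."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        print("  " * depth + f"{item.ValueType}: {name}")
        if "ContentSequence" in item:
            walk(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("cad_sr.dcm")  # placeholder path to a Mammography CAD SR file
assert ds.SOPClassUID == MAMMO_CAD_SR_SOP_CLASS, "not a Mammography CAD SR object"
walk(ds.ContentSequence)
```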
Here's a breakdown of the acceptance criteria and study details for Genius AI Detection 2.0, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for Genius AI Detection 2.0
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes a non-inferiority study demonstrating that the performance of Genius AI Detection 2.0 on Envision Mammography Platform (ENV) images is equivalent to its performance on Standard of Care (SOC) images acquired on Hologic Selenia Dimensions systems, the platform compatible with the predicate. The primary acceptance criterion was non-inferiority of the area under the ROC curve (AUC), with a 5% margin. Secondary metrics included sensitivity, specificity, and false marker rate per view.
| Acceptance Criteria Category | Specific Metric | Predicate Device Performance (SOC Images) | Subject Device Performance (ENV Images) | Acceptance Criteria Met? |
|---|---|---|---|---|
| Primary Endpoint (Non-Inferiority) | AUC of ROC curve, difference (ENV − SOC) | N/A (comparison study) | -0.0017 (95% CI: -0.023 to 0.020) | Yes (DeLong p = 0.87, indicating no significant difference; within 5% non-inferiority margin) |
| Secondary Metrics | Sensitivity | N/A (comparison study) | No significant difference reported between modalities | Yes |
| Secondary Metrics | Specificity | N/A (comparison study) | No significant difference reported between modalities | Yes |
| Secondary Metrics | False marker rate per view | N/A (comparison study) | No significant difference reported between modalities | Yes |
| CC-MLO Correlation | Accuracy on malignant lesions | N/A | 90% | Yes (considered accurate) |
| CC-MLO Correlation | Accuracy on negative cases (correlated pairs) | N/A | 73% | Yes (considered accurate) |
| Implant Cases | Location-specific cancer detection sensitivity | N/A | 76% (CI 68%~84%) | Yes (considered acceptable based on confidence intervals) |
| Implant Cases | Specificity | N/A | 67% (CI 62%~72%) | Yes (considered acceptable based on confidence intervals) |
(Note: The document focuses on demonstrating that performance on the new platform is equivalent to the predicate's performance, rather than absolute performance against fixed thresholds for all metrics; the implant cases are the exception, where specific confidence intervals are reported and deemed acceptable.)
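For illustration, a non-inferiority check of this kind can be framed as a paired comparison of AUCs with a -0.05 margin on the difference. The sketch below uses a paired case-level bootstrap rather than the DeLong method reported in the submission, and the labels and score arrays are synthetic placeholders, not study data.

```python
# Illustrative sketch of a paired AUC non-inferiority check (5% margin).
# The submission used DeLong's test; this sketch uses a paired case-level
# bootstrap instead. All arrays below are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases = 1475
y = rng.integers(0, 2, n_cases)                        # 1 = cancer, 0 = non-cancer (placeholder)
score_soc = rng.random(n_cases) + 0.5 * y              # case scores on SOC images (placeholder)
score_env = score_soc + rng.normal(0, 0.05, n_cases)   # same cases scored on ENV images (placeholder)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n_cases, n_cases)            # resample cases, keeping the ENV/SOC pairing
    if y[idx].min() == y[idx].max():
        continue                                       # skip resamples containing only one class
    diffs.append(roc_auc_score(y[idx], score_env[idx]) -
                 roc_auc_score(y[idx], score_soc[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC(ENV) - AUC(SOC): 95% CI [{lo:.3f}, {hi:.3f}]")
print("non-inferior within 5% margin:", lo > -0.05)
```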
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Main Comparison Study): 1475 subjects
- 200 biopsy-proven cancer subjects
- 275 biopsy-proven benign subjects
- 78 subjects considered BI-RADS 1 or 2 upon diagnostic workup (no biopsy)
- 922 BI-RADS 1 and 2 subjects (at screening)
- Implant Case Test Set: 480 subjects
- 132 biopsy-proven cancer subjects
- 348 negative subjects (119 biopsy-proven benign, 229 screening negative)
- Data Provenance:
- Country of Origin: Not explicitly stated; images were collected from 15 locations of a "national multi-center breast imaging network," implying U.S. origin.
- Retrospective or Prospective: The main comparison study data were collected under an IRB-approved protocol for evaluating the safety and effectiveness of the Envision platform, with each subject imaged on both the SOC system and the Envision Mammography Platform; the standalone analysis itself was performed retrospectively on these previously acquired images. The implant cases were collected between 2015 and 2022, also indicating a retrospective approach.
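As a quick consistency check, the subgroup counts reported above can be summed against the stated totals (a trivial sketch; the counts are taken directly from the summary):

```python
# Sanity check: subgroup counts reported in the summary sum to the stated totals.
main_study = {"cancer": 200, "benign": 275, "bi_rads_workup": 78, "screening_negative": 922}
implant_study = {"cancer": 132, "benign": 119, "screening_negative": 229}

assert sum(main_study.values()) == 1475        # main comparison cohort
assert sum(implant_study.values()) == 480      # implant cohort
assert implant_study["benign"] + implant_study["screening_negative"] == 348  # negative subjects
print("subgroup counts are consistent with the reported totals")
```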
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two
- Qualifications: Both were MQSA-certified radiologists with over 20 years of experience.
4. Adjudication Method for the Test Set
The document explicitly states that the "ground truthing to evaluate performance metrics including the locations of cancer lesions was done by two MQSA-certified radiologists with over 20 years of experience."
- Adjudication Method: Not specified (e.g., 2+1 or 3+1). The document states only that ground truthing was performed by two experts; it does not describe how any disagreements were resolved. Consensus between the two radiologists is the most likely implied approach, but it is not explicitly detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was NOT done. The study described is a standalone performance comparison of the AI algorithm on images from different modalities (Envision vs. Standard of Care), not a study involving human readers with and without AI assistance to measure effect size.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone study WAS done. The document explicitly states, "A standalone study was conducted to compare the detection performance of FDA cleared Genius AI Detection 2.0 (K221449) using Standard of Care (SOC) images acquired on the Dimensions systems against images acquired on the FDA approved Envision Mammography Platform (P080003/S009)." This study evaluated the algorithm's performance (fROC, ROC, sensitivity, specificity, false marker rate) directly against the ground truth without human intervention.
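To make the standalone metrics concrete, the sketch below illustrates lesion-level scoring of CAD marks against ground-truth lesion locations for a single view. The hit criterion (mark center falling within the truth radius) and the data structures are hypothetical simplifications, not the actual scoring protocol used in the study.

```python
# Simplified, hypothetical scoring sketch for standalone CADe metrics on one view:
# a CAD mark "hits" a ground-truth lesion if its center falls within the truth radius.
from dataclasses import dataclass
import math

@dataclass
class Mark:
    x: float
    y: float

@dataclass
class Lesion:
    x: float
    y: float
    radius: float

def score_view(marks: list[Mark], lesions: list[Lesion]) -> tuple[int, int, int]:
    """Return (lesions detected, total lesions, false marks) for one view."""
    detected = 0
    used = [False] * len(marks)
    for lesion in lesions:
        for i, m in enumerate(marks):
            if not used[i] and math.hypot(m.x - lesion.x, m.y - lesion.y) <= lesion.radius:
                detected += 1
                used[i] = True
                break
    false_marks = used.count(False)
    return detected, len(lesions), false_marks

# Example: one view with two truth lesions and three CAD marks (made-up coordinates).
marks = [Mark(100, 120), Mark(400, 410), Mark(50, 700)]
lesions = [Lesion(105, 118, 20), Lesion(600, 600, 25)]
hit, total, fp = score_view(marks, lesions)
print(f"lesion sensitivity (this view): {hit}/{total}, false marks: {fp}")
```

Aggregating such per-view results over all views and cases yields the sensitivity, false marker rate per view, and fROC points referenced in the study.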
7. The Type of Ground Truth Used
- Ground Truth Type: A combination of biopsy-proven cancer and biopsy-proven benign cases, along with BI-RADS diagnostic outcomes (for negative cases). For the cancer cases, the "locations of cancer lesions" were part of the ground truth.
8. The Sample Size for the Training Set
- Not provided. The document states that the test dataset was "sequestered from any training datasets by isolating it on a secured server with controlled access permissions" and that the data for implant cases was "sequestered from the training datasets for Genius AI Detection." However, the actual sample size of the training set is not mentioned.
9. How the Ground Truth for the Training Set Was Established
- Not provided. Since the training set sample size and details are not disclosed, the method for establishing its ground truth is also not mentioned in this document. It is generally assumed that similar rigorous methods (e.g., biopsy-proven truth, expert review) would have been used for training data, but this specific filing does not detail it.
FDA 510(k) Clearance Letter - Genius AI Detection 2.0
U.S. Food & Drug Administration
10903 New Hampshire Avenue
Silver Spring, MD 20993
www.fda.gov
Doc ID # 04017.08.00
July 31, 2025
Hologic, Inc.
Julia Vaillancourt
Regulatory Affairs Specialist
600 Technology Dr.
Newark, Delaware 19702
Re: K243341
Trade/Device Name: Genius AI Detection 2.0
Regulation Number: 21 CFR 892.2090
Regulation Name: Radiological Computer Assisted Detection And Diagnosis Software
Regulatory Class: Class II
Product Code: QDQ
Dated: June 30, 2025
Received: June 30, 2025
Dear Julia Vaillancourt:
We have reviewed your section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (the Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act.

Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database available at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Additional information about changes that may require a new premarket notification are provided in the FDA guidance documents entitled "Deciding When to Submit a 510(k) for a Change to an Existing Device" (https://www.fda.gov/media/99812/download) and "Deciding When to Submit a 510(k) for a Software Change to an Existing Device" (https://www.fda.gov/media/99785/download).
Your device is also subject to, among other requirements, the Quality System (QS) regulation (21 CFR Part 820), which includes, but is not limited to, 21 CFR 820.30, Design controls; 21 CFR 820.90, Nonconforming product; and 21 CFR 820.100, Corrective and preventive action. Please note that regardless of whether a change requires premarket review, the QS regulation requires device manufacturers to review and approve changes to device design and production (21 CFR 820.30 and 21 CFR 820.70) and document changes and approvals in the device master record (21 CFR 820.181).
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR Part 803) for devices or postmarketing safety reporting (21 CFR Part 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR Part 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR Parts 1000-1050.
All medical devices, including Class I and unclassified devices and combination product device constituent parts are required to be in compliance with the final Unique Device Identification System rule ("UDI Rule"). The UDI Rule requires, among other things, that a device bear a unique device identifier (UDI) on its label and package (21 CFR 801.20(a)) unless an exception or alternative applies (21 CFR 801.20(b)) and that the dates on the device label be formatted in accordance with 21 CFR 801.18. The UDI Rule (21 CFR 830.300(a) and 830.320(b)) also requires that certain information be submitted to the Global Unique Device Identification Database (GUDID) (21 CFR Part 830 Subpart E). For additional information on these requirements, please see the UDI System webpage at https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/unique-device-identification-system-udi-system.
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-devices/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.
For comprehensive regulatory information about medical devices and radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,
Yanna Kang, Ph.D.
Assistant Director
Mammography and Ultrasound Team
DHT8C: Division of Radiological Imaging and Radiation Therapy Devices
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health
Enclosure
Food and Drug Administration
Indications for Use
Form Approved: OMB No. 0910-0120
Expiration Date: 07/31/2026
See PRA Statement below.
DEPARTMENT OF HEALTH AND HUMAN SERVICES
Submission Number (if known): K243341
Device Name: Genius AI Detection 2.0
Indications for Use (Describe)
Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score.
The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Type of Use (Select one or both, as applicable)
☒ Prescription Use (Part 21 CFR 801 Subpart D)
☐ Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
Hologic, Inc. 510(k) Premarket Notification
Genius AI Detection 2.0
Traditional 510(k) Summary
This 510(k) summary is submitted in accordance with the requirements of 21 CFR Part 807.92
Date Prepared: October 25th, 2024
Manufacturer: Hologic, Inc.
600 Technology Drive
Newark, DE 19702 USA
Establishment Registration #: 3005496266
Contact Person: Julia Vaillancourt
Regulatory Affairs Specialist II
P: 781.980.2885
Identification of the Device:
Proprietary/Trade Name: Genius AI Detection 2.0
Classification Name: Radiological Computer Assisted Detection and Diagnosis Software for Lesions Suspicious for Cancer
Regulatory Number: 21 CFR 892.2090
Product Code: QDQ
Device Class: Class II
Review Panel: Radiology
Identification of the Legally Marketed Predicate and Reference Devices:
Trade Name: Genius AI Detection 2.0
Classification Name: Radiological Computer Assisted Detection and Diagnosis Software for Lesions Suspicious for Cancer
Regulatory Number: 21 CFR 892.2090
Product Code: QDQ
Device Class: Class II
Review Panel: Radiology
Submitter/510(k) Holder: Hologic, Inc.
Clearance: K221449 (cleared October 6, 2022)
Reference Device: Genius AI Detection 2.0 with CC-MLO Correlation
Classification Name: Radiological Computer Assisted Detection and Diagnosis Software for Lesions Suspicious for Cancer
Regulatory Number: 21 CFR 892.2090
Product Code: QDQ
Device Class: Class II
Review Panel: Radiology
Submitter/510(k) Holder: Hologic, Inc.
Clearance: K230096 (cleared May 23, 2023)
Device Description:
Genius AI Detection 2.0 is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection 2.0 analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection 2.0 produces CAD results that include:
- the location of the lesion;
- an outline of the lesion;
- a confidence score for the lesion.
Genius AI Detection 2.0 also produces a case score for the entire breast tomosynthesis exam.
Genius AI Detection 2.0 packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently to the reading of the tomosynthesis exam. In addition, a combination of peripheral information such as number of marks and case scores may be used on the review workstation to enhance the interpreting physician's workflow by offering a better organization of the patient worklist.
Indications for Use:
Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.
Standards:
- IEC 62304: 2015 – Medical device software – Software Life Cycle Processes
- ISO 14971: 2019 – Medical Devices – Application of Risk Management to Medical Devices
Summary of Substantial Equivalence:
| Features and Characteristics | Subject Device: Hologic, Inc. Genius AI Detection 2.0 | Predicate Device: Hologic, Inc. Genius AI Detection 2.0 (K221449) | Difference and Comments |
|---|---|---|---|
| Regulation Number/Name | 21 CFR 892.2090 / Radiological Computer Assisted Detection and Diagnosis Software | Same | N/A |
| Product Code | QDQ | Same | N/A |
| Classification Description | A radiological computer assisted detection and diagnostic software for suspected lesions is an image processing device intended to aid in the detection, localization, and characterization of lesions suspicious for cancer on acquired medical images (e.g., mammography, MR, CT, ultrasound, radiography). The device detects, identifies and characterizes lesions suspicious for cancer based on features or information extracted from the images, and may provide information about the presence, location, and characteristics of the lesion to the user. Primary diagnostic and patient management decisions are made by the clinical user. | Same | N/A |
| Indications for Use | Genius AI Detection 2.0 is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam. | Same | N/A |
| Compatible DBT Systems | Envision Mammography Platform. Supports the following modes: high resolution 1-mm slices (Clarity HD); high resolution 6-mm SmartSlices (3DQuorum) | Hologic Selenia Dimensions and Hologic 3Dimensions. Supports both models in the following modes: standard resolution 1-mm slices; high resolution 1-mm slices (Clarity HD); high resolution 6-mm SmartSlices (3DQuorum) | The Envision Mammography Platform supports high resolution slices only; standard resolution is not available. |
| Type of CAD Software | Radiological computer assisted detection and diagnostic software | Same | N/A |
| Mode of Action | Image processing device utilizing machine learning to aid in the detection, localization, and characterization of soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in the 1-mm 3D DBT slices. Findings are co-registered to 6-mm SmartSlices. | Same | N/A |
| Clinical Output | To inform the primary diagnostic and patient management decisions that are made by the clinical user. | Same | N/A |
| Patient Population | Symptomatic and asymptomatic women undergoing mammography. | Same | N/A |
| End Users | MQSA-qualified interpreting physicians and radiologists | Same | N/A |
| Image Source Modalities | Digital breast tomosynthesis slices | Same | N/A |
| Output Device | Softcopy workstation | Same | N/A |
| Deployment | Stand-alone computer | Same | N/A |
| Method of Use | Concurrent reading | Same | N/A |
| Supported Views | CC and MLO | Same | N/A |
| Visualization Features | Places a mark within the suspicious lesion by default (Emphasize™; RightOn™) and reports confidence of finding next to each identified lesion in the image. CAD display may be toggled on/off. Option to automatically zoom into or contour the suspicious region of interest (PeerView™). | Same | N/A |
Comparison with Predicate Device:
The Summary of Substantial Equivalence table above details the similarities and differences between the Genius AI Detection 2.0 device and its predicate device, Genius AI Detection 2.0, K221449. The main difference is compatibility with an additional DBT system, the Envision Mammography Platform. Both the proposed and predicate devices use the same technology per 21 CFR 892.2090, and Genius AI Detection 2.0 (GAID 2.0) remains unchanged for use on the Envision Mammography Platform. Consistent with the predicate device, the subject device aids in the detection, localization, and characterization of disease-specific findings on acquired medical images. The outputs of GAID 2.0 serve to augment the interpretation of digital breast tomosynthesis exams as a concurrent reading tool; the output is used to inform and assist the interpreting physician, supplementing their clinical expertise and judgement. Processing of implant cases acquired with Hologic's Selenia Dimensions System has been added, along with the CC-MLO Correlation feature for cases acquired with the Envision System.
Genius AI Detection 2.0 on the Envision Mammography Platform provides the same performance in terms of sensitivity and the number of false positives as GAID 2.0 on the Selenia Dimensions System as well as the same workflow for radiologists. As such, there is no change in the safety and effectiveness of the GAID 2.0 device.
Standalone Performance Testing:
Genius AI Detection 2.0 is a software-only device. In accordance with the FDA Guidance "Content of Premarket Submissions for Device Software Functions", issued on June 14, 2023, the Genius AI Detection 2.0 software requires Basic Documentation as it is a Class II device that is intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest, where the interpreting physician maintains control to confirm or dismiss the findings during the reading of the exam.
Verification testing consisted of software unit testing, software integration testing and software system testing. The verification testing showed that the software application satisfied the software requirements.
A standalone study was conducted to compare the detection performance of FDA cleared Genius AI Detection 2.0 (K221449) using Standard of Care (SOC) images acquired on the Dimensions systems against images acquired on the FDA approved Envision Mammography Platform (P080003/S009).
A sequestered test dataset was constructed with a subset of breast tomosynthesis exams acquired for evaluating the safety and effectiveness of the Envision (ENV) platform. All images were collected using an IRB approved protocol from 15 locations at a national multi-center breast imaging network. The protocol involved imaging each subject on the SOC imaging system and the Envision Mammography Platform (ENV). The dataset consisted of standard mammographic images from 1475 subjects consisting of 200 biopsy-proven cancer subjects, 275 biopsy-proven benign subjects, 78 BI-RADS subjects that were considered BI-RADS 1 or 2 upon diagnostic workup (no biopsy), and 922 subjects that were read as BI-RADS 1 and 2 at screening. Of the 200 biopsy-proven cancers 46 cases included only calcification lesions whereas 154 cases included soft tissue lesions or soft tissue lesions associated with calcification lesions. All patients were female with average age of 57 with a standard deviation of 11 years. The race distribution of this cohort indicated 81% white, 6% African American, 1% Asian, 1% multi-race and 11% unknown, whereas ethnicity distribution was 6% Hispanic, 78% non-Hispanic and 16% unknown. The cohort contained 10% BI-RADS density category a, 48% category b, 36% category c and 6% category d. This image dataset was sequestered from any training datasets by isolating it on a secured server with controlled access permissions.
This comparison study included testing in terms of fROC and ROC curves and key metrics including sensitivity, specificity, and false marker rate per view. The ground truthing to evaluate performance metrics, including the locations of cancer lesions, was done by two MQSA-certified radiologists with over 20 years of experience. The comparison of the AUC of the ROC curve for ENV and SOC images indicated the difference in AUC (ENV-SOC) to be -0.0017 (95% CI: -0.023 to 0.020). The DeLong p-value for this difference was 0.87, indicating no significant difference in the performance of GAID 2.0 between the two modalities. In addition, the comparisons of sensitivity, specificity and false marks per image also resulted in no significant difference for each metric between the two modalities. The fROC curves stratified by breast density and lesion type also aligned closely between the two modalities. The primary endpoint targeted by this analysis was that the accuracy of GAID 2.0, as measured by AUC on ENV images, is non-inferior (5% margin) to the same on SOC images. Thus, all results in the standalone analysis confirmed that the detection performance of the GAID 2.0 algorithm on ENV images can be considered equivalent to its performance on SOC images.
The same sequestered dataset was also used to evaluate performance of CC-MLO correlation algorithm. The study evaluated accuracy of the CC-MLO correlation algorithm on ENV images in comparison with SOC images. The CC-MLO Correlation algorithm accurately correlated the Genius AI Detection software 2.0 marks on 90% of the biopsy-proven malignant lesions. In addition, 73% of correlated pairs of marks on negative cases were considered as accurate by expert radiologists.
The performance of GAID 2.0 on implant cases was evaluated on a dataset with a total of 480 subjects consisting of 132 biopsy-proven cancer subjects and 348 negative subjects, including 119 biopsy-proven benign cases and 229 screening-negative cases. All subjects were female, had implants, and were imaged with implant-displaced views; 30% of the subjects had sub-glandular implants and 70% had sub-muscular implants. All images were collected from 2015 to 2022 using Hologic's Selenia Dimensions systems. The ground truthing to evaluate performance metrics, including the locations of cancer lesions, was done by two MQSA-certified radiologists with over 20 years of experience. The data utilized for evaluation of implant cases was sequestered from the training datasets for Genius AI Detection. In the absence of a paired dataset with and without implants, the performance of GAID 2.0 on implant patients was considered acceptable based on confidence intervals of sensitivity and specificity at the operating point. The detection performance of GAID 2.0 measured on this set of 132 cancer patients and 348 negative subjects with implant-displaced images demonstrated location-specific cancer detection sensitivity of 76% (CI 68%~84%) and specificity of 67% (CI 62%~72%).
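For context, confidence intervals of this form can be computed as score intervals on the observed proportions at the operating point. The sketch below uses the Wilson interval from statsmodels with placeholder counts chosen only to be roughly consistent with the reported 76% and 67%; the actual per-case counts and the interval method used in the submission are not disclosed.

```python
# Illustrative: two-sided 95% Wilson score intervals for sensitivity and specificity
# at an operating point. The counts below are placeholders, not the study's counts.
from statsmodels.stats.proportion import proportion_confint

def report(name, successes, n):
    lo, hi = proportion_confint(successes, n, alpha=0.05, method="wilson")
    print(f"{name}: {successes / n:.0%} (95% CI {lo:.0%}~{hi:.0%}, n={n})")

report("sensitivity", 100, 132)   # ~76% of 132 cancer subjects (placeholder count)
report("specificity", 233, 348)   # ~67% of 348 negative subjects (placeholder count)
```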
Based on the results of verification and validation testing, it is concluded that Genius AI Detection 2.0 is safe and effective in the detection of soft tissue lesions and calcifications at an appropriate safety level in tomosynthesis exams acquired with Hologic's Envision Mammography Platform.
Assessment of Benefit-Risk and Safety and Effectiveness:
Risk management is ensured through risk analysis which is used to identify and mitigate potential hazards. Any potential hazards are controlled via software development, verification, and validation testing. In addition, device labeling contains instructions for use and any necessary cautions and warnings to provide for safe and effective use of this device. Hologic finds that the proposed device has a positive balance in terms of probable benefits versus probable risks and thus may be considered safe and effective based on verification and validation testing.
Conclusion:
Based on the required information submitted in this premarket notification, the proposed Genius AI Detection 2.0 device has been found to be substantially equivalent to the predicate device, Genius AI Detection 2.0, K221449. Both devices have the same indications for use and aid in the detection, localization, and characterization of disease specific findings on acquired medical images. Standalone performance testing demonstrated that Genius AI Detection 2.0 on the Envision Mammography Platform achieves equivalent detection performance compared to the predicate. There are no issues of safety and effectiveness of the proposed Genius AI Detection 2.0 device.
§ 892.2090 Radiological computer-assisted detection and diagnosis software.
(a) Identification. A radiological computer-assisted detection and diagnostic software is an image processing device intended to aid in the detection, localization, and characterization of fracture, lesions, or other disease-specific findings on acquired medical images (e.g., radiography, magnetic resonance, computed tomography). The device detects, identifies, and characterizes findings based on features or information extracted from images, and provides information about the presence, location, and characteristics of the findings to the user. The analysis is intended to inform the primary diagnostic and patient management decisions that are made by the clinical user. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithm, including a description of the algorithm inputs and outputs, each major component or block, how the algorithm and output affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide improved assisted-read detection and diagnostic performance as intended in the indicated user population(s), and to characterize the standalone device performance for labeling. Performance testing includes standalone test(s), side-by-side comparison(s), and/or a reader study, as applicable.

(iii) Results from standalone performance testing used to characterize the independent performance of the device separate from aided user performance. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Devices with localization output must include localization accuracy testing as a component of standalone testing. The test dataset must be representative of the typical patient population with enrichment made only to ensure that the test dataset contains a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant disease, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Results from performance testing that demonstrate that the device provides improved assisted-read detection and/or diagnostic performance as intended in the indicated user population(s) when used in accordance with the instructions for use. The reader population must be comprised of the intended user population in terms of clinical training, certification, and years of experience. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Test datasets must meet the requirements described in paragraph (b)(1)(iii) of this section.

(v) Appropriate software documentation, including device hazard analysis, software requirements specification document, software design specification document, traceability analysis, system level test protocol, pass/fail criteria, testing results, and cybersecurity measures.
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the device instructions for use, including the intended reading protocol and how the user should interpret the device output.
(iii) A detailed description of the intended user, and any user training materials or programs that address appropriate reading protocols for the device, to ensure that the end user is fully aware of how to interpret and apply the device output.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) A detailed summary of the performance testing, including test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as anatomical characteristics, patient demographics and medical history, user experience, and imaging equipment.