510(k) Data Aggregation

    K Number: K221449
    Date Cleared: 2022-10-06 (141 days)
    Regulation Number: 892.2090
    Reference Devices: K201019
    Intended Use

    Genius AI Detection is a computer-aided detection and diagnosis (CADe/CADx) software device intended to be used with compatible digital breast tomosynthesis (DBT) systems to identify and mark regions of interest including soft tissue densities (masses, architectural distortions and asymmetries) and calcifications in DBT exams from compatible DBT systems and provide confidence scores that offer assessment for Certainty of Findings and a Case Score. The device intends to aid in the interpretation of digital breast tomosynthesis exams in a concurrent fashion, where the interpreting physician confirms or dismisses the findings during the reading of the exam.

    Device Description

    Genius AI Detection is a software device intended to identify potential abnormalities in breast tomosynthesis images. Genius AI Detection analyzes each standard mammographic view in a digital breast tomosynthesis examination using deep learning networks. For each detected lesion, Genius AI Detection produces CAD results that include the location of the lesion, an outline of the lesion and a confidence score for that lesion. Genius AI Detection also produces a case score for the entire tomosynthesis exam.

    Genius AI Detection packages all CAD findings derived from the corresponding analysis of a tomosynthesis exam into a DICOM Mammography CAD SR object and distributes it for display on DICOM compliant review workstations. The interpreting physician will have access to the CAD findings concurrently with the reading of the tomosynthesis exam. In addition, a combination of peripheral information, such as the number of marks and case scores, may be used on the review workstation to enhance the interpreting physician's workflow by better organizing the patient worklist.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Reported Device Performance

    The document states that the new device, Genius AI Detection 2.0, aims to improve on its predicate chiefly through higher specificity, particularly for micro-calcification cancer detection, while maintaining sensitivity. The study confirms this improvement.

    | Acceptance Criteria Category | Specific Criterion | Reported Device Performance |
    | --- | --- | --- |
    | Detection Performance | Improved specificity over the predicate device, especially for micro-calcification cancer detection. | The specificity measured at the operating point of Genius AI Detection 2.0 demonstrated a significant increase of 12% compared to the original Genius AI Detection predicate device (McNemar's p < 0.001). |
    | Detection Performance | Maintained sensitivity compared to the predicate device. | The device maintained sensitivity compared to its predicate. |
    | Overall Performance Comparison | Superior performance compared to the predicate device (Genius AI Detection, K201019) using fROC analysis. | The fROC curves demonstrated significant improvement for Genius AI Detection 2.0 over its predicate device. |
    | Safety and Effectiveness | Safe and effective in detecting soft tissue lesions and calcification lesions. | Based on verification and validation tests, Genius AI Detection 2.0 is safe and effective in detecting soft tissue lesions and calcification lesions at an appropriate safety level, with risk management in place to control potential hazards. |
    | Functionality | Satisfied software requirements. | Verification testing (software unit, integration, and system testing) showed that the software application satisfied the software requirements. |
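    The summary reports the 12% specificity gain with a McNemar's p-value but does not give the underlying paired counts. As an illustration only, using hypothetical discordant-pair counts (not taken from the 510(k) summary), McNemar's test in its chi-square form with continuity correction can be computed as:

    ```python
    import math

    def mcnemar_p(b: int, c: int) -> float:
        """Two-sided McNemar's test with continuity correction.

        b = paired cases the predicate classified correctly but the new device missed
        c = paired cases the new device classified correctly but the predicate missed
        Returns a p-value from the chi-square distribution with 1 df.
        """
        if b + c == 0:
            return 1.0
        chi2 = (abs(b - c) - 1) ** 2 / (b + c)
        # Survival function of chi-square with 1 df: erfc(sqrt(x / 2))
        return math.erfc(math.sqrt(chi2 / 2.0))

    # Hypothetical discordant-pair counts (illustrative, NOT study data):
    p = mcnemar_p(b=10, c=90)
    print(f"p = {p:.2e}")  # far below 0.001 for this imbalance
    ```

    The test uses only the discordant pairs, which is why it suits a paired comparison of two algorithms run on the same sequestered exams.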

    Study Details:

    1. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

      • Sample Size: 764 tomosynthesis exams.
        • Including 106 biopsy-proven cancers.
        • 97 biopsy-proven benign cases.
        • 81 recalls.
        • 480 screening negative cases.
      • Data Provenance: Acquired from multiple centers across the United States. The data was retrospective, as it was a "sequestered dataset" collected with Hologic Dimensions 3D Mammography systems and not used for training.
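    The reported case counts can be tallied as a quick sanity check. Note that which categories enter the specificity denominator is an assumption here; the summary does not state it:

    ```python
    # Composition of the sequestered test set as reported in the summary.
    test_set = {
        "biopsy-proven cancers": 106,
        "biopsy-proven benigns": 97,
        "recalls": 81,
        "screening negatives": 480,
    }

    total = sum(test_set.values())
    print(total)  # 764, matching the reported exam count

    # Assumption (not stated in the summary): case-level specificity would be
    # computed over the non-cancer exams, i.e. benigns + recalls + negatives.
    non_cancer = total - test_set["biopsy-proven cancers"]
    print(non_cancer)  # 658
    ```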
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

      • Number of Experts: Two experts.
      • Qualifications: Ground truth was initially determined by "a radiology expert" and then verified by "another MQSA-qualified, board-certified radiologist." Specific years of experience are not mentioned beyond these qualifications.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The ground truth was first determined by one radiology expert and then "verified by another MQSA-qualified, board-certified radiologist to ensure accuracy and consistency." This suggests a two-expert consensus or verification model, rather than an adjudication method like 2+1 where a third expert breaks ties. If there were disagreements, the document does not specify a formal adjudication rule, implying either direct agreement or resolution between the two.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly detailed in this summary. The standalone performance section focused on comparing the algorithm's performance (Genius AI Detection 2.0) against its predicate algorithm (Genius AI Detection K201019), not on a human-in-the-loop improvement study.
    5. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

      • Yes, a standalone performance study was done. The document explicitly states: "To evaluate the performance of Genius AI Detection 2.0 in comparison to its predicate device, a standalone study was conducted on the sequestered dataset described above." It also discusses "detection sensitivity, and rate of false positive marks per view" and "specificity measured at the operating point" which are characteristic of standalone algorithm evaluation.
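    The summary does not publish the underlying curves, but the fROC analysis it describes plots detection sensitivity against false-positive marks per view as the operating threshold sweeps. A minimal sketch with toy scores (all numbers below are hypothetical, not study data):

    ```python
    def froc_points(lesion_scores, fp_scores, n_views, thresholds):
        """Compute fROC operating points from standalone CAD output.

        lesion_scores: max CAD confidence attached to each true lesion
                       (0.0 if the lesion received no mark)
        fp_scores:     confidences of all false-positive marks, pooled
        n_views:       number of views in the test set
        thresholds:    candidate operating thresholds

        Returns (sensitivity, false positives per view) for each threshold.
        """
        points = []
        for t in thresholds:
            tp = sum(s >= t for s in lesion_scores)
            fp = sum(s >= t for s in fp_scores)
            points.append((tp / len(lesion_scores), fp / n_views))
        return points

    # Toy example (illustrative only):
    lesions = [0.9, 0.8, 0.7, 0.4, 0.0]   # 5 true lesions
    fps = [0.6, 0.5, 0.3, 0.2]            # 4 false-positive marks
    for sens, fppv in froc_points(lesions, fps, n_views=10,
                                  thresholds=[0.25, 0.5, 0.75]):
        print(f"sens={sens:.2f}, FP/view={fppv:.2f}")
    ```

    Lowering the threshold raises sensitivity at the cost of more false-positive marks per view; the reported "operating point" is one chosen threshold on such a curve.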
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • The ground truth for the test set was established using a combination of radiology reports, pathology reports, and diagnostic and post-biopsy images, verified by two expert radiologists. This is a robust form of expert consensus combined with pathology and imaging outcomes data.
    7. The sample size for the training set:

      • The document states: "The cancer database for training and evaluation is enlarged by two-fold." However, it does not provide the specific numerical sample size of the training set. It only mentions that the "significantly expanded dataset" allowed for training of a more complex CNN model.
    8. How the ground truth for the training set was established:

      • The document does not explicitly detail how the ground truth for the training set was established. It only mentions that an "enlarged cancer data set" was used to train the deep learning AI models and that "all deep learning AI models have been retrained on the enlarged dataset." It's implied that similar clinical data and perhaps expert review were used, but specific methods are not provided.