
510(k) Data Aggregation

    K Number: K111324
    Date Cleared: 2011-12-22 (225 days)
    Product Code:
    Regulation Number: 892.1715
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    Agfa's CR Mammography System with DX-M Digitizer is indicated for use in diagnostic and screening mammography. It is intended to be used wherever conventional film/screen mammography systems are used.

    Device Description

    The new device is defined as:

    • Agfa's DX-M digitizer,
    • HM5.0 image detectors designed for mammography, and
    • the NX user workstation with a mammography license.

    The device uses detectors which are exposed to x-rays. The detectors are then scanned by the digitizer to create electronic images, which are sent to a user workstation for patient identification, image previewing, and adjustment before being forwarded to a viewing workstation (PACS system) or printer for use by the physician. The principles of operation and technological characteristics of the new and predicate devices are the same. With optional general radiography screens and cassettes, the DX-M digitizer has the same capabilities as Agfa's DX-G.
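    The submission describes this image path but gives no software interface details. As a purely illustrative sketch (not anything stated in the 510(k)), the step of sending a finished image from the user workstation to a PACS is conventionally a DICOM C-STORE; the snippet below shows only that step, using the pydicom and pynetdicom libraries. The host, port, AE titles, and file name are placeholder assumptions.

```python
# Hypothetical sketch: pushing a digitized CR image to a PACS node over DICOM.
# Host, port, AE titles, and the file path are placeholders, not values from
# the 510(k) submission.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import ComputedRadiographyImageStorage


def send_to_pacs(dicom_path: str, host: str, port: int, ae_title: str = "PACS") -> int:
    """Read a DICOM file produced at the workstation and C-STORE it to a PACS."""
    ds = dcmread(dicom_path)                      # image plus patient identification metadata
    ae = AE(ae_title="NX_WORKSTATION")            # placeholder calling AE title
    ae.add_requested_context(ComputedRadiographyImageStorage)
    assoc = ae.associate(host, port, ae_title=ae_title)
    if not assoc.is_established:
        raise ConnectionError("Could not associate with the PACS node")
    status = assoc.send_c_store(ds)               # returns a Dataset with a Status element
    assoc.release()
    return status.Status


if __name__ == "__main__":
    # Example invocation with placeholder values.
    print(hex(send_to_pacs("mammo_cr_image.dcm", "192.0.2.10", 11112)))
```

    In practice the transfer would also negotiate the appropriate mammography SOP class and transfer syntax; for brevity the sketch requests only the generic Computed Radiography storage context.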
    AI/ML Overview
    • Acceptance Criteria and Reported Device Performance:
    Acceptance Criteria | Reported Device Performance
    Conformity to electronic medical product safety, radiology, and medical imaging standards | Conforms to IEC 60601-1, IEC 60601-1-2, IEC 60825-1, IEC 62220-1-2, IEC 61223-2-3, and ACR/NEMA PS3.1-3.18 (DICOM). Conforms to ISO 13485 and ISO 14971.
    Sensitometric response | Test results demonstrated performance equivalent to or better than the predicate device.
    Spatial resolution | Test results demonstrated performance equivalent to or better than the predicate device. DQE data presented in Figure 1 shows performance.
    Noise analysis | Test results demonstrated performance equivalent to or better than the predicate device.
    Dynamic range | Test results demonstrated performance equivalent to or better than the predicate device.
    Erasure | Test results demonstrated performance equivalent to or better than the predicate device.
    Fading | Test results demonstrated performance equivalent to or better than the predicate device.
    Repeat exposures | Test results demonstrated performance equivalent to or better than the predicate device.
    AEC performance | Test results demonstrated performance equivalent to or better than the predicate device.
    Evaluation of images from mammography phantoms | Test results demonstrated performance equivalent to or better than the predicate device.
    Clinical image quality (breast positioning, exposure, image contrast, sharpness, tissue visibility at the skin line, noise, and overall clinical image quality) | "Reviewers found all studies to be of acceptable overall clinical image quality."
    • Sample Size and Data Provenance for Test Set: The document does not specify a numerical sample size for the clinical image evaluation test set. It mentions "Anonymized images were evaluated." The data provenance (country of origin, retrospective/prospective) is not explicitly stated.

    • Number of Experts and Qualifications for Ground Truth (Test Set): The document states "An evaluation of clinical images by expert, MQSA qualified radiologists was conducted." Neither the exact number of radiologists nor their years of experience is specified. "MQSA qualified" indicates they meet the minimum qualification standards for interpreting mammograms in the United States.

    • Adjudication Method for Test Set: The document does not describe a specific adjudication method (e.g., 2+1, 3+1). It only states that "reviewers found all studies to be of acceptable overall clinical image quality," which implies either a consensus or individual assessments without a detailed adjudication process.

    • Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on assessing the image quality of the new device by expert radiologists, not on comparing human readers' performance with and without AI assistance.

    • Standalone Performance Study: Yes, a standalone study (device performance without human-in-the-loop) was conducted for the device's technical specifications. This is indicated by the statement that "Laboratory imaging tests of the new device consistent with IEC 62220-1-2 has been completed," covering sensitometric response, spatial resolution, noise analysis, dynamic range, erasure, fading, repeat exposures, AEC performance, and mammography phantoms (a minimal, illustrative sketch of this style of bench comparison appears after this list).

    • Type of Ground Truth Used:

      • For the technical laboratory tests (sensitometric response, spatial resolution, etc.), the ground truth was based on objective measurements and industry standards (IEC 62220-1-2) using phantoms.
      • For the clinical image evaluation, the ground truth was based on expert consensus from MQSA qualified radiologists regarding "acceptable overall clinical image quality."
    • Sample Size for Training Set: The document does not mention a training set or its sample size. This submission is for a medical imaging digitizer, which typically doesn't involve machine learning training in the same way an AI algorithm would. Its performance is based on the physical characteristics of the imaging components and processing algorithms.

    • How Ground Truth for Training Set was Established: Not applicable, as a training set for machine learning is not described.
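    The bench tests listed above are reported only qualitatively in the submission ("equivalent or better than the predicate device"). As a minimal, hypothetical sketch of what one such comparison can look like, the snippet below fits mean pixel value against log exposure for synthetic step-wedge readings from a "new" and a "predicate" detector; the exposure values, pixel values, log-linear response model, and function names are illustrative assumptions, not data or methods from the 510(k).

```python
# Hypothetical sketch of a bench-style sensitometric response check.
# All numbers are synthetic; the submission reports only qualitative results.
import numpy as np


def sensitometric_fit(exposures_uGy: np.ndarray, mean_pixel_values: np.ndarray):
    """Fit pixel value against log10(exposure) and return slope, intercept, R^2.

    CR plates are commonly characterized by a log-linear response; a high R^2
    over the clinical exposure range is one simple summary of linearity.
    """
    x = np.log10(exposures_uGy)
    slope, intercept = np.polyfit(x, mean_pixel_values, 1)
    predicted = slope * x + intercept
    ss_res = np.sum((mean_pixel_values - predicted) ** 2)
    ss_tot = np.sum((mean_pixel_values - mean_pixel_values.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return slope, intercept, r_squared


if __name__ == "__main__":
    # Synthetic step-wedge data for a new and a predicate detector (made up).
    exposures = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # uGy
    new_device = np.array([512, 1020, 1535, 2050, 2560, 3075])
    predicate = np.array([520, 1015, 1540, 2045, 2555, 3080])
    for name, values in (("new", new_device), ("predicate", predicate)):
        slope, intercept, r2 = sensitometric_fit(exposures, values)
        print(f"{name}: slope={slope:.1f}, intercept={intercept:.1f}, R^2={r2:.4f}")
```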
