
510(k) Data Aggregation

    K Number: K183474
    Date Cleared: 2019-01-16 (30 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    Carestream Health, Inc. is submitting this Special 510(k) premarket notification for a modification to the cleared Carestream DRX-1 System with DRX Plus 3543 Detectors (K150766). The product will be marketed as the Carestream DRX-1 System with DRX Core Detectors.

    Consistent with the original system, the Carestream DRX-1 System with DRX Core 3543 Detectors uses flat-panel digital imagers in a stationary digital radiography (DR) x-ray system.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Important Note: The provided text is a 510(k) summary for a modified device, where the primary change is the detector. The study focuses on demonstrating equivalence to a predicate device, rather than establishing absolute performance targets for a new invention. Therefore, the "acceptance criteria" here largely revolve around demonstrating equivalent or superior image quality compared to the predicate.


    1. Table of Acceptance Criteria and the Reported Device Performance

    Acceptance criteria (implicit for equivalence) and reported device performance:

    • Bench Testing Conformance: Predefined acceptance criteria were met, demonstrating that the device conforms to its specifications, is as safe and as effective as the predicate device, and performs as well as or better in terms of workflow, performance, function, shipping, verification, validation, and reliability.
    • Image Quality Equivalence: The average preference for all image quality attributes (detail contrast, sharpness, and noise) demonstrates that the image quality of the investigational device (exposed with 30% less exposure) is the same as that of the predicate device.
    • Beam Detect Mode Equivalence: Two-sample equivalence tests confirm that beam detect mode has no effect on preference, indicating equivalent performance whether or not beam detect is used.
    • Artifact Absence: No unique artifacts associated with either detector were evident in the resultant images during image comparison. Common artifacts attributed to external factors such as dust were deemed inconsequential, as they appeared in images from both detectors.
    • Compliance with Standards: The device was tested and found compliant with IEC 60601-1, IEC 60601-1-2, and IEC 62321. Adherence to the FDA guidance documents for Solid State Imaging and Pediatric Information was also stated.
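The "two-sample equivalence tests" mentioned above can be illustrated with a minimal sketch. The equivalence margin, the z-based normal approximation, and the sample ratings below are illustrative assumptions, not values from the submission; a real analysis would use a formal TOST procedure with t-distribution quantiles.

```python
import statistics

def equivalence_check(a, b, margin):
    """Simplified two-one-sided-tests (TOST) style check using a normal
    approximation: the two samples are declared equivalent if an ~90%
    confidence interval for the difference in mean preference scores
    lies entirely within (-margin, +margin)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    z = 1.645  # one-sided 95% standard-normal quantile
    lo, hi = diff - z * se, diff + z * se
    return -margin < lo and hi < margin

# Hypothetical 5-point preference ratings (3 = no preference),
# beam detect on vs. off -- invented data for illustration only.
on  = [3, 3, 4, 3, 2, 3, 3, 4, 3, 3]
off = [3, 4, 3, 3, 3, 2, 3, 3, 4, 3]
print(equivalence_check(on, off, margin=0.5))
```

With both samples centered near the "no preference" midpoint, the interval for the mean difference stays inside the margin and the check passes; grossly different samples would fail it.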

    2. Sample Size Used for the Test Set and the Data Provenance

    • Sample Size for Test Set: Thirty image pairs were obtained for the phantom imaging study (investigational vs. predicate). This involved adult and pediatric anthropomorphic phantoms.
    • Data Provenance: The data were generated through an experimental phantom imaging study in a controlled environment comparing the investigational device with the predicate device. The text does not specify the country of origin, but given that this is an FDA submission from a US-based manufacturer, the data were likely generated in the US or internally by the manufacturer. It was a prospective study designed specifically for this comparison.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • Number of Experts: Three expert observers.
    • Qualifications of Experts: "former radiographic technologists with extensive experience in resolving image quality related service issues."

    4. Adjudication Method for the Test Set

    The observers evaluated images in a blinded pairwise study using a 5-point preference scale for image quality attributes. The text does not specify a formal adjudication method like "2+1" or "3+1" for discordant readings. The "average preference" suggests that individual expert preferences were aggregated rather than going through a consensus-building process for each image pair.
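The aggregation described above — averaging individual observers' preferences rather than adjudicating each image pair to consensus — can be sketched as follows. The observer names and ratings are hypothetical placeholders, not data from the submission.

```python
from statistics import mean

# Hypothetical blinded pairwise ratings on a 5-point preference scale
# (1 = strongly prefer predicate, 3 = no preference,
#  5 = strongly prefer investigational); one list per observer,
# one entry per image pair.
ratings = {
    "observer_1": [3, 4, 3, 2, 3],
    "observer_2": [3, 3, 4, 3, 3],
    "observer_3": [2, 3, 3, 3, 4],
}

# Average across observers for each image pair -- no consensus or
# adjudication step, matching the "average preference" description.
pair_means = [mean(scores) for scores in zip(*ratings.values())]
overall = mean(pair_means)

print(pair_means)  # per-pair average preference
print(overall)     # overall average preference
```

An overall average near the scale midpoint of 3 would indicate no systematic preference for either detector.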


    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    • No MRMC study was done in the context of AI assistance. This submission is for a hardware modification (detector change) within an existing X-ray system, not for an AI device. The study performed was a phantom imaging study with human observers assessing image quality attributes, primarily to demonstrate equivalence between two detector types.

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    • N/A. This submission is not for an AI algorithm. The device is an X-ray system with a digital detector. The "performance" being evaluated relates to the image quality produced by the hardware, not an algorithm's diagnostic capabilities.

    7. The Type of Ground Truth Used

    • "Ground truth" was implicitly established by comparing the investigational device's image output against the predicate device's image output, as evaluated by expert observers using a preference scale. This is a comparative "ground truth" based on expert perception of image quality. Since it's a phantom study, there isn't disease pathology involved in the same way as a clinical study. The phantoms themselves represent anatomical structures.

    8. The Sample Size for the Training Set

    • N/A. The provided text does not describe a "training set" in the context of machine learning or AI. This is a hardware device 510(k) submission, not an AI/ML device. The "training" for the device would be its engineering design and manufacturing processes, not data training.

    9. How the Ground Truth for the Training Set Was Established

    • N/A. As above, there is no mention of a training set or its associated ground truth in this submission, as it's not an AI/ML device.