
510(k) Data Aggregation

    K Number: K122842
    Date Cleared: 2012-10-09 (22 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Device Name: RADREX-I, SW V4.00 MODEL DRAD-3000E

    Intended Use

    This device is indicated as a general radiography device. It is capable of providing digital images of the head, neck, spine, chest, abdomen, and limbs by converting x-rays to digital images. Excluded indications include mammography, fluoroscopy and angiography studies.

    Device Description

    The RADREX-i is a general-purpose x-ray system that employs Solid State Imagers (SSXI), which convert x-rays directly into electrical signals that can, after appropriate processing, be displayed on LCD monitors or printed to a medical-grade image printer. The system console is a PC-based device that allows for worklist management, image storage, image processing, image exporting, and image printing. The system may be equipped with a table and/or a vertical wall unit, is configurable with up to two x-ray tubes, and has an auto-stitching function.

    AI/ML Overview

    The provided documents do not contain a detailed study report with quantitative acceptance criteria and device performance metrics in the format requested. The submission is a 510(k) for a modification (Special 510(k)) to an already cleared device, primarily adding new flat panel detectors.

    Therefore, many of the requested elements (e.g., sample sizes, ground truth establishment, expert qualifications, MRMC studies, standalone performance with specific metrics like sensitivity/specificity) are not present in the provided text.

    Specifically, the document states:

    • "Image Quality metrics utilizing phantoms are provided in this submission." (Section 18. TESTING)
    • "Safety and effectiveness have been verified via risk management and application of design controls to the modifications." (Section 19. CONCLUSION)

    These statements indicate that testing was performed, but the specific results and methodology are not detailed in the provided summary. The submission focuses on demonstrating substantial equivalence to a predicate device, and for modifications like this, a detailed clinical study with human readers might not have been deemed necessary by the submitter or requested by the FDA if phantom testing and engineering verification were sufficient to establish equivalence for the new components.

    Given the available information, here's what can be extracted, with explicit notes for missing information:


    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric (if available) | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | Image Quality (Phantoms) | (Not specified) | (Not specified) | Provided in submission (specific values not detailed) |
    | Safety | Compliance | 21 CFR § 820, ISO 13485, IEC 60601-1, IEC 60601-2-32, IEC 60601-2-28, 21 CFR § 1020 | Device is designed and manufactured in conformance with these standards. |
    | Effectiveness | (Not specified) | (Not specified) | Verified via risk management and design controls. |

    2. Sample size used for the test set and the data provenance:

    • Sample Size (Test Set): Not specified. The document primarily refers to "Image Quality metrics utilizing phantoms," implying phantom-based testing rather than a clinical human-subject test set.
    • Data Provenance: Not specified. Based on phantom testing, geographic origin is not relevant. The testing would be considered prospective for the device modification.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not applicable. Ground truth for phantom-based image quality metrics is typically established by physical measurements or known characteristics of the phantom, not by expert consensus on clinical images.
    • Qualifications of Experts: Not applicable.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: Not applicable, as detailed multi-reader clinical testing with human subjects is not described for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • MRMC Study: No. This submission does not describe an AI device or an MRMC study comparing human readers with and without AI assistance. The device is a general radiography system, not an AI diagnostic tool.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Standalone Performance Test: Not applicable. The device is an imaging system, not an algorithm, and its performance is assessed via image quality and safety standards for the hardware/software.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Type of Ground Truth: For the "Image Quality metrics utilizing phantoms," the ground truth would be based on the known, reproducible properties of the phantom and objective physical measurements (e.g., spatial resolution, contrast-to-noise ratio, MTF, DQE). If any limited subjective evaluation was done, it is not described.
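    The submission does not describe how its phantom metrics were computed, but as an illustration of what an "objective physical measurement" on a phantom image looks like, here is a minimal sketch of a contrast-to-noise ratio (CNR) calculation on a synthetic phantom. All names, ROI positions, and pixel values are hypothetical, and the CNR definition used (signal/background mean difference over background noise) is one of several in common use:

    ```python
    import numpy as np

    def contrast_to_noise_ratio(image, signal_roi, background_roi):
        """CNR between a signal ROI and a background ROI.

        ROIs are (row_slice, col_slice) tuples. Defined here as
        |mean(signal) - mean(background)| / std(background).
        """
        sig = image[signal_roi]
        bkg = image[background_roi]
        return abs(sig.mean() - bkg.mean()) / bkg.std()

    # Synthetic "phantom": uniform noisy background plus a brighter insert.
    rng = np.random.default_rng(0)
    phantom = rng.normal(100.0, 5.0, size=(256, 256))  # background: mean 100, sigma 5
    phantom[100:140, 100:140] += 25.0                  # insert: +25 contrast

    cnr = contrast_to_noise_ratio(
        phantom,
        signal_roi=(slice(100, 140), slice(100, 140)),
        background_roi=(slice(0, 40), slice(0, 40)),
    )
    # With +25 contrast over sigma-5 noise, CNR is approximately 5.
    ```

    The "ground truth" here is the phantom's known construction (insert contrast and noise level), so the measured CNR can be checked directly against the expected value, with no expert adjudication involved.
    
    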

    8. The sample size for the training set:

    • Sample Size (Training Set): Not applicable. This is not an AI/ML device that requires a training set of data in the conventional sense. The "base software" is stated to remain unchanged, implying previous development and validation.

    9. How the ground truth for the training set was established:

    • Ground Truth (Training Set): Not applicable. See point 8.