510(k) Data Aggregation

    K Number: K251167
    Device Name: uDR Aurora CX
    Date Cleared: 2025-09-19 (157 days)
    Product Code:
    Regulation Number: 892.1680

    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    uDR Aurora CX is intended to acquire X-ray images of the human body, operated by a qualified technician; examples include acquiring two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs, and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.

    Device Description

    uDR Aurora CX is a digital medical X-ray imaging system developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). It comprises an X-ray generator and an X-ray imaging system. The X-ray generator produces controlled X-rays via a high-voltage generator and an X-ray tube assembly, ensuring stable energy output for penetration of the human body. The X-ray imaging system converts X-ray photons into electrical signals via detectors and generates DICOM-standard images at the workstation, reflecting the density variations of the human body.
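
    The sketch below illustrates, informally, what the DICOM-standard output described above looks like to downstream software. It uses the third-party pydicom package; the file name is a hypothetical placeholder, not an actual device export.

```python
# Minimal sketch: inspect a DICOM image like the workstation output
# described above. Requires the third-party `pydicom` package (and NumPy
# for pixel access); the file name is a hypothetical placeholder.
import pydicom

ds = pydicom.dcmread("chest_pa_example.dcm")  # load one DICOM file
print(ds.Modality)                  # e.g. "DX" for digital radiography
print(ds.Rows, ds.Columns)          # image matrix dimensions
pixels = ds.pixel_array             # pixel data as a NumPy array
print(pixels.min(), pixels.max())   # intensity range encoding density variations
```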

    AI/ML Overview

    This document describes the acceptance criteria and study details for two features of the uDR Aurora CX device: uVision and uAid.


    1. Acceptance Criteria and Reported Device Performance

    uVision
        Acceptance Criteria: When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. This demonstrates that uVision can effectively assist clinical technicians in positioning tasks, specifically by aiming to reduce retake rates attributed to incorrect positioning (which studies indicate can range from 9% to 28%).
        Reported Device Performance: In 95% of patient positioning processes, the light field and equipment position automatically set by uVision met clinical positioning and shooting requirements for chest PA, whole-spine, and whole-lower-limb stitching exams. In the remaining 5% of cases, manual adjustments by technicians were needed.

    uAid
        Acceptance Criteria: The accuracy of non-standard image recognition (specifically, the rate of "Grade A" images recognized) should meet a 90% pass rate, aligning with industry standards derived from guidelines such as those from European Radiology and the ACR-AAPM-SPR Practice Parameter (which indicate Grade A image rates between 80% and 90% in public hospitals). This demonstrates that uAid can effectively assist clinical technicians in managing standardized image quality.
        Reported Device Performance: The uAid function can correctly identify four types of findings (foreign object, incomplete lung fields, unexposed shoulder blades, and centerline deviation) and classify images as Green (qualified), Yellow (secondary), or Red (waste). It meets the requirement for checking examination and positioning quality. Quantitative performance (from the "Summary"):
        - Average algorithm time: 1.359 seconds (longest not exceeding 2 seconds)
        - Maximum memory occupation: no more than 2 GB
        - For foreign body, lung field integrity, and scapula opening, both sensitivity and specificity exceed 0.9 (see the sketch below)
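
    To make the sensitivity and specificity criterion concrete, here is a minimal sketch of how those metrics are computed from a labeled test set. The positive/negative totals match the foreign-object distribution reported in section 2, but the true/false split within each class is hypothetical, not from the study.

```python
# Minimal sketch: sensitivity/specificity check against uAid's 0.9 criterion.
# Totals (3080 positives, 1078 negatives) match the reported foreign-object
# distribution; the TP/FN and TN/FP splits are hypothetical.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=2950, fn=130, tn=1020, fp=58)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
print("meets 0.9 criterion:", sens > 0.9 and spec > 0.9)
```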

    2. Sample Size and Data Provenance for the Test Set

    uVision
        Sample Size for Test Set: 348 cases (328 chest PA cases + 20 full-spine or full-lower-limb stitching cases) collected over one week, from 2024.12.17 to 2024.12.23 (see the sketch below). The device had been installed for over a year, with an average daily volume of ~80 patients, ~45 chest X-rays/day, and ~10-20 stitching cases/week.
        Data Provenance: Prospective/retrospective hybrid: the data was collected prospectively from a device (serial number 11XT7E0001) in clinical use, installed and commissioned over a year before the reported test period. It was collected from individuals of all genders and varying heights (able to stand independently), in a real-world clinical setting. Country of origin: not explicitly stated, but the company is based in Shanghai, China, suggesting the data is likely from China.

    uAid
        Sample Size for Test Set: Not explicitly stated as a single total number of cases. Instead, a data distribution is provided, with counts for different conditions across gender and age groups. For example, "lung field segmentation" had 465 negative and 31 positive cases, and "foreign object" had 1078 negative and 3080 positive cases. The sum of these individual counts suggests a total dataset of several thousand images.
        Data Provenance: Retrospective: data collection for uAid started in October 2017, drawing on a wide range of sources, including different cooperative hospitals. The data was cleaned and stored in DICOM format. Country of origin: not explicitly stated, but the company is based in Shanghai, China, suggesting the data is likely from China.
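
    As a statistical aside, a 95% compliance rate over 348 cases pins the true rate down reasonably tightly. The sketch below computes a Wilson score 95% confidence interval for that proportion; the interval itself is illustrative only and is not reported in the document.

```python
# Minimal sketch: Wilson score 95% CI for the uVision compliance rate.
# n and the 95% point estimate come from the document; the interval is
# illustrative only and is not part of the 510(k) summary.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

n = 348                      # 328 chest PA + 20 stitching cases
compliant = round(0.95 * n)  # 95% of positioning processes (~331 cases)
lo, hi = wilson_ci(compliant, n)
print(f"95% CI for compliance: [{lo:.3f}, {hi:.3f}]")  # roughly [0.923, 0.969]
```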

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    uVision
        Number of Experts: Not explicitly stated. The document says, "The results automatically set by the system are then statistically analyzed by clinical experts."
        Qualifications of Experts: "Clinical experts." No specific qualifications (e.g., years of experience, specialty) are provided.

    uAid
        Number of Experts: Not explicitly stated. The document mentions that "The study was approved by the institutional review board of the hospitals," which implies expert review but does not detail the number or roles of experts in establishing the ground truth labels for the specific image characteristics tested.
        Qualifications of Experts: Not explicitly stated for establishing ground truth labels.

    4. Adjudication Method (Test Set)

    • uVision: Not explicitly stated. The data was "statistically analyzed by clinical experts." The document does not specify whether multiple experts reviewed cases or how disagreements were resolved.
    • uAid: Not explicitly stated. The process mentions data cleaning and sorting, and IRB approval, but not the specific adjudication method for individual image labels.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • uVision: No MRMC comparative effectiveness study was done to compare human readers with and without AI assistance. The study evaluates the AI's direct assistance in positioning, measured by compliance with clinical criteria, rather than comparing diagnostic performance of human readers.
    • uAid: No MRMC comparative effectiveness study was done. The study focuses on the standalone performance of the algorithm in identifying image quality issues, not on how it impacts human reader diagnostic accuracy or efficiency.

    6. Standalone Performance (Algorithm Only)

    • uVision: Yes, a standalone performance study was done. The "95% compliance" rate refers to the algorithm's direct ability to set system position and FOV that meet clinical technician criteria without a human actively adjusting or guiding the initial AI-generated settings during the compliance evaluation. Technicians could manually adjust those settings if needed.
    • uAid: Yes, a standalone performance study was done. The algorithm processes images and outputs a quality classification (Green, Yellow, Red) and identifies specific issues (foreign object, incomplete lung fields, etc.). Its sensitivity and specificity metrics are standalone performance indicators.
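
    The Green/Yellow/Red output described above is essentially a triage layer on top of the four finding detectors. The sketch below illustrates one possible mapping; the document does not specify which findings produce Yellow versus Red, so the rules here are assumptions.

```python
# Minimal sketch of a quality-triage layer like uAid's Green/Yellow/Red
# output. The four finding flags come from the document; the mapping rules
# (which findings yield Yellow vs. Red) are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Findings:
    foreign_object: bool
    incomplete_lung_fields: bool
    unexposed_shoulder_blades: bool
    centerline_deviation: bool

def triage(f: Findings) -> str:
    # Assumed rule (not from the document): findings that compromise the
    # diagnostic field waste the image; lesser issues flag it for review.
    if f.incomplete_lung_fields or f.foreign_object:
        return "Red"     # waste: retake likely required
    if f.unexposed_shoulder_blades or f.centerline_deviation:
        return "Yellow"  # secondary: review before accepting
    return "Green"       # qualified

print(triage(Findings(False, False, False, True)))  # -> "Yellow"
```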

    7. Type of Ground Truth Used

    • uVision: Expert Consensus/Clinical Criteria: The ground truth for uVision's performance (i.e., whether the automatically set position/FOV was "compliant") was established by "clinical experts" based on "clinical technicians' criteria" for proper positioning and shooting requirements.
    • uAid: Expert Consensus/Manual Labeling: The ground truth for uAid's evaluation (e.g., presence of foreign objects, complete lung fields, open scapula, centerline deviation) was established through a "classification" process, implying manual labeling or consensus by experts after data collection and cleaning. The document mentions "negative" and "positive" data distributions for each criterion.

    8. Sample Size for the Training Set

    • uVision: Not explicitly stated in the provided text. The testing data was confirmed to be "collected independently from the training dataset, with separated subjects and during different time periods."
    • uAid: Not explicitly stated in the provided text. The document mentions "The data collection started in October 2017, with a wide range of data sources" for training, but does not provide specific numbers for the training set size.

    9. How Ground Truth for Training Set was Established

    • uVision: Not explicitly stated for the training set. It can be inferred that a similar process to the test set, involving expert review against clinical criteria, would have been used.
    • uAid: Not explicitly stated for the training set. Given that the data was collected from "different cooperative hospitals," "multiple cleaning and sorting" was performed, and the study was "approved by the institutional review board," it is highly likely that the ground truth for the training set involved manual labeling by clinical experts/radiologists, followed by a review process (potentially consensus-based or single-expert) to establish the labels for image characteristics and quality.