510(k) Data Aggregation (152 days)
The Digital Medical X-ray Imaging System is intended for use by a qualified technician to acquire X-ray images of the human body, for example two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.
uDR Arria and uDR Aris are two models of Digital Medical X-ray Imaging System developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). The system is equipped with imaging chain components and utilizes enhanced processing technology, enabling it to offer radiographic images with high image quality. The intuitive user interface and easy-to-use functions support clinical users during patient examination and image processing.
Here's a breakdown of the acceptance criteria and study details for the uDR Arria and uDR Aris devices, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary provides details for two optional software functions: uVision and uAid. The remainder of the device is a conventional X-ray imaging system, for which the primary "performance" measure is image quality, evaluated subjectively through clinical image review rather than against quantitative metrics and acceptance criteria.
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| uVision (Optional) | When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. | In 95% of patient positioning processes, the light field and equipment position automatically set by uVision can meet the clinical positioning and shooting requirements. In the remaining 5% of cases, based on the light field and system position automatically set by the equipment, technicians still need to make manual adjustments. |
| uAid (Optional) | A 90% pass rate for "Grade A" images, aligning with industry standards (e.g., European Radiology and ACR-AAPM-SPR Practice Parameter guidelines, which state that Grade A image rates in public hospitals generally range between 80% and 90%). Implicitly, for specific criteria: sensitivity and specificity of whether there is a foreign body, whether the lung field is intact, and whether the scapula is open all exceed 0.9. | Average time of the uAid algorithm: 1.359 seconds (longest does not exceed 2 seconds). Maximum memory occupation of the uAid algorithm: not more than 2 GB. Sensitivity and specificity of whether there is a foreign body, whether the lung field is intact, and whether the scapula is open all exceed 0.9. The uAid function can correctly identify four types of results: Foreign object, Incomplete lung fields, Unexposed shoulder blades, and Centerline deviation, making classifications (Green: qualified, Yellow: secondary, Red: waste). |
| Clinical Image Quality | Each image was reviewed with a statement indicating that image quality is sufficient for clinical diagnosis. | Sample images of the chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided. A board-certified radiologist evaluated the image quality for sufficiency for clinical diagnosis. |
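The sensitivity and specificity figures cited for uAid are standard confusion-matrix quantities. As a minimal sketch (not taken from the submission, which does not describe its computation code), they can be derived from per-case predictions and ground-truth labels like this:

```python
# Illustrative sketch: sensitivity and specificity of a binary
# image-quality check (e.g. "foreign object present?") computed
# from a labeled test set. All names are hypothetical.

def sensitivity_specificity(predictions, labels):
    """predictions, labels: sequences of booleans (True = positive case)."""
    tp = sum(p and l for p, l in zip(predictions, labels))          # true positives
    tn = sum((not p) and (not l) for p, l in zip(predictions, labels))  # true negatives
    fp = sum(p and (not l) for p, l in zip(predictions, labels))    # false positives
    fn = sum((not p) and l for p, l in zip(predictions, labels))    # false negatives
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

An acceptance criterion such as "both exceed 0.9" would then be a simple threshold check on the two returned values for each of the four uAid criteria.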
2. Sample Size Used for the Test Set and Data Provenance
uVision:
- Sample Size: The evaluation results provided are from a single week's worth of data, totaling 328 chest cases and 20 full spine or full lower limb stitching cases from the specified period (2024.12.17 - 2024.12.23).
- Data Provenance: The device with uVision function (serial number 11XT7E0001) has been in use for over a year at a hospital. The testing data includes individuals of all genders and varying heights capable of standing independently. It is prospective in the sense that it was collected during routine clinical operation after installation and commissioning. The country of origin is not explicitly stated but implied to be China, given the manufacturer's location.
uAid:
- Sample Size: The document does not provide a single total number for the test set. Instead, it breaks down the data by age/gender distribution and the distribution of positive/negative cases for each criterion.
- Age/Gender Distribution: Total 5680 patients (sum of the male, female, age-unknown, and age-and-gender-unknown categories).
- Criterion-specific counts:
- Lung field segmentation: 465 Negative, 31 Positive
- Spinal centerline segmentation: 815 Negative, 68 Positive
- Shoulder blades segmentation: 210 Negative, 1089 Positive
- Foreign object: 1078 Negative, 3080 Positive
- Data Provenance: Data collection started in October 2017, from the uDR 780i and "different cooperative hospitals." The study was approved by the institutional review board. The data is stored in DICOM format. It is retrospective, collected from existing hospital data. Country of origin not explicitly stated but implied to be China.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
uVision:
- Number of Experts: The results were "statistically analyzed by clinical experts." The exact number is not specified, but the term "experts" suggests more than one.
- Qualifications: "Clinical experts" is a general term. Specific qualifications (e.g., radiologist, radiologic technologist, years of experience) are not provided.
uAid:
- Ground truth establishment for uAid's classification categories (Foreign object, Incomplete lung fields, Unexposed shoulder blades, Centerline deviation) is not explicitly described in terms of experts or their qualifications. The acceptance criteria reference "relevant research and literature" and "industry guidelines and standards," suggesting that the ground truth for image quality categorization might be derived from these established definitions.
Clinical Image Quality (General):
- Number of Experts: "A board certified radiologist." This indicates one expert.
- Qualifications: "Board certified radiologist." This is a specific and high qualification for evaluating radiographic images.
4. Adjudication Method for the Test Set
uVision:
- The document states that the results were "statistically analyzed by clinical experts." This implies some form of review and judgment, but a formal adjudication method (e.g., 2+1, 3+1) is not described. It's presented as an evaluation of the system's compliance with technician criteria.
uAid:
- The method for establishing ground truth classifications for uAid's criteria (Foreign object, Incomplete lung fields, Unexposed shoulder blades, Centerline deviation) is not detailed, so an adjudication method is not provided.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly described for uVision, uAid, or the overall device. The studies focused on evaluating the standalone performance of the AI features and the overall clinical image quality (subjective expert review), rather than comparing human readers with and without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, standalone performance was evaluated for both uVision and uAid.
- uVision: The reported performance is how well the "automatically set system position and field size" meet "clinical technicians' criteria" when the uVision function is used for automatic positioning. While technicians use the function, the performance metric itself is about the accuracy of the system's automatic output before manual adjustment.
- uAid: The reported performance metrics (sensitivity, specificity, processing time, memory usage) are direct measurements of the algorithm's output in categorizing image quality criteria. The system classifies images and presents the evaluation "instantly accessible to radiologic technologists," but the evaluation itself is algorithmic.
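Standalone timing figures like uAid's "average 1.359 seconds, maximum under 2 seconds" are typically gathered by running the algorithm over the test set and recording wall-clock time per case. A hedged sketch, where `run_quality_check` is a hypothetical stand-in for the algorithm under test:

```python
# Illustrative benchmark harness: record per-case wall-clock latency
# of an algorithm and report the average and maximum.
import time

def benchmark(run_quality_check, cases):
    """Return (average_seconds, max_seconds) over all cases."""
    timings = []
    for case in cases:
        start = time.perf_counter()
        run_quality_check(case)                     # algorithm under test
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings), max(timings)
```

Acceptance would then reduce to checking the returned maximum against the stated 2-second bound; peak memory would need a separate measurement (e.g. an OS-level or profiler-based probe), which this sketch does not cover.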
7. The Type of Ground Truth Used
uVision:
- The ground truth for uVision appears to be clinical technicians' criteria. The metric is "meet clinical technicians' criteria with 95% compliance," implying that the "correct" positioning is defined by the standards and expectations of experienced technicians.
uAid:
- The ground truth for uAid's image quality classification is based on established clinical quality control criteria for chest X-rays. The acceptance criteria explicitly reference "mature industry guidelines and standards, such as those from European Radiology and the ACR-AAPM-SPR Practice Parameter." This suggests an expert consensus-driven or guideline-based ground truth for what constitutes a "Grade A" image or the presence of specific issues like foreign objects or incomplete lung fields.
8. The Sample Size for the Training Set
uVision:
- The sample size for the training set for uVision is not provided. The document explicitly states that the "testing dataset was collected independently from the training dataset, with separated subjects and during different time periods."
uAid:
- The sample size for the training set for uAid is not provided. It is mentioned that "The data collection started in October 2017... from different cooperative hospitals" for the overall data reservoir, but specific numbers for training versus testing are not given. Similar to uVision, the testing data is stated to be "entirely independent and does not share any overlap with the training data."
9. How the Ground Truth for the Training Set Was Established
uVision:
- The method for establishing the ground truth for the training set for uVision is not provided.
uAid:
- The method for establishing the ground truth for the training set for uAid is not provided. While the testing set's ground truth is implied to be based on industry guidelines and clinical criteria, the process used to label the training data is not detailed.