The uDR 780i Digital Medical X-ray system is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic exposures of the skull, spinal column, chest, abdomen, extremities, and other anatomic sites. Applications can be performed with the subject sitting, standing, or lying in the prone or supine position. Not for mammography.
The uDR 780i is a digital radiography (DR) system designed to perform radiography examinations of sitting, standing, or lying patients. It consists of the following components: a Tube Ceiling Suspension with tube and collimator, a Bucky Wall Stand, an Elevating Table, a High Voltage Generator, wireless flat panel detectors, and an acquisition workstation. The system generates images that can be transferred over a DICOM network for printing, review, and storage.
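As background on the DICOM transfer mentioned above (this is general context from the DICOM PS3.10 standard, not part of the submission): DICOM Part 10 files begin with a 128-byte preamble followed by the 4-byte magic value `DICM`. A minimal sketch, assuming only the Python standard library, that checks for this signature:

```python
from pathlib import Path

def is_dicom_part10(path):
    """Return True if the file carries the DICOM Part 10 signature:
    a 128-byte preamble followed by the magic bytes b'DICM'."""
    data = Path(path).read_bytes()
    return len(data) >= 132 and data[128:132] == b"DICM"

# Illustrative only: write a stub file with a valid preamble + magic.
stub = Path("stub.dcm")
stub.write_bytes(bytes(128) + b"DICM")
print(is_dicom_part10(stub))  # → True
```

Real transfers use the full DICOM network services (e.g. C-STORE) via a conformant toolkit; this sketch only illustrates the file signature defined in DICOM PS3.10.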
The provided text describes the 510(k) summary for the uDR 780i Digital Medical X-ray system. This document outlines the device's technical specifications and compares it to a predicate device (also named uDR 780i, K173953) to demonstrate substantial equivalence, rather than providing a detailed study proving the device meets specific acceptance criteria with quantifiable metrics.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert qualifications, and ground truth establishment from a clinical trial or performance study cannot be found in this document. This summary focuses on demonstrating that the new device is functionally identical or improved in a way that doesn't raise new safety or effectiveness concerns compared to a previously cleared device.
However, I can extract information related to the "Clinical Image Evaluation" which serves as the closest equivalent to a performance study in this regulatory context:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy) for a diagnostic AI device, nor does it report specific device performance metrics against such criteria. The clinical image evaluation is qualitative.
| Acceptance Criteria (Implied) | Reported Device Performance (Qualitative) |
|---|---|
| Image quality sufficient for clinical diagnosis | "Each image was reviewed with a statement indicating that image quality is sufficient for clinical diagnosis." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "Sample images of chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided..."
- Sample Size: Not specified beyond "Sample images." The exact number of images for each body part or the total number of images is not given.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While the applicant is based in Shanghai, China, it's not stated where the clinical images originated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document states: "...provided with a board certified radiologist to evaluate the image quality in this submission."
- Number of Experts: One
- Qualifications of Expert(s): "board certified radiologist." Specific experience level (e.g., years) is not provided.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable. Only one radiologist evaluated the images; therefore, no adjudication method was used.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The evaluation was a qualitative assessment of image quality by a single radiologist, not a study of human reader performance with or without AI assistance.
6. If a standalone performance study (i.e., algorithm only, without a human in the loop) was done
- Standalone Performance: No, a standalone algorithm-only performance study was not done. The evaluation focused on the clinical image quality of the output from the "uDR 780i Digital Medical X-ray system," not an AI algorithm within it. The system itself is a hardware device for capturing X-ray images.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: The "ground truth" concerning image quality was established by the qualitative assessment of a "board certified radiologist" who determined if the image quality was "sufficient for clinical diagnosis." This is a form of expert opinion/assessment rather than a definitive medical ground truth like pathology or patient outcomes.
8. The sample size for the training set
- Training Set Sample Size: Not applicable/not mentioned. The document describes an X-ray imaging system, not an AI software component that would typically involve a "training set."
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable/not mentioned, as there is no mention of an AI component with a training set.
§ 892.1680 Stationary x-ray system.
(a)
Identification. A stationary x-ray system is a permanently installed diagnostic system intended to generate and control x-rays for examination of various anatomical regions. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b)
Classification. Class II (special controls). A radiographic contrast tray or radiology diagnostic kit intended for use with a stationary x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.