Search Results
Found 2 results
510(k) Data Aggregation
(196 days)
uDR 780i
The uDR 780i Digital Medical X-ray system is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic exposures of the skull, spinal column, chest, abdomen, extremities, and other anatomic sites. Applications can be performed with the subject sitting, standing, or lying in the prone or supine position. Not for mammography.
The uDR 780i is a digital radiography (DR) system that is designed to provide radiography examinations of sitting, standing or lying patients. It consists of the following components: Tube Ceiling Suspension with tube and collimator, Bucky Wall Stand, Elevating Table, High Voltage Generator, wireless flat panel detectors and an acquisition workstation. The system generates images which can be transferred through DICOM network for printing, review and storage.
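The "transferred through DICOM network" step is the standard DICOM C-STORE (storage push) service from the acquisition workstation to a PACS, printer, or review station. As a rough illustration of what that step involves on the sending side, here is a minimal sketch using the open-source pydicom/pynetdicom libraries; the host name, port, AE titles, and file name are placeholder assumptions, and nothing here describes the uDR 780i's actual software.

```python
# Illustrative only: a generic DICOM C-STORE push with pydicom + pynetdicom.
# "pacs.example.org", port 11112, the AE titles, and exam_chest.dcm are assumptions.
from pydicom import dcmread
from pynetdicom import AE

ds = dcmread("exam_chest.dcm")             # an acquired radiograph saved as DICOM

ae = AE(ae_title="DR_ACQ_WS")              # calling AE title of the sending workstation
ae.add_requested_context(ds.SOPClassUID)   # negotiate storage of this image's SOP class

assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)        # push the image to the remote node
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Could not establish an association with the remote DICOM node")
```

In practice the workstation would also negotiate transfer syntaxes and handle retries or storage commitment, but the associate / C-STORE / release sequence above is the core of the transfer described.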
The provided text describes the 510(k) summary for the uDR 780i Digital Medical X-ray system. This document outlines the device's technical specifications and compares it to a predicate device (also named uDR 780i, K173953) to demonstrate substantial equivalence, rather than providing a detailed study proving the device meets specific acceptance criteria with quantifiable metrics.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert qualifications, and ground truth establishment from a clinical trial or performance study cannot be found in this document. This summary focuses on demonstrating that the new device is functionally identical or improved in a way that doesn't raise new safety or effectiveness concerns compared to a previously cleared device.
However, I can extract information related to the "Clinical Image Evaluation," which serves as the closest equivalent to a performance study in this regulatory context:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy) for a diagnostic AI device, nor does it report specific device performance metrics against such criteria. The clinical image evaluation is qualitative.
Acceptance Criteria (Implied) | Reported Device Performance (Qualitative) |
---|---|
Image Quality sufficient for clinical diagnosis | "Each image was reviewed with a statement indicating that image quality is sufficient for clinical diagnosis." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "Sample images of chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided..."
- Sample Size: Not specified beyond "Sample images." The exact number of images for each body part or the total number of images is not given.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While the applicant is based in Shanghai, China, it's not stated where the clinical images originated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document states: "...provided with a board certified radiologist to evaluate the image quality in this submission."
- Number of Experts: One
- Qualifications of Expert(s): "board certified radiologist." Specific experience level (e.g., years) is not provided.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable. Only one radiologist evaluated the images; therefore, no adjudication method was used.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The evaluation was a qualitative assessment of image quality by a single radiologist, not a study of human reader performance with or without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Standalone Performance: No, a standalone algorithm-only performance study was not done. The evaluation focused on the clinical image quality of the output from the "uDR 780i Digital Medical X-ray system," not an AI algorithm within it. The system itself is a hardware device for capturing X-ray images.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: The "ground truth" concerning image quality was established by the qualitative assessment of a "board certified radiologist" who determined if the image quality was "sufficient for clinical diagnosis." This is a form of expert opinion/assessment rather than a definitive medical ground truth like pathology or patient outcomes.
8. The sample size for the training set
- Training Set Sample Size: Not applicable/not mentioned. The document describes an X-ray imaging system, not an AI software component that would typically involve a "training set."
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable/not mentioned, as there is no mention of an AI component with a training set.
(30 days)
uDR 780i
The uDR 780i digital medical x-ray system is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other anatomic sites. Applications can be performed with the subject sitting, standing, or lying in the prone or supine position. Not for mammography.
The uDR 780i is a digital radiography (DR) system that is designed to provide radiography examinations of sitting, standing or lying patients. It consists of the following components: Tube Ceiling Suspension with tube and collimator, Bucky Wall Stand, Elevating Table, High Voltage Generator, wireless flat panel detectors and an acquisition workstation. The system generates images which can be transferred through DICOM network for printing, review and storage.
Based on the provided text, the device in question (uDR 780i) is a digital medical X-ray system, not an AI-powered diagnostic device. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are focused on demonstrating substantial equivalence to a predicate conventional X-ray system, rather than an AI/ML algorithm's performance on a diagnostic task like detecting pathology.
The document is a 510(k) summary for a medical device. For devices like this, the "acceptance criteria" are generally compliance with recognized performance standards and demonstration that the device's technical specifications and intended use are substantially equivalent to a legally marketed predicate device. The "study" largely involves non-clinical bench testing and a review of clinical images by an expert to confirm diagnostic quality.
Here's the breakdown of information based on the typical requirements for an AI/ML medical device, applied to this conventional X-ray system where applicable:
Acceptance Criteria and Device Performance (for a Conventional X-ray System)
The acceptance criteria are not explicitly laid out in a quantifiable table as they would be for an AI diagnostic device (e.g., sensitivity, specificity thresholds). Instead, the "acceptance" is based on demonstrating substantial equivalence to a predicate device (NOVA FA DR System, K133782) regarding intended use, technological characteristics, safety, and effectiveness.
Table 1: Comparison of Technology Characteristics (Relevant for "Performance")
Item | Proposed Device uDR 780i | Predicate Device NOVA FA DR System | Remark (If applicable, implies meeting acceptance through equivalence) |
---|---|---|---|
General | | | |
Product Code | KPR | KPR | Same |
Regulation No. | 892.1680 | 892.1680 | Same |
Class | II | II | Same |
Intended Use | Diagnostic radiographic exposures of skull, spinal column, chest, abdomen, extremities, other anatomic sites for adults and pediatrics. Not for mammography. | Diagnostic radiographic exposures of skull, spinal column, chest, abdomen, extremities, other body parts for adults and pediatrics. Not for mammography. | Same |
Specifications (Selected - see full Table 1 for all) | | | |
High Voltage Generator | | | |
Rated Power/kW | 65kW/80kW | 50kW/65kW/80kW | Note 1: Does not affect safety or effectiveness, as the clinical applications are achievable at 65kW/80kW. |
Max. tube Voltage (kV) | 150kV | 150kV | Same |
Shortest exposure time | 1ms | 1ms | Same |
X-Ray Tube Assembly | | | |
Focus Nominal Value | 0.6/1.2mm | 0.6/1.2mm | Same |
Maximum peak voltage | 150kV | 150kV | Same |
Anode Heat Content | 300kHU/400kHU | 300kHU/400kHU | Same |
Anode Target Angle | 12° | 12° | Same |
X-ray tube assembly heat content | 900kJ (1.3MHU) / 1111kJ (1.5MHU) | 900kJ (1.3MHU) / 950kJ (1.33MHU) | Note 2: Determines continuous exposure time, but not image quality; thus no impact on safety/effectiveness. |
Flat Panel Detector | | | |
Configuration | Battery or AC operated | Battery or AC operated | Same |
Digital Panels | Amorphous silicon (a-Si) | Amorphous silicon (a-Si) | Same |
Scintillator | Cesium iodide (CsI) | Cesium iodide (CsI) | Same |
Specifications | 3320x3408 125μm | 3320x3408 125μm | Same |
Effective radiographic size | 41.5cm x 42.6cm | 41.5cm x 42.6cm | Same (see note below the table) |
Collimator | | | |
Inherent filtration | 1mm Al | 1mm Al | Same |
Copper prefilter | without filter, 0.1 mm, 0.2 mm, 0.3 mm; | without filter, 0.1 mm, 0.2 mm, 0.3 mm; | Same |
Display | | | |
Specification | 24 inch, 1200 x 1920 | 19 inch, 1024 x 1280 | Note 3: The larger display is more user-friendly; no impact on safety/effectiveness. |
DICOM | DICOM3 | DICOM3 | Same |
Patient Table | | | |
Motorized vertical travel | 38.2cm | 40cm | Note 4: Small difference, affects user experience but not clinical application or safety/effectiveness. |
X-ray absorption | ≤0.8mmAl | ≤1mmAl | Note 5: Lower absorption is better; it optimizes image quality and has no impact on safety/effectiveness. |
Max. patient weight | 225kg | 350kg | Note 7: 225kg is sufficient for most patients; no impact on safety/effectiveness. |
Software functions (e.g., image search, image viewing, image measurement, image annotation, raw image data processing, post image data processing) | Yes | Yes | Same (all listed functions) |
Safety | |||
Electrical Safety | AAMI ANSI ES60601-1; IEC 60601-1 | Comply with IEC60601-1 | Same |
EMC | Comply with IEC60601-1-2 | Comply with IEC60601-1-2 | Same |
Biocompatibility | Tested (ISO 10993-5, -10) | Comply with ISO10993-5, ISO10993-10 | Same |
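As a quick consistency check on the detector rows above (an observation added here, not a statement from the submission): the effective radiographic size follows from the detector matrix and the 125 μm pixel pitch, since 3320 × 0.125 mm = 415 mm = 41.5 cm and 3408 × 0.125 mm = 426 mm = 42.6 cm, matching the stated 41.5 cm x 42.6 cm.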
Study Information (as applicable for a conventional X-ray system submission)
1. Sample size used for the test set and the data provenance:
- The document states "Sample clinical images accompanied by dose information and information detailing acquisition protocols and parameters were reviewed." It does not specify a numerical sample size for this "test set" of images. (A sketch of how such dose and acquisition parameters typically appear in a DICOM image header follows this list.)
- Data Provenance: Not explicitly stated, but for 510(k) submissions of conventional imaging devices, real-world images from a hospital or clinic are typically used. The applicant is Shanghai United Imaging Healthcare Co., Ltd. in China, suggesting the images could be from China, but this is not explicitly stated. The review of the already-acquired images appears retrospective; a prospective design is not implied.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- One expert was used: "a board certified radiologist".
- Qualifications: "board certified radiologist with a statement indicating that images are of diagnostic quality." No specific years of experience are mentioned.
3. Adjudication method for the test set:
- "None" explicitly described for establishing ground truth from multiple readers. There was only one radiologist reviewer for the "clinical images."
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No. This type of study (MRMC comparing human readers with and without AI assistance) is not applicable or mentioned because the uDR 780i is a conventional X-ray system, not an AI-powered diagnostic assist device. The "effectiveness" is demonstrated through substantial equivalence to a predicate device and diagnostic quality of images.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not applicable. This is a conventional X-ray system. The "performance" is about image quality for human interpretation, not an algorithm's standalone diagnostic ability. The statement "images are of diagnostic quality" indicates that the images produced by the device, when viewed by a human expert, are sufficient for diagnosis.
6. The type of ground truth used:
- Expert assessment (by a single board-certified radiologist) of the diagnostic quality of the images generated by the device. It concerns image quality, not the presence or absence of specific pathologies established against a gold standard (such as pathology or outcomes data).
7. The sample size for the training set:
- Not applicable. As a conventional X-ray system, there is no AI model with a distinct "training set" in the machine-learning sense; the system's performance rests on its design and engineering conforming to recognized standards and producing diagnostic-quality images.
8. How the ground truth for the training set was established:
- Not applicable, for the same reason as the previous point. The "ground truth" here is the set of engineering and design principles that ensure the system produces diagnostically acceptable images, confirmed through non-clinical testing and expert review of clinical images.
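For context on the "dose information and ... acquisition protocols and parameters" quoted under point 1: in DICOM radiographs such values normally travel in the image header. Below is a minimal, hypothetical sketch of reading them with the open-source pydicom library; the file name is a placeholder, not every attribute is present in every image, and the submission does not state how the sponsor actually packaged this information.

```python
# Illustrative only: print common exposure/dose attributes from a DX image header.
from pydicom import dcmread

ds = dcmread("exam_chest.dcm")                # placeholder file name
for keyword in (
    "KVP",                                    # tube voltage (kV)
    "XRayTubeCurrent",                        # tube current (mA)
    "ExposureTime",                           # exposure time (ms)
    "Exposure",                               # mAs
    "ImageAndFluoroscopyAreaDoseProduct",     # dose-area product
    "EntranceDoseInmGy",                      # entrance dose, if recorded
    "BodyPartExamined",
    "ViewPosition",
):
    print(f"{keyword}: {ds.get(keyword, '<not present>')}")
```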