510(k) Data Aggregation
(48 days)
uDR 380i Pro
uDR 380i Pro is a mobile digital radiography device intended to acquire X-ray images of the human anatomy for medical diagnosis. uDR 380i Pro can be used on both adult and pediatric patients by a qualified and trained operator. This device is not intended for mammography.
uDR 380i Pro is a diagnostic mobile X-ray system utilizing digital radiography (DR) technology. It can be moved to different environments for an examination, such as the emergency room, ICU, and ward. It mainly consists of a lifting column with a telescopic cantilever frame system, a system motion assembly, an X-ray system (high-voltage generator, X-ray tube, collimator, and wireless flat panel detectors, which were cleared in K230175), a power supply system, and software for acquiring and processing clinical images.
The provided text is a 510(k) summary for the uDR 380i Pro mobile X-ray system. This document primarily focuses on establishing substantial equivalence to a predicate device (K222339) and does not contain detailed information about acceptance criteria or a comprehensive study demonstrating the device's performance against specific acceptance criteria.
The key change in this 510(k) submission is the addition of new flat panel detectors (CXDI-710C and CXDI-810C) and associated control software (CXDI Control Software NE). The document explicitly states: "The modifications performed on the uDR 380i Pro (K222339) in this submission are due to the addition of flat panel detectors, including CXDI-710C and CXDI-810C, and CXDI Control Software NE which were cleared in K230175." and "The device software is unchanged from the predicate device, except for the addition of CXDI Control Software NE."
Therefore, the performance data provided is primarily to demonstrate that these new components do not adversely affect the safety and effectiveness or alter the fundamental scientific technology of the device compared to the predicate.
Here's an analysis of the provided information concerning acceptance criteria and study details:
1. A table of acceptance criteria and the reported device performance:
The document does not present a formal table of "acceptance criteria" for the device's overall performance. Instead, it compares the technological characteristics of the proposed device to the predicate device in Table 1: Comparison of Technology Characteristics. This table implicitly defines the acceptance (or "sameness") criteria based on the predicate device's specifications.
ITEM | Predicate Device: uDR 380i Pro (K222339) | Proposed Device: uDR 380i Pro | Remark |
---|---|---|---|
Product Code | IZL | IZL | Same |
Regulation No. | 21 CFR 892.1720 | 21 CFR 892.1720 | Same |
Class | II | II | Same |
Indications for Use | Mobile digital radiography device for X-ray images of human anatomy for medical diagnosis for adult and pediatric patients. Not for mammography. | Mobile digital radiography device for X-ray images of human anatomy for medical diagnosis for adult and pediatric patients. Not for mammography. | Same |
Specifications (Selected) | |||
DQE (Flat Panel Detector) | Typical: 58% @ 3 µGy, 0.5 lp/mm | Typical: 0.58 ± 10% @ 3 µGy, 0.5 lp/mm (for AR-C3543W & AR-B2735W); Typical: 0.58 ± 10% @ 2.5 µGy, 0.5 lp/mm (for CXDI-710C & CXDI-810C) | Note 1: DQE performance is similar; when operated under the intended use, it does not raise new safety and effectiveness concerns. |
Disk Size | 500GB | ≥500GB | Note 2: The disk size of the proposed device is stated as a range; this is only a descriptive update, and the disk size satisfies the intended use, so it does not raise new safety and effectiveness concerns. |
Acceptance is generally implied if the new device's specifications are "Same" or the differences are justified as not raising new safety/effectiveness concerns (as indicated by "Note 1" and "Note 2").
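The tolerance-based comparison described above can be sketched as a simple check. The snippet below is purely illustrative; the function name and the reading of "±10%" as a symmetric band around the typical value are assumptions, not something stated in the submission:

```python
# Hypothetical sketch: test whether a measured detector DQE falls within the
# "Typical: 0.58 +/- 10%" band quoted in the comparison table.
# This is an illustration of the tolerance logic, not part of the 510(k).

def within_typical(measured: float, typical: float, tol_pct: float = 10.0) -> bool:
    """Return True if `measured` lies within +/- tol_pct percent of `typical`."""
    tol = typical * tol_pct / 100.0
    return abs(measured - typical) <= tol

# Typical DQE 0.58 (at 3 uGy, 0.5 lp/mm); band is 0.522 .. 0.638
print(within_typical(0.55, 0.58))  # True  (inside the +/-10% band)
print(within_typical(0.50, 0.58))  # False (outside the band)
```

In practice, of course, the FDA comparison is a qualitative substantial-equivalence argument, not a numeric pass/fail test; the sketch only makes the quoted tolerance concrete.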
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: The document states: "Sample image of Head, chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided with a board certified radiologist to evaluate the image quality in this submission." It does not specify the exact number of images or cases in this sample set. It's described as "Sample image," implying a representative, but not quantitatively defined, set.
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. Given that it's a 510(k) submission for a Chinese manufacturer (Shanghai United Imaging Healthcare Co.,Ltd.), the data could be from China, but this is not confirmed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "a board certified radiologist" - This indicates one expert.
- Qualifications: "board certified radiologist"
4. Adjudication method for the test set:
- Adjudication Method: "Each image was reviewed with a statement indicating that image quality are sufficient for clinical diagnosis." This suggests a single-reader review without an explicit adjudication process involving multiple readers. It does not mention a 2+1, 3+1, or similar multi-reader adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC Study: The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. There is no mention of AI assistance or human readers improving with AI vs. without AI. The device is a mobile X-ray system, and the changes relate to its hardware (detectors) and basic control software, not an AI-powered diagnostic tool requiring such a study for a 510(k).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Not Applicable in the traditional sense: This device is an X-ray imaging system, not an AI algorithm for diagnosis. The "performance data" provided ("Clinical Image Evaluation") is about the quality of the images produced, which are then interpreted by a human. It's not a standalone diagnostic algorithm.
7. The type of ground truth used:
- Expert Consensus (single expert, effectively): The "ground truth" for image quality assessment was established by a single "board certified radiologist" who determined if the "image quality [is] sufficient for clinical diagnosis." This is effectively expert opinion/assessment rather than a gold standard like pathology or long-term outcomes data.
8. The sample size for the training set:
- The document does not specify a sample size for a training set. This is generally because the submission is for a conventional imaging device with new detectors, rather than an AI/Machine Learning device that requires explicit training data and validation sets. The "software" mentioned (CXDI Control Software NE) is for detector control and image acquisition/processing, not a deep learning algorithm.
9. How the ground truth for the training set was established:
- Not applicable/Not provided: As no training set is mentioned in the context of AI/ML, there is no discussion of how ground truth for such a set was established. The "Clinical Image Evaluation" section focuses on verification of image quality for the new detectors.
(58 days)
uDR 380i Pro
uDR 380i Pro is a mobile digital radiography device intended to acquire X-ray images of the human anatomy for medical diagnosis. uDR 380i Pro can be used on both adult and pediatric patients by a qualified and trained operator. This device is not intended for mammography.
uDR 380i Pro is a diagnostic mobile X-ray system utilizing digital radiography (DR) technology. It can be moved to different environments for an examination, such as the emergency room, ICU, and ward. It mainly consists of a lifting column with a telescopic cantilever frame system, a system motion assembly, an X-ray system (high-voltage generator, X-ray tube, collimator, and wireless flat panel detectors, which were cleared in K170332 and K192632), a power supply system, and software for acquiring and processing clinical images.
uDR 380i Pro is intended to acquire X-ray images of both adult and pediatric patients, especially patients who may not be able to be moved to a traditional RAD room. The system offers:
- A 14"×17" or 14"×14" flat panel detector
- A high-power 32 kW or 50 kW generator
- A maneuverable drive system
- X-ray tube-collimator assembly with flexible movement
- Storage for detectors and supplies
- Image Acquisition Workstation with touchscreen user interface
The provided text does not contain detailed information about acceptance criteria, or a study demonstrating that the device meets specific acceptance criteria, in the form outlined above with performance metrics, sample sizes, expert qualifications, and adjudication methods.
The document is a 510(k) premarket notification for a medical device (uDR 380i Pro) and focuses on demonstrating substantial equivalence to a predicate device (Carestream DRX-Revolution). While it lists some technical specifications and claims that these differences do not raise new safety and effectiveness concerns, it does not present a formal study with acceptance criteria and reported device performance in the format you requested.
Here's a breakdown of what is available in the provided text in relation to your request:
1. A table of acceptance criteria and the reported device performance:
- The document provides a "Comparison of Technological Characteristics with the Predicate Devices" table (pages 4-7) which lists various specifications for both the proposed device (uDR 380i Pro) and the predicate device (DRX-Revolution).
- Instead of acceptance criteria, it provides the specifications of both devices and uses "Remark" notes (Note 1 to Note 13) to discuss any differences and justify why these differences do not raise new safety and effectiveness concerns.
- For example, for "Maximum Output Power," the proposed device lists "32kW/ 50kW" compared to the predicate's "32kW." The remark states that the larger output power represents better capability and does not raise new safety and effectiveness concerns.
- For "DQE" (Detective Quantum Efficiency), it states "Typical: 58% @3uGy,0.5lp/mm" for the proposed device and "Typical: 63% @2.5uGy,0.5lp/mm" for the predicate, noting that "Performance is similar" and "it did not raise new safety and effectiveness concerns."
- This is a comparison for substantial equivalence, not a standalone performance study against pre-defined acceptance criteria.
2. Sample size used for the test set and the data provenance:
- The document mentions a "Clinical Image Evaluation" on page 10. It states: "Sample image of Head, chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided with a board certified radiologist to evaluate the image quality in this submission."
- However, it does not specify the sample size (number of images or cases) used for this evaluation.
- The data provenance is not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- It states that "a board certified radiologist" was used.
- It refers to "a board certified radiologist" (singular), implying one expert.
- No specific experience level (e.g., 10 years of experience) is mentioned beyond "board certified."
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document states that "Each image was reviewed with a statement indicating that image quality are sufficient for clinical diagnosis."
- This description points to a subjective review rather than a formal adjudication process (like 2+1 or 3+1 consensus). It sounds like a single radiologist's assessment.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study is mentioned. The device is a mobile X-ray system, not an AI-based diagnostic tool that assists human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The device is a hardware system (mobile X-ray) with accompanying software for image acquisition and processing. It is not an algorithm for standalone diagnostic performance. Therefore, this question is not directly applicable in the context of this device. The "Clinical Image Evaluation" was about the image quality produced by the system for visual assessment by a radiologist.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the "Clinical Image Evaluation," the "ground truth" was the assessment of image quality by "a board certified radiologist" who determined if the images were "sufficient for clinical diagnosis." This falls under expert opinion/assessment of image quality rather than a definitive diagnosis established by other means like pathology or outcomes data.
8. The sample size for the training set:
- The document does not mention any training set size because the submission is for a medical imaging device (hardware and software for image acquisition), not a machine learning or AI algorithm that requires a training set for its core function of interpretation or diagnosis.
9. How the ground truth for the training set was established:
- As no training set is discussed or implied for the device's primary function, this information is not available.
In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence based on technical specifications and a general assessment of clinical image quality by a radiologist, rather than a detailed performance study against specific acceptance criteria for an AI-enabled diagnostic device.