The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projections wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.
The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can operate with either the Carestream DRX-1 System Detector (GOS), the DRX-2530C Detector (CsI), the DRX Plus 3543 (GOS) Detector, or the DRX Plus 3543C (CsI) Detector and can be configured to register and use any of the detectors. Images captured with a flat panel digital detector can be communicated to the operator console via tethered or wireless connection.
The provided text describes the Carestream DRX-1 System with DRX Plus Detectors, which is a diagnostic imaging system utilizing digital radiography (DR) technology. The document indicates that this system is substantially equivalent to the predicate device, the Carestream DRX-1 System (with DRX 2530C Detector), based on non-clinical testing and a clinical image concurrence study.
Here's the breakdown of the information you requested based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document describes acceptance criteria but does not provide a direct table of each criterion alongside its specific performance value. Instead, it states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
It lists the types of criteria considered for non-clinical testing:
- Image quality
- Intended use
- Workflow related performance
- Shipping performance
- General functionality and reliability (including both hardware and software requirements)
It also details specific parameters for which acceptance criteria were identified:
| Acceptance Criteria Category | Specific Parameters | Reported Device Performance |
|---|---|---|
| Image Quality | MTF (at various spatial frequencies), DQE (at various spatial frequencies), Sensitivity, Ghosting, Exposure Latitude, Signal Uniformity, Dark Noise (ADC), Resolution, Pixel Pitch, Total Pixel Area, Usable Pixel Area | Met predefined criteria; statistically equivalent to or better than the predicate device. Image quality parameters such as DQE, sensitivity, and MTF of the DRX Plus detectors demonstrate this equivalent-or-better performance. |
| Physical/Technical | Weight, Pixel Size, Boot-up Time, Operating Temperature | Met predefined criteria; the new detectors are lighter and thinner (1.47 cm vs. 1.55 cm). |
| Environmental | Liquid resistance | Improved from IPX1 to IPX7 liquid resistance. |
| General | Functionality, reliability (hardware and software), shipping performance, workflow-related performance | Met predefined criteria; works as intended. |
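As background for the image-quality metrics in the table above (this is general measurement practice, not part of the submission), the presampled MTF is commonly estimated from an edge image: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the magnitude of its Fourier transform, normalized to unity at zero frequency. A minimal sketch using a synthetic Gaussian-blurred edge; the pixel pitch value is purely illustrative:

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Estimate the MTF from a 1-D edge spread function (ESF).

    Differentiate the ESF to obtain the line spread function (LSF),
    then take the magnitude of its Fourier transform, normalized so
    that MTF(0) = 1. Returns (frequencies in cycles/mm, MTF values).
    """
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                       # normalize to unity at f = 0
    freqs = np.fft.rfftfreq(len(esf), d=pixel_pitch_mm)
    return freqs, mtf

# Synthetic example: an edge blurred by a Gaussian LSF (sigma in pixels).
pixel_pitch = 0.139                     # mm; 139 um pitch is illustrative only
sigma = 2.0
x = np.arange(-128, 128)
esf = np.cumsum(np.exp(-x**2 / (2 * sigma**2)))   # edge = integral of the LSF
freqs, mtf = mtf_from_esf(esf, pixel_pitch)
```

For a Gaussian blur the resulting MTF starts at 1.0 and falls off smoothly with spatial frequency; a sharper detector (smaller effective blur) retains higher MTF at high frequencies.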
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document mentions that a "concurrence study of clinical image pairs" was performed but does not specify the sample size (number of images or cases) used in that study.
- Data Provenance: The document does not specify the country of origin of the data. The study used "clinical image pairs," implying patient data; the use of paired, previously acquired images suggests a retrospective design, although the document does not state this explicitly.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- The document indicates that a "Reader Study" was conducted but does not specify the number of experts or readers involved.
- Qualifications of Experts: The document does not specify the qualifications of the experts/readers.
4. Adjudication Method for the Test Set
- The document does not specify the adjudication method used (e.g., 2+1, 3+1, none) for the test set in the reader study.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and Effect Size
- MRMC Study: A "Reader Study" was performed, which is typically a form of MRMC study, as it involved human readers assessing images. The text states: "Results of the Reader Study indicated that the diagnostic capability of the Carestream DRX-1 System with DRX Plus Detectors is statistically equivalent to or better than that of the predicate device."
- Effect Size: The document states only that diagnostic capability is "statistically equivalent to or better than" that of the predicate device; it does not report a specific effect size or quantified improvement relative to the predicate. No AI-assistance effect size applies, since the study evaluates the detector's impact on diagnostic capability rather than an AI aid to human readers.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
- This device is an imaging detector system, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance without human-in-the-loop is not directly applicable in the same way it would be for an AI diagnostic tool. However, the non-clinical (bench) testing, which evaluated various technical parameters like MTF, DQE, sensitivity, etc., can be considered the "standalone" or intrinsic performance evaluation of the device itself, independent of human interpretation.
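To illustrate how the bench metrics above combine into a single figure of merit (this formula is standard detector physics, not quoted from the submission), the detective quantum efficiency relates the presampled MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence q as DQE(f) = MTF(f)² / (q · NNPS(f)). A toy sketch with illustrative numbers:

```python
import numpy as np

def dqe(mtf, nnps, q):
    """Detective quantum efficiency from presampled MTF, normalized
    noise power spectrum NNPS (mm^2), and incident photon fluence q
    (photons/mm^2): DQE(f) = MTF(f)^2 / (q * NNPS(f))."""
    return np.asarray(mtf) ** 2 / (q * np.asarray(nnps))

q = 2.5e5                                   # photons/mm^2 (illustrative)
freqs = np.linspace(0.0, 3.5, 8)            # cycles/mm

# An ideal, quantum-noise-limited detector has MTF(f) = 1 and
# NNPS(f) = 1/q, so its DQE is 1 at every frequency.
ideal = dqe(np.ones_like(freqs), np.full_like(freqs, 1.0 / q), q)

# A more realistic detector: MTF falls with frequency while the NNPS
# stays relatively flat, so DQE starts below 1 and drops further at
# higher frequencies.
mtf = np.exp(-0.5 * freqs)
realistic = dqe(mtf, np.full_like(freqs, 1.2 / q), q)
```

In this framing, the bench comparison of the DRX Plus and predicate detectors amounts to comparing such DQE curves (along with sensitivity, ghosting, etc.) at matched exposure conditions.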
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
- For the clinical image concurrence study, the ground truth source is not explicitly stated. It refers to "clinical image pairs" and the "diagnostic capability," suggesting that the ground truth would inherently be based on established clinical diagnoses or reference standards against which the image interpretations were compared.
8. The Sample Size for the Training Set
- This document describes a new detector system and a comparative study against a predicate device. It is not an AI model that requires a "training set" in the conventional sense. The "training" here refers to the engineering, design, and calibration processes during the detector's development, for which a sample size is not specified or relevant in the context of typical AI model training.
9. How the Ground Truth for the Training Set Was Established
- As this is a hardware device (detector) and not an AI model requiring a training phase with labeled data, the concept of "ground truth for the training set" is not applicable. The "ground truth" for the device's design and manufacturing would be its engineering specifications and the physical laws of X-ray detection.
§ 892.1680 Stationary x-ray system.
(a) Identification. A stationary x-ray system is a permanently installed diagnostic system intended to generate and control x-rays for examination of various anatomical regions. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II (special controls). A radiographic contrast tray or radiology diagnostic kit intended for use with a stationary x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.