510(k) Data Aggregation (105 days)
CARESTREAM DRX-1 SYSTEM WITH DRX 2530C DETECTOR
The device is intended to capture, for display, radiographic images of human anatomy, including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.
The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), a flat panel digital imager (detector), and an optional tether interface box. The system can operate with either the Carestream DRX-1 System Detector (gadolinium oxysulfide, GOS) or the DRX-2530C Detector (cesium iodide, CsI) and can be configured to register and use both detectors. Images captured with the flat panel digital detector can be communicated to the operator console via a tethered or wireless connection.
Here's a breakdown of the acceptance criteria and study information based on the provided text for K130464:
Please note that the provided 510(k) summary is for a device upgrade (a new detector for an existing system) and focuses on demonstrating substantial equivalence to the predicate device. A full AI performance study with detailed metrics such as sensitivity, specificity, or AUC for a diagnostic algorithm is therefore not present: the device is an image acquisition system, not an AI diagnostic tool. The "Reader Study" mentioned is a comparative effectiveness study showing that the new detector produces images diagnostically equivalent to those of the predicate.
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Reported Device Performance |
| --- | --- |
| Non-clinical (bench) testing | "The performance characteristics and operation / usability of the Carestream DRX-1 System with DRX 2530C Detector were evaluated in non-clinical (bench) testing. These studies have demonstrated the intended workflow, related performance, overall function, shipping performance, verification and validation of requirements for intended use, and reliability of the system including both software and hardware requirements. Non-clinical test results have demonstrated that the device conforms to its specifications. Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." |
| Clinical concurrence (diagnostic capability) | "A concurrence study of clinical image pairs was performed... to demonstrate the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector. Results of the Reader Study indicated that the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector is statistically equivalent to or better than that of the predicate device. These results support a substantial equivalence determination." |
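The summary does not state which statistical method supports the "statistically equivalent to or better than" conclusion. One common approach for paired reader ratings is the two one-sided tests (TOST) procedure; the following is a minimal sketch, assuming hypothetical 5-point image-quality ratings and an assumed equivalence margin of 0.5 rating points (both illustrative, not taken from the submission):

```python
# Illustrative only: a two one-sided tests (TOST) equivalence check on
# hypothetical paired reader ratings. The 510(k) summary does not state
# which statistical method was actually used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point image-quality ratings for the same cases read on
# the predicate detector and on the DRX 2530C, paired by case.
predicate = rng.integers(3, 6, size=60).astype(float)
new_detector = predicate + rng.normal(0.05, 0.4, size=60)

margin = 0.5                           # assumed equivalence margin (rating points)
diff = new_detector - predicate
n = diff.size
se = diff.std(ddof=1) / np.sqrt(n)

# TOST: reject both one-sided nulls at alpha = 0.05 to conclude equivalence.
t_lower = (diff.mean() + margin) / se  # H0: mean difference <= -margin
t_upper = (diff.mean() - margin) / se  # H0: mean difference >= +margin
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)

print(f"mean difference = {diff.mean():+.3f}")
print(f"TOST p-values: lower {p_lower:.4f}, upper {p_upper:.4f}")
print("equivalent within margin" if max(p_lower, p_upper) < 0.05
      else "equivalence not demonstrated")
```

Equivalence is concluded only if both one-sided tests reject, i.e., the observed mean difference is confidently inside the (-margin, +margin) band.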
Study Details
Detailed information regarding sample sizes, expert qualifications, and adjudication methods for the clinical study is very limited in this 510(k) summary. The document primarily states that a "concurrence study of clinical image pairs" and a "Reader Study" were conducted to demonstrate diagnostic equivalence.
- Sample Size Used for the Test Set and Data Provenance:
  - Sample Size: Not explicitly stated in the provided document. The text mentions "clinical image pairs."
  - Data Provenance: Not explicitly stated (e.g., country of origin). The study implicitly uses "clinical image pairs," suggesting prospective or retrospective collection of patient images. The nature of a "concurrence study" implies that images were acquired using both the new device and the predicate device (or a comparable standard) for direct comparison.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
  - Number of Experts: Not explicitly stated. The term "Reader Study" implies multiple readers.
  - Qualifications of Experts: Not explicitly stated (e.g., specific experience or subspecialty).
- Adjudication Method for the Test Set:
  - Not explicitly stated. A "concurrence study" and a "Reader Study" typically involve readers independently evaluating images, with their findings then compared to each other, to predefined criteria, or to a ground truth. Common adjudication methods (such as 2+1 or 3+1) are not detailed in this summary; a minimal sketch of a 2+1 rule appears after this list.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
  - Was it done? Yes, a "Reader Study" was performed. While not explicitly termed "MRMC," the statement that "Results of the Reader Study indicated that the diagnostic capability... is statistically equivalent to or better than that of the predicate device" strongly suggests a comparative study in which multiple readers assessed images. The purpose was to compare the diagnostic capability of the new detector against the predicate (see the second sketch after this list for the quantity such a comparison estimates).
  - Effect size of human reader improvement with vs. without AI assistance: Not applicable, as this study does not evaluate an AI diagnostic tool or AI assistance. It evaluates the diagnostic equivalence of an image acquisition device (a new X-ray detector); the "improvement" refers to the new detector's image quality being diagnostically equivalent to or better than the predicate's, not to AI assistance.
- Standalone Performance (algorithm only, without human-in-the-loop):
  - Was it done? Not applicable in the context of this 510(k). The device is an X-ray detector, an image acquisition component rather than a standalone algorithm. Its performance is intrinsically linked to human interpretation of the images it produces.
- Type of Ground Truth Used:
  - The document implies that ground truth for the "concurrence study" would have been established by expert interpretation of the images (from either the new device or the predicate) to determine whether the new device's images offered equivalent diagnostic information. Pathology and outcomes data are not mentioned. Ground truth was likely based on expert consensus or on comparison against established diagnostic images from the predicate.
- Sample Size for the Training Set:
  - Not applicable. This submission is for an image acquisition device (an X-ray detector), not a machine learning algorithm, so there is no "training set" in the AI/ML sense. The device's performance is determined by its physical and electronic characteristics; its "training" would be its design, engineering, and manufacturing process.
- How the Ground Truth for the Training Set Was Established:
  - Not applicable, as there is no training set for an AI/ML algorithm.
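As mentioned in the adjudication item above: in a "2+1" design, two primary readers score each case independently and a third reader resolves disagreements. A minimal sketch with hypothetical labels (the submission does not describe any such rule):

```python
# Illustrative only: a "2+1" adjudication rule. The 510(k) summary does not
# say whether any such rule was used in the Reader Study.
def adjudicate_2_plus_1(reader1: str, reader2: str, adjudicator: str) -> str:
    """Return the adjudicated label for one case."""
    if reader1 == reader2:
        return reader1        # primary readers agree: no adjudication needed
    return adjudicator        # disagreement: the third reader decides

# Hypothetical case labels (reader 1, reader 2, adjudicator).
cases = [
    ("diagnostic", "diagnostic", "diagnostic"),
    ("diagnostic", "non-diagnostic", "non-diagnostic"),
    ("non-diagnostic", "non-diagnostic", "diagnostic"),
]
for r1, r2, adj in cases:
    print(adjudicate_2_plus_1(r1, r2, adj))
```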
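Similarly, for the MRMC item above: an MRMC comparison estimates the reader-averaged difference in a figure of merit (commonly AUC) between two modalities. The sketch below computes that quantity on made-up data; a real MRMC analysis (e.g., Dorfman-Berbaum-Metz or Obuchowski-Rockette) would also model reader and case variance to obtain valid confidence intervals.

```python
# Illustrative only: the quantity an MRMC study estimates -- the reader-
# averaged AUC difference between two modalities -- on hypothetical data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_readers, n_cases = 4, 80
truth = rng.integers(0, 2, size=n_cases)  # hypothetical binary ground truth

auc_diffs = []
for _ in range(n_readers):
    # Hypothetical per-reader confidence scores on each modality.
    scores_predicate = truth + rng.normal(0.0, 0.90, n_cases)  # predicate
    scores_new = truth + rng.normal(0.0, 0.85, n_cases)        # DRX 2530C
    auc_diffs.append(roc_auc_score(truth, scores_new)
                     - roc_auc_score(truth, scores_predicate))

print(f"reader-averaged AUC difference: {np.mean(auc_diffs):+.3f}")
```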