CR 10-X DIGITIZER
Agfa's Computed Radiography (CR) System with CR 10-X Digitizer is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The system may be used wherever conventional screen-film systems are used.
Agfa's Computed Radiography (CR) System with CR 10-X Digitizer is not indicated for use in mammography.
Agfa's Computed Radiography (CR) System with CR 10-X Digitizer is a solid-state x-ray imaging device. Principles of operation and technological characteristics of the new and predicate devices are largely the same as those of other computed radiography systems (a conceptual sketch of this workflow follows the list below):
- Phosphor-coated imaging plates and cassettes for image capture.
- Laser digitizer for generating the electronic image.
- NX workstation for image previewing, processing, and routing.
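The workflow above describes hardware, but its capture, digitize, process, and route stages can be summarized conceptually. The following Python sketch is purely illustrative: the stage names, types, and placeholder data are hypothetical and do not correspond to any Agfa software interface.

```python
# Conceptual sketch of the CR workflow described above (hypothetical names,
# not an Agfa API): an exposed phosphor plate is read out by the digitizer,
# then processed and routed by the workstation (e.g. to PACS).

from dataclasses import dataclass


@dataclass
class ImagePlate:
    cassette_size: str          # e.g. "35x43 cm"
    exposed: bool = False


def digitize(plate: ImagePlate) -> list[list[int]]:
    """Stand-in for the laser readout: returns a raw pixel matrix."""
    if not plate.exposed:
        raise ValueError("Plate has not been exposed")
    return [[0] * 4, [0] * 4]   # placeholder raw data


def process_and_route(raw: list[list[int]], destination: str) -> None:
    """Stand-in for workstation preview, processing, and routing."""
    print(f"Routing {len(raw)}x{len(raw[0])} image to {destination}")


plate = ImagePlate(cassette_size="35x43 cm", exposed=True)
process_and_route(digitize(plate), destination="PACS")
```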
Based on the provided text, the Agfa Computed Radiography (CR) System with CR 10-X Digitizer is a medical imaging device. The 510(k) summary primarily focuses on demonstrating substantial equivalence to a predicate device (Agfa's CR 30-X digitizer, K062223) rather than proving performance against specific acceptance criteria in a quantitative study with detailed metrics.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria in terms of numerical performance thresholds. Instead, the approach is based on demonstrating that the new device (CR 10-X Digitizer) is "substantially equivalent" to the predicate device (CR 30-X Digitizer). The "performance" is generally described as "similar" or "same as predicate" in terms of technological characteristics and image quality.
| Acceptance Criteria Category | Reported Device Performance (CR 10-X Digitizer) |
|---|---|
| Intended Use | Same as predicate (general projection radiographic applications, diagnostic-quality images, not for mammography). |
| Technological Characteristics | Largely the same as predicate: phosphor-coated imaging plates, laser digitizer, NX workstation, scanning technology, light collection, scanning resolution (100 μm), dynamic range acquisition (16-bit), image processing (MUSICA, MUSICA2), dynamic range display (12-bit); the 16-bit-to-12-bit mapping is illustrated in the sketch after this table. Minor differences (slightly different active matrix, lower throughput of 34 plates/hr vs. the predicate's 60 plates/hr, fewer cassette sizes) "do not alter the intended therapeutic/diagnostic effect." |
| Image Quality | "Meets specifications and operates as planned," with internal staff and experts comparing the device to its predicate. No specific quantitative metrics or thresholds are provided. |
| Safety and Standards Compliance | Conforms to IEC 60601-1, IEC 60601-1-2, ACR/NEMA PS3.1-3.18 (DICOM), ISO 14971, and ISO 13485. |
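The table's 16-bit acquisition versus 12-bit display figures imply that the acquired dynamic range is reduced before display. The snippet below is only a generic illustration of that kind of reduction, assuming a plain linear rescale for clarity; it is not the MUSICA/MUSICA2 processing that the predicate comparison actually refers to.

```python
import numpy as np

# Generic illustration of mapping a 16-bit acquisition to a 12-bit display
# range via a linear rescale (assumed for illustration only; the actual
# MUSICA processing is proprietary and far more involved).

rng = np.random.default_rng(0)
raw_16bit = rng.integers(0, 2**16, size=(512, 512), dtype=np.uint16)

# Scale the full 16-bit range [0, 65535] down to the 12-bit range [0, 4095].
display_12bit = (raw_16bit.astype(np.uint32) * 4095 // 65535).astype(np.uint16)

assert display_12bit.max() <= 4095
```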
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "Tests included image quality tests with internal and experts comparing the device to its predicate." However, it does not specify the sample size for any test set (e.g., number of images, number of patients).
The data provenance (country of origin, retrospective/prospective) is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document mentions "experts comparing the device to its predicate" for image quality tests. However, it does not specify the number of experts or their qualifications (e.g., specialty, years of experience).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for establishing ground truth or comparing device performance. It simply states "internal and experts comparing the device to its predicate."
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not mentioned in the provided text. The device is a CR digitizer, not an AI-powered diagnostic tool, so the concept of human readers improving "with AI vs without AI assistance" is not applicable here. The study described is a comparison of the new hardware device to a predicate hardware device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This refers to the performance of the device itself. The document states that "The device has completed verification and validation testing to confirm it meets specifications and operates as planned," which includes image quality tests. While this is not "algorithm only" performance in the AI sense, it represents the standalone performance of the imaging system. Beyond technological characteristics, however, no specific standalone performance metrics are detailed; the device was simply compared to its predicate by internal staff and experts.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For image quality tests, the "ground truth" or reference for comparison appears to be the predicate device's image quality, as assessed by "internal and experts." This suggests a form of expert opinion or visual comparison, rather than an objective "ground truth" like pathology or outcomes data.
8. The sample size for the training set
The document does not mention a training set as this is a hardware device (CR digitizer) and not an AI/machine learning algorithm that typically requires a training set.
9. How the ground truth for the training set was established
Since there is no mention of a training set, this information is not applicable.