510(k) Data Aggregation (28 days)
The CR-Pro is a free-standing, laser-driven image digitizer intended to produce digital copies of phosphor-plate-recorded images in 16-bit gray scale. The digital copies are transmitted to an internal Intel Pentium 4 based personal computer (PC) where they may be displayed, processed, or compressed for archiving or transmission via computer networks to other medical facility sites.
This device is not to be used for primary imaging diagnosis in mammography.
The eRadlink CR-Pro is a digitizing scanner that converts radiographic film transparency images to digital format. This is accomplished by using a laser-beam light source and a proprietary sealed path of fiber optics. The new technology provides superior image quality, requires no internal optics cleaning or optical alignment, and is inherently accurate and reliable.
Phosphor plates, from a minimum of 10 inches to a maximum of 14 inches in width, are driven past the digitizing laser beam by a clocked stepping motor. Scanned data are electronically converted from analog to 16-bit digital gray scale and transmitted to the internal computer for processing.
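The analog-to-digital step described above can be sketched as a simple quantization from a normalized analog intensity to a 16-bit gray level. This is an illustrative sketch, not the vendor's firmware; the function name and the normalized 0.0-1.0 input range are assumptions for the example.

```python
def quantize_16bit(intensity: float) -> int:
    """Map a normalized analog intensity (0.0-1.0) to a 16-bit gray level (0-65535)."""
    clamped = min(max(intensity, 0.0), 1.0)  # guard against sensor over/undershoot
    return round(clamped * 65535)

print(quantize_16bit(0.0))  # 0 (darkest)
print(quantize_16bit(1.0))  # 65535 (brightest)
```

A 16-bit depth gives 65,536 distinct gray levels, versus 256 for the 8-bit depth mentioned for one predicate configuration.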
Here's an analysis of the provided text regarding the eRadlink CR-Pro's acceptance criteria and studies:
Summary of Device Performance and Acceptance Criteria
The provided 510(k) summary for the eRadlink CR-Pro does not explicitly define "acceptance criteria" in terms of specific performance thresholds for image quality metrics (e.g., SNR, spatial resolution, contrast resolution) that the device must meet. Instead, the "Effectiveness" section states:
"Program testing and calibration using Beryllium gray-scale wedge, body part phantoms and typical x-ray plate samples has demonstrated the CR-Pro’s conformance to its defined specifications."
This implies that the acceptance criteria are tied to the device's defined specifications and successful demonstration of conformance through specific tests. While the specifications for various hardware components (like dynamic range, pixel/mm, gray scale) are listed in comparison to a predicate device, specific acceptance thresholds for image quality when tested with phantoms are not detailed.
Table of Acceptance Criteria and Reported Device Performance
As specific numerical acceptance criteria for image quality from phantom studies are not explicitly stated as performance thresholds, I will present the relevant specifications from the comparison tables which implicitly represent the device's targeted performance based on the predicate device.
| Acceptance Criterion (Implicit from Predicate & Device Spec) | Reported Device Performance (eRadlink CR-Pro) |
|---|---|
| Dynamic Range | 0.0 - 3.5 OD (similar to predicate's 0.5 - 3.8 OD) |
| Gray Scale Depth | 16 bits transmitted (superior to predicate's 8 or 12 bits) |
| Spot Size | 100 µm (matches predicate) |
| Pixels/mm | 8.0 (comparable to predicate's 10.09) |
| Digitizing Rate | 100 lines/sec (comparable to predicate's 115 lines/sec) |
| Image Quality Conformance | Conformance to defined specifications demonstrated through program testing and calibration using a Beryllium gray-scale wedge, body part phantoms, and typical x-ray plate samples |
| Compliance with Safety Standards | IEC 60601-1, -2; 21 CFR 1040.10; DICOM 3:2004; EN 55022:1998; EN 55024:1998; EN 61000-3-2:2000; EN 61000-3-3:1995; SMPTE RP 215-2001; SMPTE 349M-2001 |
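The tabulated specifications can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch follows; the 17-inch plate length is a hypothetical value for illustration (the summary only gives widths of 10-14 inches), and it assumes the line pitch matches the 8 pixels/mm sampling density.

```python
import math

OD_MAX = 3.5              # dynamic range, optical density
PIXELS_PER_MM = 8.0       # sampling density from the spec table
LINES_PER_SEC = 100       # digitizing rate from the spec table
PLATE_LENGTH_MM = 17 * 25.4  # hypothetical 17-inch plate length

# An optical density of 3.5 spans a 10**3.5 ≈ 3162:1 intensity ratio,
# which needs ceil(log2(3162)) = 12 bits -- comfortably within 16 bits.
intensity_ratio = 10 ** OD_MAX
bits_needed = math.ceil(math.log2(intensity_ratio))

# At 8 lines/mm and 100 lines/sec, scanning the full plate length takes:
scan_lines = PLATE_LENGTH_MM * PIXELS_PER_MM
scan_seconds = scan_lines / LINES_PER_SEC

print(f"{intensity_ratio:.0f}:1 intensity ratio -> {bits_needed} bits")
print(f"{scan_lines:.0f} lines -> {scan_seconds:.1f} s per plate")
```

Under these assumptions, the 16-bit depth leaves headroom above the roughly 12 bits the dynamic range requires, and a full plate would digitize in about half a minute.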
Study Information
- Sample size used for the test set and the data provenance (e.g., country of origin of the data; retrospective or prospective)
  - The document does not specify a numerical sample size for a clinical test set.
  - Effectiveness was demonstrated through "program testing and calibration using Beryllium gray-scale wedge, body part phantoms and typical x-ray plate samples." These are laboratory-based tests using inanimate objects, not a clinical trial with patient data.
  - Data provenance: Not applicable, as this is not clinical data. The tests would have been performed by the manufacturer, eRadlink Inc., presumably in Torrance, California, USA, where the company is based. The study is prospective in the sense that the device was tested to demonstrate current functionality.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
  - Not applicable. As the testing involved phantoms and calibration, there was no need for expert clinicians to establish "ground truth" in the diagnostic sense. The ground truth would be based on the known physical properties and measurements of the test objects and the expected output of the device per its design specifications.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set
  - Not applicable, as there was no clinical test set requiring adjudication.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
  - No MRMC comparative effectiveness study was done. This device is a digitizing scanner, not an AI-powered diagnostic tool. The submission focuses on demonstrating substantial equivalence to a predicate device for its core function of converting radiographic film to digital format.
- Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
  - The "effectiveness" testing described is inherently standalone in the sense that the device performs its digitizing function and its output is assessed against its specifications using physical test objects. No "human-in-the-loop" performance is evaluated in this context, nor is "algorithm-only" performance reported in the way it might be for an AI diagnostic device.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
  - The ground truth for the "effectiveness" testing consisted of the known physical properties of the Beryllium gray-scale wedge and body part phantoms, and the defined specifications of the CR-Pro device. It is a technical ground truth related to image capture and conversion accuracy, not a clinical ground truth.
- The sample size for the training set
  - Not applicable. This device is a hardware digitizer with associated software for processing and transmission. It does not appear to involve machine learning or AI models that would require a "training set" in the conventional sense. The software functions (e.g., image rotation, DICOM send/receive) are standard functionalities, not learned from data.
- How the ground truth for the training set was established
  - Not applicable, as there was no training set.