CR 85-X
Intended to provide diagnostic quality images to aid in physician diagnosis for general radiography and gastro-intestinal imaging applications.
The predicate and newly modified devices are computed radiography imaging systems. Instead of traditional screens and photographic film for producing the diagnostic image, these systems use an "imaging plate": a plate coated with photo-stimulable storage phosphors that are sensitive to X-rays and capable of retaining a latent image. After exposure, the imaging plate is inserted into a digitizer that scans it with a laser, releasing the latent image as light that is converted into a digital image file. The image can then be previewed on a computer workstation, adjusted if necessary, and then stored locally, sent to an archive, printed, or sent to a softcopy-capable display such as a PACS system.
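The workflow above is essentially a pipeline: scan the exposed plate, convert the latent image into a digital file, adjust it at a workstation, and route it to storage, an archive, a printer, or a PACS. The sketch below models that flow in Python purely for illustration; the names (DigitalImage, digitize_plate, adjust_for_preview, route) are hypothetical and do not represent Agfa software or any real interface.

```python
# Illustrative sketch of the described CR workflow (hypothetical names, not Agfa's software).
from dataclasses import dataclass
from typing import List


@dataclass
class DigitalImage:
    pixels: List[List[int]]   # grayscale values read out from the imaging plate
    window: int = 4096        # display window width (assuming 12-bit readout)
    level: int = 2048         # display window center


def digitize_plate(raw_readout: List[List[int]]) -> DigitalImage:
    """Model the digitizer step: the laser scan releases the latent image as
    light, which is converted into a digital image file."""
    return DigitalImage(pixels=raw_readout)


def adjust_for_preview(image: DigitalImage, window: int, level: int) -> DigitalImage:
    """Workstation step: preview and adjust the image before it is stored,
    archived, printed, or sent to a softcopy display."""
    image.window, image.level = window, level
    return image


def route(image: DigitalImage, destinations: List[str]) -> None:
    """Send the finished image to one or more destinations
    (local storage, archive, printer, PACS)."""
    for dest in destinations:
        print(f"Sending {len(image.pixels)}x{len(image.pixels[0])} image to {dest}")


if __name__ == "__main__":
    readout = [[0, 1024, 2048], [4095, 2048, 1024]]  # toy 2x3 plate readout
    img = adjust_for_preview(digitize_plate(readout), window=2000, level=1000)
    route(img, ["local storage", "archive", "PACS"])
```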
The CR85-X and the ADC Compact Plus are similar: the CR85-X uses an improved light collector to maximize light-collection efficiency, but the basic principles of operation are unchanged.
The provided text describes a Special 510(k) for a device modification (Agfa's CR85-X Digitizer) and primarily focuses on demonstrating substantial equivalence to a predicate device (Agfa's ADC Compact Plus). As such, it does not detail a study with specific acceptance criteria and performance metrics in the way one might expect for a de novo device submission.
Instead, the submission asserts that the modified device (CR 85-X) has the same indications for use and technological characteristics as the predicate device. For the "few characteristics that may not be precise enough to ensure equivalence," the submission states that "performance data was collected, and this data demonstrates substantial equivalence." However, in keeping with the format of a Special 510(k), these specific performance data were not included in the submission. The declarations provide certification that the data demonstrate equivalence.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert involvement, and study types are not explicitly present in the provided document.
Here's an attempt to answer the questions based on the available information, noting when information is missing:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Primary goal: substantial equivalence to predicate device (ADC Compact Plus / CR 75.0) | "Performance data was collected, and this data demonstrates substantial equivalence." (Specific metrics are not provided in this document.) |
| Proper performance to specifications | Tested through various in-house reliability and imaging performance demonstration tests (details not provided). |
| Compliance with EN 60601-1-1 (medical electrical equipment - general requirements for safety) | Meets requirements. |
| Compliance with EN 60601-1-2 (medical electrical equipment - electromagnetic compatibility) | Meets requirements. |
| Diagnostic quality images to aid in physician diagnosis | Stated in Indications for Use; demonstrated to be equivalent to the predicate. |
| Diagnostic quality images for general radiography, orthopedic, and gastro-intestinal imaging applications | Stated in Indications for Use; demonstrated to be equivalent to the predicate. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size (Test Set): Not specified in the provided document. The submission states, "performance data was collected," but does not detail the size or nature of the test set.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified. The document does not describe the methodology for establishing ground truth for any performance testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance vs. without it
- No. The device is a digitizer for computed radiography (CR) systems, providing digital images. It is not an AI-assisted diagnostic tool for which an MRMC study comparing human readers with and without AI assistance would typically be conducted. The focus is on the imaging system's equivalence in producing diagnostic quality images.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The device itself is a standalone hardware digitizer. Its "performance" refers to its ability to scan exposed X-ray cassettes and convert latent images into digital files of diagnostic quality, functionally equivalent to its predicate. The document implies performance testing of the device's imaging capabilities was done (e.g., "imaging performance demonstration tests"), but no specific details on such a standalone study are provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not specified. Since the focus is on maintaining diagnostic image quality compared to a predicate, the "ground truth" for performance would likely revolve around objective image quality metrics and potentially expert assessment of usability and diagnostic utility, but this is an inference, not a stated fact.
8. The sample size for the training set
- Not applicable/Not specified. This device is a hardware digitizer, not an AI/ML algorithm that requires a training set in the conventional sense. Its "training" would be its design and engineering to meet specifications.
9. How the ground truth for the training set was established
- Not applicable/Not specified, as it's not an AI/ML algorithm requiring a training set with established ground truth.