The software performs digital enhancement of radiographic images generated by an x-ray device. It can be used to process adult and pediatric x-ray images, excluding mammography applications.
Eclipse software runs inside the Image View product application software (it is not considered stand-alone software). Smart Noise Cancellation is an optional feature (module) that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Digital Radiography (DR)). Eclipse II with Smart Noise Cancellation supports the Carestream DRX family of detectors, which includes all CR and DR detectors.
The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions.
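The submission does not describe the network architecture or the training pipeline. Purely as an illustration of the general pattern named here (a CNN trained on clinical images paired with simulated-noise versions of themselves), a minimal PyTorch sketch might look like the following; the architecture, noise model, and hyperparameters are assumptions, not Carestream's implementation.

```python
# Minimal training sketch for a denoising CNN on clean/noisy image pairs.
# Architecture, noise model, and hyperparameters are illustrative assumptions,
# not the vendor's actual implementation.
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Small residual CNN: estimates the noise component and subtracts it."""
    def __init__(self, channels=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning: output = input - estimated noise

def add_simulated_noise(clean, dose_fraction=0.5):
    """Approximate a reduced-dose acquisition by adding signal-dependent noise.

    Quantum (Poisson) noise grows relative to the signal as dose drops; here it
    is approximated with Gaussian noise whose extra variance scales with the
    signal and with 1/dose_fraction - 1.
    """
    extra_var = clean * (1.0 / dose_fraction - 1.0)
    return clean + torch.randn_like(clean) * extra_var.clamp(min=0).sqrt()

# Stand-in "clinical" data: random tensors shaped like 1-channel radiographs.
loader = [torch.rand(4, 1, 128, 128) for _ in range(10)]

model = DenoisingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for clean_batch in loader:
    noisy_batch = add_simulated_noise(clean_batch)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_batch), clean_batch)  # clean image is the training target
    loss.backward()
    optimizer.step()
```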
Eclipse II with Smart Noise Cancellation applies enhanced noise reduction prior to executing the Eclipse image processing software. Images processed through Eclipse II with SNC can be acquired at doses lowered by up to 50% while improving image quality: a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images, when processed with Eclipse II and SNC, results in image quality as good as or better than that of nominal-dose images.
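As context for these dose figures, if image noise is quantum-limited then SNR scales roughly with the square root of dose, so the denoising step must make up an SNR loss of roughly 20-30%. A back-of-the-envelope check (assuming purely quantum-limited noise):

```python
# Back-of-the-envelope SNR loss from dose reduction, assuming quantum-limited
# (Poisson) noise where SNR is proportional to sqrt(dose).
import math

def relative_snr(dose_fraction):
    return math.sqrt(dose_fraction)

for panel, dose_fraction in [("CsI", 0.50), ("GOS", 0.60)]:
    snr = relative_snr(dose_fraction)
    print(f"{panel}: {1 - dose_fraction:.0%} dose reduction -> "
          f"SNR drops to {snr:.0%} of nominal ({1 - snr:.0%} loss)")
# CsI: 50% dose reduction -> SNR drops to 71% of nominal (29% loss)
# GOS: 40% dose reduction -> SNR drops to 77% of nominal (23% loss)
```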
The provided document describes the modification of the Eclipse II software to include a Smart Noise Cancellation (SNC) module. The primary goal of this modification is to enable lower radiation doses while maintaining or improving image quality. The study discussed is a "concurrence study" involving board-certified radiologists to evaluate diagnostic image quality.
Here's the breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" in a table format with specific numerical thresholds for image quality metrics. Instead, it describes the objective of the study which effectively serves as the performance goal for the device.
| Acceptance Criterion (Implicit Performance Goal) | Reported Device Performance |
|---|---|
| Diagnostic quality images at reduced dose | Statistical test results and graphical summaries demonstrate that the software delivers diagnostic quality images at a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images. |
| Image quality at reduced dose | Image quality at reduced radiation doses is equivalent to or exceeds the quality of nominal-dose images. |
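The document does not name the statistical test behind the "statistical test results and graphical summaries." Purely as a hypothetical illustration, per-image reader scores on a -2 to +2 visual difference scale are sometimes evaluated with a one-sided non-inferiority test against a margin, along these lines (the simulated data, the margin, and the choice of test are all assumptions):

```python
# Hypothetical illustration only: the submission does not name the statistical
# test. One plausible approach to per-image reader scores on the -2..+2 visual
# difference scale is a one-sided non-inferiority test against a margin.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated mean reader scores per image (reduced dose vs. nominal dose);
# 0 = no perceived difference, positive = reduced-dose image preferred.
scores = rng.normal(loc=0.1, scale=0.5, size=60)

margin = -0.5  # assumed non-inferiority margin on the 5-point scale
result = stats.ttest_1samp(scores, popmean=margin, alternative="greater")

print(f"mean score = {scores.mean():.2f}")
print(f"one-sided p-value vs. margin {margin}: {result.pvalue:.4f}")
# A small p-value would support that reduced-dose image quality is not worse
# than nominal-dose quality by more than the chosen margin.
```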
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size for Test Set: Not explicitly stated. The document mentions "clinical images" and "exams, detector types and exposure levels" were used, but a specific number of images or cases for the test set is not provided.
- Data Provenance: Not explicitly stated. The document refers to "clinical images," but there is no information about the country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not explicitly stated. The study was performed by "board certified radiologists." The number of radiologists is not specified.
- Qualifications of Experts: "Board certified radiologists." No information is given regarding their years of experience.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not explicitly stated. The document mentions a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for evaluating overall diagnostic capability. However, it does not describe how multiple expert opinions were combined or adjudicated if there were disagreements (e.g., 2+1, 3+1).
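Because no adjudication rule is described, any reconstruction is speculative; the sketch below only illustrates what a simple majority-based (2+1-style) rule for categorical reader ratings could look like (category names are illustrative, not quoted from the document).

```python
# Purely illustrative: a simple 2+1-style adjudication of categorical reader
# ratings (e.g., a 4-point diagnostic-quality scale). The submission does not
# describe any adjudication procedure.
from collections import Counter

def adjudicate(primary_ratings, tiebreaker_rating=None):
    """Return the majority rating; fall back to an adjudicator if readers split."""
    counts = Counter(primary_ratings)
    rating, n = counts.most_common(1)[0]
    if n > len(primary_ratings) / 2:
        return rating
    return tiebreaker_rating  # a third reader resolves the disagreement

print(adjudicate(["diagnostic", "diagnostic", "limited"]))                     # -> diagnostic
print(adjudicate(["diagnostic", "limited"], tiebreaker_rating="diagnostic"))   # -> diagnostic
```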
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- MRMC Study: The study appears to be a multi-reader study, as it was "performed by board certified radiologists." However, it is not a comparative effectiveness study comparing human readers with AI assistance versus without AI assistance. The study's aim was to determine whether the software itself (Eclipse II with SNC) could produce diagnostic quality images at reduced dose, as assessed by human readers. It evaluates the output of the software, not the improvement of human readers using the software as an assistance tool.
- Effect Size: Not applicable, as it's not an AI-assisted human reading study.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Standalone Performance: No, a standalone (algorithm only) performance evaluation was not done. The evaluation involved "board certified radiologists" assessing the diagnostic quality of the images processed by the software. This is a human-in-the-loop assessment of the processed images, not a standalone performance of the algorithm making diagnoses.
7. The Type of Ground Truth Used:
- Type of Ground Truth: The ground truth for image quality and diagnostic capability was established by expert consensus (or at least expert assessment), specifically "board certified radiologists," using a 5-point visual difference scale and a 4-point RadLex scale. This is a subjective assessment by experts, rather than an objective ground truth like pathology or outcomes data.
8. The Sample Size for the Training Set:
- Sample Size for Training Set: Not explicitly stated. The document mentions that the Convolutional Network (CNN) was "trained using clinical images with added simulated noise." However, no specific number of images or cases used for training is provided.
9. How the Ground Truth for the Training Set Was Established:
- Ground Truth for Training Set: The document states the CNN was "trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This implies that the ground truth for training likely revolved around distinguishing actual image data from added simulated noise. This is an intrinsic ground truth generated by the method of simulating noise on known clean clinical images, rather than a clinical ground truth established by expert review for diagnostic purposes.
§ 892.1680 Stationary x-ray system.
(a) Identification. A stationary x-ray system is a permanently installed diagnostic system intended to generate and control x-rays for examination of various anatomical regions. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II (special controls). A radiographic contrast tray or radiology diagnostic kit intended for use with a stationary x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.