The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
The Eclipse II image processing software, like the original Eclipse image processing software, enhances projection radiography acquisitions captured from digital radiography imaging receptors (computed radiography (CR) and direct radiography (DR)).
The original Eclipse image processing software used a 4-band frequency decomposition method to enhance the output image. By comparison, the Eclipse II (subject) image processing software uses a frequency decomposition method with 4 or more bands. The additional bands allow greater flexibility in frequency adjustments.
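The summary does not describe the decomposition algorithm itself. Purely as an illustration of the general technique, a multi-band split can be sketched as a cascade of Gaussian low-pass filters whose differences form the detail bands; every function name and parameter below is hypothetical, not taken from the Eclipse II submission:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, n_bands=4, sigma=2.0):
    """Split an image into n_bands detail bands plus a low-frequency base.

    Illustrative sketch only; the actual Eclipse II decomposition is not
    described in the 510(k) summary.
    """
    bands = []
    current = image.astype(np.float64)
    for _ in range(n_bands):
        low = gaussian_filter(current, sigma)
        bands.append(current - low)  # band-pass detail at this scale
        current = low
        sigma *= 2.0                 # widen the filter for the next band
    bands.append(current)            # residual low-frequency base
    return bands

def enhance(image, gains):
    """Recombine bands with per-band gains; gains[-1] scales the base band."""
    bands = decompose(image, n_bands=len(gains) - 1)
    return sum(g * b for g, b in zip(gains, bands))
```

With all gains set to 1.0 the bands telescope back to the original image; raising the gains on the higher-frequency bands sharpens edge detail, which is the kind of per-frequency flexibility additional bands provide.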
Here's a breakdown of the acceptance criteria and the study presented as evidence that the device meets them, based on the provided text:
Device: Eclipse II (image processing software)
Purpose: Digital enhancement of radiographic images generated by an x-ray device for adult and pediatric x-ray images (excluding mammography).
While the provided document mentions that a "clinical Reader Study" was performed and that its results "demonstrate that the Eclipse II Software provides diagnostic quality images," it does not provide the specific details required to fully address all parts of your request. The 510(k) summary is a high-level overview and often refers to detailed study reports that are not included in this public-facing document.
Therefore, many of the requested fields will be marked as "Not provided in the document" or "Inferred/Assumed based on typical FDA submission practices" where possible.
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (What was measured) | Reported Device Performance (Result) |
|---|---|
| **Primary Endpoints (Inferred)** | |
| Diagnostic Quality of Enhanced Images | "Results of the Reader Study demonstrate that the Eclipse II Software provides diagnostic quality images." (Specific metrics such as AUC, sensitivity, specificity, or reader confidence improvements are not provided.) |
| Equivalence/Non-inferiority to Predicate Device (Inferred) | The submission aims to demonstrate substantial equivalence to the predicate device (Kodak Eclipse Image Processing Software). The study results are intended to support this; specific quantitative measures for equivalence are not provided. |
| **Secondary Endpoints (Inferred)** | |
| Intended Workflow Compliance | "These studies demonstrated the intended workflow..." (No specific quantitative metric provided.) |
| Overall Function | "...overall function..." (No specific quantitative metric provided.) |
| Verification and Validation of Requirements for Intended Use | "...verification and validation of requirements for intended use..." (No specific quantitative metric provided.) |
| Reliability of the Software | "...reliability of the software." (No specific quantitative metric provided.) |
| Conformance to Specifications | "Non-clinical test results have demonstrated that the device conforms to its specifications." (No specific quantitative metric provided.) |
Study Details
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: Not provided in the document.
- Data Provenance: Not provided in the document (e.g., country of origin, specific hospitals). The document states it processes "adult and pediatric x-ray images," implying a diverse patient population.
- Retrospective or Prospective: Not specified, but reader studies typically use retrospective image sets.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not provided in the document.
- Qualifications of Experts: Not provided in the document, but it can be assumed they were radiologists or clinicians experienced in interpreting radiographic images.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not provided in the document. For reader studies, consensus or majority vote among multiple readers is common for establishing ground truth or for assessing agreement.
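As background on the adjudication designs named above, a simple majority-vote rule (the first stage of a 2+1 design, before a tie goes to a third reader) can be sketched as follows; this is a generic illustration, not a description of the Eclipse II study:

```python
from collections import Counter

def majority_label(reads):
    """Return the majority label across readers, or None on a tie.

    Generic illustration of majority-vote adjudication; the Eclipse II
    510(k) summary does not state which method, if any, was used.
    """
    counts = Counter(reads).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: a 2+1 design would send this case to a third reader
    return counts[0][0]
```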
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- The document explicitly states "A clinical Reader Study was performed." This suggests, but does not confirm, an MRMC design; reader studies commonly involve multiple readers and cases, but the number of readers and cases is not stated.
- Effect size of improvement: Not provided in the document. The document only states the study "demonstrate[s] that the Eclipse II Software provides diagnostic quality images," but offers no comparative metrics against human readers without AI assistance or a specific effect size (e.g., AUC uplift, confidence score change).
If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:
- The primary focus is on a "Reader Study," which implies human-in-the-loop (i.e., humans reading images processed by the software).
- The phrase "Non-clinical test results have demonstrated that the device conforms to its specifications" could encompass some standalone algorithm performance testing, but specific metrics for standalone performance (e.g., image quality metrics like PSNR, SSIM, or specific contrast/detail enhancement measures) are not provided as acceptance criteria for this public summary.
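For reference, PSNR, one of the standalone image quality metrics mentioned above, is derived from the mean squared error between a processed image and a reference image. The generic definition below is illustrative only, not a measurement from the submission:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.

    Generic metric definition; not a result reported for Eclipse II.
    """
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # images are identical
    return 10.0 * np.log10((max_val ** 2) / mse)
```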
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For a reader study of image enhancement, the ground truth for image quality or diagnostic utility would typically be established by expert consensus or a "truth panel" of experienced radiologists, often informed by clinical findings, follow-up, or other imaging modalities. The method is not explicitly stated in the document.
The sample size for the training set:
- Not provided in the document. The document describes the software's function (enhancing images using frequency decomposition) rather than an AI model that requires a distinct training phase. If Eclipse II uses deep learning, training set details would be relevant, but the description points more towards traditional image processing algorithms ("4 or more band frequency decomposition method").
How the ground truth for the training set was established:
- Not applicable/Not provided. Based on the description of 4+ band frequency decomposition, this is likely a rule-based or algorithmic image processing software rather than a machine learning model that relies on a labeled training set for learning. If it did involve machine learning, the mechanism for establishing ground truth for training would be crucial, but it's not discussed here.
§ 892.1680 Stationary x-ray system.
(a)
Identification. A stationary x-ray system is a permanently installed diagnostic system intended to generate and control x-rays for examination of various anatomical regions. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b)
Classification. Class II (special controls). A radiographic contrast tray or radiology diagnostic kit intended for use with a stationary x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.