Search Results
Found 2 results
510(k) Data Aggregation
(219 days)
The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
Eclipse software runs inside the ImageView product application software (also known as console software). The Eclipse II image processing software with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software; it is not standalone and is to be used only with the console software. The device supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module. The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (the modified device) incorporates enhanced noise reduction prior to executing the Eclipse II image processing software.
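The submission does not disclose the network's architecture or training configuration. As a purely illustrative sketch, a residual denoising CNN trained on (noisy, clean) image pairs, assuming a DnCNN-style design (all layer counts, sizes, and hyperparameters below are hypothetical, not Carestream's), might look like:

```python
# Illustrative residual denoising CNN; architecture and training details are
# assumptions -- the 510(k) summary only states that a CNN was trained on
# clinical images with added simulated noise.
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: predict the noise, then subtract it.
        return x - self.body(x)

# One supervised training step on (noisy, clean) pairs: the clean image is
# the regression target for its simulated-noise counterpart.
model = DenoisingCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
noisy = torch.rand(4, 1, 128, 128)  # placeholder batch of noisy inputs
clean = torch.rand(4, 1, 128, 128)  # placeholder clean targets
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy), clean)
loss.backward()
opt.step()
```

Residual learning (predicting the noise rather than the clean image) is a common design choice for denoisers, since the noise component is typically easier to model than the full image content.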
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Diagnostic Quality Preservation/Improvement: The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical Evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." |
| No Substantial Residual Image Artifacts: The noise reduction should not introduce significant new artifacts. | Analysis of Difference Images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest." |
| Preservation/Improvement of Detectability: The detectability of lesions should not be negatively impacted and should ideally be improved. | Ideal Observer Evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." |
| No New Questions of Safety & Effectiveness: The modifications should not raise new safety or effectiveness concerns. | Risk Assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall Conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness." |
| Same Intended Use: The device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both the predicate and the modified device in the comparison chart.) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
- Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated. The text states that a "clinical evaluation was performed by board certified radiologists" but does not specify how many were involved.
- Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.
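Nor is the statistical treatment of the ratings described. One plausible analysis of such -2 to +2 difference scores is a one-sided test that the median rating exceeds zero; the sketch below assumes a Wilcoxon signed-rank test on synthetic ratings and is not the manufacturer's documented method:

```python
# Hypothetical analysis of per-image visual-difference ratings on the
# -2..+2 scale (positive = investigational image preferred). The actual
# statistical tests used in the submission are not described.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder ratings pooled over readers, exams, detectors, and exposures.
ratings = rng.choice([-2, -1, 0, 1, 2], size=200,
                     p=[0.02, 0.08, 0.40, 0.35, 0.15])

# One-sided Wilcoxon signed-rank test: is the median rating > 0?
# (Zero ratings are dropped by the default zero_method.)
res = stats.wilcoxon(ratings, alternative="greater")
share_nonneg = np.mean(ratings >= 0)
print(f"W = {res.statistic:.1f}, p = {res.pvalue:.4g}, "
      f"{share_nonneg:.0%} of ratings were 0 or higher")
```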
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
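For context, an "ideal observer" evaluation typically computes a detectability index for a known signal in measured noise, with no human reader in the loop. A minimal sketch assuming a Hotelling-style observer, where d'^2 = s^T C^-1 s (the signal, ROI size, and noise statistics below are placeholders):

```python
# Minimal sketch of a Hotelling/ideal-observer detectability index for a
# known signal in Gaussian noise: d'^2 = s^T C^{-1} s. The submission does
# not specify which model observer or task was actually used.
import numpy as np

rng = np.random.default_rng(1)
n = 16 * 16                        # flattened 16x16 region of interest
s = np.zeros(n)
s[n // 2] = 1.0                    # placeholder point-like lesion signal

# Estimate the noise covariance from signal-absent ROIs (placeholder data);
# in practice these would come from predicate- vs. investigational-processed
# images so the two d' values can be compared.
noise_rois = rng.normal(scale=1.0, size=(500, n))
C = np.cov(noise_rois, rowvar=False) + 1e-6 * np.eye(n)  # regularized

d_prime = np.sqrt(s @ np.linalg.solve(C, s))
print(f"d' = {d_prime:.2f}")
```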
7. The Type of Ground Truth Used
- Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The text mentions that "clinical images with added simulated noise" were used to train the Convolutional Neural Network (CNN).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a convolutional neural network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
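Although the noise model is not disclosed, a common way to create such pairs is to simulate a reduced-exposure acquisition from a well-exposed clinical image, e.g., Poisson quantum noise plus Gaussian electronic noise. A minimal sketch under those assumptions (all parameter values are hypothetical):

```python
# Hypothetical generation of a (noisy, clean) training pair by simulating a
# reduced-exposure acquisition from a well-exposed image. Poisson quantum
# noise plus Gaussian read noise is an assumption, not the disclosed model.
import numpy as np

def simulate_low_dose(clean, dose_fraction=0.25, gain=100.0, read_sigma=2.0,
                      rng=None):
    """Return a simulated low-SNR version of a normalized [0, 1] image."""
    if rng is None:
        rng = np.random.default_rng()
    quanta = clean * gain * dose_fraction             # expected photon counts
    noisy = rng.poisson(quanta).astype(float)         # quantum (Poisson) noise
    noisy += rng.normal(0.0, read_sigma, clean.shape) # electronic noise
    return noisy / (gain * dose_fraction)             # rescale to image units

clean = np.random.default_rng(2).random((128, 128))
noisy = simulate_low_dose(clean, dose_fraction=0.25)
# (noisy, clean) now form one supervised training pair for the denoiser.
```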
(40 days)
The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projections wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.
The modified DRX Plus 3543C is a scintillator-photodetector device (Solid State X-ray Imager) utilizing an amorphous silicon flat panel image sensor. The modified detector is redesigned with the intent to reduce weight and increase durability, while utilizing a non-glass substrate material and a cesium iodide scintillator. The modified detector, like the predicate, is designed to interact with Carestream's DRX-1 System (K090318).
The modified DRX Plus 3543C Detector, like the predicate, creates a digital image from the x-rays incident on the input surface during an x-ray exposure. The flat panel imager absorbs incident x-rays and converts the energy into visible light photons. These light photons are converted into electrical charge and stored in structures called "pixels." The digital value in each pixel of the image is directly related to the intensity of the incident x-ray flux at that particular location on the surface of the detector. Image acquisition software is used to correct the digital image for defective pixels and lines on the detector, perform gain and offset correction, and generate sub-sampled preview images.
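The correction chain described here (offset subtraction, gain or flat-field normalization, and defective-pixel replacement) is standard flat-panel practice. A minimal sketch with placeholder calibration maps (the actual Carestream correction algorithms are not described):

```python
# Illustrative flat-panel corrections: dark-frame offset subtraction, gain
# normalization, and median fill of defective pixels. Calibration maps and
# the defect-fill strategy are assumptions.
import numpy as np

def correct_frame(raw, dark, flat, defect_mask):
    """raw/dark/flat: 2-D calibration frames; defect_mask: True where bad."""
    gain = np.clip(flat - dark, 1e-6, None)        # per-pixel gain map
    img = (raw - dark) / gain * np.median(gain)    # offset + gain correction
    pad = np.pad(img, 1, mode="edge")
    for r, c in zip(*np.nonzero(defect_mask)):
        img[r, c] = np.median(pad[r:r + 3, c:c + 3])  # 3x3 neighborhood fill
    return img

rng = np.random.default_rng(3)
raw = rng.normal(1000.0, 10.0, (64, 64))
dark = np.full((64, 64), 100.0)
flat = np.full((64, 64), 1100.0)
mask = np.zeros((64, 64), dtype=bool)
mask[10, 10] = True                                # one defective pixel
corrected = correct_frame(raw, dark, flat, mask)
preview = corrected[::4, ::4]    # sub-sampled preview (factor is assumed)
```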
The provided text describes a 510(k) submission for a medical device, the Lux 35 Detector, which is a digital X-ray flat panel detector. The submission aims to demonstrate substantial equivalence to a predicate device (DRX Plus 3543 Detector). The information focuses on design modifications and non-clinical testing.
Here's an analysis of the acceptance criteria and study details based on the provided text, highlighting where information is present and where it is not:
Device: Lux 35 Detector (Carestream Health, Inc.)
Study Type: Non-clinical (bench) testing, specifically a Phantom Image Study, to demonstrate substantial equivalence of image quality to a predicate device.
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" for image quality in a tabular format with pass/fail thresholds. Instead, it provides a qualitative comparison of image attributes. The closest interpretation of "acceptance criteria" is that the modified device's image quality needed to be "equivalent to just noticeably better than" the predicate.
| Acceptance Criterion (Inferred) | Reported Device Performance (Lux 35 Detector vs. Predicate) |
|---|---|
| Image Detail Performance | Ratings for detail were "significantly greater than 0," indicating images were equivalent to or better than the predicate. |
| Image Sharpness Performance | Ratings for sharpness were "significantly greater than 0," indicating images were equivalent to or better than the predicate. |
| Image Noise Performance | Ratings for noise were "significantly greater than 0," indicating images were equivalent to or better than the predicate. |
| Appearance of Artifacts | Qualitative assessment; results not numerically quantified but implied to be equivalent or better given the overall conclusion. |
| DQE (Detective Quantum Efficiency) | 55% (RQA-5, 1 cycle/mm, 2.5 µGy) for the Lux 35 vs. 26% (RQA-5, 1 cycle/mm, 3.1 µGy) for the predicate. This represents "improved image quality." |
| MTF (Modulation Transfer Function) | 62% (RQA-5, 1 cycle/mm) for the Lux 35 vs. 54% (RQA-5, 1 cycle/mm) for the predicate. This represents "improved image quality." |
| Overall Image Quality Comparison | "Greater than 84% of all responses were rated 0 or higher in favor of the modified DRX Plus 3543C panel." "All ratings for the attributes (detail contrast, sharpness and noise) were significantly greater than 0 indicating that the modified DRX Plus 3543C images were equivalent to just noticeably better than the predicate images." "The image quality of the modified device is at least as good as or better than that of the predicate device." |
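For context on the DQE and MTF rows above: MTF is commonly estimated from an edge image (edge spread function → line spread function → normalized Fourier magnitude), and DQE relates MTF to the noise power spectrum, roughly DQE(f) ≈ S² · MTF(f)² / (q · NPS(f)) in IEC 62220-1-style analyses. The sketch below is purely illustrative; the synthetic edge, pixel pitch, and windowing are assumptions, not the submission's measurement protocol:

```python
# Illustrative MTF estimate from a 1-D edge profile: ESF -> LSF (derivative)
# -> |FFT| normalized at zero frequency. Real measurements use a slanted
# edge and careful binning; this is only a sketch.
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    lsf = np.gradient(esf)                # line spread function
    lsf = lsf * np.hanning(lsf.size)      # taper to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                         # normalize to 1 at f = 0
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf

# Synthetic sigmoid edge sampled at a placeholder 0.139 mm pixel pitch.
x = np.linspace(-5, 5, 256)
esf = 1.0 / (1.0 + np.exp(-x / 0.3))
freqs, mtf = mtf_from_esf(esf, pixel_pitch_mm=0.139)
print(f"MTF at ~1 cycle/mm: {np.interp(1.0, freqs, mtf):.2f}")
```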
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: Not explicitly stated. The text mentions "a Phantom Image Study" but does not quantify the number of images or runs.
- Data Provenance: This was a non-clinical bench testing study using phantoms. Therefore, there is no patient data or geographical provenance. The study was likely conducted at Carestream's facilities. It is a prospective study in the sense that the testing was performed specifically for this submission.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts:
- Number of Experts: Not specified. The text mentions "Greater than 84% of all responses were rated 0 or higher," implying a group of evaluators, but their number is not provided.
- Qualifications of Experts: Not specified. It's unclear if these were radiologists, imaging scientists, or other relevant personnel.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not specified. The phrase "Greater than 84% of all responses were rated 0 or higher" suggests individual ratings were collected, but how conflicts or multiple ratings were aggregated or adjudicated is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- Answer: No. The study was a "Phantom Image Study" focused on technical image quality attributes, not human reader performance.
- Effect Size of Human Readers: Not applicable, as no MRMC study was performed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Answer: Yes, in a sense. DQE and MTF are standalone technical performance metrics of the detector itself, measured independently of human interpretation. The "Phantom Image Study" also evaluates the output of the device (images) based on technical attributes, rather than a human diagnostic task.
7. The Type of Ground Truth Used:
- Type of Ground Truth: For the phantom image study, the "ground truth" for evaluating image quality attributes (detail, sharpness, noise, artifacts) is based on technical image quality metrics (DQE, MTF) and potentially expert consensus on visual assessments of phantom images against known ideal phantom characteristics. It is not based on patient outcomes, pathology, or clinical diagnoses.
8. The Sample Size for the Training Set:
- Sample Size for Training Set: Not applicable. This device is a hardware component (X-ray detector) and the study described is a non-clinical evaluation of its image quality, not an AI/algorithm that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established:
- Ground Truth Establishment for Training Set: Not applicable, as this is not an AI/algorithm that requires a training set.