Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K213307
    Date Cleared: 2022-01-14 (102 days)
    Product Code:
    Regulation Number: 892.1680
    Device Name: Eclipse II with Smart Noise Cancellation

    Intended Use

    The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.

    Device Description

    Eclipse software runs inside the ImageView product application software (not considered stand-alone software). Smart Noise Cancellation is an optional feature (module) that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Digital Radiography (DR)). Eclipse II with Smart Noise Cancellation supports the Carestream DRX family of detectors, which includes all CR and DR detectors.

    The Smart Noise Cancellation module consists of a Convolutional Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions.
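    The summary does not say how the simulated noise was generated. Below is a minimal sketch of how such training inputs might be constructed, assuming quantum (Poisson) noise scaled to a target dose fraction; the photon-count model and all names here are illustrative assumptions, not details from the submission.

```python
import numpy as np

def simulate_reduced_dose(clean_img, dose_fraction, full_dose_photons=10000.0, rng=None):
    """Add simulated quantum noise to a clean radiograph so it resembles an
    acquisition at `dose_fraction` of nominal exposure. X-ray quantum noise
    is Poisson: at 50% dose the mean photon count halves, so the
    signal-to-noise ratio drops by a factor of sqrt(2) (about 1.41x).
    """
    rng = rng or np.random.default_rng()
    # Map normalized pixels [0, 1] to expected photon counts at the reduced
    # dose, draw Poisson counts, then normalize back to image scale.
    counts = clean_img * full_dose_photons * dose_fraction
    return rng.poisson(counts).astype(np.float32) / (full_dose_photons * dose_fraction)

# Training pair for a supervised denoising CNN: (noisy input, clean target).
clean = np.random.rand(256, 256).astype(np.float32)  # stand-in for a clinical image
noisy = simulate_reduced_dose(clean, dose_fraction=0.5)
```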

    Eclipse II with Smart Noise Cancellation incorporates enhanced noise reduction prior to executing the Eclipse image processing software. Processing through Eclipse II with SNC allows the acquisition dose to be lowered by up to 50% while improving image quality: a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images yields image quality as good as or better than nominal-dose images.

    AI/ML Overview

    The provided document describes the modification of the Eclipse II software to include a Smart Noise Cancellation (SNC) module. The primary goal of this modification is to enable lower radiation doses while maintaining or improving image quality. The study discussed is a "concurrence study" involving board-certified radiologists to evaluate diagnostic image quality.

    Here's the breakdown of the acceptance criteria and study details:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" in a table format with specific numerical thresholds for image quality metrics. Instead, it describes the objective of the study, which effectively serves as the performance goal for the device. (A sketch of one plausible statistical analysis of the reader scores follows the table below.)

    Acceptance Criterion (Implicit Performance Goal): Diagnostic-quality images at reduced dose.
    Reported Device Performance: Statistical test results and graphical summaries demonstrate that the software delivers diagnostic quality images at a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images.

    Acceptance Criterion (Implicit Performance Goal): Image quality at reduced dose.
    Reported Device Performance: Image quality with reduced radiation doses is equivalent to or exceeds the quality of nominal-dose images.
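    The summary reports "statistical test results" on the radiologists' scores without naming the test. As a minimal sketch, assuming paired per-image ratings on the 5-point (-2 to +2) visual difference scale described later, a one-sided Wilcoxon signed-rank test against zero is one plausible analysis; the scores below are illustrative, not from the submission.

```python
# Hypothetical analysis of reader ratings on the -2..+2 visual difference
# scale (reduced-dose SNC image rated against the nominal-dose image).
# A distribution shifted at or above zero would indicate image quality
# preserved or improved at the reduced dose.
import numpy as np
from scipy.stats import wilcoxon

scores = np.array([0, 1, 0, 2, 1, 0, -1, 1, 0, 1])  # illustrative ratings

# One-sided test: are ratings shifted above zero? (Zero ratings are
# discarded by the default zero_method, as is standard for this test.)
stat, p = wilcoxon(scores, alternative="greater")
print(f"median={np.median(scores)}, W={stat}, p={p:.3f}")
```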

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size for Test Set: Not explicitly stated. The document mentions "clinical images" and "exams, detector types and exposure levels" were used, but a specific number of images or cases for the test set is not provided.
    • Data Provenance: Not explicitly stated. The document refers to "clinical images," but there is no information about the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • Number of Experts: Not explicitly stated. The study was performed by "board certified radiologists." The number of radiologists is not specified.
    • Qualifications of Experts: "Board certified radiologists." No information is given regarding their years of experience.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not explicitly stated. The document mentions a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for evaluating overall diagnostic capability. However, it does not describe how multiple expert opinions were combined or adjudicated if there were disagreements (e.g., 2+1, 3+1); a toy sketch of a 2+1 scheme follows below.
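    For context only, since the document describes no adjudication: in a 2+1 scheme, two primary readers rate each case and a third reader adjudicates only when they disagree. A minimal sketch, entirely hypothetical for this device:

```python
def adjudicate_2plus1(reader_a: int, reader_b: int, reader_c: int) -> int:
    """2+1 adjudication: if the two primary readers agree, their rating
    stands; otherwise the third (adjudicating) reader decides."""
    return reader_a if reader_a == reader_b else reader_c

# Readers A and B disagree on a case, so reader C's rating is used.
final = adjudicate_2plus1(reader_a=1, reader_b=0, reader_c=1)  # -> 1
```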

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

    • MRMC Study: The study appears to be a multi-reader study, as it was "performed by board certified radiologists." However, it is not a comparative effectiveness study of human readers with versus without AI assistance. The study's aim was to determine whether the software itself (Eclipse II with SNC) could produce diagnostic-quality images at reduced dose, as assessed by human readers; it evaluates the output of the software, not the improvement of human readers using the software as an assistance tool.
    • Effect Size: Not applicable, as it's not an AI-assisted human reading study.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Standalone Performance: No, a standalone (algorithm only) performance evaluation was not done. The evaluation involved "board certified radiologists" assessing the diagnostic quality of the images processed by the software. This is a human-in-the-loop assessment of the processed images, not a standalone performance of the algorithm making diagnoses.

    7. The Type of Ground Truth Used:

    • Type of Ground Truth: The ground truth for image quality and diagnostic capability was established by expert consensus (or at least expert assessment), specifically "board certified radiologists," using a 5-point visual difference scale and a 4-point RadLex scale. This is a subjective assessment by experts, rather than an objective ground truth like pathology or outcomes data.

    8. The Sample Size for the Training Set:

    • Sample Size for Training Set: Not explicitly stated. The document mentions that the Convolutional Network (CNN) was "trained using clinical images with added simulated noise." However, no specific number of images or cases used for training is provided.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: The document states the CNN was "trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This implies the original (clean) clinical images served as training targets, with the simulated-noise versions as inputs. The ground truth is therefore intrinsic, generated by simulating noise on known clean clinical images, rather than a clinical ground truth established by expert review for diagnostic purposes. (A training-loop sketch under this assumption follows.)
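    Under that supervised-denoising reading, training would pair each simulated-noise image with its clean original as the regression target. A minimal PyTorch-style sketch; the architecture, loss, and hyperparameters are illustrative assumptions, as the submission discloses none of them.

```python
import torch
import torch.nn as nn

# Tiny illustrative denoiser; the actual SNC architecture is not disclosed.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(noisy, clean):
    """One supervised step: the clean (noise-free) image is the ground truth."""
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder batch with shape (batch, channels, height, width).
noisy = torch.rand(4, 1, 128, 128)
clean = torch.rand(4, 1, 128, 128)
print(train_step(noisy, clean))
```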

    K Number: K202441
    Date Cleared: 2021-04-02 (219 days)
    Product Code:
    Regulation Number: 892.1680
    Device Name: Eclipse II with Smart Noise Cancellation

    Intended Use

    The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.

    Device Description

    Eclipse software runs inside the ImageView product application software (also known as console software). The Eclipse II image processing software with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software (it is not stand-alone) and supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module. The Smart Noise Cancellation module consists of a Convolutional Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (modified device) incorporates enhanced noise reduction prior to executing the Eclipse II image processing software.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    The device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate, Eclipse II (K180809), with the modifications centered on an enhanced noise reduction feature. The acceptance criteria, and the study showing the device meets them, are inferred from the demonstrated equivalence to the predicate device and from the evaluation of the new Smart Noise Cancellation module.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.

    Acceptance Criterion (Implied): Diagnostic quality preservation/improvement. The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II).
    Reported Device Performance (Clinical Evaluation): "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels."

    Acceptance Criterion (Implied): No substantial residual image artifacts. The noise reduction should not introduce significant new artifacts.
    Reported Device Performance (Analysis of Difference Images): "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest."

    Acceptance Criterion (Implied): Preservation/improvement of detectability. The detectability of lesions should not be negatively impacted, and ideally should be improved.
    Reported Device Performance (Ideal Observer Evaluation): "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." (A detectability-index sketch follows this table.)

    Acceptance Criterion (Implied): No new questions of safety and effectiveness. The modifications should not raise new safety or effectiveness concerns.
    Reported Device Performance (Risk Assessment): "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness."

    Acceptance Criterion (Implied): Same intended use. The device must maintain the same intended use as the predicate.
    Reported Device Performance (Indications for Use): "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in the comparison chart.)
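    The summary does not describe how the ideal observer evaluation was computed. For a known signal in Gaussian noise, the classical prewhitening ideal observer has detectability d' with d'^2 = s^T K^{-1} s, where s is the expected signal and K the noise covariance; lower noise variance after denoising raises d'. A minimal sketch under that textbook model (the signal, covariances, and the assumption that SNC halves noise variance are all illustrative):

```python
import numpy as np

def detectability_index(signal, noise_cov):
    """Prewhitening ideal observer for a known signal in Gaussian noise:
    d'^2 = s^T K^{-1} s. Higher d' means the lesion is easier to detect."""
    s = signal.ravel()
    return float(np.sqrt(s @ np.linalg.solve(noise_cov, s)))

# Illustrative 1-D example: the same Gaussian "lesion" under two noise levels.
n = 16
s = np.exp(-0.5 * ((np.arange(n) - n / 2) / 2.0) ** 2)
cov_nominal = np.eye(n) * 1.0    # nominal-dose noise variance
cov_denoised = np.eye(n) * 0.5   # assumed lower variance after SNC
print(detectability_index(s, cov_nominal))   # baseline d'
print(detectability_index(s, cov_denoised))  # higher d' at lower variance
```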

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
    • Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated. The text mentions a "clinical evaluation was performed by board certified radiologists." It does not specify the number involved.
    • Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
    • Effect Size of Human Reader Improvement with vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates the CNN-based noise reduction), suggesting that human readers benefit from the enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Partially. The "Ideal Observer Evaluation" appears to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." The "Analysis of the Difference Images" likewise checked for artifacts without human interpretation as the primary outcome (a sketch of such a check follows below). However, the overall "diagnostic quality" assessment was clinical, involving human readers.
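    The difference-image analysis concluded there was "no substantial residual edge information within regions of interest." One way such a check could be implemented is to subtract the two processed renderings and measure gradient energy inside a region of interest; this is a minimal sketch, and the ROI and inputs are illustrative assumptions.

```python
import numpy as np

def residual_edge_energy(img_a, img_b, roi):
    """Mean gradient magnitude of the difference image inside an ROI.
    Near-zero energy suggests the denoiser removed noise rather than
    anatomy (anatomical edges removed by the denoiser would show up
    as structure in the difference image)."""
    diff = (img_a - img_b)[roi]
    gy, gx = np.gradient(diff)
    return float(np.mean(np.hypot(gx, gy)))

# Stand-ins for predicate-processed vs. SNC-processed renderings.
a = np.random.rand(256, 256)
b = a + 0.01 * np.random.randn(256, 256)
roi = (slice(64, 192), slice(64, 192))
print(residual_edge_energy(a, b, roi))
```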

    7. The Type of Ground Truth Used

    • Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The text mentions "clinical images with added simulated noise" were used to train the Convolutional Network (CNN).

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a Convolutional Network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
