Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K203159
    Device Name
    Lux 35 Detector
    Manufacturer
    Carestream Health, Inc.
    Date Cleared
    2020-12-02

    (40 days)

    Product Code
    Regulation Number
    892.1680
    Intended Use

    The device is intended to capture for display radiographic images of human anatomy, including both pediatric and adult patients. The device is intended for use in general projections wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The modified DRX Plus 3543C is a scintillator-photodetector device (Solid State X-ray Imager) utilizing an amorphous silicon flat panel image sensor. The modified detector is redesigned with the intent to reduce weight and increase durability, while utilizing a non-glass substrate material and a cesium iodide scintillator. The modified detector, like the predicate, is designed to interact with Carestream's DRX-1 System (K090318).

    The modified DRX Plus 3543C Detector, like the predicate, creates a digital image from the x-rays incident on the input surface during an x-ray exposure. The flat panel imager absorbs incident x-rays and converts the energy into visible light photons. These light photons are converted into electrical charge and stored in structures called "pixels." The digital value in each pixel of the image is directly related to the intensity of the incident x-ray flux at that particular location on the surface of the detector. Image acquisition software is used to correct the digital image for defective pixels and lines on the detector, perform gain and offset correction, and generate sub-sampled preview images.
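    The summary names these correction steps but not how they are implemented. As a rough illustration only, here is a minimal sketch of a generic flat-panel correction chain (offset subtraction, gain normalization, defective-pixel interpolation, and preview sub-sampling); the function name, the calibration-frame approach, and the 3x3 median repair are assumptions, not Carestream's design:

```python
import numpy as np

def correct_flat_panel_frame(raw, dark, flood, defect_mask, preview_factor=4):
    """Generic flat-panel correction chain (illustrative sketch only)."""
    # Offset correction: subtract a dark (no-exposure) calibration frame.
    signal = raw.astype(np.float64) - dark

    # Gain correction: normalize by a flood-field (uniform-exposure) frame
    # so every pixel responds equally to the same incident x-ray flux.
    flat = flood.astype(np.float64) - dark
    gain = np.clip(flat / flat.mean(), 1e-6, None)
    corrected = signal / gain

    # Defective-pixel correction: replace flagged pixels with the median
    # of their valid 3x3 neighborhood (defects excluded via NaN).
    masked = corrected.copy()
    masked[defect_mask.astype(bool)] = np.nan
    for y, x in zip(*np.nonzero(defect_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, corrected.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, corrected.shape[1])
        corrected[y, x] = np.nanmedian(masked[y0:y1, x0:x1])

    # Sub-sampled preview: block-average for a quick-look image.
    h, w = corrected.shape
    h2, w2 = h - h % preview_factor, w - w % preview_factor
    preview = corrected[:h2, :w2].reshape(
        h2 // preview_factor, preview_factor,
        w2 // preview_factor, preview_factor).mean(axis=(1, 3))
    return corrected, preview
```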

    AI/ML Overview

    The provided text describes a 510(k) submission for a medical device, the Lux 35 Detector, which is a digital X-ray flat panel detector. The submission aims to demonstrate substantial equivalence to a predicate device (DRX Plus 3543 Detector). The information focuses on design modifications and non-clinical testing.

    Here's an analysis of the acceptance criteria and study details based on the provided text, highlighting where information is present and where it is not:

    Device: Lux 35 Detector (Carestream Health, Inc.)

    Study Type: Non-clinical (bench) testing, specifically a Phantom Image Study, to demonstrate substantial equivalence of image quality to a predicate device.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" for image quality in a tabular format with pass/fail thresholds. Instead, it provides a qualitative comparison of image attributes. The closest interpretation of "acceptance criteria" is that the modified device's image quality needed to be "equivalent to just noticeably better than" the predicate.

    Acceptance Criterion (Inferred) | Reported Device Performance (Lux 35 Detector vs. Predicate)
    Image detail performance | Ratings for detail were "significantly greater than 0," indicating images equivalent to or better than the predicate's.
    Image sharpness performance | Ratings for sharpness were "significantly greater than 0," indicating images equivalent to or better than the predicate's.
    Image noise performance | Ratings for noise were "significantly greater than 0," indicating images equivalent to or better than the predicate's.
    Appearance of artifacts | Qualitative assessment; results not numerically quantified, but implied to be equivalent or better given the overall conclusion.
    DQE (Detective Quantum Efficiency) | 55% (RQA-5, 1 cycle/mm, 2.5 µGy) for the Lux 35 vs. 26% (RQA-5, 1 cycle/mm, 3.1 µGy) for the predicate; cited as "improved image quality." (A conventional DQE definition is sketched after the table.)
    MTF (Modulation Transfer Function) | 62% (RQA-5, 1 cycle/mm) for the Lux 35 vs. 54% (RQA-5, 1 cycle/mm) for the predicate; cited as "improved image quality."
    Overall image quality comparison | "Greater than 84% of all responses were rated 0 or higher in favor of the modified DRX Plus 3543C panel." All ratings for detail contrast, sharpness, and noise were "significantly greater than 0," indicating the modified images were "equivalent to just noticeably better than" the predicate images; overall, "the image quality of the modified device is at least as good as or better than that of the predicate device."
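    For context on the DQE and MTF rows: both are standalone, human-independent detector metrics (see item 6 below). The summary does not show its measurement setup, but a conventional definition (in the spirit of IEC 62220-1) ties the two metrics together, where \bar{d} is the mean detector signal, q the incident photon fluence, and NPS the noise power spectrum:

```latex
% DQE as the ratio of squared output SNR to squared input SNR,
% with SNR_in^2 = q for Poisson-distributed incident x-ray quanta:
\mathrm{DQE}(f)
  = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
  = \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{q\,\mathrm{NPS}(f)}
```

    Under this definition, a higher MTF (62% vs. 54% at 1 cycle/mm) together with lower noise at a lower exposure (2.5 µGy vs. 3.1 µGy) is consistent with the roughly doubled DQE reported above.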

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not explicitly stated. The text mentions "a Phantom Image Study" but does not quantify the number of images or runs.
    • Data Provenance: This was a non-clinical bench testing study using phantoms. Therefore, there is no patient data or geographical provenance. The study was likely conducted at Carestream's facilities. It is a prospective study in the sense that the testing was performed specifically for this submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts:

    • Number of Experts: Not specified. The text mentions "Greater than 84% of all responses were rated 0 or higher," implying a group of evaluators, but their number is not provided.
    • Qualifications of Experts: Not specified. It's unclear if these were radiologists, imaging scientists, or other relevant personnel.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not specified. The phrase "Greater than 84% of all responses were rated 0 or higher" suggests individual ratings were collected, but how conflicts or multiple ratings were aggregated or adjudicated is not detailed.
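    The summary reports only the quoted rating summary. Purely to make the aggregation concrete, here is a hypothetical sketch in which pooled per-response comparative ratings (0 = equivalent to the predicate, positive = modified device better) are tested against zero with a one-sample t-test; the document does not state which statistical method was actually used:

```python
import numpy as np
from scipy import stats

def summarize_comparative_ratings(ratings):
    """Hypothetical aggregation of pooled reader responses on a comparative
    scale centered at 0. A one-sample t-test against 0 is one plausible
    reading of 'significantly greater than 0'; the actual method is unknown."""
    ratings = np.asarray(ratings, dtype=float)
    frac_zero_or_higher = np.mean(ratings >= 0)  # cf. "greater than 84% ... rated 0 or higher"
    t_stat, p_two_sided = stats.ttest_1samp(ratings, popmean=0.0)
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return {
        "fraction_rated_0_or_higher": frac_zero_or_higher,
        "mean_rating": ratings.mean(),
        "one_sided_p_mean_gt_0": p_one_sided,
    }
```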

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • Answer: No. The study was a "Phantom Image Study" focused on technical image quality attributes, not human reader performance.
    • Effect Size of Human Readers: Not applicable, as no MRMC study was performed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Answer: Yes, in a sense. DQE and MTF are standalone technical performance metrics of the detector itself, independent of human interpretation. The Phantom Image Study likewise evaluates the device's output (images) on technical attributes rather than on a human diagnostic task.

    7. The Type of Ground Truth Used:

    • Type of Ground Truth: For the phantom image study, the "ground truth" for evaluating image quality attributes (detail, sharpness, noise, artifacts) is based on technical image quality metrics (DQE, MTF) and potentially expert consensus on visual assessments of phantom images against known ideal phantom characteristics. It is not based on patient outcomes, pathology, or clinical diagnoses.

    8. The Sample Size for the Training Set:

    • Sample Size for Training Set: Not applicable. This device is a hardware component (X-ray detector) and the study described is a non-clinical evaluation of its image quality, not an AI/algorithm that requires a training set of data.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth Establishment for Training Set: Not applicable, as this is not an AI/algorithm that requires a training set.

    K Number
    K163157
    Device Name
    SmartGrid
    Manufacturer
    Date Cleared
    2017-03-21

    (131 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Reference Devices: K060137
    Intended Use

    The SmartGrid feature is a software option that provides, upon request by the user, a diagnostic radiograph image with a reduction in visible x-ray scatter similar to the effect of an anti-scatter grid.

    Device Description

    The SmartGrid software is designed to improve contrast and reduce the appearance of scatter in radiographic images that have been acquired without a physical grid. SmartGrid encapsulates an algorithm for estimating and removing scatter from radiographic images. The SmartGrid feature is accessible through DirectView DR Product application software. Users will be able to select SmartGrid processing before an image is acquired, or to change whether SmartGrid processing is applied to a previously acquired image.
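    Carestream does not disclose the algorithm inside SmartGrid; the testing discussion below notes only that it depends on estimating the scatter-to-primary ratio (SPR). Purely to make that idea concrete, here is a generic convolution-based scatter-subtraction sketch (not SmartGrid's actual method): the measured image is modeled as primary plus scatter, the scatter field is approximated as a wide low-pass blur of the image scaled by the SPR, and that estimate is subtracted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtract_scatter(image, spr=0.6, kernel_sigma=40.0):
    """Generic kernel-based scatter subtraction (illustrative only; NOT
    Carestream's SmartGrid algorithm). Model: measured = primary + scatter,
    with scatter low-frequency and roughly spr * primary on average."""
    img = image.astype(np.float64)
    # Scatter varies slowly across the field: approximate its spatial
    # distribution with a wide Gaussian blur of the measured image.
    scatter_shape = gaussian_filter(img, sigma=kernel_sigma)
    # Scale the blurred field to the expected scatter fraction of the
    # measured signal: scatter ~= measured * spr / (1 + spr).
    scatter = scatter_shape * (spr / (1.0 + spr))
    primary = np.clip(img - scatter, 0.0, None)
    return primary
```

    A production implementation would, at minimum, make the SPR depend on body part, field size, and exposure technique, and restore contrast after subtraction; none of those details appear in the summary.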

    AI/ML Overview

    The provided text describes the SmartGrid software, which aims to reduce visible x-ray scatter in diagnostic radiographic images. The acceptance criteria and the study proving the device meets these criteria can be extracted from the "Discussion of Testing" section.

    Here's the breakdown of the information requested:

    1. A table of acceptance criteria and the reported device performance

    The text doesn't explicitly list acceptance criteria in a quantitative table format. Instead, it describes the findings of the study related to image quality. We can infer the acceptance criteria from these reported outcomes.

    Acceptance Criterion (Inferred from Study Results) | Reported Device Performance (SmartGrid)
    Production of diagnostic-quality images | "SmartGrid processing software produces diagnostic quality images."
    Image quality vs. non-grid reference images (at all exposure levels) | "At all exposure levels, SmartGrid processing produced images rated as good as or better than the non-grid reference images."
    Diagnostic quality vs. grid reference acquisitions at lower exposures | "SmartGrid processing software produces images with statistically equivalent diagnostic quality at lower exposures than the grid reference acquisitions." (A generic equivalence-test sketch follows the table.)
    Image quality after scatter factor estimation and scatter correction | Both SmartGrid and the predicate device "depend on the proper estimation of the scatter-to-primary ratio (SPR) to calculate scatter distribution and perform image enhancement"; SmartGrid's performance was found substantially equivalent.
    Effectiveness of noise suppression | Both SmartGrid and the predicate device "suppress noise"; SmartGrid's performance was found substantially equivalent.
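    The summary does not name the statistical test behind "statistically equivalent diagnostic quality." One conventional choice is two one-sided tests (TOST) against a predefined equivalence margin; the sketch below shows that generic procedure, not Carestream's actual analysis:

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin):
    """Hypothetical two-one-sided-tests (TOST) equivalence check on the mean
    rating difference; the actual analysis in the submission is not stated.
    Returns the TOST p-value: equivalence is claimed when it is < alpha."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df = len(x) + len(y) - 2
    t_lower = (diff + margin) / se          # H0: diff <= -margin
    t_upper = (diff - margin) / se          # H0: diff >= +margin
    p_lower = 1.0 - stats.t.cdf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)
```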

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The text states that "images of cadaveric specimens and phantoms were acquired and used in the study." It does not specify the exact number of images or cases in the test set.
    • Data Provenance:
      • Country of Origin: Not specified.
      • Retrospective or Prospective: Not explicitly stated, but the acquisition of "images of cadaveric specimens and phantoms" for the study suggests a prospective acquisition for the purpose of the study, rather than leveraging pre-existing clinical images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: The study was conducted as a "radiologist reader study," implying multiple radiologists. The exact number is not specified.
    • Qualifications of Experts: The individuals are referred to as "Radiologists." Specific experience levels or board certifications are not mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The text states that "Radiologists reviewed and rated investigational and reference images (both grid and non-grid), for diagnostic quality, using a Radlex subjective diagnostic rating scale." It does not describe any specific adjudication method (e.g., majority vote, consensus after discussion, or a senior radiologist as tie-breaker) if there were disagreements among readers.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: Yes, a "radiologist reader study" was performed, comparing SmartGrid processed images against grid and non-grid reference images. This inherently involves multiple readers (radiologists) reviewing multiple cases (images).
    • Effect Size of Human Reader Improvement: The study focused on the diagnostic quality of the images produced by SmartGrid, rather than the improvement of human readers with AI assistance.
      • It reports that SmartGrid images were rated "as good as or better than the non-grid reference images."
      • And "statistically equivalent diagnostic quality at lower exposures than the grid reference acquisitions."
        The study does not quantify how much human readers "improved" in their diagnostic performance when assisted by SmartGrid compared to not using SmartGrid. Instead, it validates the image quality produced by the software.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No, the primary study described is a "radiologist reader study," which by definition involves human readers evaluating the images processed by the algorithm. There is no mention of a standalone algorithm performance evaluation without human input in the document.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth was established by expert subjective ratings using a "Radlex subjective diagnostic rating scale." This is a form of expert consensus on image quality rather than an objective clinical truth like pathology or patient outcomes. The text explicitly states that the images were rated for "diagnostic quality."

    8. The sample size for the training set

    The provided text does not mention the sample size for the training set used to develop the SmartGrid algorithm. The "Discussion of Testing" solely focuses on the performance evaluation study.

    9. How the ground truth for the training set was established

    The provided text does not mention how the ground truth for the training set was established, as it does not discuss the training phase of the algorithm development.


    K Number
    K133442
    Device Name
    Bone Suppression Software
    Date Cleared
    2014-03-11

    (119 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Reference Devices: K060137, K092363
    Intended Use

    The software's intended use is to assist diagnosis of chest pathology by minimizing anatomical distractions such as the ribs and clavicle in chest x-ray images.

    Device Description

    The Bone Suppression Software is a software component for use on diagnostic x-ray systems utilizing digital radiography (DR) or computed radiography (CR) technology. The software option suppresses bone anatomy in order to enhance visualization of chest pathology in a companion image that is delivered in addition to the original diagnostic image.
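    The submission does not describe how the suppression is computed. As one way to picture the "companion image" idea, this hypothetical sketch assumes a trained regressor that predicts the bone component of a chest radiograph, which is then subtracted to leave soft tissue; the names and the subtraction approach are assumptions, not the vendor's method:

```python
import numpy as np

def make_companion_image(chest_image, bone_predictor):
    """Hypothetical bone-suppression flow (the 510(k) summary does not
    disclose the actual method). `bone_predictor` stands in for whatever
    model estimates the bone signal in the radiograph."""
    bone_estimate = bone_predictor(chest_image)        # predicted bone signal
    companion = np.clip(chest_image - bone_estimate, 0.0, None)
    # Per the device description, the companion image is delivered in
    # addition to (never instead of) the original diagnostic image.
    return chest_image, companion
```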

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Bone Suppression Software, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly list quantitative acceptance criteria for the Bone Suppression Software's performance. It states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as the predicate device." However, the specific metrics or thresholds for these criteria are not detailed.

    Therefore, this section focuses on the qualitative claims made and the reported outcome.

    Acceptance Criteria Category | Reported Device Performance (Qualitative)
    Safety and effectiveness | "Demonstrated that the device is as safe, as effective, and performs as well as the predicate device."
    Image acceptability | "Clinical testing was conducted to evaluate the acceptability of the companion images for assisting diagnosis." (Implied: results were acceptable.)
    Design output compliance | "Performance testing was conducted to verify the design output met the design input requirements." (Implied: requirements were met.)
    User needs / intended uses | Validation was conducted "to validate the device conformed to the defined user needs and intended uses." (Implied: it conformed.)
    Substantial equivalence | "Demonstrated that the device is as safe, as effective, and performs as well as the predicate device," supporting the claim of substantial equivalence to the predicates.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not specified in the provided text. The document mentions "Clinical testing was conducted," but does not give a number of cases.
    • Data Provenance: Not specified. It's unclear what country the data came from or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The document only mentions "Clinical testing was conducted to evaluate the acceptability of the companion images for assisting diagnosis." It implies that qualified professionals reviewed the images, but their specific roles or experience levels are not detailed.

    4. Adjudication Method for the Test Set

    • Not specified. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth or evaluating the clinical images.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • MRMC Study: The document does not explicitly state that an MRMC comparative effectiveness study was done comparing human readers with and without AI assistance (bone suppression).
    • Effect Size: Not provided. Since an MRMC study is not confirmed, no effect size is discussed. The study's focus was on the "acceptability of the companion images for assisting diagnosis" and demonstrating equivalence to a predicate.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The document describes the Bone Suppression Software as a "software component" that "suppresses bone anatomy in order to enhance visualization of chest pathology in a companion image that is delivered in addition to the original diagnostic image." This strongly implies that the software produces a processed image for human interpretation rather than a standalone diagnostic output. Although the algorithm performs its suppression independently, the "clinical testing" described evaluates the acceptability of the companion images for assisting diagnosis, i.e., output intended for human review, not standalone diagnostic performance against ground truth.

    7. The Type of Ground Truth Used

    • The document implies that evaluation rested on the "acceptability of the companion images for assisting diagnosis," as judged by clinical review. The specific type of ground truth (e.g., pathology, clinical follow-up, expert consensus on disease presence) against which diagnostic enhancement was measured is not stated; the evaluation appears to concern the acceptability and utility of the suppressed images rather than independent verification of specific pathologies against a definitive gold standard.

    8. The Sample Size for the Training Set

    • Not specified. The document does not provide any information about the training data or its sample size.

    9. How the Ground Truth for the Training Set Was Established

    • Not specified. As no information on the training set or its ground truth is provided, the establishment method is unknown.