Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K233599
    Device Name: X-Clever (ASHK100G)
    Date Cleared: 2024-03-18 (130 days)
    Regulation Number: 892.2050
    Intended Use

    Software used in a device that saves, enlarges, reduces, views as well as analyzes, transfers and prints medical images. (excluding fluoroscopic, angiographic, and mammographic applications.)

    Device Description

    LG Acquisition Workstation Software ASHK100G is a diagnostic software for final postprocessed X-ray images of body parts of actual patients acquired through the integration of digital X-ray detectors (DXD+ASHK100G; refer to below list for the compatible LG DXD series) and X-ray generators. By integrating the [MWL] and the [PACS] server, this software can be used to check the information and images of the patients' body parts in real time in an HIS (Hospital Information System) based environment.

    AI/ML Overview

    The provided text is a 510(k) Summary for the X-Clever (ASHK100G) device. Within this summary, information is given about performance testing relating to a new Wide Dynamic View (WDV) algorithm. However, the document does not provide a table of acceptance criteria, specific reported device performance metrics against those criteria, or the detailed study design (sample sizes, expert qualifications, adjudication methods, MRMC study details, ground truth specifics for test and training sets) that would typically be found in a detailed study report.

    Here's a breakdown of the available information and what is not present:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document states: "The performance test results indicate that the WDV algorithm enhances the performance of the proposed medical device by normalizing tissue through the creation of a regional map based on image location and distribution characteristics. This leads to more natural and consistent images compared to those that rely solely on residual and image brightness signal composition."

    However, this is a qualitative statement, not a table with specific acceptance criteria (e.g., quantitative metrics like AUC, sensitivity, specificity, or specific perceptual scores with thresholds) and corresponding numerical results.
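    The quoted claim is qualitative, but the mechanism it names (a regional map used to normalize tissue brightness) can be illustrated. Below is a minimal sketch assuming a simple tile-grid regional map with per-region mean/std normalization; the actual WDV algorithm is not disclosed, and `regional_normalize` and its parameters are hypothetical:

    ```python
    import numpy as np

    def regional_normalize(img, tiles=4):
        """Illustrative regional normalization: split the image into a grid of
        tiles, estimate each tile's mean and std, and rescale each region toward
        the global statistics so tissue brightness is consistent across the image."""
        h, w = img.shape
        out = np.empty_like(img, dtype=float)
        target_mean, target_std = img.mean(), img.std() + 1e-8
        th, tw = h // tiles, w // tiles
        for i in range(tiles):
            for j in range(tiles):
                sl = (slice(i * th, (i + 1) * th if i < tiles - 1 else h),
                      slice(j * tw, (j + 1) * tw if j < tiles - 1 else w))
                region = img[sl].astype(float)
                m, s = region.mean(), region.std() + 1e-8
                out[sl] = (region - m) / s * target_std + target_mean
        return out
    ```

    On an image whose left and right halves differ sharply in brightness, the per-region rescaling pulls both halves toward the same global statistics, which is the sense in which such a map yields "more consistent" output.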

    2. Sample size used for the test set and the data provenance:

    • Sample size: Not specified.
    • Data provenance: Not specified. The document only mentions "clinical opinions on the images processed with WDV," implying human readers were involved in assessing image quality, but it does not detail the origin of these images (e.g., country, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    The document mentions "clinical opinions on the images processed with WDV." This suggests that experts evaluated the images, but:

    • Number of experts: Not specified.
    • Qualifications of experts: Not specified (e.g., "radiologist with 10 years of experience").

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    Not specified. The process of how "clinical opinions" were combined or used to establish a ground truth or a performance measure is not detailed.
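    For context, the "2+1" shorthand denotes two primary readers whose agreement stands as-is, with a third reader adjudicating only the disagreements. A minimal sketch of that logic (purely illustrative; `adjudicate_2plus1` is a hypothetical name, and the document describes no such procedure):

    ```python
    def adjudicate_2plus1(reader1, reader2, adjudicator):
        """2+1 adjudication: where the two primary readers agree, their call
        stands; where they disagree, the third reader's call decides."""
        return [a if a == b else adj
                for a, b, adj in zip(reader1, reader2, adjudicator)]
    ```

    A "3+1" scheme works analogously with three primary readers and majority voting before the adjudicator is consulted.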

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • MRMC study: The document does not explicitly state that a formal Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done comparing human readers with and without AI assistance. It mentions "clinical opinions on the images processed with WDV," which implies human review of images processed by the device's new algorithm, but it doesn't describe a comparison between human performance with and without the device's assistance.
    • Effect size of human reader improvement: Not reported.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    The device (X-Clever ASHK100G) is described as "Software used in a device that saves, enlarges, reduces, views as well as analyzes, transfers and prints medical images." The "WDV algorithm" is an image processing algorithm. Its performance is assessed in terms of generating "more natural and consistent images." This evaluation implicitly refers to the standalone performance of the algorithm in processing images, but it's not a diagnostic algorithm outputting clinical findings directly. The "clinical opinions" are likely an assessment of the quality of the images produced by the algorithm, rather than its diagnostic accuracy for specific conditions.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The "ground truth" seems to be effectively expert opinion/consensus on image quality. The document explicitly states: "the performance test for the WDV algorithm includes clinical opinions on the images processed with WDV." It's not based on pathology, outcomes data, or a definitive diagnostic reference standard for a specific disease. Instead, it's about the perceived improvement in image characteristics.

    8. The sample size for the training set:

    Not specified. The document discusses a "new WDV algorithm" and "optimization process," indicating machine learning or image processing algorithm development, but it does not provide details on training data.

    9. How the ground truth for the training set was established:

    Not specified. As the training set size itself is not mentioned, neither is the method for establishing its ground truth.


    Summary of Available Information (from the provided text):

    • Device: X-Clever (ASHK100G), a medical image management and processing system.
    • Key Change: Addition of a Wide Dynamic View (WDV) algorithm for image processing.
    • Performance Claim for WDV: "enhances the performance of the proposed medical device by normalizing tissue through the creation of a regional map based on image location and distribution characteristics. This leads to more natural and consistent images compared to those that rely solely on residual and image brightness signal composition."
    • Performance Evaluation: A "performance test for the WDV algorithm includes clinical opinions on the images processed with WDV."
    • Conclusion: "the addition of the WDV feature has not had any negative impact on the performance and safety of the proposed device."
    • Clinical Studies: "No clinical studies were considered necessary and performed. ... Therefore, a separate clinical study is not applicable in this case."

    In essence, the document confirms that a performance test was conducted for the WDV algorithm using clinical opinions on image quality, and the results were positive (improved image naturalness and consistency). However, it lacks the detailed quantitative metrics, sample sizes, and expert qualification specifics typically requested for a comprehensive study description.

    K Number: K212137
    Device Name: X-Clever
    Date Cleared: 2021-12-10 (155 days)
    Regulation Number: 892.2050
    Intended Use

    Software used in a device that saves, enlarges, reduces, views as well as analyzes, transfers and prints medical images. (excluding fluoroscopic, angiographic, and mammographic applications.)

    Device Description

    LG Acquisition Workstation Software ASHK100G is a diagnostic software for final post-processed X-ray images of body parts of actual patients acquired through the integration of digital X-ray detectors (DXDs) and X-ray generators. By integrating the [MWL] and the [PACS] server, this software can be used to check the information and images of the patients' body parts in real time in a HIS (Hospital Information System) based environment.

    A new image post-processing algorithm, MLP3 has been added to the proposed device. MLP3 provides image quality that is substantially equivalent to or slightly better than the predicate device even at lower x-ray dose levels. In addition, the functions have been added or modified to improve the user interface.
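    MLP3's internals are not disclosed in the summary. As a generic stand-in for post-processing noise reduction, a simple mean filter illustrates the kind of operation involved in trading fine noise for smoothness at lower dose; `box_denoise` is illustrative only and is not the actual MLP3 algorithm:

    ```python
    import numpy as np

    def box_denoise(img, k=3):
        """Generic noise-reduction stand-in: a k x k mean filter built from
        shifted views of an edge-padded copy of the image. Reduces pixel-level
        noise variance at the cost of some spatial resolution."""
        pad = k // 2
        padded = np.pad(img.astype(float), pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)
    ```

    On pure noise, the filtered output has markedly lower standard deviation than the input, while a constant image passes through unchanged; real algorithms like MLP3 would additionally have to preserve diagnostic detail, which this sketch does not attempt.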

    AI/ML Overview

    The provided text describes the LG Acquisition Workstation Software ASHK100G (Model ASHK100G, Trade Name X-Clever), a medical image management and processing system. The main focus of the provided information regarding acceptance criteria and study proving device performance is on the MLP3 new image post-processing algorithm.

    Here's an analysis of the provided text to extract the requested information:

    1. A table of acceptance criteria and the reported device performance

    The document doesn't explicitly state quantitative acceptance criteria in a pass/fail table format, but it does describe the performance goal for the MLP3 algorithm: to provide image quality "substantially equivalent to or slightly better than the predicate device even at lower x-ray dose levels" (for noise reduction) and "comparable to that of an image taken with an anti-scatter grid" (for scatter correction).

    Acceptance Criteria (Implied) and Reported Device Performance:

    • MLP3 Noise Reduction Algorithm
      Criterion: image quality of low-dose images to be enhanced and comparable to standard-dose images, with reduced radiation dose.
      Reported performance (clinical study, adult chest PA x-rays):
      • "The study showed that the image quality of adult chest PA x-rays taken at lower radiation doses and processed with our new image processing algorithm improved the overall diagnostic image quality, which became substantially equivalent to that of x-ray images acquired at standard dose levels."
      • "On average, radiation dose of images acquired at lower doses were approximately 50% less than that of images acquired at standard doses."
    • MLP3 Scatter Correction Algorithm
      Criterion: image quality of non-grid images to be enhanced and comparable to images taken with an anti-scatter grid, with reduced radiation dose/grid use.
      Reported performance (clinical study, adult chest AP x-rays):
      • "The study showed that with our new algorithm, the image quality is improved for adult chest AP x-rays taken without an anti-scatter grid, and the improved image quality becomes comparable to images taken with an anti-scatter grid."
      • "On average, the radiation dose of non-grid images were 37% lower than that of grid images."
    • Non-Clinical (In-house) Image Quality Evaluation (Adults)
      Criterion: image quality of MLP3-processed images to be equivalent to or slightly better than MLP2 (predicate).
      Reported performance: "The results show that our new image processing algorithm provides image quality equivalent to or slightly better than the predicate device."
    • Non-Clinical (In-house) Image Quality Evaluation (Pediatric and Infant)
      Criterion: image quality of MLP3-processed phantom images to be equivalent to or slightly better than MLP2 (predicate).
      Reported performance: "The results show that our new image processing algorithm provides image quality equivalent to or slightly better than the predicate device."

    2. Sample sizes used for the test set and the data provenance

    • MLP3 Noise Reduction Algorithm Study:
      • Sample Size: Clinical images acquired from forty patients.
      • Data Provenance: "From one small clinical study performed at one clinical site," indicating prospective data collection from a single site, likely in the Republic of Korea (based on the applicant's address). The study uses actual patient images.
    • MLP3 Scatter Correction Algorithm Study:
      • Sample Size: Clinical images acquired from forty patients.
      • Data Provenance: "From one small clinical study performed at one clinical site," indicating prospective data collection from a single site, likely in the Republic of Korea. The study uses actual patient images.
    • In-house Image Quality Evaluation for Adults (Non-Clinical):
      • Sample Size: Clinical images of 30 common radiographic positions. (Note: this is positions, not necessarily distinct patients, though likely involves multiple patients).
      • Data Provenance: In-house bench testing results.
    • In-house Image Quality Evaluation for Pediatric and Infant (Non-Clinical):
      • Sample Size: Phantom testing results for a range of exams. Specific number of phantom images is not given, but refers to "chest, skull, abdomen and pelvis."
      • Data Provenance: In-house bench testing results.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • MLP3 Noise Reduction Algorithm Study:
      • Number of Experts: Three
      • Qualifications: "board certified radiologists."
    • MLP3 Scatter Correction Algorithm Study:
      • Number of Experts: Two
      • Qualifications: "board certified radiologists."

    4. Adjudication method for the test set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It only says that the image qualities "were evaluated" by the radiologists. This implies that their opinions (or scores) were aggregated in some way, but the specific method for resolving inconsistencies or reaching a consensus is not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    The studies described were primarily focused on the algorithm's impact on image quality, which then allows for reduced dose or elimination of an anti-scatter grid while maintaining diagnostic image quality for human readers. These are not multi-reader multi-case (MRMC) comparative effectiveness studies designed to show how human readers directly improve their performance (e.g., accuracy, confidence) when assisted by AI vs. unassisted.

    Instead, the studies show:

    • MLP3 (AI-powered post-processing) results in images from lower doses being "substantially equivalent" in diagnostic image quality to standard dose images.
    • MLP3 results in images without a grid being "comparable" in image quality to images with a grid.

    The effect size is described in terms of dose reduction rather than human reader performance improvement:

    • Noise Reduction: "On average, radiation dose of images acquired at lower doses were approximately 50% less than that of images acquired at standard doses."
    • Scatter Correction: "On average, the radiation dose of non-grid images were 37% lower than that of grid images."
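    Both figures are relative reductions. As a quick sanity check of the arithmetic (the dose values below are hypothetical placeholders, not values from the study):

    ```python
    def relative_dose_reduction(standard_dose, reduced_dose):
        """Percent reduction of the lower-dose acquisition relative to the
        standard-dose acquisition."""
        return 100.0 * (standard_dose - reduced_dose) / standard_dose

    # Hypothetical doses chosen to reproduce the quoted percentages.
    print(relative_dose_reduction(100.0, 50.0))  # 50.0
    print(relative_dose_reduction(100.0, 63.0))  # 37.0
    ```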

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    The primary evaluation of the MLP3 algorithm explicitly involved human readers (board-certified radiologists) evaluating the image quality. The "in-house image quality evaluation" (bench testing) also implies visual assessment of image quality, likely by human experts, to compare MLP3 and MLP2 output. Therefore, a purely "algorithm-only" performance metric independent of human assessment is not detailed as a primary outcome. The algorithm's function is to process images for human interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the clinical studies appears to be expert evaluation/consensus of diagnostic image quality by board-certified radiologists. They assessed whether the processed images (low dose/no grid) were "substantially equivalent" or "comparable" in diagnostic quality to the higher dose/grid-obtained images. There is no mention of pathology or outcomes data as ground truth for this aspect of the device's performance.

    8. The sample size for the training set

    The document does not provide any information regarding the sample size for the training set used to develop or train the MLP3 algorithm. It only details the test sets.

    9. How the ground truth for the training set was established

    Since the document does not mention the training set size, it also does not provide information on how the ground truth for the training set was established.
