
510(k) Data Aggregation

    K Number: K051127
    Manufacturer:
    Date Cleared: 2005-06-03 (31 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    KIKA Imaging Lab is intended to be used for diagnostic image management, archiving, annotation, acceptance, transfer, display, storage, and image processing of diagnostic ultrasound, CT, MRI, and X-ray images, including manipulation, compression, and quantification of images.

    The KIKA Imaging Lab is typically used in web data-sharing domains, especially in clinical trial management, post-marketing surveillance, adverse event management, tele-expertise, or telemedicine.

    When interpreted by a trained physician, reviewed images can be used as an element for diagnosis.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 Mpixel resolution and meets other technical specifications reviewed and accepted by FDA.
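    The 5-megapixel floor mentioned above is ultimately a pixel-count check on the display panel. A minimal sketch of that check (the function name and example panel resolutions are illustrative assumptions, not figures cited by FDA or the submission):

```python
def meets_mammo_resolution(width_px, height_px, minimum=5_000_000):
    """True if the display panel has at least `minimum` addressable pixels.

    Note: this covers only the pixel-count requirement; the cited
    clearance also requires other technical specifications that a
    simple pixel count cannot capture.
    """
    return width_px * height_px >= minimum


# A typical 5 MP mammography display vs. a consumer 1080p panel:
print(meets_mammo_resolution(2560, 2048))  # 5,242,880 px -> True
print(meets_mammo_resolution(1920, 1080))  # 2,073,600 px -> False
```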

    Device Description

    The KIKA Imaging Lab is a Web-based medical image and workflow management system that allows reviewing, manipulating, interpreting, archiving, and interchanging multi-modality medical images in the DICOM format. The functions of this workflow management system provide image viewing and manipulation in a diagnostic imaging setting. KIKA Imaging Lab Viewer is a secure web communication platform for handling image management needs among all users of web clinical trials (investigators, experts, data managers, etc.). The investigator may upload images from a CD source to a web-based patient database, add annotations, adjust the display of images, scroll through all the images of a DICOM series, and show cine playbacks.

    Images are stored in a DICOM 3.0-compliant format using various standard compression methods. Parameters and measurements are saved into a file attached to the image on the remote server. The original image is never altered.
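    The "attached file, original never altered" pattern described above is commonly implemented with a sidecar file written next to the image. A minimal sketch of that pattern (the JSON format, file naming, and function names are assumptions for illustration, not what KIKA actually uses):

```python
import json
from pathlib import Path


def save_annotations(image_path, annotations):
    """Write annotations to a sidecar JSON file next to the image.

    The image file itself is never opened for writing, so the
    original pixel data cannot be altered by annotation.
    """
    sidecar = Path(image_path).with_suffix(".annotations.json")
    sidecar.write_text(json.dumps(annotations, indent=2))
    return sidecar


def load_annotations(image_path):
    """Read the sidecar file back, returning {} if none exists."""
    sidecar = Path(image_path).with_suffix(".annotations.json")
    if sidecar.exists():
        return json.loads(sidecar.read_text())
    return {}


# Example with a placeholder file standing in for a DICOM image:
import os
import tempfile

tmp = tempfile.mkdtemp()
img = os.path.join(tmp, "study.dcm")
Path(img).write_bytes(b"\x00" * 16)  # placeholder bytes, not real DICOM data

save_annotations(img, {"distance_mm": 2.5})
print(load_annotations(img))  # {'distance_mm': 2.5}
```

Because only the sidecar is ever written, the image bytes on disk are byte-for-byte identical before and after annotation.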

    The expert can then review images in a DICOM browser interface, retrieve images to local hard disc, do measurements (calibration, distance, area, angle) and display images in double view layout useful to compare images from different visits or series.
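    The measurement tools listed above (calibration, distance, area, angle) reduce to simple plane geometry once a pixel-to-millimeter calibration is known. A minimal sketch under that assumption (function names and the 0.5 mm/pixel spacing are illustrative, not KIKA's actual implementation):

```python
import math


def calibrate(pixel_spacing_mm):
    """Return a converter from pixel distances to millimeters."""
    return lambda pixels: pixels * pixel_spacing_mm


def distance(p, q):
    """Euclidean distance between two (x, y) points, in pixels."""
    return math.hypot(q[0] - p[0], q[1] - p[1])


def polygon_area(points):
    """Area of a closed polygon via the shoelace formula, in square pixels."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1]
            for i in range(n))
    return abs(s) / 2.0


def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by rays toward a and b, in degrees."""
    va = (a[0] - vertex[0], a[1] - vertex[1])
    vb = (b[0] - vertex[0], b[1] - vertex[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    mag = math.hypot(*va) * math.hypot(*vb)
    return math.degrees(math.acos(dot / mag))


# Example with an assumed calibration of 0.5 mm per pixel:
to_mm = calibrate(0.5)
print(to_mm(distance((0, 0), (3, 4))))                 # 2.5
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
print(angle_deg((0, 0), (1, 0), (0, 1)))               # 90.0
```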

    AI/ML Overview

    This submission (K051127) for the KIKA Imaging Lab Viewer is a 510(k) premarket notification claiming substantial equivalence to predicate devices, namely the eFilm™ Workstation™ with Modules (K020995) and UltrPro PACs. The provided text describes the device, its intended use, and a general statement about testing, but it does not include specific acceptance criteria or a dedicated study protocol with performance results in the format requested.

    The document states:

    • "The KIKA Imaging Lab Viewer is tested according to the specifications that are documented in a Verification and Validation Test Description and Plan."
    • "In all instances, the KIKA Imaging Lab Viewer functioned as intended and was all tests and validations observed as expected."
    • "Performance Data demonstrate that the KIKA Imaging Lab Viewer is as safe and effective as the UltrPro PACs and E-Film Workstation."

    However, no actual performance data (e.g., measured accuracy, precision, speed, or specific metrics like sensitivity/specificity) is provided. Therefore, I cannot meaningfully fill out a table of acceptance criteria and reported device performance from the given information. The submission focuses on demonstrating substantial equivalence based on intended use, technological characteristics, and principles of operation, rather than on detailed clinical performance metrics for a specific diagnostic task.

    Below is an attempt to answer each question, noting where the document lacks the specific information:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (e.g., Sensitivity, Specificity, ROC AUC, Accuracy, Processing Time): Not specified in the provided document.
    Reported Device Performance (Value/Range): Not specified in the provided document.

    Explanation: The 510(k) summary states that the device was tested per "Verification and Validation Test Description and Plan" and "functioned as intended." It concludes that "Performance Data demonstrate that the KIKA Imaging Lab Viewer is as safe and effective as the UltrPro PACs and E-Film Workstation." However, no specific quantitative or qualitative acceptance criteria (e.g., for image quality, diagnostic accuracy, processing speed, or user interface performance) or the corresponding measured performance values are provided in the document.

    2. Sample size used for the test set and the data provenance

    • Sample Size (Test Set): Not specified.
    • Data Provenance (e.g., country of origin, retrospective/prospective): Not specified. The document briefly mentions "clinical trials (investigators, experts, data managers etc.)" and "web data sharing domains, especially in clinical trial management," implying that images from clinical trials could be a source, but it doesn't describe the test set used for the claims of "performance data."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The document mentions that "When interpreted by a trained physician, reviewed images can be used as an element for diagnosis," but this refers to the intended use, not the ground truth establishment for a validation study.

    4. Adjudication method for the test set

    • Adjudication Method: Not specified.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No. This submission describes a PACS viewer, not an AI-assisted diagnostic tool. Therefore, an MRMC study comparing human readers with and without AI assistance is not applicable to the documented claims.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance Study: No specific standalone performance study with detailed metrics is described. The device is a "viewer" and "workflow management system," implying human interaction is integral to its intended use for interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Not specified. Given the nature of the device as a PACS viewer, the "performance data" likely refers to technical validation (e.g., image display accuracy, functionality of measurement tools, data integrity during transfer/storage) rather than diagnostic accuracy against a clinical ground truth.

    8. The sample size for the training set

    • Sample Size (Training Set): Not applicable. This device is not described as an AI/ML diagnostic algorithm that requires a training set in the conventional sense. It's a software viewer and management system.

    9. How the ground truth for the training set was established

    • Ground Truth Establishment (Training Set): Not applicable, as it's not an AI/ML diagnostic algorithm.