
510(k) Data Aggregation

    K Number: K190993
    Date Cleared: 2020-03-05 (324 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K153346
    Intended Use

    Kinepict Medical Imaging Tool version v2.2 is intended to visualize blood vessel structures by detecting the movement of the contrast medium bolus in standard-of-care angiography examination. This software is intended to be used in addition to, or as replacement for current DSA imaging.

    Kinepict Software can be deployed on independent hardware, such as a stand-alone diagnostic review, post-processing, and reporting workstation. It can also be configured within a network to send and receive DICOM data, and it can be deployed on systems of several angiography system families. It provides image-guided solutions in the operating room for image-guided surgery (via Image Fusion and navigation systems), in interventional cardiology and electrophysiology, and in interventional oncology, interventional radiology, and interventional neuroradiology.

    Kinepict Software can also be combined with fluoroscopy systems or Radiographic systems.

    Device Description

    The Kinepict Medical Imaging Tool is medical diagnostic software for real-time viewing, diagnostic review, post-processing, image manipulation, optimization, communication, reporting, and storage of medical images and data on exchange media. It provides image-guided solutions in the operating room for image-guided surgery (via Image Fusion and navigation systems), in interventional cardiology and electrophysiology, and in interventional oncology, interventional radiology, and interventional neuroradiology. It can be deployed with Syngo Application Software VD11 (Siemens Medical Solutions USA Inc., under K153346) or with Windows-based software options, which are intended to assist the physician in the evaluation of digital radiographic examinations, including diagnosis and/or treatment planning.

    Kinepict Medical Imaging Tool is designed to work with digital radiographic, fluoroscopic, interventional, and angiographic systems, built on a software platform with a common software architecture.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Kinepict Medical Imaging Tool version v2.2, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state "acceptance criteria" in a bulleted or numbered list. However, based on the Summary and Conclusions and Primary effectiveness endpoints and results sections, the implicit acceptance criteria appear to be:

    Acceptance criteria (inferred from study goals) and reported device performance (Kinepict DVA):

    • Signal-to-noise ratio (SNR) improvement over DSA
      • Clinical study: the overall median SNR of DVA images was 2.3-fold higher than that of DSA images (based on 1902 ROIs in 110 image pairs).
      • Non-clinical data: kinetic images provided 3.3 times (median) and 2.3 times (median) better SNR than raw and post-processed DSA images, respectively (based on 45 XA series). The non-clinical test concluded that DVA provides "better" SNR than DSA.
    • Visual image quality improvement/equivalency to DSA (expert consensus)
      • Clinical study: raters judged the DVA images better in 69% of all comparisons (out of 238 pairs).
      • Summary and conclusions: six specialists reached a level of agreement (LA) > 73% (p < 0.0001) that kinetic imaging provided higher-quality images than DSA.
    • Inter-rater agreement for visual quality
      • Clinical study: interrater agreement was 81% and Fleiss κ was 0.17 (P < .001).
    • No additional risks compared to the predicate device
      • "As an additional tool, it does not raise any additional risk comparing to the predicate device."
    • Equivalency in post-processing functions, image storage, and sending functions
      • Proven to be "similar" or to use the "same DICOM technic and ports" as the predicate.
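    The submission does not spell out the exact SNR formula behind these figures. A common ROI-based definition (mean vessel-ROI intensity divided by the standard deviation of a background ROI), with the fold improvement taken as a median over ROI pairs as reported above, can be sketched as follows; the function names and synthetic data are illustrative, not from the submission.

```python
import numpy as np

def roi_snr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """ROI-based SNR: mean signal intensity over background noise (std)."""
    return float(np.mean(signal_roi) / np.std(background_roi))

def median_fold_improvement(dva_snrs, dsa_snrs) -> float:
    """Median per-ROI fold improvement of DVA over DSA (the reported
    2.3-fold figure was a median over 1902 ROIs)."""
    ratios = np.asarray(dva_snrs, dtype=float) / np.asarray(dsa_snrs, dtype=float)
    return float(np.median(ratios))

# Synthetic example: DVA images simulated with half the background noise of DSA,
# so the median fold improvement comes out close to 2 by construction.
rng = np.random.default_rng(0)
dsa_snrs = [roi_snr(rng.normal(100, 5, 64), rng.normal(0, 10, 64)) for _ in range(50)]
dva_snrs = [roi_snr(rng.normal(100, 5, 64), rng.normal(0, 5, 64)) for _ in range(50)]
print(median_fold_improvement(dva_snrs, dsa_snrs))
```

    Whether the study's ROIs paired a vessel region with a dedicated background region, or used some other noise estimate, is not stated in the document.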

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Study (Image Quality and SNR Comparison):
      • Patients: 42 patients with symptomatic Peripheral Artery Disease (PAD).
      • Image Pairs for Visual Evaluation: 238 pairs of DSA and DVA images.
      • Image Pairs for SNR Calculation: 110 image pairs with 1902 regions of interest (ROIs).
      • Data Provenance: Monocentric (Heart and Vascular Center, Budapest, Hungary), prospective, non-randomized, single-arm study. The DVA images were generated retrospectively from raw data obtained during prospective patient angiography.
    • Non-Clinical Data (SNR Comparison):
      • XA Series: 45 anonymized XA series from multiple patients.
      • Data Provenance: The XA series were "acquired as part of the clinical study: 2830/2017," suggesting their origin is from the same or a similar clinical context as the main clinical study, likely Budapest, Hungary.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 6 clinical experts.
    • Qualifications: Vascular surgeons and interventional radiologists with at least 8 years of clinical experience.

    4. Adjudication Method for the Test Set

    The document describes a "blinded, randomized manner" for the visual evaluation by the 6 clinical experts. It states "Raters judged the DVA images better in 69 % of all comparisons" and then reports inter-rater agreement (81%) and Fleiss' kappa. This suggests a consensus-based approach where individual expert opinions were aggregated, but it doesn't specify a formal adjudication method like "2+1" or "3+1." It seems each expert independently rated the image pairs, and then their individual judgments were used to calculate proportions of agreement and overall preference.
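    The combination reported here of high raw agreement (81%) with a low Fleiss κ (0.17) is possible when one rating category dominates, because κ discounts the agreement expected by chance. A minimal Fleiss' κ computation illustrates this; the ratings below are hypothetical, and only the six-raters-per-pair setup is taken from the study.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    Each row sums to the number of raters (here, 6 experts per image pair).
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # Per-item observed agreement among rater pairs.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Agreement expected by chance, from overall category proportions.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = float(np.sum(p_j ** 2))
    return float((p_bar - p_e) / (1 - p_e))

# Hypothetical: 10 image pairs, 6 raters each, categories [DVA better, DSA better].
# Raters mostly prefer DVA, so raw agreement is high, yet kappa lands near zero
# because that much agreement is already expected by chance.
ratings = np.array([[6, 0], [5, 1], [6, 0], [4, 2], [6, 0],
                    [5, 1], [6, 0], [6, 0], [5, 1], [6, 0]])
print(round(fleiss_kappa(ratings), 3))
```

    For published rating data, `statsmodels.stats.inter_rater.fleiss_kappa` computes the same statistic.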

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • Was an MRMC study done? Yes, a form of MRMC study was conducted for the visual evaluation. Six expert readers compared multiple cases (238 image pairs).
    • Effect Size of Human Reader Improvement: The document does not describe a study where human readers used AI vs. without AI assistance (i.e., human-in-the-loop performance improvement). The study focuses on comparing the quality of images generated by the AI device itself (DVA) versus standard DSA images, and then having human readers evaluate these image types. Therefore, an "effect size of how much human readers improve with AI vs without AI" is not reported because this specific type of comparative effectiveness study was not performed or described in the provided text.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the primary clinical and non-clinical studies described assess the standalone performance of the Kinepict Medical Imaging Tool's DVA algorithm in generating images and its intrinsic signal-to-noise ratio. The expert visual evaluation, while involving humans, evaluates the output of the algorithm (DVA images) compared to the predicate's output (DSA images), rather than the algorithm assisting human decision-making. The device is intended to "visualize blood vessel structures" and can be used "as replacement for current DSA imaging," implying standalone capability in image generation and quality.

    7. The Type of Ground Truth Used

    • Clinical Study (Visual Quality): Expert Consensus. The "ground truth" for superior image quality was established by the agreement among 6 experienced clinical experts who visually compared DVA and DSA image pairs.
    • Clinical and Non-Clinical Study (SNR): Quantifiable Image Metric. The Signal-to-Noise Ratio (SNR) is an objective, quantitative measure derived from image data, which served as a form of ground truth for image fidelity and signal clarity.

    8. The Sample Size for the Training Set

    The document does not report the sample size for the training set used to develop the Kinepict Medical Imaging Tool's algorithm.

    9. How the Ground Truth for the Training Set Was Established

    The document does not report how the ground truth for the training set was established. It only describes the studies performed to validate the device's performance post-development.
