
510(k) Data Aggregation

    K Number
    K202056
    Date Cleared
    2021-09-10

    (413 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Kinepict Medical Imaging Tool version 4.0 (KMIT v4.0) is intended to visualize blood vessel structures by detecting the movement of the contrast bolus during angiography examinations and to give medical experts the option to reduce X-ray dose. The effectively achievable radiation reduction depends on the characteristics of the individual cath lab (angiography unit, acquisition protocol, anatomical region) and requires optimization of locally applied acquisition protocols. The software is intended to be used in addition to, or as a replacement for, current DSA imaging. KMIT v4.0 can be deployed on independent workstation hardware for stand-alone diagnostic assessment, post-processing, and reporting. It can be configured within a network to send and receive DICOM data. Furthermore, KMIT v4.0 can be deployed on several kinds of angiography systems. It provides image-guided solutions in the operating room for interventional oncology, interventional radiology, and interventional neuroradiology. KMIT v4.0 can also be combined with fluoroscopic or radiographic systems.

    Device Description

    The Kinepict Medical Imaging Tool version 4.0 (KMIT v4.0) is medical diagnostic software for real-time viewing, diagnostic review, post-processing, optimization, communication, reporting, and storage of medical images and data on exchange media. It provides image-guided solutions in the operating room, for image-guided surgery, and for interventional oncology, interventional radiology, and interventional neuroradiology. KMIT v4.0 can be deployed on independent workstation hardware for stand-alone diagnostic assessment, post-processing, and reporting, intended to assist the physician in the evaluation of digital radiographic examinations, including diagnosis and/or treatment planning. KMIT v4.0 is designed to work with digital radiographic, fluoroscopic, interventional, and angiographic systems. The algorithm behind the Digital Variance Angiography (DVA) image calculation is the same in KMIT v2.2 (Kinepict Health Ltd., K190993) and v4.0 (see Device Description / Calculation of DVA section). Post-processing functions such as contrast and brightness settings, mask-image selection, pixel-shift application, and anonymization options are the same in both software versions. Image storing and image sending use the same DICOM technique and ports as KMIT v2.2. Version 4.0 adds automatic upload to PACS and automatic opening of a full-screen view. These differences have no effect on safety or effectiveness. In summary, KMIT v4.0 introduces no new potential safety risks and is substantially equivalent to, and performs as well as, the predicate device.
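The summary names the DVA calculation but does not give its math. As a purely illustrative contrast with classic mask-subtraction DSA, the toy sketch below assumes a DVA-style image is derived from per-pixel variation over the frame sequence; this is an assumption for illustration, not the vendor's documented algorithm.

```python
import numpy as np

def dsa(series: np.ndarray, mask_idx: int = 0) -> np.ndarray:
    """Classic DSA: subtract a contrast-free mask frame from each frame.

    series: (T, H, W) stack of X-ray frames.
    """
    return series - series[mask_idx]

def dva_like(series: np.ndarray) -> np.ndarray:
    """Illustrative DVA-style image: per-pixel standard deviation over time.

    Pixels traversed by the contrast bolus vary strongly across frames,
    so their temporal std is high; static background largely averages out.
    (Assumed behavior for illustration only.)
    """
    return series.std(axis=0)

# Toy example: 10 noisy frames with a bright spot moving along row 4.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 1.0, size=(10, 8, 8))   # noisy static background
for t in range(10):
    frames[t, 4, t % 8] += 50.0                    # moving "bolus"
print(dva_like(frames)[4].round(1))                # row 4 stands out
```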

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Kinepict Medical Imaging Tool 4.0 - Acceptance Criteria and Study Details

    The primary acceptance criteria for the Kinepict Medical Imaging Tool 4.0 (KMIT v4.0) revolve around its ability to maintain diagnostic quality while enabling a reduction in X-ray radiation dose during angiography. The key performance metrics evaluated are:

    • Non-inferiority of low-dose DVA (ldDVA) images compared to normal dose DSA (ndDSA) images in terms of visual evaluation scores.
    • No significant difference in the number of recognized arteries and the proportion of arteries suitable for diagnosis between ldDVA and ndDSA.
    • High sensitivity and specificity of ldDVA in reproducing diagnostic categories established by ndDSA.
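The summary does not state the non-inferiority margin or the exact statistical test used. A minimal sketch of a paired, one-sided non-inferiority check on Likert-score differences, with a hypothetical margin, might look like:

```python
import math
import statistics as stats

def noninferior(ld_scores, nd_scores, margin=0.5, z=1.645):
    """One-sided non-inferiority check on paired Likert scores.

    ld_scores, nd_scores: paired per-image mean scores for ldDVA / ndDSA.
    margin: hypothetical non-inferiority margin (not stated in the summary).
    Declares non-inferiority if the lower bound of the one-sided 95% CI
    of (ldDVA - ndDSA) lies above -margin.
    """
    diffs = [a - b for a, b in zip(ld_scores, nd_scores)]
    n = len(diffs)
    mean = stats.fmean(diffs)
    se = stats.stdev(diffs) / math.sqrt(n)   # normal-approximation SE
    lower = mean - z * se
    return lower > -margin, mean, se
```

A positive mean difference with a lower confidence bound above the margin (as reported for the crural region) would satisfy this check.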

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance criteria and reported device performance:

    • Visual evaluation non-inferiority (Likert score comparison of ldDVA vs ndDSA):
      Crural region: ldDVA received significantly higher scores than ndDSA (difference 0.25 ± 0.07, p = 0.001). Femoral region: no significant difference (-0.08 ± 0.06, p = 0.435). Abdominal region: initial significant difference favoring ndDSA (-0.26 ± 0.12, p = 0.036), but after excluding patients with excessive intestinal gas (3/30), the difference became non-significant (-0.10 ± 0.09, p = 0.350).
    • No significant difference in the overall number of recognized arteries:
      No significant difference (ndDSA: 5.56 ± 0.01; ldDVA: 5.46 ± 0.01).
    • No significant difference in the proportion of arteries suitable for diagnosis:
      No significant difference (ndDSA: 92.3 ± 0.1 %; ldDVA: 93.5 ± 0.1 %).
    • High sensitivity of low-dose DVA images in reproducing the same diagnostic category as normal-dose DSA images:
      0.84 sensitivity.
    • High specificity of low-dose DVA images in reproducing the same diagnostic category as normal-dose DSA images:
      0.84 specificity.
    • Accuracy of ldDVA vs ndDSA in determining valid diagnostic categories (after expert adjudication of discordant cases):
      Abdominal and femoral regions: identical accuracy. Crural region: ldDVA had the highest accuracy (91% vs 80% for ndDSA).
    • Ability to reduce radiation exposure while maintaining image quality and diagnostic value (implicit overarching goal, based on "quality reserve"):
      The study concluded that the "quality reserve of DVA can be effectively converted into radiation dose reduction without compromising the image quality and diagnostic value of angiograms."
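With the ndDSA reads treated as the reference standard, the reported 0.84 sensitivity and specificity correspond to the usual confusion-matrix definitions. A minimal sketch (the binary encoding of diagnostic categories here is an illustration, not the study's actual coding):

```python
def sens_spec(reference, test):
    """Sensitivity and specificity of test reads against reference reads.

    reference: ndDSA diagnostic categories used as the reference standard
    (1 = disease present, 0 = absent); test: the paired ldDVA reads.
    """
    tp = sum(r == 1 and t == 1 for r, t in zip(reference, test))
    tn = sum(r == 0 and t == 0 for r, t in zip(reference, test))
    fn = sum(r == 1 and t == 0 for r, t in zip(reference, test))
    fp = sum(r == 0 and t == 1 for r, t in zip(reference, test))
    return tp / (tp + fn), tn / (tn + fp)
```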

    2. Sample Size and Data Provenance

    • Sample Size (Test Set): 30 Peripheral Artery Disease (PAD) patients.
    • Data Provenance: Prospective interventional clinical study conducted between April and July 2019. The country of origin of the data is not explicitly stated, but the company is based in Hungary. The study was performed using a Siemens Artis Zee angiography system and Ultravist 370 iodinated contrast medium.

    3. Number of Experts and Qualifications for Ground Truth

    • Visual Evaluation: Seven specialists evaluated image quality using a 5-grade Likert scale. Their specific qualifications (e.g., years of experience, subspecialty) are not explicitly stated, beyond being referred to as "specialists" in the field of angiography.
    • Task-Based Survey: Six readers identified artery segments and evaluated stenosis. Their specific qualifications are not explicitly stated.
    • Discordant Decisions Adjudication: An "expert" determined the valid diagnostic category when decisions were discordant. The number of such experts and their specific qualifications are not explicitly stated beyond "an expert."

    4. Adjudication Method for the Test Set

    • Visual Evaluation: No adjudication method is explicitly stated for the 7 specialists' Likert scale evaluations. The text suggests that the scores from these specialists were analyzed directly (e.g., "ldDVA received significantly higher visual evaluation scores...").
    • Task-Based Diagnostic Test: For the 6 readers, no adjudication method is explicitly stated for their initial identification of arteries or stenosis evaluation.
    • Discordant Decisions: When the 6 readers had "discordant decisions" regarding diagnostic categories, an "expert" supervised and determined the valid diagnostic category. This suggests a form of adjudication for disagreements, but the specific process (e.g., 2+1, 3+1) is not detailed beyond the involvement of a single expert for supervision.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a form of MRMC study was conducted. The study involved 7 "specialists" for visual evaluation and 6 "readers" for a task-based diagnostic test, evaluating multiple cases (30 patients, with images from three anatomical regions per patient).
    • Effect size of human reader improvement with AI vs. without AI assistance: This study's primary focus was the equivalence or non-inferiority of AI-processed low-dose images compared with standard high-dose images, rather than directly measuring how much human readers improve with AI assistance over unaided interpretation of standard images. It demonstrates that AI enables a dose reduction while maintaining diagnostic performance.
      • The results show that ldDVA (AI-processed images from low-dose acquisition) reproduced ndDSA diagnostic categories with 0.84 sensitivity and 0.84 specificity.
      • Crucially, when discordant decisions were resolved by an expert, the accuracy of ndDSA and ldDVA was identical in two regions, and ldDVA outperformed ndDSA in the crural region (91% vs 80% accuracy). This implies that AI-processed low-dose images can, in some contexts, even lead to better or equivalent diagnostic outcomes by the readers compared to standard dose images, despite the dose reduction. The "improvement" is in the AI's ability to maintain or enhance quality from lower dose inputs.

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance was implicitly assessed. The comparison of Contrast-to-Noise Ratio (CNR) between DSA (standard method) and DVA (Kinepict algorithm output) demonstrates an algorithmic performance advantage: "DVA produced consistently higher (two to threefold) CNR than DSA." This is a measure of the algorithm's raw image processing capability.
    • Also, the improved accuracy of ldDVA in the crural region (91% vs 80%) after expert adjudication suggests the standalone algorithm's output provides superior information, even if initially some readers struggled.
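The CNR comparison cited above ("two to threefold" higher for DVA) is an ROI-based image metric. A minimal sketch of one common CNR formulation; the ROI protocol shown is hypothetical, as the summary does not specify it:

```python
import numpy as np

def cnr(img: np.ndarray, signal_roi, background_roi) -> float:
    """Contrast-to-noise ratio from two rectangular ROIs.

    ROIs are (row_slice, col_slice) tuples; CNR here is the common
    |mean(signal) - mean(background)| / std(background) form.
    """
    sig = img[signal_roi]
    bg = img[background_roi]
    return abs(sig.mean() - bg.mean()) / bg.std()

# Toy image: bright "vessel" in the top half, textured background below.
img = np.zeros((10, 10))
img[:5] = 10.0
img[5:] = np.tile([0.0, 2.0], (5, 5))  # background: mean 1, std 1
```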

    7. Type of Ground Truth Used

    • Expert Consensus / Expert Interpretation:
      • For the visual evaluation, the ldDVA (low-dose DVA) and ndDSA (normal-dose DSA) scores from the 7 specialists served as the ground truth for comparison within that specific assessment, based on their individual expert interpretations. The comparison was for non-inferiority.
      • For the task-based diagnostic test (recognized arteries, stenosis), the "valid diagnostic category" was determined by an "expert" in cases of discordant reader decisions, implying an expert-adjudicated ground truth.
      • The "normal dose DSA images" themselves served as a de-facto ground truth (or reference standard) against which the "low-dose DVA images" were compared for equivalence in diagnostic categories.

    8. Sample Size for the Training Set

    • The document does not provide information regarding the sample size used for the training set of the Kinepict Medical Imaging Tool 4.0. The study described is a clinical validation study, not a development or training study.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide information regarding how the ground truth for the training set was established. This detail is typically found in design and development documentation, not necessarily in a 510(k) summary focused on clinical validation.

    K Number
    K190993
    Date Cleared
    2020-03-05

    (324 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Kinepict Medical Imaging Tool version 2.2 is intended to visualize blood vessel structures by detecting the movement of the contrast medium bolus in standard-of-care angiography examinations. This software is intended to be used in addition to, or as a replacement for, current DSA imaging.

    Kinepict Software can be deployed on independent hardware, such as a stand-alone diagnostic review, post-processing, and reporting workstation. It can also be configured within a network to send and receive DICOM data. Furthermore, Kinepict Software can be deployed on systems of several angiography system families. It provides image-guided solutions in the operating room, for image-guided surgery by image fusion and by navigation systems, image-guided solutions in interventional cardiology and electrophysiology, and image-guided solutions for interventional oncology, interventional radiology, and interventional neuroradiology.

    Kinepict Software can also be combined with fluoroscopic or radiographic systems.

    Device Description

    The Kinepict Medical Imaging Tool is medical diagnostic software for real-time viewing, diagnostic review, post-processing, image manipulation, optimization, communication, reporting, and storage of medical images and data on exchange media. It provides image-guided solutions in the operating room, for image-guided surgery by image fusion and by navigation systems, image-guided solutions in interventional cardiology and electrophysiology, and image-guided solutions for interventional oncology, interventional radiology, and interventional neuroradiology. It can be deployed with Syngo Application Software VD11 (Siemens Medical Solutions USA Inc., K153346) or Windows-based software options, which are intended to assist the physician in evaluation of digital radiographic examinations, including diagnosis and/or treatment planning.

    Kinepict Medical Imaging Tool is designed to work with digital radiographic, fluoroscopic, interventional and angiographic systems. The software platform with common software architecture, platform

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Kinepict Medical Imaging Tool version v2.2, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state "acceptance criteria" in a bulleted or numbered list. However, based on the Summary and Conclusions and Primary effectiveness endpoints and results sections, the implicit acceptance criteria appear to be:

    Acceptance criteria (inferred from study goals) and reported device performance (Kinepict DVA):

    • Signal-to-noise ratio (SNR) improvement over DSA:
      Clinical study: overall median SNR of DVA images was 2.3-fold higher than that of DSA images (based on 1902 ROIs in 110 image pairs). Non-clinical data: kinetic images provided 3.3 times (median) and 2.3 times (median) better SNR than raw and post-processed DSA images, respectively (based on 45 XA series). The non-clinical test concluded that DVA provides "better" SNR than DSA.
    • Visual image quality improvement/equivalency to DSA (expert consensus):
      Clinical study: raters judged the DVA images better in 69% of all comparisons (out of 238 pairs). Summary and conclusions: six specialists found a level of agreement (LA) of > 73% (p > 0.0001) that kinetic imaging provided higher-quality images than DSA.
    • Inter-rater agreement for visual quality:
      Clinical study: inter-rater agreement was 81% and Fleiss κ was 0.17 (p < .001).
    • No additional risks compared to the predicate device:
      "As an additional tool, it does not raise any additional risk comparing to the predicate device."
    • Equivalency in post-processing functions, image storage, and sending functions:
      Proven to be "similar" or to use the "same DICOM technic and ports" as the predicate.
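The SNR comparison above is summarized as a median per-ROI ratio. A minimal sketch of how such a summary statistic could be computed from paired ROI measurements; the (mean signal, noise std) pairing is an assumption, since the summary does not give the exact SNR or ROI definitions used:

```python
import statistics

def snr(mean_signal: float, sd_noise: float) -> float:
    """SNR of one ROI: mean signal over noise standard deviation."""
    return mean_signal / sd_noise

def median_snr_ratio(dva_rois, dsa_rois) -> float:
    """Median per-ROI SNR ratio (DVA over DSA), mirroring the study's
    summary statistic. Each element is a hypothetical
    (mean_signal, sd_noise) pair for the same ROI on both images.
    """
    ratios = [snr(*a) / snr(*b) for a, b in zip(dva_rois, dsa_rois)]
    return statistics.median(ratios)
```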

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Study (Image Quality and SNR Comparison):
      • Patients: 42 patients with symptomatic Peripheral Artery Disease (PAD).
      • Image Pairs for Visual Evaluation: 238 pairs of DSA and DVA images.
      • Image Pairs for SNR Calculation: 110 image pairs with 1902 regions of interest (ROIs).
      • Data Provenance: Monocentric (Heart and Vascular Center, Budapest, Hungary), prospective, non-randomized, single-arm study. The DVA images were generated retrospectively from raw data obtained during prospective patient angiography.
    • Non-Clinical Data (SNR Comparison):
      • XA Series: 45 anonymized XA series from multiple patients.
      • Data Provenance: The XA series were "acquired as part of the clinical study: 2830/2017," suggesting their origin is from the same or a similar clinical context as the main clinical study, likely Budapest, Hungary.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 6 clinical experts.
    • Qualifications: Vascular surgeons and interventional radiologists with a clinical experience of at least 8 years.

    4. Adjudication Method for the Test Set

    The document describes a "blinded, randomized manner" for the visual evaluation by the 6 clinical experts. It states "Raters judged the DVA images better in 69 % of all comparisons" and then reports inter-rater agreement (81%) and Fleiss' kappa. This suggests a consensus-based approach where individual expert opinions were aggregated, but it doesn't specify a formal adjudication method like "2+1" or "3+1." It seems each expert independently rated the image pairs, and then their individual judgments were used to calculate proportions of agreement and overall preference.
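The combination reported above (81% raw agreement but Fleiss κ of only 0.17) is possible when one rating dominates, since kappa discounts chance agreement. For reference, a minimal stdlib computation of Fleiss' kappa from a table of per-item rater counts:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for a rating-count table.

    table[i][j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters n.
    """
    N = len(table)            # number of items
    n = sum(table[0])         # raters per item
    k = len(table[0])         # number of categories
    # Mean observed per-item agreement.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in table
    ) / N
    # Chance agreement from marginal category proportions.
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)
```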

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • Was an MRMC study done? Yes, a form of MRMC study was conducted for the visual evaluation. Six expert readers compared multiple cases (238 image pairs).
    • Effect Size of Human Reader Improvement: The document does not describe a study where human readers used AI vs. without AI assistance (i.e., human-in-the-loop performance improvement). The study focuses on comparing the quality of images generated by the AI device itself (DVA) versus standard DSA images, and then having human readers evaluate these image types. Therefore, an "effect size of how much human readers improve with AI vs without AI" is not reported because this specific type of comparative effectiveness study was not performed or described in the provided text.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the primary clinical and non-clinical studies described assess the standalone performance of the Kinepict Medical Imaging Tool's DVA algorithm in generating images and its intrinsic signal-to-noise ratio. The expert visual evaluation, while involving humans, evaluates the output of the algorithm (DVA images) compared to the predicate's output (DSA images), rather than the algorithm assisting human decision-making. The device is intended to "visualize blood vessel structures" and can be used "as replacement for current DSA imaging," implying standalone capability in image generation and quality.

    7. The Type of Ground Truth Used

    • Clinical Study (Visual Quality): Expert Consensus. The "ground truth" for superior image quality was established by the agreement among 6 experienced clinical experts who visually compared DVA and DSA image pairs.
    • Clinical and Non-Clinical Study (SNR): Quantifiable Image Metric. The Signal-to-Noise Ratio (SNR) is an objective, quantitative measure derived from image data, which served as a form of ground truth for image fidelity and signal clarity.

    8. The Sample Size for the Training Set

    The document does not report the sample size for the training set used to develop the Kinepict Medical Imaging Tool's algorithm.

    9. How the Ground Truth for the Training Set Was Established

    The document does not report how the ground truth for the training set was established. It only describes the studies performed to validate the device's performance post-development.

