
510(k) Data Aggregation

    K Number: K151598
    Date Cleared: 2015-08-17 (66 days)
    Product Code:
    Regulation Number: 892.1650
    Reference & Predicate Devices:
    Device Name: VesselNavigator Rel. 1.0

    Intended Use

    VesselNavigator provides image guidance by superimposing live fluoroscopic images on a 3D volume of the vessel anatomy to assist in catheter maneuvering and device placement.

    VesselNavigator is intended to assist in the treatment of endovascular diseases during procedures such as (but not limited to) AAA, TAA, carotid stenting, iliac interventions.

    Device Description

    VesselNavigator is a software product (Interventional Tool) intended to assist in the treatment of endovascular diseases during an endovascular intervention procedure. VesselNavigator is intended to be used in combination with a Philips Interventional X-ray system. VesselNavigator can be used during any endovascular intervention and covers all vascular anatomy except coronaries and intracranial vessels.

    It provides live 3D image guidance for navigating endovascular devices through the intended vascular structures in the body, reusing previously acquired diagnostic 3D images. After registration, the 3D volume can be used as a 3D roadmap for navigation: live 2D fluoroscopic images are overlaid on the 3D volume. In addition, VesselNavigator provides tools to segment the relevant vasculature in the 3D volume (the end user can edit the segmentation results), place landmarks for easy recognition of key anatomical points of interest, and store and recall preferred view angles.
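    To make the overlay mechanism concrete, the sketch below shows the basic geometry of projecting a registered 3D vessel roadmap onto a live 2D frame. It is a minimal illustration only, assuming a rigid registration matrix and a pinhole model of the C-arm projection; the function names and the simple pixel-blending step are hypothetical, not Philips' actual VesselNavigator implementation.

```python
# Illustrative sketch of 2D/3D roadmap overlay: once a registration links the
# pre-acquired 3D volume to the live C-arm geometry, 3D vessel points can be
# projected into the 2D fluoroscopic image and blended with the live frame.
# All names here are hypothetical, not the device's actual API.
import numpy as np

def project_points(points_3d, registration, camera_matrix):
    """Project Nx3 vessel points into 2D pixel coordinates.

    points_3d     : (N, 3) points in the 3D volume's coordinate system.
    registration  : (4, 4) rigid transform from volume space to C-arm space
                    (hypothetical output of a 3D/2D registration step).
    camera_matrix : (3, 4) pinhole projection model of the X-ray geometry.
    """
    n = points_3d.shape[0]
    homogeneous = np.hstack([points_3d, np.ones((n, 1))])  # (N, 4)
    in_carm = registration @ homogeneous.T                 # (4, N)
    projected = camera_matrix @ in_carm                    # (3, N)
    return (projected[:2] / projected[2]).T                # (N, 2) pixels

def overlay_roadmap(fluoro_frame, pixels, intensity=1.0):
    """Burn projected roadmap points into a copy of a 2D grayscale frame."""
    out = fluoro_frame.astype(float).copy()
    h, w = out.shape
    for x, y in np.round(pixels).astype(int):
        if 0 <= y < h and 0 <= x < w:
            out[y, x] = max(out[y, x], intensity)
    return out
```

    The key input is the registration transform: once it links the 3D volume to the live imaging geometry, any 3D structure (segmented vessels, placed landmarks) can be redrawn on each incoming fluoroscopic frame.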

    AI/ML Overview

    The provided text does not contain specific acceptance criteria for the device (VesselNavigator) or a detailed study proving it meets such criteria in terms of performance metrics (e.g., accuracy, sensitivity, specificity).

    The document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to a predicate device, rather than detailed performance studies with quantitative acceptance criteria typically found in, for example, AI/ML device submissions.

    However, based on the information provided, here's a breakdown of what can be extracted or inferred regarding performance and validation:


    1. A table of acceptance criteria and the reported device performance

    No explicit quantitative acceptance criteria or detailed performance metrics (accuracy, precision, etc.) are provided in this document. The "performance" described is largely functional and safety-related, aimed at demonstrating substantial equivalence.

    Acceptance Criteria (inferred from "Nonclinical Performance Data")       | Reported Device Performance
    -------------------------------------------------------------------------|-----------------------------
    Compliance with IEC 62304 (medical device software)                      | Complies
    Compliance with IEC 62366 (usability engineering)                        | Complies
    Compliance with ISO 14971 (risk management)                              | Complies
    Compliance with NEMA PS 3.1-3.20 (DICOM)                                 | Complies
    Compliance with FDA guidance on "Software Contained in Medical Devices"  | Complies
    Software verification of system-level requirements                       | Tests performed successfully
    Software verification of identified hazard mitigations                   | Tests performed successfully
    Vessel segmentation tool validation                                      | Tested and validated
    Intended use and commercial claims validation                            | Tested and validated
    Usability testing with representative intended users                     | Tested and validated
    No new questions of safety or effectiveness raised                       | No new questions raised
    As safe and effective as the predicate device                            | Demonstrated

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document primarily describes software verification and validation, and usability testing. It does not mention a "test set" in the context of a dataset of patient cases used to evaluate algorithmic performance (e.g., a test set for an AI model). Therefore, information on sample size and data provenance for such a test set is not available in this document.

    The validation included "usability testing with representative intended users," but the number of users or specific test cases is not provided.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not available. The document focuses on software engineering and usability validation, not on evaluating diagnostic or analytical performance against expert-established ground truth on a clinical dataset.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not available. There is no mention of a clinical "test set" requiring expert adjudication.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not performed and is not mentioned. The device is a "VesselNavigator" and is presented as an "Interventional Tool" providing "image guidance by superimposing live fluoroscopic images on a 3D volume." The submission indicates that "clinical studies to support substantial equivalence" were not required. The focus is on the device's functional equivalence and safety, not on improving human reader performance.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    A standalone (algorithm-only) performance evaluation, as typically understood for an AI/ML diagnostic device, is not described in this document. VesselNavigator is explicitly an "Interventional Tool" intended to "assist in catheter maneuvering and device placement" by superimposing images, implying a human-in-the-loop workflow. While a "vessel segmentation tool" is part of the functionality, its standalone segmentation performance (e.g., accuracy against ground truth) is not detailed. The summary notes "Software verification testing... as well as the identified hazard mitigations" and that "Software validation testing included testing of the vessel segmentation tool, the intended use and commercial claims, and usability testing," but it reports no specific performance metrics for the segmentation algorithm itself.
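    For context, when standalone segmentation accuracy is reported, it is commonly quantified with an overlap metric such as the Dice coefficient. The sketch below is purely illustrative of that kind of metric; no such figure appears in this 510(k) summary.

```python
# Illustrative only: the Dice overlap metric commonly used to score a binary
# segmentation against a reference (e.g., manual expert) segmentation.
import numpy as np

def dice_coefficient(prediction, reference):
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks of equal shape."""
    prediction = prediction.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    total = prediction.sum() + reference.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Example: two toy 2D masks that overlap partially.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred, ref))  # 2*2 / (3+3) = 0.666...
```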


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The document primarily discusses validation against software requirements, hazard mitigations, intended use, and commercial claims. For functional aspects like the "vessel segmentation tool," the "ground truth" for validation would likely involve comparing the software's output to an expected or manually derived segmentation, but the specifics (e.g., expert manual segmentation, established anatomical models) are not detailed. Clinical outcomes data or pathology as ground truth are not mentioned in relation to performance validation.


    8. The sample size for the training set

    This information is not available. The document does not describe the device as a machine learning or AI algorithm that undergoes "training" on a dataset in the modern sense. It's described as a "software product" with "functionality to segment the relevant vasculature," which could imply rule-based or traditional image processing algorithms rather than a trained neural network. Therefore, a "training set" in the context of machine learning is not applicable based on the information provided.
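    For illustration, a classical, non-learned segmentation pipeline of the kind alluded to above might combine a Hessian-based vesselness filter (such as the Frangi filter) with a simple threshold rule. The sketch below uses scikit-image's frangi filter; whether VesselNavigator uses anything resembling this is not stated in the document, and the threshold value is arbitrary.

```python
# Illustrative sketch of a traditional (non-learned) vessel segmentation
# pipeline: Hessian-based vesselness enhancement followed by thresholding.
# This is NOT the device's documented method, just an example of classical
# image processing of the kind the text alludes to.
import numpy as np
from skimage.filters import frangi

def segment_vessels(image, threshold=0.05):
    """Return a binary vessel mask from a 2D grayscale image."""
    # black_ridges=False enhances bright tubular structures on dark background
    vesselness = frangi(image, black_ridges=False)
    return vesselness > threshold  # arbitrary illustrative threshold

# Example on synthetic data: a bright line on a dark background.
img = np.zeros((64, 64))
img[30:33, 8:56] = 1.0  # a crude synthetic "vessel"
mask = segment_vessels(img)
print(mask.sum(), "pixels flagged as vessel")
```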


    9. How the ground truth for the training set was established

    As described in point 8, the concept of a "training set" as used for AI/ML models is not mentioned or applicable based on the information provided in this 510(k) summary.
