
510(k) Data Aggregation

    K Number
    K253614


    Date Cleared
    2026-03-17 (119 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Pediatric, Diagnostic, PCCP Authorized, Third-party, Expedited Review
    Intended Use

    EchoNavigator supports the interventionalist and surgeon in treatments where both live X-ray and live Echo guidance are used. The targeted patient population consists of patients with cardiovascular diseases requiring such a treatment.

    Device Description

    EchoNavigator is a Software Medical Device that assists the interventionalist and surgeon with image guidance during treatment of cardiovascular disease for which the procedure uses both live X-ray and live Echo guidance. EchoNavigator can be used with compatible Echo-probes and Echo units in combination with compatible Philips interventional X-ray systems.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for EchoNavigator R5.0 describe the device and its new features, particularly the AI-enabled M-TEER device detection and overlay (DeviceGuide functionality). While the summary states that no clinical study was required to support overall substantial equivalence to the predicate device, it details the verification and validation (V&V) study for the AI algorithm.

    Here's an analysis of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criterion | Reported Device Performance |
    | --- | --- |
    | Positional accuracy | $\leq$ 5 mm positional accuracy |
    | Trajectory/orientation accuracy | $\leq$ 10$^\circ$ trajectory/orientation accuracy |
    | Algorithm latency (detection and localization of the therapy device) | $\leq$ 100 ms of reception of the echo image |
    | Clinical review assessment (sufficiency of algorithm output) | 93.1% of evaluated cases assessed as sufficient |
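    Note that for the first three rows the reported performance restates the acceptance limit rather than an underlying measurement. Purely as an illustration of how such threshold gating could be expressed, here is a minimal sketch; the measured values and the `AcceptanceCheck` class are placeholders introduced for this example, not figures or code from the submission.

```python
# Minimal sketch of checking measured results against the acceptance limits in the table above.
# The "measured" values are placeholders for illustration, not data from the 510(k) submission.
from dataclasses import dataclass

@dataclass
class AcceptanceCheck:
    name: str
    measured: float   # worst-case or upper-bound measured value
    threshold: float  # acceptance limit from the table
    unit: str

    def passes(self) -> bool:
        return self.measured <= self.threshold

checks = [
    AcceptanceCheck("positional accuracy", measured=3.8, threshold=5.0, unit="mm"),
    AcceptanceCheck("trajectory/orientation accuracy", measured=7.2, threshold=10.0, unit="deg"),
    AcceptanceCheck("algorithm latency", measured=85.0, threshold=100.0, unit="ms"),
]

for c in checks:
    status = "PASS" if c.passes() else "FAIL"
    print(f"{c.name}: {c.measured} {c.unit} (limit {c.threshold} {c.unit}) -> {status}")
```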

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample size for primary testing: 72 unique patients
    • Sample size for out-of-domain testing: 41 patients
    • Total sample size for testing: 113 unique patients
    • Data Provenance: Data were sourced from a diverse mix of institutions (academic centers and community hospitals) across the U.S. and Europe. Clinical data were acquired using a range of compatible Philips X-ray and Philips ultrasound systems under routine procedural conditions. The validation dataset included at least 50% U.S. site data. The study was retrospective: data were acquired first and then used for algorithm testing ("testing was performed after algorithm development was completed and the algorithm was frozen"). A sketch of how these dataset constraints could be audited follows this list.
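    The following is that audit sketch; the field names (`patient_id`, `region`, `cohort`) and placeholder records are assumptions for illustration, not the actual dataset schema used in the submission.

```python
# Hypothetical audit of a test-set manifest against the dataset constraints described above.
# The field names and records are assumptions for illustration, not the real schema.
test_manifest = [
    {"patient_id": "P001", "region": "US", "cohort": "primary"},
    {"patient_id": "P002", "region": "EU", "cohort": "primary"},
    {"patient_id": "P003", "region": "US", "cohort": "out_of_domain"},
    # ... one entry per test patient (113 unique patients in the reported study)
]

unique_patients = {r["patient_id"] for r in test_manifest}
us_fraction = sum(r["region"] == "US" for r in test_manifest) / len(test_manifest)
primary = sum(r["cohort"] == "primary" for r in test_manifest)
out_of_domain = sum(r["cohort"] == "out_of_domain" for r in test_manifest)

assert us_fraction >= 0.5, "validation set should include at least 50% U.S. site data"
print(f"{len(unique_patients)} unique patients "
      f"({primary} primary, {out_of_domain} out-of-domain), {us_fraction:.0%} U.S. data")
```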

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number or specific qualifications of experts used to establish the ground truth for the test set regarding the positional and trajectory/orientation accuracy. It mentions that "algorithm outputs were compared to a reference standard ('ground truth') derived from pre-clinical high resolution imaging data." This suggests the ground truth for these quantitative metrics was established through high-resolution imaging rather than solely expert consensus on the clinical images.

    However, for the "sufficiency of algorithm output" in a clinical review, five physicians assessed the algorithm output. Their specific qualifications (e.g., years of experience, subspecialty) are not provided beyond being "physicians."

    4. Adjudication Method for the Test Set

    The document does not explicitly specify an adjudication method like 2+1 or 3+1 for the quantitative ground truth metrics (positional and trajectory/orientation accuracy). The ground truth for these was "derived from pre-clinical high resolution imaging data."

    For the clinical review in which five physicians assessed sufficiency, the document does not mention an adjudication method beyond stating that the five physicians assessed the algorithm output, achieving 93.1% sufficiency. It does not clarify whether there was a consensus process or whether individual assessments were averaged/aggregated.
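    To make concrete why the aggregation rule matters, the hypothetical sketch below computes a sufficiency rate under two plausible schemes (pooling all individual readings versus a per-case majority vote). The ratings are invented for illustration; the submission's 93.1% figure may have been derived differently.

```python
# Hypothetical aggregation of per-case "sufficient" ratings from five readers.
# ratings[case_id] holds five booleans; the values are made up for illustration.
ratings = {
    "case_01": [True, True, True, True, False],
    "case_02": [True, True, False, False, False],
    "case_03": [True, True, True, True, True],
}

# Scheme A: pool every individual reading (reader-level proportion of "sufficient").
all_reads = [r for case in ratings.values() for r in case]
pooled = sum(all_reads) / len(all_reads)

# Scheme B: majority vote per case, then proportion of cases judged sufficient.
majorities = [sum(case) > len(case) / 2 for case in ratings.values()]
per_case = sum(majorities) / len(majorities)

print(f"pooled reader-level sufficiency:      {pooled:.1%}")    # 11/15 = 73.3%
print(f"majority-vote case-level sufficiency: {per_case:.1%}")  # 2/3  = 66.7%
```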

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    No, an MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance versus without AI assistance was not conducted or reported. The study focused on the standalone performance of the AI algorithm for device detection and tracking.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, a standalone performance study was done for the AI algorithm. The reported positional accuracy ($\leq$ 5 mm), trajectory/orientation accuracy ($\leq$ 10$^\circ$), and algorithm latency ($\leq$ 100 ms) are all measures of the algorithm's standalone performance without human input during operation. The "clinical review" by five physicians assessed the output of the standalone algorithm for sufficiency.
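    For the latency element of that standalone performance (detection and localization within 100 ms of receiving the echo frame), a per-frame timing harness might look like the sketch below. `detect_therapy_device` is a placeholder introduced for illustration, not an actual EchoNavigator interface, and the timings it produces are not from the submission.

```python
# Hypothetical per-frame latency harness for a standalone detection algorithm.
import statistics
import time

def detect_therapy_device(echo_frame):
    """Placeholder for the detection/localization algorithm under test."""
    time.sleep(0.02)  # stand-in for real inference work
    return {"tip_mm": (1.0, 2.0, 3.0), "direction": (0.0, 0.0, 1.0)}

def within_latency_budget(frames, budget_ms=100.0):
    latencies_ms = []
    for frame in frames:
        start = time.perf_counter()            # treated as the time of frame reception
        detect_therapy_device(frame)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    worst = max(latencies_ms)
    print(f"median {statistics.median(latencies_ms):.1f} ms, "
          f"worst {worst:.1f} ms, budget {budget_ms:.0f} ms")
    return worst <= budget_ms

print(within_latency_budget(frames=[object()] * 50))
```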

    7. The Type of Ground Truth Used

    • For Positional Accuracy and Trajectory/Orientation Accuracy: A "reference standard ('ground truth') derived from pre-clinical high resolution imaging data" was used. This suggests a highly precise, objective measurement from specialized imaging.
    • For Clinical Review Assessment (sufficiency): Expert assessment by five physicians was used to determine the "sufficiency" of the algorithm's output.
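    As a worked illustration of the two quantitative comparisons against that reference standard, positional error is typically the Euclidean distance between predicted and reference device-tip positions, and orientation error the angle between predicted and reference direction vectors. The sketch below uses placeholder poses rather than data from the submission, and is only one reasonable way to define these metrics.

```python
# Sketch of positional and orientation error against a reference ("ground truth") device pose.
import math

def positional_error_mm(pred_tip, ref_tip):
    """Euclidean distance between predicted and reference tip positions (mm)."""
    return math.dist(pred_tip, ref_tip)

def orientation_error_deg(pred_dir, ref_dir):
    """Angle between predicted and reference direction vectors (degrees)."""
    dot = sum(p * r for p, r in zip(pred_dir, ref_dir))
    norms = math.hypot(*pred_dir) * math.hypot(*ref_dir)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Placeholder predicted vs. reference pose for a single frame.
print(f"{positional_error_mm((10.2, 4.1, 33.0), (11.0, 4.5, 32.1)):.2f} mm")  # ~1.27 mm
print(f"{orientation_error_deg((0.0, 0.1, 1.0), (0.0, 0.0, 1.0)):.2f} deg")   # ~5.71 deg
```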

    8. The Sample Size for the Training Set

    The document explicitly states: "Data used for development were not used for testing." However, it does not specify the sample size for the training set or development data.

    9. How the Ground Truth for the Training Set Was Established

    The document states: "To ensure independence between development and testing, testing was performed after algorithm development was completed and the algorithm was frozen. Data used for development were not used for testing." However, it does not describe how the ground truth for the training set was established.
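    The independence described here (a frozen algorithm and no shared data between development and testing) is commonly enforced with a patient-level split. The sketch below shows what such a split check could look like; the field names and records are assumptions for illustration, not the manufacturer's actual data-management process.

```python
# Sketch of a patient-level split check between development (training) and test data.
# Grouping by patient prevents frames from the same patient appearing on both sides.
from collections import defaultdict

def split_by_patient(records, test_patient_ids):
    """records: iterable of dicts with a 'patient_id' key (assumed schema)."""
    buckets = defaultdict(list)
    for rec in records:
        side = "test" if rec["patient_id"] in test_patient_ids else "development"
        buckets[side].append(rec)
    dev_ids = {r["patient_id"] for r in buckets["development"]}
    test_ids = {r["patient_id"] for r in buckets["test"]}
    assert dev_ids.isdisjoint(test_ids), "a patient appears in both development and test data"
    return buckets

records = [{"patient_id": f"P{i:03d}", "frame": i} for i in range(10)]
buckets = split_by_patient(records, test_patient_ids={"P007", "P008", "P009"})
print(len(buckets["development"]), "development records,", len(buckets["test"]), "test records")
```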
