510(k) Data Aggregation

    K Number: K252235
    Device Name: PVAD IQ Software
    Manufacturer:
    Date Cleared: 2025-12-18 (154 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: 18 - 120
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    The PVAD IQ software is intended for non-invasive analysis of ultrasound images to detect and measure structures from cardiac ultrasound of patients 18 years old and above with a Percutaneous Ventricular Assist Device (PVAD). It is typically used for clinical decision support by a qualified physician.

    Device Description

    PVAD IQ is a Software as a Medical Device (SaMD) solution designed to support clinicians in the positioning of Percutaneous Ventricular Assist Devices (PVADs) through ultrasound image-based assessment. A PVAD is a temporary device used to provide hemodynamic support to patients experiencing cardiogenic shock or undergoing high-risk percutaneous coronary intervention (PCI).

    The PVAD IQ software is machine learning model (MLM) based software that operates on ultrasound clips (the system input) and provides two outputs for PVAD patients:

    1. Landmark identification and measurement - detects the positions of the two landmarks, the aortic annulus and the PVAD inlet, and computes the mean distance between them.

    2. Acceptability classification - a binary classification of ultrasound clips as "acceptable" or "non-acceptable" based on the visibility of the two landmarks. A clip is defined as acceptable when both landmarks are simultaneously visible in a manner suitable for quantitative imaging.

    The User Interface (UI) enables the user to review or hide the mean distance measurement, annotate desired images, and add manual measurements, while keeping the raw data available for further review as needed.

    The software output is shown on the screen either as the mean distance measurement, or as a notification related to non-acceptable clips.
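
    As a reading aid only, the sketch below restates the two per-clip outputs and the on-screen behavior described above as a small data structure; ClipResult and render_result are hypothetical names for illustration, not the vendor's software interface.

    # Minimal sketch (not the vendor's API): one way to represent the two
    # per-clip outputs described above -- an acceptability flag plus the mean
    # aortic-annulus-to-PVAD-inlet distance for acceptable clips.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ClipResult:
        acceptable: bool                    # binary acceptability classification
        mean_distance_cm: Optional[float]   # mean landmark distance; None when non-acceptable

    def render_result(result: ClipResult) -> str:
        """Mirror the on-screen behavior: show the measurement for acceptable
        clips, or a notification for non-acceptable ones."""
        if not result.acceptable or result.mean_distance_cm is None:
            return "Non-acceptable clip: landmarks not simultaneously visible"
        return f"Mean aortic annulus to PVAD inlet distance: {result.mean_distance_cm:.2f} cm"

    print(render_result(ClipResult(acceptable=True, mean_distance_cm=4.1)))
    print(render_result(ClipResult(acceptable=False, mean_distance_cm=None)))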

    AI/ML Overview

    The PVAD IQ Software, a machine learning model (MLM) based software, provides two primary outputs for patients with Percutaneous Ventricular Assist Devices (PVADs): landmark identification and measurement (specifically, the distance between the aortic annulus and the PVAD inlet) and acceptability classification of ultrasound clips.

    1. Acceptance Criteria and Reported Device Performance

    The study established pre-specified acceptance criteria for the PVAD IQ software's performance, all of which were met.

    Acceptance Criteria                          | Threshold    | Reported Device Performance
    Distance Measurement (MAE)                   | Below 0.5 cm | 0.42 cm (95% CI: 0.38–0.47 cm)
    Acceptability Classification (Cohen's Kappa) | Above 0.6    | 0.71 (95% CI: 0.66–0.75)
    Landmark Detection (AUC) - PVAD Inlet        | Above 0.8    | 0.92 (95% CI: 0.90–0.94)
    Landmark Detection (AUC) - Aortic Annulus    | Above 0.8    | 0.98 (95% CI: 0.95–1.00)
    Landmark Position (MAE) - PVAD Inlet         | Below 0.5 cm | 0.44 cm (95% CI: 0.41–0.48 cm)
    Landmark Position (MAE) - Aortic Annulus     | Below 0.5 cm | 0.31 cm (95% CI: 0.30–0.33 cm)
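
    For orientation only, the sketch below shows how the three kinds of metrics in the table (MAE, Cohen's kappa, AUC) are conventionally computed, here with scikit-learn on made-up toy numbers, and compared against the pre-specified thresholds; it is not the sponsor's evaluation code and the arrays are not the submission's data.

    import numpy as np
    from sklearn.metrics import mean_absolute_error, cohen_kappa_score, roc_auc_score

    # Distance measurement: MAE between predicted and ground-truth distances (cm).
    true_dist = np.array([4.3, 3.6, 4.8, 4.5])
    pred_dist = np.array([4.1, 3.8, 5.0, 4.4])
    mae_cm = mean_absolute_error(true_dist, pred_dist)

    # Acceptability classification: Cohen's kappa between algorithm labels and
    # expert labels (1 = acceptable, 0 = non-acceptable).
    expert_labels = np.array([1, 0, 0, 1, 0, 1])
    algo_labels = np.array([1, 1, 0, 1, 0, 1])
    kappa = cohen_kappa_score(expert_labels, algo_labels)

    # Landmark detection: AUC of a per-clip detection score against labels
    # indicating whether the landmark is truly visible in the clip.
    present = np.array([1, 0, 1, 1, 0])
    detect_scores = np.array([0.95, 0.20, 0.80, 0.60, 0.10])
    auc = roc_auc_score(present, detect_scores)

    # Compare each metric against the pre-specified threshold from the table.
    print(f"MAE   = {mae_cm:.2f} cm (criterion: below 0.5 cm) pass={mae_cm < 0.5}")
    print(f"kappa = {kappa:.2f}    (criterion: above 0.6)     pass={kappa > 0.6}")
    print(f"AUC   = {auc:.2f}    (criterion: above 0.8)     pass={auc > 0.8}")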

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: 963 clips
    • Number of Patients: 186 patients
    • Data Provenance: Geographically distinct test datasets. Specific countries are not mentioned, but the ground truth annotations were provided by US (United States) board certified cardiac sonographers. The timing (retrospective or prospective) is not specified; however, the data were used to evaluate a previously trained model, which often implies retrospective application to a held-out test set.

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    • Number of Experts: Not explicitly stated as a specific number of individual experts. The document refers to "US (United States) board certified cardiac sonographers."
    • Qualifications of Experts: "US (United States) board certified cardiac sonographers experienced in PVAD/Impella® echocardiographic imaging."

    4. Adjudication Method for Test Set

    The adjudication method is not explicitly stated in the provided document. It only mentions that ground truth annotations were "provided by US (United States) board certified cardiac sonographers." It does not specify if multiple sonographers reviewed each case, how disagreements were resolved, or if a consensus mechanism (like 2+1 or 3+1) was used.

    5. MRMC Comparative Effectiveness Study

    An MRMC (Multi-Reader Multi-Case) comparative effectiveness study comparing AI assistance with unassisted human readers was not mentioned in the provided document. The study focused on the standalone performance of the PVAD IQ software.

    6. Standalone Performance Study

    Yes, a standalone study (algorithm-only performance, without a human in the loop) was conducted. The reported performance metrics (MAE, Cohen's Kappa, AUC) directly assess the algorithm's performance against the established ground truth.
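
    The table above reports 95% confidence intervals for these standalone metrics, but the document does not state how they were derived. A common generic choice is a nonparametric percentile bootstrap over the test clips, sketched below under that assumption with made-up numbers; bootstrap_ci is a hypothetical helper, not part of the submission.

    import numpy as np

    def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for an arbitrary metric(y_true, y_pred)."""
        rng = np.random.default_rng(seed)
        n = len(y_true)
        stats = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)   # resample clips with replacement
            stats.append(metric(y_true[idx], y_pred[idx]))
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        return metric(y_true, y_pred), (lo, hi)

    # Example: MAE of made-up distance measurements (cm).
    mae = lambda t, p: float(np.mean(np.abs(t - p)))
    true_d = np.array([4.3, 3.6, 4.8, 4.5, 5.1, 3.9])
    pred_d = np.array([4.1, 3.8, 5.0, 4.4, 4.9, 4.2])
    point, (lo, hi) = bootstrap_ci(true_d, pred_d, mae)
    print(f"MAE {point:.2f} cm (95% CI: {lo:.2f}-{hi:.2f} cm)")

    Because the 963 test clips come from 186 patients, resampling at the patient level rather than per clip would usually be preferred to respect within-patient correlation; the document does not say whether a bootstrap, or which resampling unit, was actually used.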

    7. Type of Ground Truth Used

    The ground truth used was expert consensus/annotations. Specifically, "Ground truth annotations for the distance between the aortic annulus and the PVAD inlet were provided by US (United States) board certified cardiac sonographers experienced in PVAD/Impella® echocardiographic imaging." This implies human experts manually defining the "correct" measurements and classifications.

    8. Sample Size for the Training Set

    The sample size for the training set is not provided in this document. The document states that the PVAD IQ software is "trained with clinical data" but does not specify the volume or characteristics of this training data.

    9. How Ground Truth for Training Set Was Established

    The method for establishing ground truth for the training set is not explicitly detailed in this document. It broadly states that the software uses "non-adaptive machine learning algorithms trained with clinical data" and "refining annotations" is part of model retraining (under PCCP). While it can be inferred that ground truth for training data would also involve expert annotations, similar to the test set, the specific process, number of experts, or their qualifications for the training data are not provided.
