
510(k) Data Aggregation

    K Number: K252235
    Device Name: PVAD IQ Software
    Manufacturer:
    Date Cleared: 2025-12-18 (154 days)
    Product Code:
    Regulation Number: 892.2050
    Age Range: 18 - 120
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    The PVAD IQ software is intended for non-invasive analysis of ultrasound images to detect and measure structures from cardiac ultrasound of patients 18 years old and above with a Percutaneous Ventricular Assist Device (PVAD). It is typically used for clinical decision support by a qualified physician.

    Device Description

    PVAD IQ is a Software as a Medical Device (SaMD) solution designed to support clinicians in positioning Percutaneous Ventricular Assist Devices (PVADs) through ultrasound image-based assessment. A percutaneous ventricular assist device is a temporary device that provides hemodynamic support for patients experiencing cardiogenic shock or undergoing high-risk percutaneous coronary interventions (PCI).

    The PVAD IQ software is machine learning model (MLM) based software that operates on ultrasound clips as its input and provides two outputs for PVAD patients:

    1. Landmark identification and measurement: detects the positions of the two landmarks (the aortic annulus and the PVAD inlet) and computes the mean distance between them (see the sketch after this list).

    2. Acceptability classification: a binary classification of ultrasound clips as "acceptable" or "non-acceptable" in terms of the visibility of the two landmarks. A clip is acceptable when both landmarks are simultaneously visible in a manner suitable for quantitative imaging.
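
    A minimal sketch of output 1, assuming a hypothetical detector that returns per-frame (x, y) landmark coordinates in centimeters; the clearance summary does not spell out the computation:

        import numpy as np

        def mean_landmark_distance(annulus_xy: np.ndarray, inlet_xy: np.ndarray) -> float:
            """Mean Euclidean distance between the aortic annulus and the PVAD inlet
            over all frames of a clip. Inputs are hypothetical (n_frames, 2) arrays
            of per-frame (x, y) positions in cm from a landmark-detection model."""
            per_frame = np.linalg.norm(annulus_xy - inlet_xy, axis=1)  # one distance per frame
            return float(per_frame.mean())  # mean over the clip

        # Example with synthetic coordinates for a 30-frame clip.
        rng = np.random.default_rng(0)
        annulus = rng.normal([4.0, 5.0], 0.1, (30, 2))
        inlet = rng.normal([6.5, 8.0], 0.1, (30, 2))
        print(f"{mean_landmark_distance(annulus, inlet):.2f} cm")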

    The User Interface (UI) enables the user to review or hide the mean distance measurement, annotate desired images, and add manual measurements, while keeping the raw data for further review as needed.

    The software output is shown on the screen either as the mean distance measurement, or as a notification related to non-acceptable clips.

    AI/ML Overview

    The PVAD IQ Software, a machine learning model (MLM) based software, provides two primary outputs for patients with Percutaneous Ventricular Assist Devices (PVADs): landmark identification and measurement (specifically, the distance between the aortic annulus and the PVAD inlet) and acceptability classification of ultrasound clips.

    1. Acceptance Criteria and Reported Device Performance

    The study evaluated the PVAD IQ software against pre-specified acceptance criteria, all of which were met.

    Acceptance Criteria                          | Threshold    | Reported Device Performance
    Distance Measurement (MAE)                   | Below 0.5 cm | 0.42 cm (95% CI: 0.38–0.47 cm)
    Acceptability Classification (Cohen's Kappa) | Above 0.6    | 0.71 (95% CI: 0.66–0.75)
    Landmark Detection (AUC) - PVAD Inlet        | Above 0.8    | 0.92 (95% CI: 0.90–0.94)
    Landmark Detection (AUC) - Aortic Annulus    | Above 0.8    | 0.98 (95% CI: 0.95–1.00)
    Landmark Position (MAE) - PVAD Inlet         | Below 0.5 cm | 0.44 cm (95% CI: 0.41–0.48 cm)
    Landmark Position (MAE) - Aortic Annulus     | Below 0.5 cm | 0.31 cm (95% CI: 0.30–0.33 cm)
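
    The letter reports only the summary values above. A minimal sketch of how such pre-specified thresholds are typically checked with scikit-learn, on synthetic stand-in data (every array and value below is hypothetical, not the submission's data):

        import numpy as np
        from sklearn.metrics import cohen_kappa_score, mean_absolute_error, roc_auc_score

        rng = np.random.default_rng(0)
        n = 963  # test-set clip count from the summary; all values below are synthetic

        dist_true = rng.uniform(2.0, 6.0, n)                 # ground-truth distances (cm)
        dist_pred = dist_true + rng.normal(0, 0.4, n)        # model predictions (cm)
        accept_true = rng.integers(0, 2, n)                  # expert acceptability labels
        accept_pred = np.where(rng.random(n) < 0.85, accept_true, 1 - accept_true)
        landmark_true = rng.integers(0, 2, n)                # landmark visible yes/no
        landmark_score = np.clip(landmark_true + rng.normal(0, 0.4, n), 0, 1)

        mae = mean_absolute_error(dist_true, dist_pred)      # criterion: below 0.5 cm
        kappa = cohen_kappa_score(accept_true, accept_pred)  # criterion: above 0.6
        auc = roc_auc_score(landmark_true, landmark_score)   # criterion: above 0.8
        print(f"MAE={mae:.2f} cm  kappa={kappa:.2f}  AUC={auc:.2f}")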

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: 963 clips
    • Number of Patients: 186 patients
    • Data Provenance: The test datasets were geographically distinct from the development data. Specific countries are not named, but the ground truth annotations were provided by US (United States) board certified cardiac sonographers. Whether collection was retrospective or prospective is not specified; evaluating a previously trained model against a held-out test set typically implies retrospective use.

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    • Number of Experts: The number of individual experts is not explicitly stated; the document refers only to "US (United States) board certified cardiac sonographers."
    • Qualifications of Experts: "US (United States) board certified cardiac sonographers experienced in PVAD/Impella® echocardiographic imaging."

    4. Adjudication Method for Test Set

    The adjudication method is not explicitly stated in the provided document. It only mentions that ground truth annotations were "provided by US (United States) board certified cardiac sonographers." It does not specify if multiple sonographers reviewed each case, how disagreements were resolved, or if a consensus mechanism (like 2+1 or 3+1) was used.

    5. MRMC Comparative Effectiveness Study

    An MRMC (Multi-Reader Multi-Case) comparative effectiveness study comparing AI assistance with unassisted human readers was not mentioned in the provided document. The study focused on the standalone performance of the PVAD IQ software.

    6. Standalone Performance Study

    Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The reported performance metrics (MAE, Cohen's Kappa, AUC) directly assess the algorithm's performance against the established ground truth.
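
    The letter does not state how the 95% confidence intervals reported above were derived. A common choice for metrics like MAE is a nonparametric percentile bootstrap over test cases, sketched below on synthetic data (an assumption, not the submission's documented method):

        import numpy as np

        def bootstrap_ci(metric, y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
            """Percentile bootstrap CI: resample cases with replacement and take
            the empirical (alpha/2, 1 - alpha/2) quantiles of the metric."""
            rng = np.random.default_rng(seed)
            n = len(y_true)
            stats = [metric(y_true[idx], y_pred[idx])
                     for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
            return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

        # Example: 95% CI for a distance MAE on synthetic stand-in data.
        rng = np.random.default_rng(1)
        dist_true = rng.uniform(2.0, 6.0, 963)
        dist_pred = dist_true + rng.normal(0, 0.4, 963)
        mae = lambda a, b: float(np.mean(np.abs(a - b)))
        print(bootstrap_ci(mae, dist_true, dist_pred))

    In practice one might resample at the patient level (186 patients) rather than the clip level, to respect within-patient correlation between clips.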

    7. Type of Ground Truth Used

    The ground truth used was expert consensus/annotations. Specifically, "Ground truth annotations for the distance between the aortic annulus and the PVAD inlet were provided by US (United States) board certified cardiac sonographers experienced in PVAD/Impella® echocardiographic imaging." This implies human experts manually defining the "correct" measurements and classifications.

    8. Sample Size for the Training Set

    The sample size for the training set is not provided in this document. The document states that the PVAD IQ software is "trained with clinical data" but does not specify the volume or characteristics of this training data.

    9. How Ground Truth for Training Set Was Established

    The method for establishing ground truth for the training set is not explicitly detailed in this document. It broadly states that the software uses "non-adaptive machine learning algorithms trained with clinical data" and "refining annotations" is part of model retraining (under PCCP). While it can be inferred that ground truth for training data would also involve expert annotations, similar to the test set, the specific process, number of experts, or their qualifications for the training data are not provided.


    K Number: K251416
    Device Name: UltraSight Guidance
    Manufacturer:
    Date Cleared: 2025-12-17 (224 days)
    Product Code:
    Regulation Number: 892.2100
    Age Range: 18 - 91
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    The UltraSight Guidance is intended to assist medical professionals (not including expert sonographers) in acquiring cardiac ultrasound images. UltraSight Guidance is an accessory to compatible general-purpose diagnostic ultrasound systems. UltraSight Guidance is indicated for use in two-dimensional transthoracic echocardiography (2D-TTE) for adult patients, specifically in the acquisition of the following standard views: Parasternal Long-Axis (PLAX), Parasternal Short-Axis at the Aortic Valve (PSAX-AV), Parasternal Short-Axis at the Mitral Valve (PSAX-MV), Parasternal Short Axis at the Papillary Muscle (PSAX-PM), Apical 4-Chamber (AP4), Apical 5-Chamber (AP5), Apical 2-Chamber (AP2), Apical 3-Chamber (AP3), Subcostal 4-Chamber (SubC4), and Subcostal Inferior Vena Cava (SC-IVC).

    Device Description

    UltraSight Guidance is a machine-learning-based software application that provides dynamic real-time guidance on the position and/or orientation of the transducer, helping non-expert users acquire diagnostic-quality tomographic views of the heart. The system provides guidance for ten standard cardiac views.

    Main features:
    • Quality Bar: The system displays an image quality bar that is continuously updated while the user scans the subject and attempts to find the maximal quality. The quality bar is a score for image diagnosability: it represents the classification between high- and low-quality images, where high-quality images are defined as grade 3 or more per American College of Emergency Physicians (ACEP) guidelines (Rachel B. Liu et al., "Emergency Ultrasound Standard Reporting Guidelines", 2018, American College of Emergency Physicians).
    • Probe Guidance: The probe guidance feature provides graphic on-screen instructions that emulate how a sonographer would manipulate the transducer to acquire the target cardiac view. The five possible guidance cues are rotation, tilt, rock, and slides in the lateral-medial and up/down directions (with respect to the subject's head).

    The guidance user interface (UI) is composed of a 3D probe display that shows orientation guidance cues (rotations, tilts, and rocks) and a cross that shows slide guidance cues (slides in the x and y directions). Users infer the maneuver to perform from the 3D probe display and the slide cross. Supporting text messages about the guidance cues appear on the screen.

    This device is a modification of a previously marketed device; the main change is the addition of a pre-processing step applied before images are fed as input to the device. The modification is intended to enable future compatibility with potentially additional ultrasound probes that meet the pre-requisites of a predetermined change control plan (PCCP).
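
    The letter does not describe the pre-processing step itself. Purely as an illustration of the kind of probe-harmonization step such a modification could introduce, the sketch below resamples each incoming frame to a fixed geometry and normalizes its intensities before model input (entirely hypothetical, not UltraSight's documented method):

        import numpy as np

        def preprocess_frame(frame: np.ndarray, out_shape=(224, 224)) -> np.ndarray:
            """Hypothetical probe-agnostic pre-processing: nearest-neighbor resample
            a 2D ultrasound frame to a fixed size, then min-max normalize to [0, 1]
            to reduce probe-specific geometry and gain differences."""
            h, w = frame.shape
            rows = np.arange(out_shape[0]) * h // out_shape[0]
            cols = np.arange(out_shape[1]) * w // out_shape[1]
            resized = frame[np.ix_(rows, cols)].astype(np.float32)
            lo, hi = resized.min(), resized.max()
            return (resized - lo) / (hi - lo) if hi > lo else np.zeros_like(resized)

        # Example: frames from two probes with different native resolutions.
        print(preprocess_frame(np.random.rand(480, 640)).shape)  # (224, 224)
        print(preprocess_frame(np.random.rand(600, 800)).shape)  # (224, 224)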

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the UltraSight Guidance device, based on the provided FDA 510(k) clearance letter:

    UltraSight Guidance: Acceptance Criteria and Study Details

    The UltraSight Guidance device focuses on two main features: the "Quality Bar" and "Probe Guidance." The performance testing for each is described below.

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature        | Acceptance Criteria                        | Reported Device Performance
    Quality Bar    | Mean Area Under the ROC Curve (AUC) > 0.8  | Mean AUC was within the pre-defined acceptance criterion of AUC > 0.8, with a 95% CI showing good classification performance relative to the success criteria. (p. 10)
    Quality Bar    | Mean False Positive Rate (FPR) < 0.2       | Mean FPR met the acceptance criterion of FPR < 0.2, with a 95% CI showing good classification performance. (p. 10)
    Probe Guidance | Mean AUC > 0.8 (for each guidance cue)     | Mean AUC was within the acceptance criterion of AUC > 0.8, with a 95% CI showing good classification performance. (p. 13) (Stated for each guidance cue, implying each met the criterion, though performance is not broken down per cue in the summary.)
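
    As with the first device, only summary statistics are reported. A minimal sketch of the two Quality Bar criteria, binarizing expert ACEP grades at grade 3 or more ("diagnosable") as described above; all data are synthetic, and where the letter reports means (e.g. across views) this computes single pooled values for illustration:

        import numpy as np
        from sklearn.metrics import confusion_matrix, roc_auc_score

        rng = np.random.default_rng(2)
        n = 500  # synthetic clip count

        acep_grade = rng.integers(1, 6, n)                 # expert ACEP grades 1-5
        y_true = (acep_grade >= 3).astype(int)             # diagnosable = grade >= 3
        score = np.clip(0.5 * y_true + rng.normal(0.25, 0.15, n), 0, 1)  # quality-bar score

        auc = roc_auc_score(y_true, score)                 # criterion: AUC > 0.8
        tn, fp, fn, tp = confusion_matrix(y_true, score >= 0.5).ravel()
        fpr = fp / (fp + tn)                               # criterion: FPR < 0.2
        print(f"AUC={auc:.2f}  FPR={fpr:.2f}")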

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Quality Bar: 134 patients, comprising 26,362 ultrasound clips.
      • Probe Guidance: 134 patients, comprising 2.4 million ultrasound frames.
    • Data Provenance: The test dataset was collected from clinical sites geographically distinct from those used for the development dataset.
      • Countries of Origin: 111 patients from the US and 23 patients from Israel.
      • Retrospective/Prospective: Not explicitly stated. The description of data "collected from clinical sites" suggests retrospective use of existing data or data collected specifically for the study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Quality Bar: Two expert cardiologists initially annotated the clips for diagnosability.
    • Probe Guidance: Ground truth was established by expert sonographers and/or cardiologists.
    • Qualifications of Experts: The document explicitly states "expert cardiologists" and "expert sonographers and/or cardiologists," implying recognized professionals in their respective fields, but does not provide specific details on years of experience or board certifications.

    4. Adjudication Method for the Test Set

    • Quality Bar: 2+1 adjudication. Two expert cardiologists provided the initial annotations; in case of disagreement, a third cardiologist provided an additional annotation, and the final label was determined by majority vote among the three experts (p. 9-10; see the sketch after this list).
    • Probe Guidance: Not explicitly detailed. The statement "Ground truth was established by expert sonographers and/or cardiologists" (p. 13) does not specify an adjudication method for potential disagreements among multiple experts, or if a single expert reviewed each case.
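
    The 2+1 scheme described for the Quality Bar can be made concrete with a short sketch (labels are hypothetical 0/1 diagnosability values):

        def adjudicate_2plus1(reader1, reader2, reader3=None):
            """2+1 adjudication: two primary readers label each clip; a third is
            consulted only on disagreement, and the majority label is final."""
            if reader1 == reader2:
                return reader1  # agreement: no third read needed
            if reader3 is None:
                raise ValueError("disagreement requires a third reader")
            votes = [reader1, reader2, reader3]
            return max(set(votes), key=votes.count)  # majority of the three labels

        print(adjudicate_2plus1(1, 1))     # -> 1, both primary readers agree
        print(adjudicate_2plus1(1, 0, 1))  # -> 1, third reader breaks the tie

    With binary labels, the third reader effectively casts the deciding vote whenever the first two disagree.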

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • No MRMC comparative effectiveness study was specifically described for the initial clearance in the provided text. The testing focuses on the standalone performance of the algorithm (AI model) for the Quality Bar and Probe Guidance features.
    • It does mention a "Clinical utility test" as a potential future testing method as part of the PCCP for "Expanded list of cardiac views": "Clinical utility test will evaluate the ability of non-expert users to obtain diagnostic-quality cardiac images using UltraSight software for the newly added cardiac views." (p. 12). This might involve comparing performance with and without the device, but it is for future modifications, not the current clearance, and details are scarce.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done

    • Yes, standalone performance (algorithm only) was done for both the Quality Bar and Probe Guidance features. The studies describe the classification performance of the algorithm against expert-established ground truth without human users interacting with the device for the testing itself. For example, for the Quality Bar, the device's classification of "diagnosable/non-diagnosable" was compared to expert labels. For Probe Guidance, the algorithm's prediction of guidance cues was compared to expert-defined optimal probe positions.

    7. The Type of Ground Truth Used

    • Quality Bar: Expert Consensus (diagnosability label). The ground truth ("diagnosable / non-diagnosable") was established by two expert cardiologists, with a third cardiologist resolving disagreements through majority vote. (p. 9-10)
    • Probe Guidance: Expert-established optimal probe position/orientation. Ground truth was established by "expert sonographers and/or cardiologists using the recorded probe position during ultrasound acquisition." This defined the "required rotation, tilt, rock and slides adjustments" against which the algorithm's guidance was compared. (p. 13)

    8. The Sample Size for the Training Set

    • The document states that the test dataset was "collected from clinical sites geographically distinct from those used for the development dataset" (p. 9). However, the specific sample size of the training (development) set is not provided in the clearance letter summary.

    9. How the Ground Truth for the Training Set Was Established

    • Similar to the training set sample size, the method for establishing ground truth for the training (development) set is not explicitly detailed in the provided text. It is generally assumed that similar expert annotation processes would have been followed for the training data, but the document does not specify this.
