
510(k) Data Aggregation

    K Number
    K210791
    Device Name
    Us2.v1
    Date Cleared
    2021-07-27

    (133 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Us2.v1 is a fully automated software platform that processes, analyses and makes measurements on acquired transthoracic cardiac ultrasound images, automatically producing a full report with measurements of several structural and functional parameters. The data produced by this software is intended to be used to support qualified cardiologists or licensed primary care providers for clinical decision-making. Us2.v1 is intended for use in adult patients. Us2.v1 has not been validated for the assessment of congenital heart disease, pericardial disease, and/or intra-cardiac lesions (e.g. tumours, thrombi).

    Device Description

    Us2.v1 is an image post-processing analysis software device used for viewing and quantifying cardiovascular ultrasound images in DICOM format. The device is intended to aid diagnostic review and analysis of echocardiographic data, patient record management and reporting.

    The software provides an interface for a skilled sonographer to perform the necessary markup on the echocardiographic image prior to review by the prescribing physician. The markup includes: the cardiac segments captured, measurements of distance, time, area and blood flow, quantitative analysis of cardiac function, and a summary report.

    The software allows the sonographer to enter their markup manually. It also provides automated markup and analysis, which the sonographer may choose to accept outright, to accept partially and modify, or to reject and ignore. Machine learning based view classification and border detection form the basis for this automated analysis. Additionally, the software has features for organizing, displaying, and comparing the quantitative data from ultrasound-acquired cardiovascular images against reference guidelines.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The documents state a single, overarching acceptance criterion:

    Acceptance Criterion
    Non-inferiority margin (Δ = 0.25) for the reference-scaled individual equivalence coefficient (IEC), such that IEC + 1.96 × SD(IEC) < Δ.

    Reported Device Performance
    "Compared to reference standard echocardiographic human measurements made in triplicate by the independent Cardiovascular Imaging Core Laboratory at Brigham and Women's Hospital, the 95% confidence intervals of automated Us2.v1 measurements all fell below the pre-specified noninferiority margin of 0.25 for IEC." This indicates the device successfully met the non-inferiority criterion for all claimed measurements, demonstrating its interchangeability with human reference measurements.
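The filing states the decision rule (IEC + 1.96 × SD(IEC) < Δ) but not the IEC formula itself. A minimal sketch of how such a check might run is below, under two stated assumptions: the IEC is taken as the device-vs-reader mean squared disagreement in excess of the readers' own pairwise disagreement, scaled by the latter, and SD(IEC) is estimated by bootstrap. The data, noise levels, and number of bootstrap draws are all synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def iec(device, readers):
    # Device-vs-reader squared disagreement, averaged over readers...
    n_readers = readers.shape[1]
    d_dev = np.mean([(device - readers[:, j]) ** 2 for j in range(n_readers)])
    # ...compared against the readers' average pairwise squared disagreement.
    pairs = [(i, j) for i in range(n_readers) for j in range(i + 1, n_readers)]
    d_ref = np.mean([(readers[:, i] - readers[:, j]) ** 2 for i, j in pairs])
    return (d_dev - d_ref) / d_ref   # < 0 means device beats inter-reader spread

# Synthetic stand-in for the test set: 600 studies, triplicate human reads
n = 600
truth = rng.normal(60.0, 8.0, n)                       # e.g. LVEF in %
readers = truth[:, None] + rng.normal(0.0, 2.5, (n, 3))
device = truth + rng.normal(0.0, 2.0, n)

# Bootstrap SD of the IEC, then apply the pre-specified margin Δ = 0.25
point = iec(device, readers)
boot = np.array([iec(device[idx], readers[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(500))])
passes = point + 1.96 * boot.std() < 0.25
```

Under this formulation, an IEC near or below zero means the automated measurements disagree with the human readers no more than the readers disagree with each other, which is the sense in which "interchangeability" is claimed.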

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 600 unique echocardiographic studies.
      • 421 samples from patients with heart failure with reduced ejection fraction (HFrEF subjects).
      • 179 samples from normal subjects.
    • Data Provenance: Retrospective and non-interventional. The studies were "previously-acquired echocardiograms" and "selected from sets of previously annotated and overread studies." The specific country of origin is not explicitly stated, but the reference to "Brigham and Women's Hospital" suggests a US origin.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three human readers.
    • Qualifications of Experts: The ground truth was established by "the independent Cardiovascular Imaging Core Laboratory at Brigham and Women's Hospital." While specific individual qualifications are not detailed (e.g., "radiologist with 10 years of experience"), the reference to a "Core Laboratory" implies a team of qualified and experienced professionals in cardiovascular imaging, specifically echocardiography.

    4. Adjudication Method for the Test Set

    The text states that human measurements were "made in triplicate." This implies a form of consensus was likely used from the three readers, though the specific adjudication method (e.g., simple majority, averaging, or if a disagreement resolution process was in place) is not explicitly described beyond "made in triplicate." Given the calculation of an "individual equivalence coefficient (IEC) across three human readers," it suggests that each expert's measurement was compared against the device, and then a combined metric was computed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No, an MRMC comparative effectiveness study that assesses human reader improvement with AI assistance was not explicitly described in this document. The study’s primary goal was to demonstrate the non-inferiority of the AI device's measurements compared to human reference standard measurements, not to show human improvement with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation was conducted. The study evaluated "automated Us2.v1 measurements" and compared them against "reference standard echocardiographic human measurements." The evaluation focused on the device's ability to produce measurements independently, without a human in the loop, to determine its equivalence to human experts.

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus / reference standard echocardiographic measurements. These were "human measurements made in triplicate by the independent Cardiovascular Imaging Core Laboratory at Brigham and Women's Hospital."

    8. The Sample Size for the Training Set

    The document explicitly states: "Test datasets were strictly segregated from algorithm training datasets." However, the sample size for the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    The document mentions that the test set was "selected from sets of previously annotated and overread studies with robust inclusion criteria." While this hints at how the ground truth for pre-existing data might have been established, it does not explicitly detail how the ground truth for the training set was established. It only guarantees segregation of test and training data.
