K Number
K233676
Device Name
Us2.v2
Date Cleared
2024-04-01 (137 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Us2.v2 software is used to process acquired transthoracic cardiac ultrasound images, and to analyze and make measurements on those images in order to provide automated estimation of several cardiac structural parameters, including left and right atrial and ventricular linear dimensions, volumes, and systolic function, measured by B-mode, M-mode, and Doppler (PW, CW, tissue) modalities. The data produced by this software is intended to support qualified cardiologists, sonographers, or other licensed professional healthcare practitioners in clinical decision-making. Us2.v2 is indicated for use in adult patients.

Device Description

Us2.v2 is an image post-processing analysis software device used for viewing and quantifying cardiovascular ultrasound images in DICOM format. The device is intended to aid diagnostic review and analysis of echocardiographic data, patient record management, and reporting. The primary intended function of Us2.v2 is to automatically provide clinically relevant and reproducible quantitative echocardiographic measurements while reducing echocardiographic analysis time. In doing so, the primary benefit of Us2.v2 is to improve clinical echocardiographic workflow, enabling clinicians to generate and edit reports faster, with precision and with full control.

Because Us2.v2 measurements cover the minimum echocardiographic dataset for a standard adult echocardiogram (per European Society of Cardiovascular Imaging, British Society of Echocardiography, and American Society of Echocardiography guidelines), our software is applicable to the vast majority of adult transthoracic echocardiograms. Our current software aims to automate measurements of cardiac dimensions and left ventricular function, and these are applicable regardless of normal or disease states. We specifically indicate that our current product will not report measurements associated with intra-cardiac lesions (e.g. tumours, thrombi), nor complex adult congenital heart disease.

The software provides automated markup and analysis to generate a full report, on which a qualified sonographer or reviewing physician can perform edits or revise the markup on the echocardiographic image measurements during their approval process. The markup includes: the cardiac segments captured; measurements of distance, time, area, and blood flow; quantitative analysis of cardiac function; and a summary report. The software allows the sonographer to enter their markup manually. It also provides automated markup and analysis, which the sonographer may choose to accept outright, to accept partially and modify, or to reject and ignore.
Machine learning based view classification and border detection form the basis for this automated analysis. Additionally, the software has features for organizing, displaying and comparing to reference guidelines the quantitative data from cardiovascular images acquired from ultrasound scanners.
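The quantitative outputs described above are ultimately standard guideline formulas applied to the detected borders and traced volumes. As a minimal illustrative sketch (not the vendor's implementation), left ventricular ejection fraction from end-diastolic and end-systolic volumes:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV) and
    end-systolic (ESV) volumes: EF = (EDV - ESV) / EDV * 100."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Example: EDV 120 mL, ESV 48 mL gives an EF of about 60%
print(ejection_fraction(120.0, 48.0))
```

In a device like Us2.v2, the volumes themselves come from the ML-driven border detection; the formula layer on top is deterministic and guideline-defined.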

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria for Left Ventricular Strain are based on Root Mean Square Error (RMSE). For all other measurements, the Intraclass Correlation Coefficient (ICC) is used; the table reports both the lower bound of the 95% confidence interval for the ICC and the ICC itself.

Left Ventricular Strain (acceptance criterion: RMSE "against reference values generated using the comparator device"; no numerical threshold explicitly stated)

| Measurement | Performance Metric | Reported Device Performance |
|---|---|---|
| Global Longitudinal Strain | RMSE | 2.6 - 4.12 |
| Regional Longitudinal Strain | RMSE | 4.84 - 9.54 |

Other Us2.v2 Measurements (acceptance criterion: ICC; no numerical threshold explicitly stated)

| Measurement | ICC lower 95% CI | ICC |
|---|---|---|
| LVOT Diameter (mm) | 0.77 | 0.78 |
| RV a' (cm/s) | 0.84 | 0.85 |
| RV e' (cm/s) | 0.85 | 0.86 |
| RV s' (cm/s) | 0.89 | 0.90 |
| TAPSE (mm) | 0.72 | 0.74 |
| AoV Pmax (mmHg) | 0.95 | 0.96 |
| AoV Pmean (mmHg) | 0.97 | 0.98 |
| AoV Vmax (m/s) | 0.98 | 0.98 |
| AoV VTI (cm) | 0.96 | 0.97 |
| AVA (cm^2) | 0.78 | 0.82 |
| LVOT Pmax (mmHg) | 0.88 | 0.90 |
| LVOT Pmean (mmHg) | 0.90 | 0.91 |
| LVOT Vmax (m/s) | 0.91 | 0.92 |
| LVOT VTI (cm) | 0.89 | 0.91 |
| VR | 0.93 | 0.94 |
| Sinotub Junction (mm) | 0.74 | 0.78 |
| Sinus Valsalva (mm) | 0.78 | 0.82 |

Note: The document states that "Acceptance criteria were based on Root Mean Square Error against reference values generated using the comparator device" for Left Ventricular Strain, and for other measurements, ICC is used. However, specific numerical thresholds for these criteria are not explicitly stated in the provided text. The tables only show the reported performance values.
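The document names the two metrics but does not give their formulas. As a reference sketch using their standard statistical definitions (not taken from the submission), RMSE and ICC(2,1) (two-way random effects, absolute agreement, single measurement) for a device-vs-manual comparison can be computed as:

```python
import numpy as np

def rmse(device: np.ndarray, reference: np.ndarray) -> float:
    """Root Mean Square Error between device and reference measurements."""
    return float(np.sqrt(np.mean((device - reference) ** 2)))

def icc_2_1(y: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `y` is an (n subjects x k raters) matrix, e.g. columns = (device, manual)."""
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater mean square
    resid = y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual mean square
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))
```

Perfect agreement yields an RMSE of 0 and an ICC of 1; the reported ICCs of 0.72-0.98 indicate substantial to near-perfect agreement with the manual reference, though without stated thresholds the pass/fail criteria cannot be reconstructed from this text.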

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Sizes:
    • Dataset 1: n = 3029
    • Dataset 2: n = 260
    • Dataset 3: n = 192
  • Data Provenance: The document states "US-based cohorts used in Us2.v2 testing," and that "Test datasets are strictly segregated from algorithm training datasets, as they are from completely separate cohorts." The study is described as a "bench study to validate its performance in real-world conditions" using "the same patient data and the same images" as the manual analysis. The document does not explicitly state whether the data were prospective or retrospective, but comparison against existing manual analyses of the same images strongly suggests retrospective use of clinical data.
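The claim that test and training data come from "completely separate cohorts" corresponds to a group-level (cohort-level) split rather than a per-image or per-study split. A minimal sketch of that idea, with hypothetical study IDs and cohort names:

```python
from typing import Dict, List, Set, Tuple

def split_by_cohort(
    studies: Dict[str, str],       # study_id -> cohort name (hypothetical data)
    test_cohorts: Set[str],
) -> Tuple[List[str], List[str]]:
    """Send every study from a designated test cohort to the test set and all
    others to training, so no cohort contributes to both sets."""
    train = [s for s, c in studies.items() if c not in test_cohorts]
    test = [s for s, c in studies.items() if c in test_cohorts]
    return train, test

studies = {"s1": "cohortA", "s2": "cohortA", "s3": "cohortB", "s4": "cohortC"}
train_ids, test_ids = split_by_cohort(studies, {"cohortB"})
# Cohorts never straddle the split:
assert {studies[s] for s in train_ids}.isdisjoint({studies[s] for s in test_ids})
```

Cohort-level segregation is stricter than random shuffling: it prevents leakage from correlated images of the same site or population appearing on both sides of the split.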

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

The document states that the performance of Us2.v2 measurements was compared "against manual analysis (of the same patient data and the same images) generated by trained echocardiography technicians or cardiologists, both in 'gold standard' reference echo core labs and 'real world' clinical settings."
It does not specify the exact number of experts (technicians or cardiologists) used, nor their specific qualifications (e.g., years of experience).

4. Adjudication Method for the Test Set

The document does not describe a specific adjudication method (e.g., 2+1, 3+1). It states that the "manual analysis...generated by trained echocardiography technicians or cardiologists" was the reference. It doesn't mention how discrepancies among multiple human readers (if any were used per case) were resolved.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described. The study compared the device's automated measurements against a "manual analysis" reference, which was performed by "trained echocardiography technicians or cardiologists." There is no mention of human readers improving with AI vs. without AI assistance. The study focuses on the performance of the algorithm compared to human-generated measurements.

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

Yes, the described study appears to be a standalone performance evaluation. The device's automated analysis is compared directly against manual measurements, demonstrating the algorithm's performance without explicitly including a human-in-the-loop workflow. The description "The automated analysis generated by Us2.v2 will be compared head-to-head against manual analysis" supports this.

7. The Type of Ground Truth Used

The ground truth used was expert consensus / manual analysis. Specifically, it was established by "trained echocardiography technicians or cardiologists, both in 'gold standard' reference echo core labs and 'real world' clinical settings."

8. The Sample Size for the Training Set

The sample size for the training set is not provided in the given text. The document only states that "Test datasets are strictly segregated from algorithm training datasets, as they are from completely separate cohorts."

9. How the Ground Truth for the Training Set Was Established

The document mentions that "Machine learning based view classification and border detection form the basis for this automated analysis" and that the test datasets are "strictly segregated from algorithm training datasets." However, it does not describe how the ground truth for the training set was established.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).