K Number: DEN230003
Device Name: Viz HCM
Manufacturer: (not listed)
Date Cleared: 2023-08-03 (205 days)
Product Code: (not listed)
Regulation Number: 870.2380
Type: Direct
Panel: CV (Cardiovascular)
Reference & Predicate Devices: (not listed)

Intended Use

Viz HCM is intended to be used in parallel to the standard of care to analyze recordings of 12-lead ECG made on compatible ECG devices. Viz HCM is capable of analyzing the ECG, detecting signs associated with hypertrophic cardiomyopathy (HCM), and allowing the user to view the ECG and analysis results. Viz HCM is indicated for use on 12-lead ECG recordings collected from patients 18 years of age or older. Viz HCM is not intended for use on patients with implanted pacemakers. Viz HCM is limited to analysis of ECG data and should not be used in lieu of full patient evaluation or relied upon to make or confirm a diagnosis. Viz HCM identifies patients for further HCM follow-up and does not replace the current standard of care methods for diagnosis of HCM. The results of the device are not intended to rule out HCM follow-up.

Device Description

The Viz HCM ECG Analysis Algorithm (HCM Algorithm) is a machine learning-based software algorithm that analyzes 12-lead electrocardiograms (ECGs) for characteristics suggestive of hypertrophic cardiomyopathy (HCM). The mobile software module enables the end user to receive and toggle notifications for ECGs determined by the Viz HCM ECG Analysis Algorithm to contain signs suggestive of HCM.

Viz HCM is a Software as a Medical Device (SaMD) intended to analyze ECG signals collected as part of a routine clinical assessment, independently and in parallel to the standard of care. Viz HCM is a combination of software modules consisting of an ECG analysis software algorithm and a mobile application software module.
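
To make the two-module split concrete, here is a purely illustrative analyze-then-notify sketch. It is not Viz.ai's implementation; the class, function names, threshold, and data shapes are assumptions, and the actual model inference is omitted.

```python
# Illustrative sketch only -- not the Viz HCM implementation.
# Module 1 (analysis): turn a model score for a 12-lead ECG into a binary
# "suggestive of HCM" flag. Module 2 (mobile): surface flagged studies.

from dataclasses import dataclass


@dataclass
class HcmAnalysisResult:
    study_id: str
    score: float       # hypothetical model output in [0, 1]
    suspected: bool    # binary flag derived from the score


def analyze(study_id: str, model_score: float, threshold: float = 0.5) -> HcmAnalysisResult:
    """Analysis-module side; the threshold value is an assumption."""
    return HcmAnalysisResult(study_id, model_score, model_score >= threshold)


def notify(result: HcmAnalysisResult) -> None:
    """Mobile-module side: notify only for flagged ECGs."""
    if result.suspected:
        print(f"ECG {result.study_id}: findings suggestive of HCM; "
              "review in parallel to the standard of care.")


notify(analyze("ecg-001", model_score=0.83))  # prints a notification
```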

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study proving the Viz HCM device meets them, based on the provided text:

Acceptance Criteria and Device Performance

The core acceptance criteria for the Viz HCM device are implicitly defined by the sponsor's performance metrics and the explicit special controls outlined by the FDA. The performance testing section provides the evidence that the device meets these criteria.

1. Table of Acceptance Criteria and Reported Device Performance

Given that this is a De Novo request, specific pre-defined quantitative acceptance criteria (e.g., "Sensitivity must be > X%") are often not explicitly stated upfront in the narrative. Instead, the "Performance Testing" section presents the demonstrated performance as evidence for acceptance. The FDA then evaluates if this performance is acceptable given the device's intended use and risks.

Based on the provided text, the key performance metrics and their reported values are:

| Performance Measure | Reported Device Performance (95% CI) | Context/Implication (Acceptance Criteria) |
| --- | --- | --- |
| Sensitivity | 68.4% (62.8% - 73.5%) | Identifies patients with HCM. The FDA assesses whether this sensitivity is acceptable given the device's role as a notification tool, not a diagnostic one, intended to prompt further follow-up. |
| Specificity | 99.1% (98.7% - 99.4%) | Correctly identifies patients without HCM. High specificity is crucial to minimize unnecessary follow-ups and reduce the burden on the healthcare system, especially given the low prevalence of HCM. |
| Positive Predictive Value (PPV) at 0.002 prevalence | 13.7% (10.1% - 19.9%) | The probability that a positive result truly indicates HCM. Even with high specificity, the PPV is low because of the low prevalence of HCM, which the FDA explicitly acknowledges as acceptable given the device's benefit as an early identification tool. |
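
The reported PPV is consistent with applying Bayes' rule to the sensitivity and specificity point estimates at the stated 0.002 prevalence. A minimal sketch of that arithmetic (point estimates only; the confidence interval is not reproduced, and the small difference from 13.7% likely reflects how the estimate was computed on the enriched test set):

```python
# Recompute the point-estimate PPV from sensitivity, specificity, and the
# assumed 0.002 prevalence via Bayes' rule. Inputs are the point estimates
# reported in the table above.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

print(f"{ppv(0.684, 0.991, 0.002):.1%}")  # ~13.2%, close to the reported 13.7%
```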

Implicit Acceptance Criteria (from Special Controls and Risk Analysis):

  • Clinical Performance Testing (Special Control 1):
    • Device performs as intended under anticipated conditions of use.
    • Clinical validation uses a test dataset of real-world data from a representative patient population.
    • Data is representative of sources, quality, and encountered conditions.
    • Test dataset is independent from training/development data.
    • Sufficient cases from important cohorts (demographics, confounders, comorbidities, hardware/acquisition characteristics) are included for subgroup analysis.
    • Study protocols include ground truth adjudication processes.
    • Consistency of output demonstrated over the full range of inputs.
    • Performance goals justified in context of risks.
    • Objective performance measures reported with descriptive/developmental measures.
    • Summary-level demographic and subgroup analyses provided.
    • Test dataset includes a minimum of 3 geographically diverse sites (separate from training).
  • Software Verification, Validation, and Hazard Analysis (Special Control 2):
    • Model description, inputs/outputs, patient population.
    • Integration testing in intended system.
    • Impact of sensor acquisition hardware on performance.
    • Input signal/data quality control.
    • Mitigations for user error/subsystem failure.
  • Human Factors Assessment (Special Control 3):
    • Evaluates risk of misinterpretation of device output.
  • Labeling (Special Control 4):
    • Summary of performance testing, hardware, patient population, results, demographics, subgroup analyses, minimum performance.
    • Device limitations/subpopulations where performance may differ.
    • Warning against ruling out follow-up based on negative finding.
    • Statement that output shouldn't replace full clinical evaluation.
    • Warnings on sensor acquisition factors impacting results.
    • Guidance for interpretation and typical follow-up.
    • Type of hardware sensor data used.

Study Details for Proving Acceptance

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 3,196 ECG cases (291 HCM-Positive and 2,905 HCM-Negative); a confidence-interval sanity check on these counts appears after this list.
  • Data Provenance: Retrospective study. Data were collected from 3 US hospitals (2 sites in the Boston, Massachusetts area and 1 site in Salem, Massachusetts) between July 1, 2017, and June 30, 2022. The Boston sites are described as racially and ethnically diverse, while the Salem site was predominantly Caucasian or Latino.
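
The reported 95% intervals are consistent with binomial confidence intervals on these case counts. A sketch, assuming roughly 199/291 true positives (68.4% sensitivity) and 2,879/2,905 true negatives (99.1% specificity); the exact counts and the sponsor's interval method are not stated, and Wilson score intervals are used here for illustration:

```python
# Wilson score 95% confidence interval for a binomial proportion.
# Counts are back-calculated from the reported percentages and are therefore
# approximate; the sponsor's actual CI method is not given in the summary.

from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half_width, center + half_width

print(wilson_ci(199, 291))    # ~(0.628, 0.735), matching the reported 62.8%-73.5%
print(wilson_ci(2879, 2905))  # ~(0.987, 0.994), matching the reported 98.7%-99.4%
```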

3. Number of Experts Used to Establish the Ground Truth for the Test Set and their Qualifications

  • Number of Experts: A single cardiologist performed the initial chart and imaging review for each HCM-Positive or HCM-Negative case to establish the ground truth.
  • Qualifications of Experts: Described as "cardiologist." No further details on their years of experience or specific board certifications are provided in the excerpt. A "second cardiologist" was used for a secondary assessment on a subset of cases to check agreement/consistency.

4. Adjudication Method for the Test Set

  • Method: A single cardiologist established the ground truth for each case through chart and imaging review based on predefined guidelines (Cornell criteria or Sokolow-Lyon criteria).
  • Consistency Check: A secondary assessment was performed on a selection of 60 cases (30 HCM-Positive, 30 HCM-Negative), in which a second cardiologist independently truthed the cases so that agreement/consistency could be analyzed. The results of this agreement analysis are not detailed (an illustrative agreement calculation appears after this list). In effect, adjudication was 1+1 for this subset, while the main test set relied on a single expert's labels with no formal adjudication.
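
The summary does not report which agreement statistic, if any, was computed for the 60-case consistency check. For reference, a minimal sketch of how two-rater, binary-label agreement is typically summarized; the 2x2 counts in the example are made up for illustration:

```python
# Percent agreement and Cohen's kappa for two raters assigning binary labels.
# The counts below are hypothetical; the actual agreement results for the
# 60-case subset are not reported in the summary.

def cohens_kappa(both_pos: int, both_neg: int, only_a_pos: int, only_b_pos: int) -> float:
    n = both_pos + both_neg + only_a_pos + only_b_pos
    observed = (both_pos + both_neg) / n            # raw percent agreement
    p_a = (both_pos + only_a_pos) / n               # rater A positive rate
    p_b = (both_pos + only_b_pos) / n               # rater B positive rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)    # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical split of the 60 cases: 28 both positive, 29 both negative,
# 2 positive only by rater A, 1 positive only by rater B.
print(cohens_kappa(28, 29, 2, 1))  # 0.90 on these made-up counts
```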

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

  • No MRMC Study was described. The provided text focuses on the standalone performance of the algorithm and does not include a comparative effectiveness study involving human readers with and without AI assistance. The device is intended to be used "in parallel to the standard of care," suggesting it provides an additional signal, not necessarily assistance to human readers interpreting ECGs.

6. If a Standalone (algorithm only without human-in-the-loop performance) was done

  • Yes, a standalone performance study was done. The entire "PERFORMANCE TESTING" section, especially "SUMMARY OF CLINICAL INFORMATION," describes the performance of the Viz HCM algorithm in identifying suspected HCM from ECGs compared directly to the clinical ground truth established by cardiologists. The reported sensitivity, specificity, and PPV are all "algorithm-only" performance metrics.

7. The Type of Ground Truth Used

  • Expert Consensus/Clinical Records Review: The ground truth for the test set was established by a cardiologist (single expert for primary truth, with a second expert for consistency check on a subset) who performed a chart and imaging review for each patient. This was based on "predefined guidelines using either the Cornell criteria or the Sokolow-Lyon criteria." ICD-10 codes were used for initial sampling, but the definitive ground truth was established by clinical review. This is a form of expert consensus/clinical documentation ground truth.

8. The Sample Size for the Training Set

  • Training Set Sample Size: 301,106 patients, encompassing 831,329 ECG exams (a quick class-balance check appears after this list).
    • HCM-positive patients: 4,470
    • HCM-negative patients: 298,394
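
For context on class balance, the development data are heavily enriched for HCM relative to the 0.002 prevalence assumed for the PPV estimate. A quick check using the patient counts above; the comparison is added here for context and is not made in the summary:

```python
# Compare the HCM rate in the development (training) patient population with
# the 0.002 clinical prevalence assumed for the PPV calculation above.
# Patient counts are taken from the summary; the comparison is for context only.

hcm_positive_patients = 4_470
hcm_negative_patients = 298_394

development_rate = hcm_positive_patients / (hcm_positive_patients + hcm_negative_patients)
assumed_clinical_prevalence = 0.002

print(f"Development-set HCM rate: {development_rate:.2%}")                # ~1.48%
print(f"Assumed clinical prevalence: {assumed_clinical_prevalence:.2%}")  # 0.20%
print(f"Enrichment factor: ~{development_rate / assumed_clinical_prevalence:.0f}x")  # ~7x
```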

9. How the Ground Truth for the Training Set Was Established

  • The text states: "The data for algorithm development was collected from different US and Non-US (OUS) sources. The data contains both HCM Positive (obstructive and nonobstructive) and HCM Negative examples including random ECG samples (random control) and enrichment for conditions differential for and associated with HCM (negative controls)."
  • It further clarifies that for HCM-Negative cases in the development (training and internal validation) dataset, absence of HCM was determined by the "lack of ICD-9/10 code for HCM."
  • For HCM-Positive and HCM-Negative cases with available imaging, "additional chart review and review of imaging provided more confidence into the label."

In summary, for the training set, the ground truth was established primarily through ICD-9/10 codes, supplemented by chart review and imaging review where available. This suggests a semi-automated, large-scale labeling approach for the training data, potentially with manual review for confirmation or difficult cases.
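
A minimal sketch of the kind of ICD-code-based weak labeling described above, where HCM-negative simply means no HCM code on record. The code list and function are assumptions for illustration (ICD-10 I42.1/I42.2 cover obstructive and other hypertrophic cardiomyopathy); this is not the sponsor's pipeline:

```python
# Illustrative weak-labeling sketch -- not the sponsor's pipeline.
# A patient is labeled HCM-positive if any encounter carries an HCM diagnosis
# code, and HCM-negative otherwise (label by absence of the code, as described
# above). The ICD-9 entries in the code list are assumed mappings.

HCM_CODES = {
    "I42.1", "I42.2",    # ICD-10: obstructive / other hypertrophic cardiomyopathy
    "425.11", "425.18",  # ICD-9: hypertrophic cardiomyopathy (assumed mapping)
}

def weak_label(diagnosis_codes: set[str]) -> str:
    """Return 'HCM-Positive' if any HCM code is present, else 'HCM-Negative'."""
    return "HCM-Positive" if diagnosis_codes & HCM_CODES else "HCM-Negative"

print(weak_label({"I10", "I42.2"}))   # HCM-Positive
print(weak_label({"I10", "E11.9"}))   # HCM-Negative (no HCM code on record)
```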
