
510(k) Data Aggregation

    K Number: K231010
    Device Name: Corvair
    Manufacturer: AliveCor
    Date Cleared: 2024-06-07 (427 days)
    Regulation Number: 870.1025
    Reference Devices: K081437, K110266, K173830, K232035

    Intended Use

    AliveCor's Corvair ECG analysis system assists the healthcare professional (HCP) in measuring and interpreting resting diagnostic ECGs for rhythm and morphological information by providing an initial automated interpretation. The interpretation by the analysis program may then be confirmed, edited, or deleted by the HCP. The analysis program is intended for use in the general population ranging from healthy subjects to patients with cardiac abnormalities. Corvair is intended for use by healthcare professionals, or trained personnel in healthcare facilities (e.g. the doctor's office or hospital) and in acute settings.

    Corvair analyses should be used only as an adjunct to clinical history, symptoms, and the results of other non-invasive and/or invasive tests. Corvair analyses are considered unconfirmed and must be reviewed by a qualified physician. The provisional automated ECG analysis should not be used for clinical action if it has not been reviewed by a qualified healthcare professional capable of independently interpreting the ECG signal.

    Device Description

    Corvair is Software as a Medical Device (SaMD) intended for use by healthcare professionals to analyze a diagnostic-bandwidth ECG. Corvair analyzes a 10-second ECG and provides rhythm analysis, morphological analysis, and ECG interval estimation. Corvair provides 35 separate determinations: 14 rhythm and 21 morphology. Rhythm determinations include Normal Sinus Rhythm, Atrial fibrillation, Atrial flutter, Paced Rhythm, Junctional Rhythm, and Bigeminy, with the modifiers of 1st Degree AV Block, Higher Degree AV Block (including 2nd and 3rd degree AV blocks), Sinus Arrhythmia, Marked Sinus Arrhythmia, Marked Bradycardia, Sinus Tachycardia, and PVCs. Morphology determinations include Intraventricular block (RBBB, LBBB, and Other Intraventricular Block), Hypertrophy (LVH and RVH), Atrial Enlargement (LAE and RAE), Acute Myocardial Infarction (Anterior MI, Inferior MI, Lateral MI), Old/Previous Myocardial Infarction (Anterior Old MI, Inferior Old MI, Lateral Old MI), Ischemia (Anterior, Inferior, Lateral), Prolonged QT, Paced ECG, Other Morphological Defects (Early Repolarization, Wolff-Parkinson-White Syndrome (WPW)), and Normal or Otherwise Normal. Rhythm and morphology determinations are overlapping; i.e., an ECG could receive multiple rhythm and morphology determinations (e.g., Sinus Rhythm, Acute MI). The device also provides global ECG measurements (PR, QRS, QT, QTcB, QTcF, and Heart Rate). No beat-level analysis is provided by the device. Corvair may fail to detect or misidentify conduction system pacing and demand pacing. Corvair does not detect sinus pause. While Corvair provides PR interval estimation and does detect WPW, it does not have a separate determination of abnormally short PR intervals.
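
Among the global measurements above, QTcB and QTcF refer to the standard Bazett and Fridericia heart-rate corrections of the QT interval. As a point of orientation (not AliveCor's implementation), the two corrections can be sketched as:

```python
import math

def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction: QTcB = QT / sqrt(RR), with QT in ms and RR in seconds."""
    return qt_ms / math.sqrt(rr_s)

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia correction: QTcF = QT / RR**(1/3)."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1.0 s) both corrections leave QT unchanged.
print(qtc_bazett(400.0, 1.0))      # 400.0
print(qtc_fridericia(400.0, 1.0))  # 400.0
```

Fridericia's cube-root correction over-corrects less than Bazett's at high heart rates, which is why both are commonly reported side by side.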

    This SaMD provides these capabilities in the form of an Application Program Interface (API) library. Any software or device ("target device") can incorporate the Corvair API library into its device software to provide users with resting ECG analytics. The target device supplies the input ECG to Corvair, which applies its algorithms and returns the generated outputs. Corvair is distributed as a binary library with a C++ interface, which the target device statically links. Viewing of Corvair's ECG analysis is handled by the target device.
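
The integration pattern described here (target device passes an ECG in, receives determinations and interval estimates back) can be sketched in pseudo-form. All type and function names below are hypothetical stand-ins for illustration; the actual Corvair interface is a statically linked C++ library whose signatures are not given in the summary.

```python
from dataclasses import dataclass

@dataclass
class EcgInput:
    """Hypothetical input: a diagnostic-bandwidth, 10-second resting ECG."""
    sample_rate_hz: int
    leads: dict  # e.g. {"I": [...], "II": [...], "V2": [...], "V4": [...]}

@dataclass
class AnalysisResult:
    """Hypothetical output: overlapping determinations plus global measurements."""
    rhythm: list        # e.g. ["Normal Sinus Rhythm"]
    morphology: list    # e.g. ["Otherwise Normal"]
    intervals_ms: dict  # PR, QRS, QT, QTcB, QTcF, and Heart Rate

def analyze(ecg: EcgInput, mode: str = "asymptomatic") -> AnalysisResult:
    """Stub standing in for the library call that applies the DNNs."""
    return AnalysisResult(rhythm=[], morphology=[], intervals_ms={})
```

The key design point is that rendering and clinical review stay on the target-device side; the library only produces the provisional analysis.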

    Corvair is intended to be used with standard diagnostic-bandwidth, resting ECG recordings collected using 'wet' Ag/AgCl electrodes with conductive gel/paste. Corvair requires only 4 ECG leads for analysis: either Leads {I, II, V2, and V4} or Leads {I, II, V1, and V4}. Compatible devices include resting ECG machines from GE Medical Systems® (e.g., the MAC 1600 (K081437), MAC 5500 (K110266), and MAC VU360 (K173830)) and AliveCor's Impala (K232035). Regardless of the lead configuration, Corvair provides the same set of rhythm, morphological, and interval determinations. Corvair has two modes of operation: Symptomatic Mode, used when the pre-test probability for a specific rhythm or morphology is high, and Asymptomatic Mode, which optimizes PPV by optimizing specificity for detecting the various rhythms and morphologies. The target device can choose which lead set and which mode to utilize based on the target clinical application and the patient's clinical presentation.
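
Since the summary names exactly two supported 4-lead subsets, a target device integrating such a library would typically check which subset its hardware can supply. A minimal sketch, assuming a hypothetical helper (the function name and return shape are illustrative, not part of the Corvair API):

```python
from typing import Optional, Set

# The two 4-lead subsets named in the 510(k) summary.
SUPPORTED_LEAD_SETS = [
    {"I", "II", "V2", "V4"},
    {"I", "II", "V1", "V4"},
]

def supported_lead_set(available_leads: Set[str]) -> Optional[Set[str]]:
    """Return the first supported 4-lead subset covered by the available leads,
    or None if neither subset can be formed."""
    for lead_set in SUPPORTED_LEAD_SETS:
        if lead_set <= available_leads:
            return lead_set
    return None

# A full 12-lead recording satisfies both subsets; a limb-lead-only
# recording satisfies neither.
chosen = supported_lead_set({"I", "II", "V1", "V4"})
```

Either subset yields the same set of rhythm, morphology, and interval determinations, so the choice can be driven purely by which precordial leads the acquisition hardware provides.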

    Corvair utilizes several deep neural networks (DNNs) for its analysis. These DNNs were trained on a dataset of approximately 1 million 12-Lead ECGs acquired from about 400K clinical patients at the Emory University Hospital over several decades between 1985 and 2010. Each ECG has a physician overread confirmed diagnosis with multiple diagnostic codes. The dataset had a 52%/48% ratio of ECGs from male and female patients, respectively. The average patient age was 61.3 ± 16 years. The dataset was 56% white, 33% African American, 2.2% Asian, and 9% other races/ethnicities.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document states that Corvair was evaluated against a large set of ECGs and compared its analysis output against a known reference using standard ECG performance metrics. These outputs were evaluated against clinically relevant acceptance criteria. However, the specific numerical acceptance criteria for sensitivity, specificity, PPV, and error margins for interval outputs are not explicitly stated in the provided text. The text only mentions that acceptable performance was demonstrated.

    Table of Acceptance Criteria and Reported Device Performance (Summary based on text):

    | Performance Metric | Acceptance Criteria (not stated numerically) | Reported Device Performance |
    |---|---|---|
    | Interpretive outputs (rhythm & morphology) | Clinically relevant criteria for sensitivity, specificity, and PPV | Demonstrated effective and substantially equivalent to the predicate |
    | Interval outputs (PR, QRS, QT) | Clinically relevant criteria for mean error and standard deviation of error | Demonstrated effective and substantially equivalent to the predicate, using CSEDB and AliveCor proprietary datasets |
    | Heart rate accuracy | Clinically relevant criteria for mean absolute error | Demonstrated effective and substantially equivalent to the predicate, using an AliveCor proprietary dataset |
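
The metrics named for these criteria are standard. A minimal sketch of how each is computed (illustrative only; the submission's actual evaluation code and thresholds are not disclosed):

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Per-determination sensitivity, specificity, and PPV from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
    }

def interval_errors(estimated_ms, reference_ms):
    """Mean error, sample SD of error, and mean absolute error for interval estimates."""
    errs = [e - r for e, r in zip(estimated_ms, reference_ms)]
    n = len(errs)
    mean = sum(errs) / n
    sd = (sum((x - mean) ** 2 for x in errs) / (n - 1)) ** 0.5
    mae = sum(abs(x) for x in errs) / n
    return {"mean_error": mean, "sd_error": sd, "mae": mae}
```

Raising specificity trades fewer false positives for possibly lower sensitivity, which is consistent with the summary's description of Asymptomatic Mode optimizing PPV via specificity.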

    2. Sample size used for the test set and the data provenance

    • Test Set Size: The document states that Corvair was evaluated "against a large set of ECGs" and mentions "additional large validation datasets" created from sites independent of the training data. For PR, QRS, QT interval estimation, the Common Standards for Quantitative Electrocardiography Standard Database (CSEDB) and an AliveCor proprietary dataset developed from ECGs collected in a clinical study at the Mayo Clinic's Genetic Heart Rhythm Clinic were used. For Heart Rate and QTcF validation, the AliveCor proprietary dataset was also used. The exact numerical sample sizes for these test sets are not explicitly provided.
    • Data Provenance:
      • Training Data: Approximately 1 million 12-Lead ECGs acquired from about 400,000 clinical patients at the Emory University Hospital over several decades (1985-2010).
      • Test Data:
        • CSEDB (Common Standards for Quantitative Electrocardiography Standard Database) - an established public database.
        • AliveCor proprietary dataset from clinical study at Mayo Clinic's Genetic Heart Rhythm Clinic.
        • Additional large validation datasets from sites independent of the training data.
      • Retrospective/Prospective: The Emory University Hospital data (1985-2010), used for training, is retrospective. The Mayo Clinic data used for the proprietary dataset likely has a prospective component if collected specifically for this study, but the text doesn't explicitly state its collection method. The "additional large validation datasets" are not detailed regarding their collection method.
      • Country of Origin: Emory University Hospital and Mayo Clinic are in the United States.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions that the training dataset ECGs "each has a physician overread confirmed diagnosis with multiple diagnostic codes." For the test sets, the ground truth is against a "known reference." While it refers to "physician overread confirmed diagnosis" for the training set, it does not explicitly state the number or specific qualifications of experts who established the ground truth for the test set. For CSEDB, the ground truth is part of the established database, which typically involves expert consensus. For the AliveCor proprietary dataset, the text implies a clinical study setting, but details on ground truth establishment by experts are missing.

    4. Adjudication method for the test set

    The document does not explicitly state the adjudication method used for establishing the ground truth of the test set (e.g., 2+1, 3+1, none). It refers to "physician overread confirmed diagnosis" for the training data and "known reference" for the test data, implying an established ground truth, but the process of its establishment is not detailed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance

    The document explicitly states: "No clinical testing was required or conducted to support a determination of substantial equivalence." This indicates that an MRMC comparative effectiveness study was not performed for this submission. The device is intended to assist healthcare professionals, providing an "initial automated interpretation" that "may then be confirmed, edited, or deleted by the HCP," but its impact on human performance was not part of this specific submission's evidence.

    6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    Yes, a standalone performance evaluation was done. The nonclinical performance testing sections describe evaluating Corvair's analysis output against a known reference using standard ECG performance metrics (sensitivity, specificity, PPV, mean error, standard deviation of error, mean absolute error). This directly assesses the algorithm's performance without a human in the loop. The device provides its analysis as an API library, which integrates into other software, implying a standalone analysis capability.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth for the training set was based on "physician overread confirmed diagnosis with multiple diagnostic codes." For the test sets, it was against a "known reference." For the CSEDB, it's generally accepted expert consensus reference data. For the proprietary dataset from Mayo Clinic, it likely involves clinical diagnoses and expert review, but the specific process (e.g., expert consensus vs. single physician overread) is not detailed.

    8. The sample size for the training set

    The training set consisted of approximately 1 million 12-Lead ECGs acquired from about 400,000 clinical patients.

    9. How the ground truth for the training set was established

    The ground truth for the training set was established through "a physician overread confirmed diagnosis with multiple diagnostic codes" for each ECG. This indicates that medical professionals reviewed and confirmed the diagnoses, which were then used as the labels for training the deep neural networks.
