
510(k) Data Aggregation

    K Number: K240860
    Date Cleared: 2024-11-15 (232 days)
    Regulation Number: 870.2200
    Device Name: EchoGo Amyloidosis (1.0)
    Intended Use

    EchoGo Amyloidosis 1.0 is an automated machine learning-based decision support system, indicated as a screening tool for adult patients aged 65 years and over with heart failure undergoing cardiovascular assessment using echocardiography.

    When utilised by an interpreting physician, this device provides information alerting the physician for referral to confirmatory investigations.

    EchoGo Amyloidosis 1.0 is indicated in adult patients aged 65 years and over with heart failure. Patient management decisions should not be made solely on the results of the EchoGo Amyloidosis 1.0 analysis.

    Device Description

    EchoGo Amyloidosis 1.0 takes a 2D echocardiogram of the apical four-chamber (A4C) view as its input and outputs a binary classification decision suggestive of the presence of Cardiac Amyloidosis (CA).

    The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls.

    The A4C echocardiogram should be acquired without contrast and contain at least one full cardiac cycle. Independent training, tuning, and test datasets were used for training and performance assessment of the device.

    EchoGo Amyloidosis 1.0 is fully automated without a graphical user interface.

    The ultimate diagnostic decision remains the responsibility of the interpreting clinician using patient presentation, medical history, and the results of available diagnostic tests, one of which may be EchoGo Amyloidosis 1.0.

    EchoGo Amyloidosis 1.0 is a prescription only device.
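The performance section below reports a 14.0% "no-classifications" rate, meaning the device can decline to classify some inputs rather than force a binary call. A minimal, hypothetical sketch of such a three-way decision rule (the score scale, threshold values, and function name are invented for illustration; this is not the vendor's actual implementation):

```python
def classify(score: float,
             negative_below: float = 0.4,
             positive_above: float = 0.6) -> str:
    """Map a model confidence score in [0, 1] to one of three outputs.

    Scores inside the (negative_below, positive_above) band are treated
    as too ambiguous to call, mirroring the device's "no classification"
    output. Thresholds here are invented for illustration only.
    """
    if score >= positive_above:
        return "suggestive of CA"
    if score <= negative_below:
        return "not suggestive of CA"
    return "no classification"
```

An abstention band like this trades coverage for reliability: the classifier answers on fewer cases, but the answers it does give are made at higher confidence.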

    AI/ML Overview

    The provided text describes the acceptance criteria and a study proving that the EchoGo Amyloidosis 1.0 device meets these criteria.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as quantitative thresholds in a table format within the provided text. Instead, the document describes the study that was conducted to demonstrate performance against generally accepted metrics for such devices (e.g., sensitivity, specificity, PPV, NPV, repeatability, reproducibility).

    However, based on the results presented in the "10.2 Essential Performance" and "10.4 Precision" sections, we can infer the achieved performance metrics. The text states: "All measurements produced by EchoGo Amyloidosis 1.0 were deemed to be substantially equivalent to the predicate device and met pre-specified levels of performance." It does not, however, explicitly list those "pre-specified levels."

    Here's a table summarizing the reported device performance:

    | Metric | Reported Device Performance (95% CI) | Notes |
    |---|---|---|
    | **Essential Performance** | | |
    | Sensitivity | 84.5% (80.3%, 88.5%) | Based on native disease proportion (36.7% prevalence) |
    | Specificity | 89.7% (87.0%, 92.4%) | Based on native disease proportion (36.7% prevalence) |
    | Positive Predictive Value (PPV) | 82.7% (78.8%, 86.5%) | At 36.7% prevalence |
    | Negative Predictive Value (NPV) | 90.9% (88.8%, 93.2%) | At 36.7% prevalence |
    | PPV (Inferred) | 15.6% (11.0%, 20.8%) | At 2.2% prevalence |
    | NPV (Inferred) | 99.6% (99.5%, 99.7%) | At 2.2% prevalence |
    | No-classifications Rate | 14.0% | Proportion of data for which the device returns "no classification" |
    | **Precision** | | |
    | Repeatability (Positive Agreement) | 100% | Single DICOM clip analyzed multiple times |
    | Repeatability (Negative Agreement) | 100% | Single DICOM clip analyzed multiple times |
    | Reproducibility (Positive Agreement) | 85.5% (82.4%, 88.2%) | Different DICOM clips from the same individual |
    | Reproducibility (Negative Agreement) | 79.9% (76.5%, 83.2%) | Different DICOM clips from the same individual |
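The "inferred" PPV and NPV rows follow from Bayes' theorem: holding sensitivity and specificity fixed, predictive values shift with disease prevalence. A quick sketch using the point estimates above at the 2.2% prevalence listed in the table (confidence intervals ignored):

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Compute (PPV, NPV) from sensitivity, specificity, and prevalence."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

ppv, npv = predictive_values(0.845, 0.897, 0.022)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # PPV 15.6%, NPV 99.6%
```

This reproduces the inferred point estimates in the table and illustrates why a screening tool can show a high NPV but a modest PPV in a low-prevalence population.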

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 1,164 patients
      • 749 controls
      • 415 cases
    • Data Provenance: Retrospective case-control study, collected from multiple sites spanning nine states in the USA. The subgroup analysis table also lists some non-USA data, but the description suggests the test data were primarily US-based.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number of experts or their specific qualifications (e.g., radiologists with X years of experience) used to establish the ground truth for the test set. It mentions that clinical validation was conducted to "assess agreement with reference ground truth" but does not detail how this ground truth was derived or by whom.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) used for the test set's ground truth establishment.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, the document does not describe an MRMC comparative effectiveness study where human readers improve with AI vs. without AI assistance. The study described is a standalone performance validation of the algorithm against a defined ground truth.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance study was done. The results presented (sensitivity, specificity, PPV, NPV) are for the algorithm's performance without a human-in-the-loop. The device is described as "fully automated without a graphical user interface" and is a "decision support system" that "provides information alerting the physician for referral." The performance metrics provided are directly from the algorithm's output compared to ground truth.

    7. The Type of Ground Truth Used

    The document states: "The clinical validation study was used to demonstrate consistency of the device output as well as to assess agreement with reference ground truth." However, it does not specify the nature of this "reference ground truth" (e.g., expert consensus, pathology, outcomes data).

    8. The Sample Size for the Training Set

    The training data characteristics table shows the following sample sizes:

    • Controls: 1,262 (sum of age categories: 118+197+337+388+222)
    • Cases: 1,302 (sum of age categories: 122+206+356+389+229)
    • Total Training Set Sample Size: 2,564 patients
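The totals above can be reproduced by summing the per-age-category counts taken from the training data characteristics table:

```python
controls = [118, 197, 337, 388, 222]  # control counts per age category
cases = [122, 206, 356, 389, 229]     # case counts per age category

print(sum(controls), sum(cases), sum(controls) + sum(cases))  # 1262 1302 2564
```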

    9. How the Ground Truth for the Training Set Was Established

    The document states: "The binary classification decision is derived from an AI algorithm developed using a convolutional neural network that was pre-trained on a large dataset of cases and controls." It mentions that "Algorithm training data was collected from collaborating centres." However, it does not explicitly describe how the ground truth labels (cases/controls) for the training set were established. It is implied that these were clinically confirmed diagnoses of cardiac amyloidosis (cases) and non-amyloidosis (controls), but the method (e.g., biopsy, clinical diagnosis based on multiple tests, expert review) is not detailed.
