
510(k) Data Aggregation

    K Number: K230209
    Device Name: Sonix Health
    Date Cleared: 2023-10-20 (268 days)
    Product Code:
    Regulation Number: 892.2050

    Reference & Predicate Devices
    Reference Devices: K220975
    Intended Use

    Sonix Health is intended for quantifying and reporting echocardiography for use by or on the order of a licensed physician. Sonix Health accepts DICOM-compliant medical images acquired from ultrasound imaging devices. Sonix Health is indicated for use in adult populations.

    Device Description

    Sonix Health comes with the following functions:

    • Checking ultrasound multiframe DICOM files
    • Classifying echocardiography multiframe DICOM views and performing automatic measurements
    • Verifying results and making manual adjustments
    • Generating a report of the analysis

    Sonix Health is offered as software only, installed directly on customer PC hardware. It is DICOM compliant and is used within a local network.

    Sonix Health utilizes a two-step algorithm. In the first step, a single identification model recognizes the echocardiographic view. In the second step, a deep learning algorithm selected according to that view performs the analysis; these second-step algorithms are categorized as B-mode and Doppler algorithms. The core function of Sonix Health is to identify the view and segment the anatomy in the image.
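    The two-step design described above can be sketched as a dispatch pipeline. Everything here (the function names `identify_view`, `measure_b_mode`, `measure_doppler`, the `SECOND_STEP` table, and the stub outputs) is an illustrative assumption, not the vendor's actual implementation:

    ```python
    # Hypothetical sketch of a two-step view-then-analyze pipeline.
    # Step 1: one classification model labels the view.
    # Step 2: a view-specific algorithm is dispatched on that label.
    from typing import Callable, Dict

    def identify_view(frames) -> str:
        """Step 1 stub: a real system would run a trained view classifier."""
        return "B-mode"  # placeholder prediction

    def measure_b_mode(frames) -> Dict[str, float]:
        """Step 2 stub for B-mode segmentation/measurement."""
        return {"LVEF_percent": 58.0}

    def measure_doppler(frames) -> Dict[str, float]:
        """Step 2 stub for Doppler measurement."""
        return {"E_over_A": 1.2}

    # Dispatch table keyed on the view recognized in step 1.
    SECOND_STEP: Dict[str, Callable] = {
        "B-mode": measure_b_mode,
        "Doppler": measure_doppler,
    }

    def analyze(frames) -> Dict[str, float]:
        view = identify_view(frames)      # step 1: view recognition
        return SECOND_STEP[view](frames)  # step 2: view-specific analysis
    ```

    The dispatch-table structure mirrors the summary's claim that second-step algorithms are organized per view category.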

    AI/ML Overview

    Based on the provided text, here's a detailed description of the acceptance criteria and the study that proves the device meets them:

    Device Name: Sonix Health
    Intended Use: Quantifying and reporting echocardiography for use by or on the order of a licensed physician. Accepts DICOM-compliant medical images acquired from ultrasound imaging devices. Indicated for use in adult populations. Ultrasound images are acquired via B (2D), M, Pulsed-wave Doppler, and Continuous-wave Doppler modes.


    Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (implicit from "passed the test") | Reported Device Performance |
    |---|---|---|
    | View recognition accuracy | High accuracy | Average accuracy of 98.22% |
    | Correlation coefficient (manual vs. AI measurements) | High correlation | Average correlation coefficient of 93.98% compared to manual measurements by participating experts |
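    For illustration only, the two reported metrics can be computed as plain classification accuracy and a Pearson correlation coefficient. The sample values below are invented examples, not the study data:

    ```python
    # Illustrative computation of view-recognition accuracy and the
    # Pearson correlation between AI and manual measurements.
    import math

    def accuracy(predicted, actual):
        """Fraction of views classified correctly."""
        return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

    def pearson_r(x, y):
        """Pearson correlation coefficient between paired measurements."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Made-up paired LVEF (%) values, AI output vs. expert manual reads.
    ai     = [58.1, 45.0, 62.3, 70.2]
    manual = [57.5, 46.2, 61.0, 71.0]
    print(round(pearson_r(ai, manual), 3))
    ```

    In the actual study, such a correlation would be computed per measurement type against the expert consensus annotations, then averaged.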

    Study Details

    1. A table of acceptance criteria and the reported device performance: (See table above)

    2. Sample size used for the test set and the data provenance:

      • Total Test Images: 2,744 images
        • B-mode: 476 images
        • M-mode: 243 images
        • Doppler: 2,025 images
      • Data Provenance:
        • Country of Origin: 2,648 images (96.5%) are attributed to "American participants"; of these, 1,264 (47.7%) came from a U.S. hospital and 1,384 (52.3%) from a South Korean hospital (note: the two hospital counts sum to the 2,648 "American participants" figure, not the overall 2,744 images, so the "American" label appears to cover both sites and the description is somewhat ambiguous).
        • Retrospective/Prospective: Not explicitly stated, but "collected from six centers" and "data was collected" implies retrospective collection of existing data for the test set.
        • Demographics: For American participants: 65% male and 35% female. Overall representative institution data showed 50.2% male, average BMI of 22.2, and 67.0% LVEF. The LVEF range for the validation datasets was 14% to 76%, with a mean of 58% and a standard deviation of 11%.
        • Equipment: Data acquired using equipment from four manufacturers.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Ground Truth Annotators: Two experienced sonographers with a Registered Diagnostic Cardiac Sonographer (RDCS) certification.
      • Supervising Experts: Two experienced cardiologists. The qualifications (e.g., years of experience) for these cardiologists are not specified beyond "experienced."
    4. Adjudication method for the test set:

      • The "consensus annotation" of the two experienced sonographers (supervised by two cardiologists) was used as the final ground truth. This implies a consensus-based adjudication, but the specific process (e.g., if initial disagreements were resolved through discussion or a third expert) is not detailed. It's essentially a (2+0) or (2+supervision) model.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:

      • No MRMC comparative effectiveness study was done to assess human reader improvement with AI assistance. The performance testing focused on the standalone algorithm's accuracy and correlation with manual measurements (ground truth), not human-AI collaboration. The document explicitly states: "No clinical testing conducted in support of substantial equivalence..."
    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, performance metrics related to "view recognition accuracy" and "correlation coefficient when compared to manual measurements by participating experts" demonstrate a standalone evaluation of the algorithm's output against established ground truth. The device "utilizes artificial intelligence to automate previous manual quantification tasks" and then "users can review and modify the results if necessary," but the performance metrics reported are for the automated results before clinician modification.
    7. The type of ground truth used:

      • The ground truth for the test set was established through expert consensus (two experienced sonographers with RDCS certification, supervised by two experienced cardiologists). The "consensus annotation" was used as the final ground truth. This is a form of expert consensus.
    8. The sample size for the training set:

      • The exact sample size (number of images or studies) for the training data is not explicitly stated. It mentions that "The training data was collected from six centers" and indicates demographic information for "the representative institution."
    9. How the ground truth for the training set was established:

      • The document implies that the "training data" and "validation data" (test set) are distinct and independent. While it details the ground truth establishment for the test set (expert sonographer consensus supervised by cardiologists), it does not explicitly describe how the ground truth for the training set was established. It's reasonable to infer a similar process of expert annotation, but it's not confirmed in the provided text.