
510(k) Data Aggregation

    K Number: K241971
    Date Cleared: 2024-10-11 (98 days)
    Product Code:
    Regulation Number: 892.1550
    Reference & Predicate Devices
    Reference Devices: K230084, K241302, K231965

    Intended Use

    The diagnostic ultrasound system and probes are designed to obtain ultrasound images and analyze body fluids.

    The clinical applications include: Fetal/Obstetrics, Abdominal, Gynecology, Intra-operative, Pediatric, Small Organ, Neonatal Cephalic, Adult Cephalic, Trans-vaginal, Musculo-skeletal (Conventional, Superficial), Urology, Cardiac Adult, Cardiac Pediatric, Thoracic, Trans-esophageal (Cardiac) and Peripheral vessel.

    It is intended for use by, or by the order of, and under the supervision of, an appropriately trained healthcare professional who is qualified for direct use of medical devices. It can be used in hospitals, private practices, clinics and similar care environments for clinical diagnosis of patients.

    Modes of Operation: 2D mode, Color Doppler mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, Power Doppler (PD) mode, ElastoScan™ mode, MV-Flow mode, Multi Image mode (Dual, Quad), Combined modes, and 3D/4D mode.

    Device Description

    The HERA Z20, R20, HERA Z30, R30 diagnostic ultrasound systems are general-purpose, mobile, software-controlled diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data in 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan mode, Combined modes, MV-Flow mode, Multi-Image mode (Dual, Quad), and 3D/4D mode.

    The HERA Z20, R20, HERA Z30, R30 systems also give the operator the ability to measure anatomical structures and offer analysis packages that provide information used by competent healthcare professionals to make a diagnosis. The systems provide a real-time acoustic output display with two basic indices, a mechanical index and a thermal index, both of which are displayed automatically.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the studies conducted for the AI-powered features of the HERA Z20, R20, HERA Z30, R30 Diagnostic Ultrasound System, based on the provided text.

    This document describes several AI-powered features: Live ViewAssist, EzVolume, UterineContour, ViewAssist, HeartAssist, and BiometryAssist. Each feature has its own acceptance criteria and study findings.


    1. A table of acceptance criteria and the reported device performance

    Note: Some performance metrics were not explicitly stated as "acceptance criteria" but rather as "summary test statistics or other test results," indicating the device's measured performance against implicit or internal targets.

    | AI Feature | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | Live ViewAssist | Quality assessment (Cohen's kappa) | Threshold 0.7 | Average Cohen's kappa coefficient: 0.818 |
    | | Time/duration (frames per second, FPS) | Threshold 20 FPS | Average speed: 30.06 FPS |
    | EzVolume | Acceptance rate (segmentation) | Higher than 70% for each label | 1st trimester: Fluid 98%, Fetus 96%, Umbilical cord 80%, Placenta 86%, Uterus 89%. 2nd/3rd trimester: Fluid 92%, Head 94%, Body 84%, Limbs 83%, Umbilical cord 82%, Placenta 85%, Uterus 87% |
    | | Mean DSC (segmentation) | No explicit numerical criterion provided; correlation with the acceptance rate indicates adequacy | 1st trimester (accepted): Fluid 0.96, Fetus 0.91, Umbilical cord 0.68, Placenta 0.74, Uterus 0.93. 1st trimester (rejected): Fluid 0.17, Fetus 0.55, Umbilical cord 0.37, Placenta 0.33, Uterus 0.32. 2nd/3rd trimester (accepted): Fluid 0.78, Head 0.94, Body 0.68, Umbilical cord 0.67, Limbs 0.66, Placenta 0.75, Uterus 0.80. 2nd/3rd trimester (rejected): Fluid 0.25, Head 0.46, Body 0.29, Umbilical cord 0.38, Limbs 0.39, Placenta 0.32, Uterus 0.30 |
    | UterineContour | Segmentation (uterus Dice score) | Not explicitly stated as acceptance criteria | Average Dice score of uterus: 96% |
    | | Segmentation (endometrium Dice score) | Not explicitly stated as acceptance criteria | Average Dice score of endometrium: 92% |
    | | 3D coronal view adaptation | Proportion of cases evaluated as clinically diagnosable: over 90% of all cases | Over 90% of all cases |
    | ViewAssist | View recognition accuracy | Threshold 89% | Average recognition accuracy: 94.50% |
    | | Anatomy annotation (Dice score) | Threshold 0.8 | Average Dice score: 0.892 |
    | HeartAssist | View recognition accuracy | Threshold 89% | Average recognition accuracy: 95.00% |
    | | Segmentation (Dice score) | Threshold 0.8 | Average Dice score: 0.876 |
    | | Size measurement (area error rate) | Not explicitly stated as acceptance criteria | 8% or less |
    | | Size measurement (angle error rate) | Not explicitly stated as acceptance criteria | 4% or less |
    | | Size measurement (circumference error rate) | Not explicitly stated as acceptance criteria | 11% or less |
    | | Size measurement (diameter error rate) | Not explicitly stated as acceptance criteria | 11% or less |
    | BiometryAssist | Segmentation (Dice score) | Threshold 0.8 | Average Dice score: 0.928 |
    | | Size measurement (circumference error rate) | Not explicitly stated as acceptance criteria | 8% or less |
    | | Size measurement (distance error rate) | Not explicitly stated as acceptance criteria | 4% or less |
    | | Size measurement (NT, NB, IT error) | Not explicitly stated as acceptance criteria | 1 mm or less |

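    The two recurring metrics in this table are the Dice similarity coefficient (DSC) for segmentation overlap and Cohen's kappa for agreement on quality assessments. As a rough, hedged illustration of how such metrics are typically computed (the function names and details below are illustrative only and are not taken from the submission), a minimal Python sketch:

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * intersection / denom

def cohens_kappa(ratings_a, ratings_b) -> float:
    """Cohen's kappa between two label sequences (e.g., algorithm vs. expert ratings)."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)                                        # observed agreement
    p_expected = sum(np.mean(a == c) * np.mean(b == c) for c in categories)  # chance agreement
    return (p_observed - p_expected) / (1.0 - p_expected)
```

    An average Dice score of 0.876 or a kappa of 0.818, for example, would then be compared against the 0.8 and 0.7 thresholds listed above.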

    2. Sample size used for the test set and the data provenance

    | AI Feature | Test Set Sample Size | Data Provenance |
    |---|---|---|
    | Live ViewAssist | 3,900 fetal ultrasound images | Mix of retrospective and prospective data collected in clinical practice from American and Korean patients (female, reproductive age; BMI 17-45.4) |
    | EzVolume | 200 test volumes (100 in the 1st trimester, 100 in the 2nd/3rd trimester) | Mix of retrospective and prospective data collected in clinical practice from Korean, American, Italian, and British patients (female, reproductive age) |
    | UterineContour | 450 sagittal uterus images (segmentation) and 30 sagittal images (3D coronal view) | Mix of retrospective and prospective data collected in clinical practice from three hospitals in Korea (female, reproductive age) |
    | ViewAssist | 1,600 fetal ultrasound and fetal biometry images | Mix of retrospective and prospective data collected in clinical practice from two hospitals in America and Korea (female, reproductive age; BMI 17-45.4) |
    | HeartAssist | 440 fetal heart images | Mix of retrospective and prospective data collected in clinical practice in America and Korea (female, reproductive age; BMI 17-45.4) |
    | BiometryAssist | 360 fetal biometry images | Mix of retrospective and prospective data collected in clinical practice from two hospitals in America and Korea (female, reproductive age; BMI 17-45.4) |

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    | AI Feature | Number of Experts | Qualifications of Experts |
    |---|---|---|
    | Live ViewAssist | 3 primary experts, 1 supervising expert | An obstetrician with more than 20 years of experience and two sonographers with more than 10 years of experience, all in fetal cardiology (primary); an obstetrician with more than 25 years of experience (supervisor) |
    | EzVolume | 4 primary experts, 1 supervising expert | An obstetrician with more than 20 years of experience and three examiners (clinical experts) with more than 10 years of experience, all in fetal diagnosis (primary); an obstetrician with more than 25 years of experience (supervisor) |
    | UterineContour | 3 OB/GYN experts | Three participating OB/GYN experts with more than 10 years of experience |
    | ViewAssist | 3 primary experts, 1 supervising expert | An obstetrician with more than 20 years of experience and two sonographers with more than 10 years of experience, all in fetal cardiology (primary); an obstetrician with more than 25 years of experience (supervisor) |
    | HeartAssist | 3 primary experts, 1 supervising expert | An obstetrician with more than 20 years of experience and two sonographers with more than 10 years of experience, all in fetal cardiology (primary); an obstetrician with more than 25 years of experience (supervisor) |
    | BiometryAssist | 3 primary experts, 1 supervising expert | An obstetrician with more than 20 years of experience and two sonographers with more than 10 years of experience, all in fetal cardiology (primary); an obstetrician with more than 25 years of experience (supervisor) |

    4. Adjudication method for the test set

    | AI Feature | Adjudication Method |
    |---|---|
    | Live ViewAssist | Ground truth established by consensus of 3 experts, supervised by 1. The exact method (e.g., 2+1, 3+1) is not explicitly detailed, but is implied by "manual drawing" and classification "into acceptable and not-acceptable views by three participating experts." |
    | EzVolume | Ground truths were drawn manually by four participating clinical experts, supervised by one. |
    | UterineContour | Each of the 3 experts delineated structures; conflicts were resolved by consensus of the three experts ("fixed the wrong part with consensus"). |
    | ViewAssist | Ground truth established by consensus of 3 experts, supervised by 1. |
    | HeartAssist | Ground truth established by consensus of 3 experts, supervised by 1. |
    | BiometryAssist | Ground truth established by consensus of 3 experts, supervised by 1. |

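    The submission does not spell out the exact merging rule used when experts disagreed. Purely as an illustration of one common way to build a consensus segmentation ground truth from several expert delineations, here is a majority-vote sketch in Python; the function name and rule are hypothetical, not the manufacturer's documented method:

```python
import numpy as np

def consensus_mask(expert_masks: list[np.ndarray]) -> np.ndarray:
    """Majority-vote merge of binary expert segmentation masks.

    Hypothetical merging rule: a pixel belongs to the consensus mask
    only if a strict majority of experts marked it.
    """
    stacked = np.stack([m.astype(bool) for m in expert_masks], axis=0)
    votes = stacked.sum(axis=0)
    return votes * 2 > len(expert_masks)
```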
    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done

    The document does not state that a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was performed. The studies described focus on the standalone performance of the AI algorithms.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, standalone performance studies of the AI algorithms were done for all the described features: Live ViewAssist, EzVolume, UterineContour, ViewAssist, HeartAssist, and BiometryAssist. The reported metrics like Cohen's kappa, FPS, acceptance rates, Dice scores, and error rates are all measures of the algorithm's performance without human intervention during the assessment phase (though human experts were used to establish ground truth).

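    In practice, this kind of standalone check reduces to computing the per-case metric against the expert ground truth and comparing the aggregate to the predefined threshold. A minimal sketch under that assumption (values and function name are illustrative only):

```python
def meets_threshold(per_case_scores: list[float], threshold: float) -> bool:
    """Average a standalone per-case metric (e.g., Dice) and compare to an acceptance threshold."""
    return sum(per_case_scores) / len(per_case_scores) >= threshold

# Illustrative scores only, compared against the 0.8 Dice threshold from the table above.
print(meets_threshold([0.93, 0.91, 0.94], 0.8))  # True
```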

    7. The type of ground truth used

    For all features, the ground truth was established by expert consensus based on manual classification, delineation, or drawing by qualified clinical experts (obstetricians, sonographers, and examiners).


    8. The sample size for the training set

    The document explicitly states that "Data used for training, tuning and validation purpose are completely separated from the ones during training process and there is no overlap among the three." However, the exact sample size for the training set itself is not provided for any of the features. The sample sizes listed in Section 2 are for the test/validation sets.

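    The no-overlap requirement quoted above is typically enforced at the case or patient level. A small, hedged sketch of such a check (the identifiers and function name are made up for illustration):

```python
def assert_disjoint_splits(train_ids: set[str], tune_ids: set[str], val_ids: set[str]) -> None:
    """Raise if any case identifier appears in more than one of the three splits."""
    overlaps = (train_ids & tune_ids) | (train_ids & val_ids) | (tune_ids & val_ids)
    if overlaps:
        raise ValueError(f"Splits are not disjoint; shared identifiers: {sorted(overlaps)}")

# Example with made-up identifiers:
assert_disjoint_splits({"case001", "case002"}, {"case003"}, {"case004"})
```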

    9. How the ground truth for the training set was established

    For all features, the ground truth for the training set (and validation/evaluation sets) was established through manual classification, delineation, or drawing by the same groups of qualified clinical experts mentioned in section 3, following similar expert consensus processes as described for the test sets. For UterineContour, initial delineations by 3 experts were then reviewed and fixed with consensus for unmatched results.
