Search Results

Found 2 results

510(k) Data Aggregation

    Why did this record match?
    Reference Devices:

    K240516

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The diagnostic ultrasound system and probes are designed to obtain ultrasound images and analyze body fluids. The clinical applications include: Fetal/Obstetrics, Abdominal, Gynecology, Intraoperative, Pediatric, Small Organ, Neonatal Cephalic, Trans-rectal, Trans-vaginal, Muscular-Skeletal (Conventional, Superficial), Urology, Cardiac Adult, Cardiac Pediatric, Thoracic, Trans-esophageal (Cardiac) and Peripheral vessel. It is intended for use by, or by the order of, and under the supervision of, an appropriately trained healthcare professional who is qualified for direct use of medical devices. It can be used in hospitals, private practices, clinics and similar care environments for clinical diagnosis of patients. Modes of Operation: 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan Mode, Combined modes, Multi-Image mode (Dual, Quad), 3D/4D mode.

    Device Description

    The V8/cV8, V7/cV7, and V6/cV6 are general purpose, mobile, software-controlled diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data as 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan Mode, Combined modes, Multi-Image mode (Dual, Quad), and 3D/4D mode. The systems also give the operator the ability to measure anatomical structures and offer analysis packages that provide information used by competent health care professionals to make a diagnosis. They have a real-time acoustic output display with two basic indices, a mechanical index and a thermal index, which are both automatically displayed.

    AI/ML Overview

    The provided text describes the acceptance criteria and study proving the device meets those criteria, specifically for the 'EzNerveMeasure' functionality of the V8/cV8, V7/cV7, V6/cV6 Diagnostic Ultrasound System.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
    | --- | --- | --- |
    | Flattening Ratio (FR) Error Rate | Not explicitly stated; implicitly, performance acceptable for clinical use | Average: 8.31% (95% CI: [7.29, 9.34]); Standard Deviation: 5.22 |
    | Cross-Sectional Area (CSA) Error Rate | Not explicitly stated; implicitly, performance acceptable for clinical use | Average: 13.12% (95% CI: [10.90, 15.34]); Standard Deviation: 11.33 |

    Note: The document states, "We tested on the flattening ratio (FR) and cross-sectional area (CSA) of NerveTrack EzNerveMeasure," and then presents the average error rates. Explicit numeric acceptance criteria (e.g., a maximum allowable FR error rate) are not given; the reported error rates are presented as the evidence of acceptable performance.
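    The averages and 95% confidence intervals in the table above are standard summary statistics over per-case error rates. A minimal sketch of how such figures are computed, using a normal-approximation interval and hypothetical per-case data (not the study's raw measurements):

    ```python
    import math

    def mean_and_ci95(errors):
        """Mean error rate with sample SD and a normal-approximation
        95% confidence interval (mean ± 1.96 * SD / sqrt(n))."""
        n = len(errors)
        mean = sum(errors) / n
        var = sum((e - mean) ** 2 for e in errors) / (n - 1)  # sample variance
        sd = math.sqrt(var)
        half = 1.96 * sd / math.sqrt(n)  # 95% CI half-width
        return mean, sd, (mean - half, mean + half)

    # Hypothetical per-case error rates in percent (illustrative only).
    errors = [4.2, 7.8, 9.1, 6.5, 11.3, 8.0, 7.4, 12.6, 5.9, 10.2]
    mean, sd, (lo, hi) = mean_and_ci95(errors)
    print(f"mean={mean:.2f}%  sd={sd:.2f}  95% CI=[{lo:.2f}, {hi:.2f}]")
    ```

    For the small samples typical of such validations, a Student's t interval would be slightly wider than the 1.96 normal approximation used here.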


    K Number: K242444
    Date Cleared: 2024-11-27 (103 days)
    Product Code:
    Regulation Number: 892.1550
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    RS85 Diagnostic Ultrasound System (K240516)

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The Diagnostic Ultrasound System and transducers are intended for diagnostic ultrasound imaging and fluid analysis of the human body.

    The clinical applications include: Fetal/Obstetrics, Abdominal, Gynecology, Pediatric, Small Organ, Neonatal Cephalic, Adult Cephalic, Trans-rectal, Trans-vaginal, Muscular-Skeletal (Conventional, Superficial), Urology, Cardiac Adult, Cardiac Pediatric and Peripheral vessel.

    It is intended for use by, or by the order of, and under the supervision of, an appropriately trained healthcare professional who is qualified for direct use of medical devices. It can be used in hospitals, private practices, clinics and similar care environment for clinical diagnosis of patients.

    Modes of Operation: 2D mode, Color Doppler mode, Power Doppler (PD) mode, M mode, Pulsed Wave (PW) Doppler mode, Continuous Wave (CW) Doppler mode, Tissue Doppler Imaging (TDI) mode, Tissue Doppler Wave (TDW) mode, ElastoScan+™ Mode, Combined modes, Multi-Image modes (Dual, Quad), 3D/4D modes.

    Device Description

    The HERA W9/HERA W10 are general purpose, mobile, software-controlled diagnostic ultrasound systems. Their function is to acquire ultrasound data and to display the data as B-mode, M-mode, Pulsed Wave (PW) Doppler, Continuous Wave (CW) Doppler, Color Doppler, Tissue Doppler Imaging (TDI), Tissue Doppler Wave (TDW), Power Amplitude Doppler, Pulse Inversion Harmonic Imaging (S-Harmonic), Directional Power Doppler (S-Flow), Color M-Mode, 3D Imaging Mode, 4D Imaging Mode, ElastoScan+ Mode, Tissue Harmonic Imaging, MV-Flow Mode, or a combination of these modes.

    The HERA W9/HERA W10 also give the operator the ability to measure anatomical structures and offer analysis packages that provide information used by competent health care professionals to make a diagnosis. The systems have a real-time acoustic output display with two basic indices, a mechanical index and a thermal index, which are both automatically displayed.

    AI/ML Overview

    The provided FDA 510(k) summary (K242444) describes the acceptance criteria and study proving the device meets these criteria for three AI-powered features: HeartAssist, BiometryAssist, and ViewAssist, as well as a non-AI feature, SonoSync.

    Here's a breakdown of the requested information for each AI feature:

    HeartAssist

    1. Acceptance Criteria and Reported Device Performance

    | Test | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | View recognition accuracy | 89% | 96.07% |
    | Segmentation dice-score | 0.8 | 0.88 |
    | Size measurement (area) error rate | 8% or less | 8% or less |
    | Size measurement (angle) error rate | 4% or less | 4% or less |
    | Size measurement (circumference) error rate | 11% or less | 11% or less |
    | Size measurement (diameter) error rate | 11% or less | 11% or less |
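    The segmentation dice-score used as an acceptance criterion here (and for BiometryAssist and ViewAssist below) is the Dice similarity coefficient between the algorithm's mask and the expert-drawn ground truth. A minimal sketch on toy masks (illustrative only, not study data):

    ```python
    import numpy as np

    def dice_score(pred, truth):
        """Dice similarity coefficient between two binary masks:
        2*|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap."""
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        total = pred.sum() + truth.sum()
        return 2.0 * inter / total if total else 1.0

    # Toy 4x4 example: the prediction overshoots the truth by one column.
    truth = np.zeros((4, 4), dtype=np.uint8)
    truth[1:3, 1:3] = 1   # 4-pixel ground-truth square
    pred = np.zeros((4, 4), dtype=np.uint8)
    pred[1:3, 1:4] = 1    # 6-pixel prediction
    print(round(dice_score(pred, truth), 3))  # → 0.8
    ```

    In this toy case the overlap is 4 pixels out of 4 + 6 total, giving 2·4/10 = 0.8, i.e., exactly at the acceptance threshold the tables cite.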

    2. Sample size used for the test set and data provenance

    • Individuals: 69
    • Static Images: 315 (at least 1 static image per view location per individual)
    • Provenance: Data collected at two hospitals in the United States and South Korea. Mixed retrospective and prospective data collection.

    3. Number of experts used to establish the ground truth for the test set and their qualifications

    • Number of Experts: Three active participating experts for initial classification and manual drawing, supervised by one additional expert.
    • Qualifications:
      • One obstetrician with more than 20 years of experience (primary classification/drawing).
      • Two sonographers with more than 10 years of experience in fetal cardiology (primary classification/drawing).
      • One obstetrician with more than 25 years of experience (supervising the entire process).

    4. Adjudication method for the test set

    Not explicitly stated. The process mentions "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the images." It doesn't specify if there was a consensus mechanism or independent review and adjudication if the experts disagreed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

    No, an MRMC comparative effectiveness study was not done. This study focuses on the standalone performance of the AI algorithm.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the provided data describes the standalone performance of the HeartAssist algorithm against established ground truth.

    7. The type of ground truth used

    Expert consensus based on manual classification of views and manual drawing of corresponding anatomy areas.

    8. The sample size for the training set

    Not explicitly stated for HeartAssist, but it is mentioned that "Data used for training, tuning and validation purpose are completely separated from the ones during training process and there is no overlap among the three."

    9. How the ground truth for the training set was established

    Not explicitly detailed for the training set, but it is implied to be the same method as for the validation set: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the images."


    BiometryAssist

    1. Acceptance Criteria and Reported Device Performance

    | Test | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Segmentation dice-score | 0.8 | 0.91 |
    | Size measurement (circumference) error rate | 8% or less | 8% or less |
    | Size measurement (distance) error rate | 4% or less | 4% or less |
    | Size measurement (NT, NB, IT) error rate | 1mm or less | 1mm or less |

    2. Sample size used for the test set and data provenance

    • Individuals: 33
    • Static Images: 360 (at least 1 static image per view location per individual)
    • Provenance: Data collected at two hospitals in South Korea and the United States. Mixed retrospective and prospective data collection.

    3. Number of experts used to establish the ground truth for the test set and their qualifications

    • Number of Experts: Three active participating experts for initial classification and manual drawing, supervised by one additional expert.
    • Qualifications:
      • One obstetrician with more than 20 years of experience (primary classification/drawing).
      • Two sonographers with more than 10 years of experience in fetal cardiology (primary classification/drawing).
      • One obstetrician with more than 25 years of experience (supervising the entire process).

    4. Adjudication method for the test set

    Not explicitly stated. The process mentions "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the image." It doesn't specify if there was a consensus mechanism or independent review and adjudication if the experts disagreed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

    No, an MRMC comparative effectiveness study was not done. This study focuses on the standalone performance of the AI algorithm.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the provided data describes the standalone performance of the BiometryAssist algorithm against established ground truth.

    7. The type of ground truth used

    Expert consensus based on manual classification of views and manual drawing of corresponding anatomy areas.

    8. The sample size for the training set

    Not explicitly stated for BiometryAssist, but it is mentioned that "Data used for training, tuning and validation purpose are completely separated from the ones during training process and there is no overlap between the three."

    9. How the ground truth for the training set was established

    Not explicitly detailed for the training set, but it is implied to be the same method as for the validation set: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the image."


    ViewAssist

    1. Acceptance Criteria and Reported Device Performance

    | Test | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | View recognition accuracy | 89% | 94.92% |
    | Anatomy annotation (segmentation) dice-score | 0.8 | 0.89 |

    2. Sample size used for the test set and data provenance

    • Individuals: 98
    • Static Images: 1,485 (at least 1 static image per view location per individual)
    • Provenance: Data collected at two hospitals in South Korea and the United States. Mixed retrospective and prospective data collection.

    3. Number of experts used to establish the ground truth for the test set and their qualifications

    • Number of Experts: Three active participating experts for initial classification and manual drawing, supervised by one additional expert.
    • Qualifications:
      • One obstetrician with more than 20 years of experience (primary classification/drawing).
      • Two sonographers with more than 10 years of experience in fetal cardiology (primary classification/drawing).
      • One obstetrician with more than 25 years of experience (supervising the entire process).

    4. Adjudication method for the test set

    Not explicitly stated. The process mentions "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the image." It doesn't specify if there was a consensus mechanism or independent review and adjudication if the experts disagreed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

    No, an MRMC comparative effectiveness study was not done. This study focuses on the standalone performance of the AI algorithm.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, the provided data describes the standalone performance of the ViewAssist algorithm against established ground truth.

    7. The type of ground truth used

    Expert consensus based on manual classification of views and manual drawing of corresponding anatomy areas.

    8. The sample size for the training set

    Not explicitly stated for ViewAssist, but it is mentioned that "Data used for training, tuning and validation purpose are completely separated from the ones during training process and there is no overlap between the three."

    9. How the ground truth for the training set was established

    Not explicitly detailed for the training set, but it is implied to be the same method as for the validation set: "All acquired images for training, tuning and validation were first classified into the correct views by three participating experts. Afterwards, corresponding anatomy areas were manually drawn for each of the image."


    General Notes applicable to all AI features:

    • No Multi-Reader Multi-Case (MRMC) Study: The document explicitly states that "The subject of this premarket submission, HERA W9/ HERA W10, did not require clinical studies to demonstrate the substantial equivalence." The studies described are technical performance evaluations of the AI algorithms, not comparative effectiveness studies with human readers.
    • Ground Truth Consistency: For all three AI features, the ground truth establishment process is described identically, relying on a small panel of experienced experts.
    • Independence of Data: For all three AI features, it is stated that "Data used for training, tuning and validation purpose are completely separated from the ones during training process and there is no overlap among the three."
    • Demographics: For all three AI features, the validation dataset consists of female patients of reproductive age (specific ages not collected); ethnicity was not available, and data originated from the United States and South Korea. ISUOG and AIUM guidelines were used to divide fetal ultrasound images into views. BMI and gestational age distributions are also provided.
    • Equipment: Data was acquired with SAMSUNG MEDISON's ultrasound systems (HERA W9/HERA W10) to secure diversity.
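    The "no overlap among the three" requirement quoted above is typically realized by assigning whole individuals, not individual images, to the train/tune/validation sets, so no patient's images appear in more than one set. A minimal sketch with hypothetical patient and image IDs (split fractions are illustrative; the submission does not state them):

    ```python
    import random

    def split_by_individual(image_ids_by_patient, seed=0, frac=(0.7, 0.15, 0.15)):
        """Assign whole patients to train/tune/validation so the three
        sets share no individuals, then expand back to image lists."""
        patients = sorted(image_ids_by_patient)
        random.Random(seed).shuffle(patients)
        n = len(patients)
        n_train = int(frac[0] * n)
        n_tune = int(frac[1] * n)
        groups = {
            "train": patients[:n_train],
            "tune": patients[n_train:n_train + n_tune],
            "validation": patients[n_train + n_tune:],
        }
        return {k: [img for p in v for img in image_ids_by_patient[p]]
                for k, v in groups.items()}

    # Toy dataset: 10 hypothetical patients with 3 images each.
    data = {f"P{i:02d}": [f"P{i:02d}_img{j}" for j in range(3)] for i in range(10)}
    splits = split_by_individual(data)

    # Verify the patient sets are pairwise disjoint.
    pts = {k: {img.split("_")[0] for img in v} for k, v in splits.items()}
    assert not (pts["train"] & pts["tune"])
    assert not (pts["train"] & pts["validation"])
    assert not (pts["tune"] & pts["validation"])
    ```

    Splitting at the image level instead would let near-duplicate frames from the same exam leak across sets and inflate the reported validation metrics.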