
510(k) Data Aggregation

    K Number: K252433
    Date Cleared: 2026-03-16 (227 days)
    Regulation Number: 892.1550
    Age Range: 0.21 - 0.79
    Predicate For: N/A
    Intended Use

    Sonio Detect is intended to analyze fetal ultrasound images and clips using machine learning techniques to automatically detect views, detect anatomical structures within the views and verify quality criteria and characteristics of the views.

    The device is intended for use as a concurrent reading aid during the acquisition and interpretation of fetal ultrasound images.

    Device Description

    Sonio Detect is a Software as a Service (SaaS) solution that helps sonographers, OB/GYNs, MFMs, and fetal surgeons (all designated as healthcare professionals, i.e. HCPs, in the following) perform their routine fetal ultrasound examinations in real time. Sonio Detect can be used by HCPs during fetal ultrasound exams in Trimester 1, Trimester 2, and Trimester 3 (GA: from 11 weeks to 41 weeks). The software is intended to assist HCPs in assuring, during and after their examination, that the examination is complete and all images were collected according to their protocol.

    Sonio Detect requires the following:

    • Edge Software (described below), installed on a server on the same network as the Ultrasound Machine;
    • SaaS accessibility from any internet browser (recommended browser: Google Chrome).

    Sonio's Edge Software is a lightweight application that runs on a server (computer) connected to the same network as the Ultrasound Machine. It is installed on the HCP's server and network, and its main purpose is to receive DICOM instances from the Ultrasound Machine and upload them to Sonio's Cloud for use by Sonio Detect.
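    The Edge Software's role described above can be sketched as a simple receive-and-forward filter. This is an illustrative reconstruction, not Sonio's actual code: a plain dict stands in for a DICOM dataset, and the filtering rule (ultrasound modality with pixel data) is an assumption.

    ```python
    # Hypothetical sketch of the Edge Software's core decision: accept a DICOM
    # instance from the ultrasound machine and decide whether to forward it to
    # the cloud. All names and rules here are illustrative assumptions.

    def should_upload(instance: dict) -> bool:
        """Forward only ultrasound image/clip instances that carry pixel data."""
        return instance.get("Modality") == "US" and "PixelData" in instance

    def process_instances(instances: list[dict]) -> list[dict]:
        """Filter a batch of received instances down to those worth uploading."""
        return [ds for ds in instances if should_upload(ds)]
    ```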

    Sonio Detect receives, in real time, fetal ultrasound images and clips from the ultrasound machine, submitted through the Edge Software by the performing healthcare professional, and performs the following:

    • Automatically detect views;
    • Automatically detect anatomical structures within the supported views;
    • Automatically verify quality criteria and characteristics of the supported views by checking whether they conform to standardized quality criteria;
    • Automatically output bounding boxes for views and structures.

    Quality criteria are related to:

    • The presence of an anatomical structure;
    • The absence of an anatomical structure;

    Characteristics cover items other than quality criteria:

    • Location of the placenta
    • Fetus sex

    Sonio Detect then automatically associates the image with its detected view. It also highlights in yellow the view and/or the corresponding quality criteria or characteristics if there are unverified items (quality criteria or characteristics not verified, or a view not detected).

    The end user can interact with the software to override Sonio Detect's outputs (reassign the image to another view, unassign it, or assign it if it was not assigned; change the status of a quality criterion from verified to unverified or vice versa) and manually set the characteristics of the views. The user can review and edit/override the matching at any time during or at the end of the exam.
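    The behavior described above (per-image outputs, yellow highlighting of unverified items, and user overrides) can be sketched as a small data model. This is an illustrative reconstruction, not Sonio's actual code; all class, field, and method names are assumptions.

    ```python
    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class ImageResult:
        view: str | None = None                  # None = view not detected
        quality_criteria: dict[str, bool] = field(default_factory=dict)  # criterion -> verified?
        characteristics: dict[str, str] = field(default_factory=dict)    # e.g. {"placenta location": "Anterior"}

        def unverified_items(self) -> list[str]:
            """Items the UI would highlight in yellow."""
            items = [] if self.view is not None else ["view not detected"]
            return items + [c for c, ok in self.quality_criteria.items() if not ok]

        # User overrides described above:
        def reassign(self, view: str | None) -> None:
            """Reassign the image to another view, or unassign it with None."""
            self.view = view

        def toggle_criterion(self, criterion: str) -> None:
            """Flip a quality criterion between verified and unverified."""
            self.quality_criteria[criterion] = not self.quality_criteria.get(criterion, False)
    ```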

    The lists of views, anatomical structures, quality criteria, and characteristics that can be automatically detected and verified by the software are detailed in Tables 1, 2, 3, and 4 below.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves Sonio Detect (v3) meets them, based on the provided FDA 510(k) clearance letter:


    Sonio Detect (v3) - Acceptance Criteria and Performance Study

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA clearance letter does not explicitly state pre-defined acceptance criteria for each metric (e.g., "Sensitivity must be > X"). Instead, it presents the reported performance values from a standalone bench study. The "acceptance" is implied by the FDA's substantial equivalence determination based on these reported results.

    However, to create a table highlighting what was reported as acceptable performance, we can extract the sensitivity and specificity values for various detection tasks, and mIoU for localization tasks.

    All values are point estimates (PE) with 95% confidence intervals (CI); "N/A" entries from the original table are omitted.

    • Automatic detection of 14 T1 fetal ultrasound images: Sensitivity 0.886 (0.876-0.898), Specificity 0.981 (0.979-0.982)
    • Automatic detection of 40 T2/T3 fetal ultrasound images: Sensitivity 0.901 (0.896-0.905), Specificity 0.993 (0.993-0.994)
    • Automatic localization of 1 view (T1): mIoU 0.777 (0.743-0.811)
    • Automatic detection of 1 fetal brain anatomical structure on the view "Transthalamic" at T1: Sensitivity 0.815 (0.751-0.871), Specificity 0.938 (0.901-0.973)
    • Automatic detection of 7 fetal brain anatomical structures on the views "Transthalamic", "Transventricular", "Transcerebellar" at T2/T3: Sensitivity 0.941 (0.934-0.947), Specificity 0.951 (0.943-0.958)
    • Automatic detection of 8 fetal thorax and heart anatomical structures on the views "4 chambers", "LVOT", "RVOT", "Three vessels", "Three vessels and trachea", "Abdominal Circumference", "Axial view of the kidneys", "Diaphragm", "Abdominal cord insertion" at T1: Sensitivity 0.872 (0.848-0.892), Specificity 0.921 (0.913-0.928)
    • Automatic detection of 24 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", "RVOT", "Three vessels", "Three vessels and trachea", "Abdominal Circumference", "Axial view of the kidneys", "Diaphragm", "Abdominal cord insertion" at T2/T3: Sensitivity 0.914 (0.906-0.920), Specificity 0.963 (0.961-0.965)
    • Automatic detection of 3 fetal placenta anatomical structures on the view "Placenta / Cervix" at T2/T3: Sensitivity 0.887 (0.874-0.899), Specificity 0.962 (0.950-0.972)
    • Automatic detection of 6 fetal Sagittal Fetus anatomical structures on the views "Crown Rump Length", "Profile" at T1: Sensitivity 0.869 (0.849-0.896), Specificity 0.848 (0.822-0.875)
    • Automatic detection of 1 fetal Sagittal Fetus anatomical structure on the view "Profile" at T2/T3: Sensitivity 0.883 (0.852-0.913), Specificity 0.800 (0.754-0.842)
    • Automatic detection of 5 fetal Coronal Face anatomical structures on the views "Lips and nose", "Orbits", "Coronal face" at T2/T3: Sensitivity 0.922 (0.897-0.947), Specificity 0.901 (0.883-0.923)
    • Automatic detection of 3 fetal Spine anatomical structures on the view "Sagittal Spine" at T2/T3: Sensitivity 0.839 (0.818-0.862), Specificity 0.852 (0.829-0.873)
    • Automatic localization of 1 brain anatomical structure (T1): mIoU 0.683 (0.632-0.734)
    • Automatic localization of 3 thorax and heart anatomical structures (T1): mIoU 0.679 (0.653-0.705)
    • Automatic detection of the Anterior placenta location for the view "Placenta / Cervix" at T2/T3: Sensitivity 0.925 (0.894-0.949), Specificity 0.918 (0.885-0.948)
    • Automatic detection of the Posterior placenta location for the view "Placenta / Cervix" at T2/T3: Sensitivity 0.918 (0.885-0.948), Specificity 0.925 (0.894-0.949)
    • Automatic detection of "Female sex" for fetal sex on the view "External Genitalia" at T2/T3: Sensitivity 1.000 (1.000-1.000), Specificity 0.985 (0.969-1.000)
    • Automatic detection of "Male sex" for fetal sex on the view "External Genitalia" at T2/T3: Sensitivity 0.985 (0.969-1.000), Specificity 1.000 (1.000-1.000)

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size (Test Set): 22,496 fetal ultrasound images.
    • Data Provenance (Test Set):
      • Country of Origin: Not explicitly stated in the provided document.
      • Retrospective or Prospective: Not explicitly stated in the provided document.
      • Independence: The dataset was "independent of the data used during model development (training/fine tuning/internal validation) and establishment of device operating points."
      • Subgroup Validation: Performance was also validated for subgroups including ultrasound machine manufacturer (GE, Canon, Philips, and Samsung were specified as supported manufacturers), BMI, maternal age, confounding cases, image quality, geography, gestational age, and race/ethnicity (when appropriate).
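    Subgroup validation as described above amounts to computing each metric separately per stratum. A minimal sketch, assuming hypothetical per-image records with ground-truth and device-output labels (the actual evaluation data format is not public):

    ```python
    from collections import defaultdict

    def sensitivity_by_subgroup(records: list[dict], key: str) -> dict[str, float]:
        """Per-stratum sensitivity. Each record is assumed to carry a boolean
        'positive' (ground truth), a boolean 'detected' (device output), and a
        stratum field such as 'manufacturer' -- illustrative names only."""
        tp: dict[str, int] = defaultdict(int)
        fn: dict[str, int] = defaultdict(int)
        for r in records:
            if r["positive"]:                 # sensitivity uses positives only
                if r["detected"]:
                    tp[r[key]] += 1
                else:
                    fn[r[key]] += 1
        return {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}
    ```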

    3. Number of Experts and Qualifications for Test Set Ground Truth

    The document does not explicitly state the number of experts used to establish the ground truth for the test set, nor their specific qualifications (e.g., "radiologist with 10 years of experience"). It mentions "ground truth" and "independent of the data used during model development," implying expert labeling, but the details are missing from this executive summary.

    4. Adjudication Method for the Test Set

    The adjudication method (e.g., 2+1, 3+1) for establishing the ground truth of the test set is not explicitly stated in the provided document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This means there is no information on how human readers improve with AI vs. without AI assistance.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance study was done. The document states: "Sonio conducted a standalone performance testing on a dataset of 22496 fetal ultrasound images." The results presented in Table 6 are all for the algorithm's standalone performance (sensitivity, specificity for detection tasks, and mIoU for localization tasks).

    7. Type of Ground Truth Used

    The specific type of ground truth (e.g., expert consensus, pathology, outcomes data) for the test set is not explicitly detailed. However, given the nature of fetal ultrasound image analysis and the listed performance metrics (sensitivity, specificity, mIoU), it is highly probable that the ground truth was established by expert consensus or individual expert annotations on the images, which were then compared against the device's output. The document itself doesn't provide this specific detail.

    8. Sample Size for the Training Set

    The sample size for the training set is not explicitly stated in the provided document. The document refers to "data used during model development (training/fine tuning/internal validation)," but does not provide the specific numbers of images or cases.

    9. How the Ground Truth for the Training Set Was Established

    The method for establishing the ground truth for the training set is not explicitly stated in the provided document. Similar to the test set, it is inferred to be through expert annotation, but no details regarding the number or qualifications of experts or the adjudication process are given.
