Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K243614
    Device Name
    Sonio Suspect
    Manufacturer
    Date Cleared
    2025-02-21

    (91 days)

    Product Code
    Regulation Number
    892.2060
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Sonio

    Intended Use

    Sonio Suspect is intended to assist interpreting physicians, during or after fetal ultrasound examinations, by automatically identifying and characterizing abnormal fetal ultrasound findings on detected views, using machine learning techniques.

    The device is intended for use as a concurrent reading aid on acquired images, during and/or after fetal ultrasound examinations.

    The device provides information on abnormal findings that may be useful in rendering potential diagnosis.

    Patient management decisions should not be made solely on the results of the Sonio Suspect analysis.

    Device Description

    Sonio Suspect is a Software as a Service (SaaS) solution that helps interpreting physicians (referred to in the following as healthcare professionals, or HCPs) identify abnormal fetal ultrasound findings during and/or after fetal ultrasound examinations.

    Sonio Suspect is a web application accessible from any device connected to the internet: a tablet, a computer, or any other device capable of running a web application.

    Sonio Suspect can be used by HCPs as a concurrent reading aid on acquired images, to assist them during and/or after fetal ultrasound examinations at gestational ages (GA) from 11 to 41 weeks. A concurrent read means one in which the device output is available during and/or after the fetal ultrasound examination.

    Sonio Suspect is built so that the HCP can use it at any moment: the software can process any ultrasound image file uploaded by the HCP, at any time.

    Sonio Suspect can be connected through an API to external devices (such as an ultrasound machine) to receive images.

    Sonio Suspect workflow goes through the following steps:

    As soon as an image is received, it is automatically associated with a detected view (and can be manually re-associated by the HCP). The abnormal fetal ultrasound findings linked to that view are then evaluated and displayed individually, each with one of the following statuses:

    • Suspected (abnormal finding identified on the image);
    • Not Suspected (abnormal finding not identified on the image);
    • Can't be analyzed (abnormal finding not evaluated because one or several structures were not detected, or because the selected fetal position is "other or unknown" when it is required to evaluate the finding).

    Each abnormal finding status can be manually overridden to Present or Not Present by the user.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Standalone performance (algorithm only)
    Acceptance criteria (implicit from validation studies): high sensitivity for detecting abnormal findings; high specificity to minimize false positives.
    Reported performance: average sensitivity 93.2% (95% CI: 91.6%–94.6%); average specificity 90.8% (95% CI: 89.5%–92.0%). Individual abnormal-finding performance is detailed in Table 3.

    Clinical performance (human reader with AI assistance vs. without)
    Acceptance criteria (implicit): readers assisted by Sonio Suspect should outperform their unassisted reads.
    Reported performance: AUC 68.9% in the "Unassisted" setting vs. 90.0% in the "Assisted" setting, a significant difference of 21.9%. ROC curves (Figure 1) and per-finding AUCs (Table 4) confirm consistent improvement.
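    The sensitivity and specificity figures above are standard confusion-matrix quantities. The sketch below shows their definitions; the counts are hypothetical, chosen only so the point estimates reproduce the averages reported in the summary (the actual per-finding counts are not given in the text).

```python
def sensitivity(tp: int, fn: int) -> float:
    # True-positive rate: share of truly abnormal cases flagged as Suspected.
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # True-negative rate: share of truly normal cases flagged as Not Suspected.
    return tn / (tn + fp)

# Hypothetical counts (not from the 510(k) summary):
tp, fn, tn, fp = 466, 34, 908, 92
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 93.2%
print(f"specificity = {specificity(tn, fp):.1%}")  # 90.8%
```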

    Detailed Study Information:

    1. Sample size used for the test set and the data provenance:

      • Standalone Test Set: 8,745 fetal ultrasound images from 1,115 exams.
      • Clinical Test Set: 750 fetal ultrasound images (between 11 and 41 weeks) evaluated by each reader, from 287 distinct exams.
      • Data Provenance: The standalone test set included data from 75 sites, with 64 located in the United States. The clinical test set included data from 47 sites, with 37 located in the United States. This indicates a mix of US and OUS (Outside US) data, explicitly representing the intended use population. The study was retrospective.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document implies that ground truth for the clinical study was based on expert consensus: it refers to a "fully-crossed multiple reader, multiple case (MRMC) retrospective reader study" in which readers provide a "binary determination of the presence or absence of an abnormal finding." However, the exact number of experts who established the ground truth for the test set (as opposed to participating as readers), and their specific qualifications, are not explicitly stated in the provided text. The readers themselves were:
        • 13 readers: 5 MFM (Maternal-Fetal Medicine), 6 OB/GYN (Obstetrician-Gynecologists), and 2 Diagnostic radiologists.
        • Experience: 1-30+ years' experience.
    3. Adjudication method for the test set:

      • The document states that in the clinical study, "For each image, each reader was required to provide a binary determination of the presence or absence of an abnormal finding and to provide a score representing their confidence in their annotation." It also mentions "two independent reading sessions separated by a washout period." While this describes the reader process, it does not explicitly describe an adjudication method (like 2+1 or 3+1) used to establish a definitive ground truth from multiple expert opinions. It implies that the ground truth was pre-established for the images used in the reader study.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with vs. without AI assistance:

      • Yes, an MRMC comparative effectiveness study was done.
      • Effect Size: The study demonstrated a significant improvement in reader accuracy. The Area Under the Curve (AUC) for readers:
        • Without AI assistance ("Unassisted"): 68.9%
        • With AI assistance ("Assisted"): 90.0%
        • This represents a significant difference (effect size) of 21.9% in AUC.
    5. Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, standalone performance testing was conducted.
      • The results are detailed in Table 3, showing an average sensitivity of 93.2% and specificity of 90.8% for abnormal finding detection.
    6. The type of ground truth used:

      • Implicitly, expert consensus or pre-established clinical diagnosis. For the standalone study, the robust sensitivity and specificity metrics suggest comparison against a definitive "ground truth" for the presence or absence of abnormal findings. For the clinical study, readers compared their findings against this ground truth. The document does not specify if pathology or outcomes data were directly used to define the ground truth for every case, but it's common for such studies to rely on a panel of experts or established clinical reports to define the ground truth for imaging-based diagnoses.
    7. The sample size for the training set:

      • The sample size for the training set is not explicitly stated. The document mentions that the global validation dataset for standalone testing "was independent of the data used during model development (training/internal validation) and the establishment of device operating points," implying a separate training set existed, but its size is not provided.
    8. How the ground truth for the training set was established:

      • This information is not explicitly provided. It can be inferred that similar methods to the test set (e.g., expert review and consensus) would have been used, but no specifics are given in the text.

    K Number
    K240406
    Device Name
    Sonio Detect
    Manufacturer
    Date Cleared
    2024-04-26

    (77 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Sonio

    Intended Use

    Sonio Detect is intended to analyze fetal ultrasound images and clips using machine learning techniques to automatically detect views, detect anatomical structures within the views and verify quality criteria and characteristics of the views.

    The device is intended for use as a concurrent reading aid during the acquisition and interpretation of fetal ultrasound images.

    Device Description

    Sonio Detect is a Software as a Service (SaaS) solution that helps sonographers, OB/GYNs, MFMs and fetal surgeons (all designated as healthcare professionals, i.e. HCPs, in the following) perform their routine fetal ultrasound examinations in real time. Sonio Detect can be used by HCPs during fetal ultrasound exams in Trimester 1, Trimester 2 and Trimester 3 (GA: from 11 to 37 weeks). The software is intended to assist HCPs in ensuring, during and after their examination, that the examination is complete and all images were collected according to their protocol.

    Sonio Detect requires the following:

    • Edge Software (described below) to install on a server on the same network as the Ultrasound Machine;
    • SaaS accessibility from any internet browser (recommended browser: Google Chrome).

    Sonio's Edge Software is a lightweight application that runs on a server (computer) connected to the same network as the Ultrasound Machine. It is installed on the HCP's server and network, and its main purpose is to receive DICOM instances from the Ultrasound Machine and upload them to Sonio's Cloud for use by Sonio Detect.

    Sonio Detect receives, in real time, fetal ultrasound images and clips from the ultrasound machine, submitted through the edge software by the performing healthcare professional, and performs the following:

    • Automatically detect views;
    • Automatically detect anatomical structures within the supported views;
    • Automatically verify quality criteria and characteristics of the supported views by checking whether they conform to standardized quality criteria.

    Quality criteria are related to:

    • The presence of an anatomical structure;
    • The absence of an anatomical structure.

    Characteristics are related to items other than quality criteria:

    • Location of the placenta
    • Fetus sex

    Sonio Detect then automatically associates the image to its detected view. It also highlights in yellow the view and/or the corresponding quality criteria or characteristics if there are unverified items: quality criteria or characteristics not verified or view not detected.

    The end user can interact with the software to override Sonio Detect's outputs (reassign the image to another view, unassign it, or assign it if it was not assigned; change the status of a quality criterion from verified to unverified or vice versa) and manually set the characteristics of the views. The user can review and edit/override the matching at any time during or at the end of the exam.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Sonio Detect:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for "Sonio Detect" are primarily reflected in the performance metrics presented in Table 6, specifically Sensitivity and Specificity for various detection tasks. The document does not explicitly state pre-defined thresholds for these metrics as "acceptance criteria" but rather reports the achieved performance. However, for the purpose of this response, we infer the reported performance values as the demonstrated capability that met FDA's requirements for substantial equivalence.

    Reported device performance (point estimate, with 95% Wilson confidence interval):

    • Automatic detection of 3D fetal ultrasound images: sensitivity 0.892 (0.836–0.931)
    • Automatic detection of Doppler fetal ultrasound images: sensitivity 0.973 (0.937–0.988)
    • Automatic detection of fetal ultrasound views through reading of annotations on images: sensitivity 0.913 (0.852–0.951)
    • Automatic detection of 7 T1 fetal ultrasound views: sensitivity 0.914 (0.906–0.921)
    • Automatic detection of 18 T2/T3 fetal ultrasound views: sensitivity 0.937 (0.933–0.940)
    • Automatic detection of 8 fetal brain anatomical structures on the views "Transthalamic", "Transventricular", "Transcerebellar" at T2/T3: sensitivity 0.934 (0.925–0.943), specificity 0.949 (0.942–0.955)
    • Automatic detection of 6 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", "RVOT", "Three vessels or Three vessels and trachea", "Abdominal Circumference", "Axial view of the kidneys" at T1: sensitivity 0.861 (0.841–0.878), specificity 0.938 (0.926–0.948)
    • Automatic detection of 21 fetal thorax and heart anatomical structures on the same views at T2/T3: sensitivity 0.919 (0.913–0.924), specificity 0.976 (0.974–0.978)
    • Automatic detection of 4 fetal placenta anatomical structures on the views "Placenta insertion", "Placenta location" at T2/T3: sensitivity 0.967 (0.955–0.975), specificity 0.856 (0.838–0.871)
    • Automatic detection of 8 fetal CRL/NT/Profile anatomical structures on the views "Crown Rump Length", "Nuchal Translucency", "Profile" at T1: sensitivity 0.898 (0.885–0.910), specificity 0.862 (0.845–0.878)
    • Automatic detection of 6 fetal CRL/NT/Profile anatomical structures on the same views at T2/T3: sensitivity 0.893 (0.879–0.906), specificity 0.956 (0.949–0.962)
    • Automatic detection of the Anterior placenta location for the views "Placenta insertion", "Placenta location" at T2/T3: sensitivity 0.959 (0.918–0.980), specificity 0.966 (0.924–0.986)
    • Automatic detection of the Posterior placenta location for the same views: sensitivity 0.966 (0.924–0.986), specificity 0.959 (0.918–0.980)
    • Automatic detection of "Female sex" for fetal sex on the view "External Genitalia": sensitivity 0.977 (0.942–0.991), specificity 0.987 (0.963–0.996)
    • Automatic detection of "Male sex" for fetal sex on the view "External Genitalia": sensitivity 0.987 (0.963–0.996), specificity 0.977 (0.942–0.991)
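    The intervals quoted in this table are Wilson score intervals, which can be computed directly from a point estimate and a sample size. A minimal sketch of the formula follows; the per-task sample sizes are not given in the summary, so `n` below is hypothetical.

```python
from math import sqrt

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion p_hat observed over n trials."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical n (not from the summary):
lo, hi = wilson_ci(0.892, 200)
print(f"({lo:.3f}, {hi:.3f})")  # → (0.841, 0.928)
```

    Note that, unlike the naive Wald interval, the Wilson interval stays inside [0, 1] even for proportions near 1, which is why it is commonly used for high-sensitivity/high-specificity results like these.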

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 36,769 fetal ultrasound images.
    • Data Provenance: The document states this was a "global validation dataset." While specific countries are not mentioned, the use of "global" implies a diverse set of origins. It is also noted that the data was independent of that used for model development (training/fine-tuning/internal validation). The document does not explicitly state if the data was retrospective or prospective. Given it's a "validation dataset" of "images," it's typically retrospective, collected prior to the full validation study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set.

    4. Adjudication Method for the Test Set

    The document does not describe the adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, the document explicitly states: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This indicates that no MRMC comparative effectiveness study was conducted to assess the improvement of human readers with AI assistance. The performance reported is a standalone (algorithm only) performance.

    • Effect size of human readers improving with AI vs. without AI assistance: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? Yes. The document clearly states: "Sonio conducted a standalone performance testing on a dataset of 36 769 fetal ultrasound images."

    7. Type of Ground Truth Used

    The ground truth for the test set was established through "reading of annotations on images" (as mentioned in Table 6). While the specific method of establishing these annotations (e.g., single expert, expert consensus, pathology, outcomes data) is not detailed, it would inherently involve expert review to create the "annotations." Given the nature of ultrasound image interpretation, it is highly likely based on expert consensus or expert-reviewed annotations, but this is not explicitly stated. It is inferred to be expert-derived given the context of medical image analysis.

    8. Sample Size for the Training Set

    The document states that the global validation dataset (36,769 images) was "independent of the data used during model development (training/fine tuning/internal validation)." However, it does not provide the specific sample size of the training set.

    9. How the Ground Truth for the Training Set Was Established

    The document mentions "model development (training/fine tuning/internal validation)," which implies that ground truth was established for these datasets to train and validate the AI models. However, it does not explicitly describe the method for establishing this ground truth (e.g., number of experts, qualifications, adjudication method). It can be inferred that a similar process involving expert annotations or review would have been used as for the test set, but this is not detailed.


    K Number
    K230365
    Device Name
    Sonio Detect
    Manufacturer
    Date Cleared
    2023-07-25

    (165 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Sonio

    Intended Use

    Sonio Detect is intended to analyze fetal ultrasound images and clips using machine learning techniques to automatically detect views, detect anatomical structures within the views and verify quality criteria of the views.
    The device is intended for use as a concurrent reading aid during the acquisition and interpretation of fetal ultrasound images.

    Device Description

    Sonio Detect is a Software as a Service (SaaS) solution that helps sonographers, OB/GYNs, MFMs and fetal surgeons (all designated as healthcare professionals, i.e. HCPs) perform their routine fetal ultrasound examinations in real time. Sonio Detect can be used by HCPs during fetal ultrasound exams in Trimester 1, Trimester 2 and Trimester 3 (Gestational Age: from 11 to 37 weeks). The software is intended to assist HCPs in ensuring, during and after their examination, that the examination is complete and all images were collected according to their protocol.
    Sonio Detect receives fetal ultrasound images and clips from the ultrasound machine, that are submitted through the edge software by the performing healthcare professional, in real-time and performs the following:

    • Automatically detect views;
    • Automatically detect anatomical structures within the supported views;
    • Automatically verify quality criteria of the supported views by checking whether they conform to standardized quality criteria.

    Quality criteria are related to:

    • the presence or absence of an anatomical structure;
    • the zoom level for some views.

    Sonio Detect then automatically associates the image to its detected view. It also highlights in yellow the view and/or the corresponding quality criteria if there are unverified items: quality criteria not verified or view not detected.

    The end user can interact with the software to override Sonio Detect's outputs (reassign the image to another view, unassign it, or assign it if it was not assigned; change the status of a quality criterion from verified to unverified or vice versa). The user can review and edit/override the matching at any time during or at the end of the exam.
    AI/ML Overview

    Sonio Detect Acceptance Criteria and Study Details

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for Sonio Detect are implicitly defined by the reported performance metrics, which the FDA has deemed sufficient for substantial equivalence. The reported performance is presented as sensitivities, specificities, and proportions of correctly read annotations.

    Reported device performance (implied acceptance criterion: high sensitivity/specificity, or high proportion correct):

    • 3D fetal ultrasound image detection: sensitivity 0.980 (95% Wilson CI: 0.930–0.994)
    • Doppler fetal ultrasound image detection: sensitivity 0.963 (95% Wilson CI: 0.908–0.985)
    • Fetal ultrasound view detection: proportion correct 0.923 (95% Wilson CI: 0.905–0.938)
    • T1 fetal ultrasound view detection: sensitivity 0.942 (point estimate)
    • T2/T3 fetal ultrasound view detection: sensitivity 0.919 (point estimate)
    • T2/T3 fetal brain anatomical structure detection: sensitivity 0.857, specificity 0.963 (point estimates)
    • T2/T3 fetal heart anatomical structure detection: sensitivity 0.900, specificity 0.982 (point estimates)
    • Zoom level verification (brain views): sensitivity 0.952 (95% Wilson CI: 0.909–0.976), specificity 0.906 (95% Wilson CI: 0.758–0.968)

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: 17,885 fetal ultrasound images.
    • Data Provenance: The data was collected from 7 clinical sites in the United States, France, and Germany. This indicates a multi-national dataset. The data was retrospective as it was "independent of the data used during model development (training/fine tuning/internal validation) and establishment of device operating points."

    3. Number of Experts and Qualifications for Ground Truth (Test Set)

    The document does not explicitly state the number of experts used or their specific qualifications (e.g., years of experience) for establishing the ground truth of the test set. However, it indicates that the device automatically detects fetal ultrasound views "through reading of annotations on images." This implies that human experts (presumably sonographers, OB/GYNs, MFMs, or Fetal surgeons, as these are the intended users) provided the initial annotations that served as the ground truth.

    4. Adjudication Method (Test Set)

    The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for establishing the ground truth of the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. The document explicitly states: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." Therefore, there is no reported effect size for human readers improving with vs. without AI assistance.

    6. Standalone Performance Study

    Yes, a standalone performance study was done. The document states: "Sonio conducted a standalone performance testing on a dataset of 17885 fetal ultrasound images..." This indicates the algorithm's performance was evaluated without human intervention in the loop during the assessment of the test set.

    7. Type of Ground Truth Used (Test Set)

    The ground truth for the test set was established through "reading of annotations on images." This suggests the ground truth was based on expert annotations or labeling of the ultrasound images, likely by the qualified healthcare professionals who generated the initial data.

    8. Sample Size for Training Set

    The document does not explicitly state the sample size for the training set. It refers to "data used during model development (training/fine tuning/internal validation)" but does not provide a specific number of images or cases for this phase.

    9. How Ground Truth for Training Set was Established

    The method for establishing the ground truth for the training set is not explicitly detailed. However, given that the test set's ground truth was based on annotations, it is highly probable that the training set's ground truth was established through a similar process of expert annotation or labeling of the fetal ultrasound images and clips.

