510(k) Data Aggregation
(77 days)
Sonio Detect
Sonio Detect is intended to analyze fetal ultrasound images and clips using machine learning techniques to automatically detect views, detect anatomical structures within the views and verify quality criteria and characteristics of the views.
The device is intended for use as a concurrent reading aid during the acquisition and interpretation of fetal ultrasound images.
Sonio Detect is a Software as a Service (SaaS) solution that aims at helping sonographers, OB/GYNs, MFMs and fetal surgeons (all designated as healthcare professionals, i.e. HCPs, in the following) to perform their routine fetal ultrasound examinations in real time. Sonio Detect can be used by HCPs during fetal ultrasound exams in Trimester 1, Trimester 2 and Trimester 3 (gestational age: from 11 weeks to 37 weeks). The software is intended to assist HCPs in assuring, during and after their examination, that the examination is complete and that all images were collected according to their protocol.
Sonio Detect requires the following:
- Edge Software (described below), installed on a server on the same network as the Ultrasound Machine;
- SaaS accessibility from any internet browser (recommended browser: Google Chrome).
Sonio's Edge Software is a lightweight application that runs on a server (computer) connected to the same network as the Ultrasound Machine. It is installed on the HCP's server (computer) and network; its main purpose is to receive DICOM instances from the Ultrasound Machine and upload them to Sonio's Cloud for use by Sonio Detect.
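The submission does not describe the Edge Software's internals. Purely as a hypothetical sketch of the receive-and-forward pattern described above (a DICOM C-STORE listener that uploads received instances to a cloud endpoint), the following Python uses pynetdicom and requests; the listener port, cloud URL, and authorization header are illustrative assumptions, not Sonio's actual interface.

```python
# Hypothetical sketch of a lightweight "edge" DICOM listener that forwards
# received instances to a cloud endpoint. This is NOT Sonio's implementation;
# the port, endpoint URL and auth header are illustrative placeholders.
import os

import requests
from pynetdicom import AE, evt, AllStoragePresentationContexts

CLOUD_URL = os.environ.get("CLOUD_UPLOAD_URL", "https://cloud.example/upload")  # placeholder
API_KEY = os.environ.get("CLOUD_API_KEY", "")                                   # placeholder


def handle_store(event):
    """Handle a C-STORE request: persist the instance locally, then upload it."""
    ds = event.dataset
    ds.file_meta = event.file_meta

    filename = f"{event.request.AffectedSOPInstanceUID}.dcm"
    ds.save_as(filename, write_like_original=False)

    # A production service would queue uploads and retry on failure.
    with open(filename, "rb") as fh:
        requests.post(
            CLOUD_URL,
            files={"file": (filename, fh, "application/dicom")},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
    return 0x0000  # DICOM "Success" status


ae = AE(ae_title="EDGE_SCP")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

In practice such a service would also need queuing, retries, and secure handling of protected health information, none of which is shown here.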
Sonio Detect receives, in real time, fetal ultrasound images and clips from the ultrasound machine, submitted through the edge software by the performing healthcare professional, and performs the following:
- Automatically detect views;
- Automatically detect anatomical structures within the supported views;
- Automatically verify quality criteria and characteristics of the supported views by checking whether they conform to standardized quality criteria.
Quality criteria are related to:
- The presence of an anatomical structure;
- The absence of an anatomical structure.
Characteristics are related to items other than quality criteria:
- Location of the placenta;
- Fetal sex.
Sonio Detect then automatically associates the image with its detected view. It also highlights in yellow the view and/or the corresponding quality criteria or characteristics if there are unverified items (quality criteria or characteristics not verified, or view not detected).
The end user can interact with the software to override Sonio Detect's outputs (reassign the image to another view, unassign it, or assign it if it was not assigned; change the status of a quality criterion from verified to unverified or from unverified to verified) and manually set the characteristics of the views. The user can review and edit/override the matching at any time during or at the end of the exam.
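The submission does not describe Sonio Detect's internal data model. Purely as an illustration of the behavior described above (an image associated with a detected view, per-item verified/unverified statuses, and user overrides), a minimal sketch might look like the following; all names are hypothetical.

```python
# Hypothetical data model for the behavior described above: an image is
# associated with a detected view, quality criteria and characteristics carry
# a verified/unverified status, and the user may override any of it.
# All names are illustrative; this is not Sonio's data model.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Status(Enum):
    VERIFIED = "verified"
    UNVERIFIED = "unverified"


@dataclass
class QualityCriterion:
    name: str                         # e.g. presence/absence of a structure
    status: Status = Status.UNVERIFIED


@dataclass
class ImageAssessment:
    image_id: str
    detected_view: Optional[str] = None           # None -> "view not detected"
    criteria: list[QualityCriterion] = field(default_factory=list)
    characteristics: dict[str, Optional[str]] = field(default_factory=dict)
    # e.g. {"placenta_location": "anterior", "fetal_sex": None}

    def needs_attention(self) -> bool:
        """True if the image would be highlighted (unverified items remain)."""
        return (
            self.detected_view is None
            or any(c.status is Status.UNVERIFIED for c in self.criteria)
            or any(v is None for v in self.characteristics.values())
        )

    # --- user overrides ---------------------------------------------------
    def reassign_view(self, view: Optional[str]) -> None:
        """Reassign, assign, or unassign (view=None) the image's view."""
        self.detected_view = view

    def toggle_criterion(self, name: str) -> None:
        """Flip a criterion between verified and unverified."""
        for c in self.criteria:
            if c.name == name:
                c.status = (
                    Status.VERIFIED if c.status is Status.UNVERIFIED else Status.UNVERIFIED
                )

    def set_characteristic(self, name: str, value: str) -> None:
        """Manually set a characteristic such as placenta location or fetal sex."""
        self.characteristics[name] = value
```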
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Sonio Detect:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for Sonio Detect are primarily reflected in the performance metrics presented in Table 6, specifically sensitivity and specificity for the various detection tasks. The document does not state pre-defined thresholds for these metrics as explicit "acceptance criteria"; it reports the achieved performance. For the purpose of this response, the reported performance values are treated as the demonstrated capability that met FDA's requirements for substantial equivalence. (A short sketch of how the 95% Wilson intervals are computed follows the table.)
Acceptance Criterion (Implicitly, the reported performance) | Reported Device Performance (Point Estimate) | 95% Wilson CI (Lower Bound) | 95% Wilson CI (Upper Bound) |
---|---|---|---|
Automatic detection of 3D fetal ultrasound images (Sensitivity) | 0.892 | 0.836 | 0.931 |
Automatic detection of Doppler fetal ultrasound images (Sensitivity) | 0.973 | 0.937 | 0.988 |
Automatic detection of fetal ultrasound views through reading of annotations on images (Sensitivity) | 0.913 | 0.852 | 0.951 |
Automatic detection of 7 T1 fetal ultrasound images (Sensitivity) | 0.914 | 0.906 | 0.921 |
Automatic detection of 18 T2/T3 fetal ultrasound images (Sensitivity) | 0.937 | 0.933 | 0.940 |
Automatic detection of 8 fetal brain anatomical structures on the views "Transthalamic", "Transventricular", "Transcerebellar" at T2/T3 (Sensitivity) | 0.934 | 0.925 | 0.943 |
Automatic detection of 8 fetal brain anatomical structures on the views "Transthalamic", "Transventricular", "Transcerebellar" at T2/T3 (Specificity) | 0.949 | 0.942 | 0.955 |
Automatic detection of 6 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", “RVOT", "Three vessels or Three vessels and trachea", "Abdominal Circumference", "Axial view of the kidneys" at T1 (Sensitivity) | 0.861 | 0.841 | 0.878 |
Automatic detection of 6 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", “RVOT", "Three vessels or Three vessels and trachea", "Abdominal Circumference", "Axial view of the kidneys" at T1 (Specificity) | 0.938 | 0.926 | 0.948 |
Automatic detection of 21 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", “RVOT”, "Three vessels or Three vessels and trachea", "Abdominal Circumference”, “Axial view of the kidneys" at T2/T3 (Sensitivity) | 0.919 | 0.913 | 0.924 |
Automatic detection of 21 fetal thorax and heart anatomical structures on the views "Four chambers", "LVOT", “RVOT”, "Three vessels or Three vessels and trachea", "Abdominal Circumference”, “Axial view of the kidneys" at T2/T3 (Specificity) | 0.976 | 0.974 | 0.978 |
Automatic detection of 4 fetal placenta anatomical structures on the views "Placenta insertion", "Placenta location" at T2/T3 (Sensitivity) | 0.967 | 0.955 | 0.975 |
Automatic detection of 4 fetal placenta anatomical structures on the views "Placenta insertion", "Placenta location" at T2/T3 (Specificity) | 0.856 | 0.838 | 0.871 |
Automatic detection of 8 fetal CRL/NT/Profile anatomical structures on the views "Crown Rump Length", “Nuchal Translucency”, “Profile” at T1 (Sensitivity) | 0.898 | 0.885 | 0.910 |
Automatic detection of 8 fetal CRL/NT/Profile anatomical structures on the views "Crown Rump Length", “Nuchal Translucency”, “Profile” at T1 (Specificity) | 0.862 | 0.845 | 0.878 |
Automatic detection of 6 fetal CRL/NT/Profile anatomical structures on the views "Crown Rump Length", “Nuchal Translucency”, “Profile” at T2/T3 (Sensitivity) | 0.893 | 0.879 | 0.906 |
Automatic detection of 6 fetal CRL/NT/Profile anatomical structures on the views "Crown Rump Length", “Nuchal Translucency”, “Profile” at T2/T3 (Specificity) | 0.956 | 0.949 | 0.962 |
Automatic detection of the Anterior placenta location for the views "Placenta insertion", "Placenta location" at T2/T3 (Sensitivity) | 0.959 | 0.918 | 0.980 |
Automatic detection of the Anterior placenta location for the views "Placenta insertion", "Placenta location" at T2/T3 (Specificity) | 0.966 | 0.924 | 0.986 |
Automatic detection of the Posterior placenta location for the views "Placenta insertion", "Placenta location" at T2/T3 (Sensitivity) | 0.966 | 0.924 | 0.986 |
Automatic detection of the Posterior placenta location for the views "Placenta insertion", "Placenta location" at T2/T3 (Specificity) | 0.959 | 0.918 | 0.980 |
Automatic detection of the "Female sex" for fetal sex for the view "External Genitalia" (Sensitivity) | 0.977 | 0.942 | 0.991 |
Automatic detection of the "Female sex" for fetal sex for the view "External Genitalia" (Specificity) | 0.987 | 0.963 | 0.996 |
Automatic detection of the "Male sex" for fetal sex for the view "External Genitalia" (Sensitivity) | 0.987 | 0.963 | 0.996 |
Automatic detection of the "Male sex" for fetal sex for the view "External Genitalia" (Specificity) | 0.977 | 0.942 | 0.991 |
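The table reports 95% Wilson score confidence intervals but not the per-task denominators, so the bounds cannot be reproduced from the summary alone. The sketch below only shows how a Wilson interval is computed for a generic proportion, using an invented count.

```python
# How a 95% Wilson score interval is computed for a proportion p_hat = k/n.
# The count below is invented for illustration; the 510(k) summary does not
# report per-task denominators, so the table's bounds are not reproduced here.
from math import sqrt


def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for k successes out of n trials (z = 1.96)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half


# Illustrative example: 90 correct detections out of 100 -> approx (0.826, 0.945).
lo, hi = wilson_interval(90, 100)
print(f"0.900 (95% Wilson CI: {lo:.3f}, {hi:.3f})")
```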
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 36,769 fetal ultrasound images.
- Data Provenance: The document describes this as a "global validation dataset." Specific countries are not mentioned, but "global" implies a diverse set of origins. The data was independent of that used for model development (training/fine-tuning/internal validation). The document does not explicitly state whether the data was retrospective or prospective; as a validation dataset of previously acquired images, it was most likely retrospective, collected prior to the validation study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not describe the adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The document explicitly states: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This indicates that no MRMC comparative effectiveness study was conducted to assess the improvement of human readers with AI assistance. The performance reported is a standalone (algorithm only) performance.
- Effect size of human readers improving with AI vs. without AI assistance: Not applicable, as no MRMC study was performed.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? Yes. The document clearly states: "Sonio conducted a standalone performance testing on a dataset of 36 769 fetal ultrasound images."
7. Type of Ground Truth Used
The ground truth for the test set was established through "reading of annotations on images" (as mentioned in Table 6). The specific method of establishing these annotations (e.g., single expert, expert consensus, pathology, outcomes data) is not detailed, but creating the annotations would inherently involve expert review. Given the nature of ultrasound image interpretation, the ground truth is most likely based on expert consensus or expert-reviewed annotations, though this is not explicitly stated.
8. Sample Size for the Training Set
The document states that the global validation dataset (36,769 images) was "independent of the data used during model development (training/fine tuning/internal validation)." However, it does not provide the specific sample size of the training set.
9. How the Ground Truth for the Training Set Was Established
The document mentions "model development (training/fine tuning/internal validation)," which implies that ground truth was established for these datasets to train and validate the AI models. However, it does not explicitly describe the method for establishing this ground truth (e.g., number of experts, qualifications, adjudication method). It can be inferred that a similar process involving expert annotations or review would have been used as for the test set, but this is not detailed.
(165 days)
Sonio Detect
Sonio Detect is intended to analyze fetal ultrasound images and clips using machine learning techniques to automatically detect views, detect anatomical structures within the views and verify quality criteria of the views.
The device is intended for use as a concurrent reading aid during the acquisition and interpretation of fetal ultrasound images.
Sonio Detect is a Software as a Service (SaaS) solution that aims at helping sonographers, OB/GYNs, MFMs and fetal surgeons (all designated as healthcare professionals, i.e. HCPs) to perform their routine fetal ultrasound examinations in real time. Sonio Detect can be used by HCPs during fetal ultrasound exams in Trimester 1, Trimester 2 and Trimester 3 (gestational age: from 11 weeks to 37 weeks). The software is intended to assist HCPs in assuring, during and after their examination, that the examination is complete and that all images were collected according to their protocol.
Sonio Detect receives, in real time, fetal ultrasound images and clips from the ultrasound machine, submitted through the edge software by the performing healthcare professional, and performs the following:
- Automatically detect views;
- Automatically detect anatomical structures within the supported views;
- Automatically verify quality criteria of the supported views by checking whether they conform to standardized quality criteria.
Quality criteria are related to:
- The presence or absence of an anatomical structure;
- The zoom level for some views.
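The submission does not explain how the zoom-level criterion is evaluated. Purely as a hypothetical illustration of what such a check could look like, the sketch below flags an image when the detected structure's bounding box occupies too small a fraction of the frame; the threshold and all names are invented.

```python
# Purely hypothetical illustration of a "zoom level" quality check: the
# submission does not describe Sonio's method. Here the detected anatomy's
# bounding box must occupy at least a minimum fraction of the image.
from dataclasses import dataclass


@dataclass
class BoundingBox:
    x: float       # normalised coordinates in [0, 1]
    y: float
    width: float
    height: float


def zoom_level_ok(anatomy_box: BoundingBox, min_fraction: float = 0.30) -> bool:
    """Return True if the structure fills enough of the frame (threshold is made up)."""
    return anatomy_box.width * anatomy_box.height >= min_fraction


# Example: a box covering ~12% of the frame fails this hypothetical check -> False.
print(zoom_level_ok(BoundingBox(x=0.4, y=0.4, width=0.3, height=0.4)))
```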
Sonio Detect then automatically associates the image with its detected view. It also highlights in yellow the view and/or the corresponding quality criteria if there are unverified items (quality criteria not verified or view not detected).
The end user can interact with the software to override Sonio Detect's outputs (reassign the image to another view, unassign it, or assign it if it was not assigned; change the status of a quality criterion from verified to unverified or from unverified to verified). The user can review and edit/override the matching at any time during or at the end of the exam.
Sonio Detect Acceptance Criteria and Study Details
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria for Sonio Detect are implicitly defined by the reported performance metrics, which the FDA has deemed sufficient for substantial equivalence. The reported performance is presented as sensitivities, specificities, and proportions of correctly read annotations.
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
3D Fetal Ultrasound Image Detection Sensitivity | High sensitivity | 0.980 (95% Wilson's CI: 0.930, 0.994) |
Doppler Fetal Ultrasound Image Detection Sensitivity | High sensitivity | 0.963 (95% Wilson's CI: 0.908, 0.985) |
Fetal Ultrasound Views Detection Proportion Correct | High proportion | 0.923 (95% Wilson's CI: 0.905, 0.938) |
T1 Fetal Ultrasound Views Detection Sensitivity | High sensitivity | 0.942 (Point estimate) |
T2/T3 Fetal Ultrasound Views Detection Sensitivity | High sensitivity | 0.919 (Point estimate) |
T2/T3 Fetal Brain Anatomical Structure Detection Sensitivity | High sensitivity | 0.857 (Point estimate) |
T2/T3 Fetal Brain Anatomical Structure Detection Specificity | High specificity | 0.963 (Point estimate) |
T2/T3 Fetal Heart Anatomical Structure Detection Sensitivity | High sensitivity | 0.900 (Point estimate) |
T2/T3 Fetal Heart Anatomical Structure Detection Specificity | High specificity | 0.982 (Point estimate) |
Zoom Level Verification Sensitivity (Brain Views) | High sensitivity | 0.952 (95% Wilson's CI: 0.909-0.976) |
Zoom Level Verification Specificity (Brain Views) | High specificity | 0.906 (95% Wilson's CI: 0.758-0.968) |
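Several rows in the table above report only point estimates. As a reminder of how such standalone sensitivity and specificity figures are derived from confusion counts, here is a minimal sketch; the counts are invented for illustration and are not the study's actual counts.

```python
# How the reported standalone metrics relate to confusion counts.
# The counts here are invented for illustration; the 510(k) summary reports
# only the resulting point estimates (and, for some rows, Wilson CIs).


def sensitivity(tp: int, fn: int) -> float:
    """Fraction of truly present structures the algorithm detected."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Fraction of truly absent structures the algorithm correctly did not flag."""
    return tn / (tn + fp)


# Illustrative counts for one detection task.
tp, fn, tn, fp = 857, 143, 963, 37
print(f"Sensitivity: {sensitivity(tp, fn):.3f}")   # 0.857
print(f"Specificity: {specificity(tn, fp):.3f}")   # 0.963
```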
2. Sample Size and Data Provenance for Test Set
- Sample Size: 17,885 fetal ultrasound images.
- Data Provenance: The data was collected from 7 clinical sites in the United States, France, and Germany, i.e., a multi-national dataset. The document does not explicitly state whether the data was retrospective or prospective, but it was "independent of the data used during model development (training/fine tuning/internal validation) and establishment of device operating points," which suggests a retrospectively assembled validation set.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
The document does not explicitly state the number of experts used or their specific qualifications (e.g., years of experience) for establishing the ground truth of the test set. However, it indicates that the device automatically detects fetal ultrasound views "through reading of annotations on images." This implies that human experts (presumably sonographers, OB/GYNs, MFMs, or Fetal surgeons, as these are the intended users) provided the initial annotations that served as the ground truth.
4. Adjudication Method (Test Set)
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for establishing the ground truth of the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. The document explicitly states: "Clinical Study: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." Therefore, there is no reported effect size for human readers improving with AI vs. without AI assistance.
6. Standalone Performance Study
Yes, a standalone performance study was done. The document states: "Sonio conducted a standalone performance testing on a dataset of 17885 fetal ultrasound images..." This indicates the algorithm's performance on the test set was evaluated without a human in the loop.
7. Type of Ground Truth Used (Test Set)
The ground truth for the test set was established through "reading of annotations on images." This suggests the ground truth was based on expert annotations or labeling of the ultrasound images, likely by the qualified healthcare professionals who generated the initial data.
8. Sample Size for Training Set
The document does not explicitly state the sample size for the training set. It refers to "data used during model development (training/fine tuning/internal validation)" but does not provide a specific number of images or cases for this phase.
9. How Ground Truth for Training Set was Established
The method for establishing the ground truth for the training set is not explicitly detailed. However, given that the test set's ground truth was based on annotations, it is highly probable that the training set's ground truth was established through a similar process of expert annotation or labeling of the fetal ultrasound images and clips.