510(k) Data Aggregation
(182 days)
Clarius OB AI is intended to assist in measurements of fetal biometric parameters (i.e., head circumference, abdominal circumference, femur length, bi-parietal diameter, crown rump length) on ultrasound data acquired by the Clarius Ultrasound Scanner (i.e., curvilinear scanner). The user shall be a healthcare professional trained and qualified in ultrasound. The user retains the responsibility of confirming the validity of the measurements based on standard practices and clinical judgment. Clarius OB AI is indicated for use in adult patients only.
Clarius OB AI is a machine learning algorithm that is incorporated into the Clarius App software as part of the complete Clarius Ultrasound Scanner system for use in obstetric (OB) ultrasound imaging applications. Clarius OB AI is intended for use by trained healthcare practitioners for non-invasive measurements of fetal biometric parameters on ultrasound data acquired by the Clarius Ultrasound Scanner system (i.e., curvilinear scanner) using a deep learning image segmentation algorithm.
During the ultrasound imaging procedure, the anatomical site is selected through a preset software selection (i.e., OB, Early OB) within the Clarius App, whereupon Clarius OB AI engages to segment the fetal anatomy and place calipers for measurement of fetal biometric parameters.
Clarius OB AI operates by performing the following tasks:
- Automatic detection and measurement of head circumference (HC)
- Automatic detection and measurement of abdominal circumference (AC)
- Automatic detection and measurement of femur length (FL)
- Automatic detection and measurement of bi-parietal diameter (BPD)
- Automatic detection and measurement of crown rump length (CRL)
Clarius OB AI operates by performing automatic measurements of fetal biometric parameters. The user has the option to manually adjust the measurements made by Clarius OB AI by moving the caliper crosshairs. Clarius OB AI does not perform any functions that could not be accomplished manually by a trained and qualified user. Clarius OB AI is intended for use in B-Mode only.
Clarius OB AI is an assistive tool intended to inform clinical management and is not intended to replace clinical decision-making. The clinician retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius OB AI is indicated for use in adult patients only.
Clarius OB AI is incorporated into the Clarius App software, which is compatible with iOS and Android operating systems up to two versions prior to the latest stable release build, and is intended for use with the Clarius Ultrasound Scanner system transducer previously 510(k)-cleared in K213436. Clarius OB AI is not a stand-alone software device.
Here's a summary of the acceptance criteria and study details for the Clarius OB AI device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Parameter | Acceptance Criteria (Implicit) | Clarius OB AI Reported Performance |
|---|---|---|
| Fetal Biometric Measurements (HC, AC, FL, BPD, CRL) | Non-inferiority to manual measurements performed by qualified experts | Clarius OB AI was found to be non-inferior to human experts with statistically significant p-values (<2.2e-16) for all fetal biometric measurements. |
| Agreement with Expert Measurements | Strong agreement | Strong agreement shown between Clarius OB AI measurements and the mean of expert clinicians' measurements for all fetal biometrics. Strong agreements also shown with individual expert measurements. |
| Inter-rater Reliability (ICC) | High correlation (implied for both device-expert and expert-expert) | ICC across all fetal biometrics between Clarius OB AI and the reviewers was calculated to be 0.99 (95% CI 0.994–0.997). |
| Segmentation Performance (Dice Score) | High score | Range of average Dice scores (for all anatomical structures) between Clarius OB AI and reviewers was 0.84 (95% CI 0.83–0.87) to 0.97 (95% CI 0.96–0.97). |
| Segmentation Performance (Jaccard Score) | High score | Range of average Jaccard scores (for all anatomical structures) between Clarius OB AI and reviewers was 0.73 (95% CI 0.72–0.74) to 0.94 (95% CI 0.93–0.94). |
| Clinical Usability / Performance as Intended | Device performs as intended in a representative user environment, meets product requirements, is clinically usable, and meets users' needs for semi-automated fetal biometric measurements. | Validation study showed consistent results among all users, meeting pre-defined acceptance criteria. Users successfully activated Clarius OB AI, obtained images, performed live segmentation, automatic measurements, manual adjustments, and saved measurements. |
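The Dice and Jaccard scores in the table above are standard overlap metrics between the AI's segmentation and a reviewer's boundary outline. The submission does not describe the exact computation; a minimal sketch over sets of pixel coordinates (toy masks, illustrative only):

```python
def dice(a: set, b: set) -> float:
    """Dice score: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a: set, b: set) -> float:
    """Jaccard score: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b)

# Toy masks as sets of (row, col) pixels -- hypothetical, for illustration
ai_mask = {(0, 0), (0, 1), (1, 0)}
reviewer_mask = {(0, 1), (1, 0), (1, 1)}

d = dice(ai_mask, reviewer_mask)     # 2*2 / (3+3)
j = jaccard(ai_mask, reviewer_mask)  # 2 / 4
```

The two scores are related by J = D / (2 − D), which is consistent with the reported ranges: a Dice of 0.84 corresponds to a Jaccard of about 0.72, and a Dice of 0.97 to about 0.94.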
2. Sample Size for Test Set and Data Provenance
- Sample Size for Test Set: 347 subjects
- Data Provenance: Retrospective analysis of anonymized ultrasound images from 25 clinical sites in the United States, Canada, Philippines, Australia, Kenya, Belgium, and Malaysia. The data represented different ethnic groups and ages (15-45 years). The test data was explicitly stated to be independent from the training and validation (tuning) datasets.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: 3 reviewers (clinical truthers) per image.
- Qualifications of Experts: Qualified experts with relevant (i.e., OB/fetal) ultrasound experience.
4. Adjudication Method for Test Set
- Adjudication Method: Each image had fetal biometric measurements performed by 3 reviewers. Each reviewer was blinded to the Clarius OB AI output and the other reviewers' annotations. The reported performance metrics (e.g., ICC, Dice, Jaccard) compare the Clarius OB AI output against the mean of the expert clinicians' measurements, indicating that the mean of the three expert measurements served as the ground truth. This is a form of consensus, where the average of multiple independent readings establishes the reference.
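Consensus-by-averaging as described above can be sketched in a few lines; the measurement values here are hypothetical:

```python
from statistics import mean

# Hypothetical head-circumference measurements (mm) for one image,
# from three blinded reviewers (the ground-truth panel described above)
reviewer_hc = [251.0, 253.5, 252.1]
ai_hc = 252.4  # the AI's automeasurement for the same image

reference = mean(reviewer_hc)  # mean of experts serves as ground truth
error = ai_hc - reference      # signed error of the AI vs the reference
```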
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
The provided text describes a study where device performance was compared to human experts, but it does not describe a comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance (i.e., human-in-the-loop performance). The study focuses on the standalone performance of the AI compared to human experts.
6. If a Standalone (Algorithm Only) Performance Study was done
- Yes, a standalone performance study was done. The "Summary of the Verification Study" specifically states that the primary objective was to verify that Clarius OB AI automeasurements are non-inferior to manual measurements performed by expert clinicians, and each reviewer was blinded to the Clarius OB AI output. This indicates an evaluation of the algorithm's performance independent of human interaction.
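The submission reports only the non-inferiority p-values, not the statistical procedure used. A common generic approach is to check that the confidence bound on the mean paired difference stays within a pre-specified margin; the sketch below is an assumption, with hypothetical data and margin:

```python
from statistics import mean, stdev, NormalDist

def noninferior(ai, manual, margin_mm):
    """Non-inferiority check (generic sketch, not the submission's method):
    the upper ~95% confidence bound on the mean paired difference
    (AI minus manual) must fall below the pre-specified margin."""
    diffs = [a - m for a, m in zip(ai, manual)]
    se = stdev(diffs) / len(diffs) ** 0.5
    upper = mean(diffs) + NormalDist().inv_cdf(0.975) * se
    return upper < margin_mm

# Hypothetical paired femur-length measurements (mm)
ai = [32.1, 45.0, 28.7, 51.2, 39.8]
manual = [31.9, 45.3, 28.5, 51.0, 40.1]
ok = noninferior(ai, manual, margin_mm=2.0)
```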
7. Type of Ground Truth Used
- Expert Consensus: The ground truth for the test set was established by manual measurements and boundary outlines (segmentation) performed by 3 qualified expert clinicians, with the mean of these measurements serving as the reference for comparison with the AI.
8. Sample Size for the Training Set
The document states that the Clarius OB AI deep neural network (DNN) model was trained using three data sets: training, validation (tuning), and testing, with the validation (tuning) data comprising 10% of the training data. However, the exact sample size for the training set is not explicitly provided; the document only notes the 10% tuning split and that the test set was independent.
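The 10% tuning holdout described above can be sketched generically (dataset contents and seed are placeholders, and the exact split procedure is not described in the document):

```python
import random

def split_train_tuning(items, tuning_frac=0.10, seed=0):
    """Hold out a fraction of the pooled training data as a
    tuning (validation) set; the remainder is used for training."""
    rng = random.Random(seed)
    pool = list(items)
    rng.shuffle(pool)
    n_tune = max(1, int(len(pool) * tuning_frac))
    return pool[n_tune:], pool[:n_tune]

# Placeholder "dataset" of 1000 image IDs
train, tuning = split_train_tuning(range(1000))
```

Note that the independent test set is kept entirely outside this split, as the document states.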
9. How the Ground Truth for the Training Set was Established
The document states: "...anonymized ultrasound images from 25 clinical sites in the United States, Philippines, Australia, Kenya, Belgium, Malaysia, and Canada, representing various ethnicities and ages." It also states: "The Clarius OB AI deep neural network (DNN) model was trained using three data sets: training, validation (tuning), and testing. The validation (tuning) data was 10% of the training data, while the test data was independent and labelled by experts."
While this confirms that experts labelled the test data, the document does not explicitly describe how the ground truth for the training set was established. It only generally mentions "clinical and/or artificial data intended for non-invasive analysis (i.e., quantitative and/or qualitative) of ultrasound data" and an "anonymized multi-center database". It is implied that the training data was similarly labelled for the AI to learn from, but the specific process (e.g., number of experts, qualifications, adjudication for training data) is not detailed.