
510(k) Data Aggregation

    K Number: K250226
    Device Name: Clarius Median Nerve AI
    Date Cleared: 2025-05-08 (101 days)
    Product Code:
    Regulation Number: 892.2050
    Predicate For: N/A
    Reference Devices: K213436

    Intended Use

    Clarius Median Nerve AI is intended for segmentation and semi-automatic non-invasive measurements of the median nerve cross-sectional area on ultrasound data acquired by the Clarius Ultrasound Scanner (i.e., linear array scanners). The user shall be a healthcare professional trained and qualified in ultrasound. The user retains the responsibility of confirming the validity of the measurements based on standard practices and clinical judgment. Clarius Median Nerve AI is indicated for use in adult patients only.

    Device Description

    Clarius Median Nerve AI is a machine learning algorithm that is integrated into the Clarius App software as part of the complete Clarius Ultrasound Scanner system for use in musculoskeletal ultrasound applications, specifically intended for segmentation and measurement of the cross-sectional area of the median nerve. Clarius Median Nerve AI is intended for use by trained healthcare practitioners for measurement of the cross-sectional area (CSA) of the median nerve on ultrasound data acquired by the Clarius Ultrasound Scanner system (i.e., linear array scanners) using a deep learning image segmentation algorithm.

    During the ultrasound imaging procedure, the anatomical site is selected through a preset software selection (i.e., Hand/Wrist) from the Clarius App in which Clarius Median Nerve AI will segment the median nerve in transverse view (with a segmentation mask placed on the ultrasound image) and engage to automatically place calipers on the segmentation mask to measure the median nerve's cross-sectional area.
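    The summary does not describe how the cross-sectional area is derived from the segmentation mask. Purely as a hedged illustration (the function name, pixel-spacing parameter, and values below are assumptions, not details of the cleared software), a CSA can in principle be computed by counting mask pixels and scaling by the physical pixel size:

```python
import numpy as np

def cross_sectional_area_mm2(mask, pixel_spacing_mm):
    """Approximate area of a binary segmentation mask in mm^2.

    mask             -- 2-D boolean array, True where the median nerve was segmented
    pixel_spacing_mm -- (row_spacing, col_spacing) of the ultrasound image in mm
    """
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return float(np.count_nonzero(mask)) * pixel_area_mm2

# Illustrative example: a 40 x 30 pixel mask at 0.1 mm isotropic spacing -> 12 mm^2
demo_mask = np.ones((40, 30), dtype=bool)
print(cross_sectional_area_mm2(demo_mask, (0.1, 0.1)))
```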

    Clarius Median Nerve AI operates by performing the following tasks:
    • Automatic detection and measurement of the median nerve in transverse view

    Clarius Median Nerve AI operates by identifying and segmenting the median nerve in the forearm and wrist and performs automatic measurements of the median nerve's cross-sectional area. The user has the option to manually adjust the measurements made by Clarius Median Nerve AI by moving the caliper crosshairs. Clarius Median Nerve AI does not perform any functions that could not be accomplished manually by a trained and qualified user.

    Clarius Median Nerve AI is an assistive tool intended to inform clinical management and is not intended to replace clinical decision-making. The clinician retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius Median Nerve AI is indicated for use in adult patients only.

    Clarius Median Nerve AI is integrated into the Clarius App software, which is compatible with iOS and Android operating systems two versions prior to the latest iOS or Android stable release build and is intended for use with the following Clarius Ultrasound Scanner system transducers (previously 510(k)-cleared in K213436). Clarius Median Nerve AI is not a stand-alone software device.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance document for Clarius Median Nerve AI:


    Acceptance Criteria and Device Performance

    1. A table of acceptance criteria and the reported device performance:

    Primary Objective: Non-inferiority of Clarius Median Nerve AI measurements to manual expert measurements.
    • Acceptance criteria: The magnitude of the difference (absolute difference/error) between Clarius Median Nerve AI and mean reviewer (human expert) measurements should not be greater than the magnitude of the mean difference (mean absolute difference/error) between the reviewers themselves. Equivalence/error margin: 3 mm².
    • Reported performance: Non-inferiority demonstrated.

    Clinical Performance (Cross-sectional Area (CSA) Measurement):
    • Acceptance criteria: p-value for non-inferiority < 0.05.
    • Reported performance: p-value of 6.497e-47 (97.5% CI: -inf, 0.3285).

    Mean difference between human experts and AI (relative to the difference between human experts):
    • Reported performance: Mean difference of -0.065 mm², indicating that the mean difference between AI and expert measurements was smaller than the mean difference between the experts themselves by 0.065 mm², fulfilling the non-inferiority condition.

    Intraclass Correlation Coefficient (ICC) of AI vs. mean of reviewers' CSA:
    • Reported performance: ICC of 0.81 (95% CI: 0.74, 0.87), indicating strong agreement.

    Secondary Objective: Correlation of Clarius Median Nerve AI segmentation with human expert segmentation.
    • Acceptance criteria: Accurately identify the median nerve in transverse view at the level of the wrist or mid-forearm (implicit acceptance of Jaccard scores comparable to inter-reviewer agreement).
    • Reported performance: Jaccard scores for segmentation masks: Reviewer 1 vs. Clarius Median Nerve AI: 0.62 [95% CI: 0.62, 0.68]; Reviewer 2 vs. Clarius Median Nerve AI: 0.71 [95% CI: 0.69, 0.74]; Reviewer 3 vs. Clarius Median Nerve AI: 0.68 [95% CI: 0.65, 0.71]. Inter-reviewer Jaccard scores: Reviewer 1 vs. Reviewer 2: 0.76 [95% CI: 0.74, 0.78]; Reviewer 1 vs. Reviewer 3: 0.72 [95% CI: 0.70, 0.75]; Reviewer 2 vs. Reviewer 3: 0.77 [95% CI: 0.75, 0.79]. The AI's Jaccard scores fall within a reasonable range of the inter-reviewer variability, indicating accurate identification and segmentation.

    Clinical Validation Study: Device performs as intended in a representative user environment and meets user needs.
    • Acceptance criteria: Consistent results among all users; ability to activate the AI, image, perform live segmentation, automate measurements, manually adjust, change opacity, display CSA, and save measurements.
    • Reported performance: All predefined acceptance criteria were met; users successfully performed all listed functions.
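    The summary reports only the resulting statistics (p-value, confidence bound, ICC). Purely as a hedged sketch of how such a one-sided non-inferiority comparison could be set up, assuming paired per-image absolute errors and the 3 mm² margin quoted above (the arrays and function below are illustrative and are not the sponsor's analysis code):

```python
import numpy as np
from scipy import stats

def noninferiority_test(ai_err, reviewer_err, margin_mm2=3.0):
    """One-sided paired non-inferiority test.

    ai_err       -- per-image |AI CSA - mean reviewer CSA| in mm^2
    reviewer_err -- per-image mean absolute difference between the reviewers in mm^2
    H0: mean(ai_err - reviewer_err) >= margin_mm2
    H1: mean(ai_err - reviewer_err) <  margin_mm2   (AI is non-inferior)
    """
    diff = np.asarray(ai_err) - np.asarray(reviewer_err)
    n = diff.size
    se = diff.std(ddof=1) / np.sqrt(n)
    t_stat = (diff.mean() - margin_mm2) / se
    p_value = stats.t.cdf(t_stat, df=n - 1)                     # lower-tail, one-sided
    upper_bound = diff.mean() + stats.t.ppf(0.975, n - 1) * se  # 97.5% upper confidence bound
    return p_value, upper_bound

# Illustrative data only (182 images, matching the test-set size reported below)
rng = np.random.default_rng(0)
ai_err = np.abs(rng.normal(0.8, 0.5, 182))
reviewer_err = np.abs(rng.normal(0.9, 0.5, 182))
p, ub = noninferiority_test(ai_err, reviewer_err)
print(f"p = {p:.3g}, 97.5% upper bound = {ub:.3f} mm^2")
```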

    2. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: 182 images collected from 126 subjects. Some subjects had images collected at both forearm and wrist levels, accounting for the image count exceeding subject count.
    • Data Provenance: Retrospective analysis of anonymized ultrasound images obtained from a multi-center database.
      • Countries of Origin: United States (majority - 130 images), Canada, Brazil, United Kingdom, Australia, Belgium, Germany, South Africa, Dominican Republic, Poland, The Netherlands, and Philippines.
      • Retrospective/Prospective: Retrospective. Data was previously collected and stored on a cloud platform.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: 3 expert reviewers.
    • Qualifications of Experts: Qualified experts with relevant (i.e., musculoskeletal) ultrasound experience. Specific details on years of experience or exact specializations (e.g., radiologist, sonographer, etc.) are not provided in the document, but it states they were "experienced ultrasound reviewers/clinicians."

    4. Adjudication method for the test set:

    • Adjudication Method: "To aggregate measurements from different truthers, the mean of the three values was taken and was treated as one reviewer mean." This suggests a form of consensus ground truth based on averaging individual expert measurements.
    • Each reviewer was blinded to the Clarius Median Nerve AI output and the other reviewers' annotations.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • The study described is a standalone (algorithm only) performance evaluation against human expert measurements, not a multi-reader multi-case (MRMC) comparative effectiveness study assessing human reader improvement with AI assistance.
    • Therefore, no effect size for human reader improvement with AI assistance is provided or applicable from this document.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance evaluation was done. The "Clinical Performance Evaluation Summary" and "Summary of the Clinical Verification Study" describe the AI model's measurements being compared directly against manual measurements from human experts without the experts using the AI as an assistive tool during their measurement process. The experts were explicitly "blinded to the Clarius Median Nerve AI output."

    7. The type of ground truth used:

    • Expert Consensus / Expert Manual Measurement: The ground truth for the test set measurements was established by manual measurements performed individually by three qualified human experts, and then aggregated by taking the mean of their three values.

    8. The sample size for the training set:

    • The document states that the Clarius Median Nerve AI Deep Neural Network (DNN) model was developed and trained using three data sets: training, tuning (validation), and internal testing.
    • However, the exact sample size for the training set is NOT explicitly stated in the provided document. It mentions that data for model development was "collected from the Clarius Cloud and/or partner clinics" and partitioned, but it doesn't quantify the size of the training partition.

    9. How the ground truth for the training set was established:

    • The document states that the "internal test data was fully independent of the training/tuning dataset and was labelled by experts."
    • By inference, the training and tuning (validation) data sets would also have had their ground truth established by experts' labeling, similar to the internal test set. However, the specific method (e.g., number of experts, adjudication) for the training data's ground truth is not detailed, only that it was "labelled by experts."

    K Number: K241029
    Device Name: SpineUs™ System
    Manufacturer:
    Date Cleared: 2024-10-07 (175 days)
    Product Code:
    Regulation Number: 892.1560
    Predicate For: N/A
    Reference Devices: K213436

    Intended Use

    The SpineUs™ system is a software-based, tracked, ultrasound imaging system and accessories, intended for diagnostic imaging. It is indicated for diagnostic ultrasound imaging in the following applications: musculoskeletal (conventional, superficial). The system is intended for use by trained chiropractors and radiologists in a hospital or medical clinic.

    The SpineUs™ system is intended for assisting trained chiropractors and radiologists in acquiring, viewing, and measuring ultrasound images of the spine in both clinic and hospital settings. The SpineUs™ system is intended to be used as an adjunct to conventional imaging method that allows trained chiropractors and radiologists to measure spine-related anatomical components on images (e.g., intervertebral angles and spine curvature). The system also allows the review and management of patient measurement data. Clinical judgment of anatomy and experience are required to properly use the SpineUs™ system.

    Patient management decisions should not be made based solely on the results of the SpineUs™ computer application. The user shall retain the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgement.

    Device Description

    The SpineUs™ System is a diagnostic ultrasound system, which consists of the FDA cleared Clarius Ultrasound Scanner C3 HD3 (K213436), a consumer PC, a tracking system with OptiTrack cameras connected to a POE switch and active LED markers, and the SpineUs™ computer application.

    The SpineUs™ computer application, installed in the consumer PC, processes the ultrasound imaging data received from the Clarius Ultrasound Scanner and the tracking data received from the tracking system. The SpineUs™ computer application allows the operator to view ultrasound images of the spine, segment the ultrasound images using artificial intelligence, generate and visualize 3D reconstructions of the surface of the spine in real-time, measure spine-related anatomical components (e.g., intervertebral angles and spine curvature), review and manage patient measurement data, and generate and export printable reports.
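    The summary does not disclose how the angle measurements are computed. Purely as a hedged geometric illustration (all names and values below are hypothetical and are not part of the SpineUs™ software), an intervertebral angle could be obtained from the directions of two anatomical lines identified on the 3D reconstruction:

```python
import numpy as np

def angle_between_deg(v1, v2):
    """Unsigned angle in degrees between two 3-D direction vectors."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Hypothetical directions of two vertebral lines extracted from the 3-D spine surface
upper = [0.10, 0.99, 0.0]
lower = [-0.05, 0.998, 0.0]
print(f"intervertebral angle ~ {angle_between_deg(upper, lower):.1f} degrees")
```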

    The SpineUs™ system comprises the following:
    Transducer / Scanner: Clarius Ultrasound Scanner, model C3 HD3 (K213436)
    Software: SpineUs™ computer application
    Tracking system: Motive Software, OptiTrack Cameras, SpineUs™ Active LED Markers, Power over Ethernet (PoE) switch
    Accessories: USB-C charging cables, USB-C charging block, Wall mounted camera holders/covers, Tracker Reference (includes belt), Consumer PC

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the SpineUs™ System, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance (Segmentation AI)

    Average percentage of transverse processes identified:
    • Acceptance criteria: > 80%
    • Reported performance: 100.0% [100.0% - 100.0%]

    Average inference time:
    • Acceptance criteria: > 25 frames per second
    • Reported performance: 140.05 frames per second

    Pixel-based metrics (reported for reference only; no specific acceptance thresholds are provided in the document):
    • Sensitivity: 41.80%
    • Specificity: 99.19%
    • Precision: 38.70%
    • Dice coefficient: 0.4019
    • Balanced accuracy: 70.49%
    • 95th-percentile Hausdorff distance: 12.91 mm
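    The pixel-based metrics above are standard overlap and boundary statistics between the AI mask and the expert annotation. A minimal sketch of how they can be computed is shown below, assuming binary masks and isotropic pixel spacing (the function and its distance-transform approach to the 95th-percentile Hausdorff distance are illustrative choices, not the sponsor's evaluation code):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def pixel_metrics(pred, truth, spacing_mm=1.0):
    """Overlap and boundary metrics between two non-empty binary masks of equal shape.

    pred, truth -- 2-D boolean arrays (AI output vs. expert annotation)
    spacing_mm  -- isotropic pixel spacing, used only for the Hausdorff distance
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)
    tn = np.count_nonzero(~pred & ~truth)
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    # 95th-percentile symmetric Hausdorff distance between the mask boundaries
    pred_edge = pred & ~binary_erosion(pred)
    truth_edge = truth & ~binary_erosion(truth)
    d_to_truth = distance_transform_edt(~truth_edge) * spacing_mm
    d_to_pred = distance_transform_edt(~pred_edge) * spacing_mm
    surface_dists = np.concatenate([d_to_truth[pred_edge], d_to_pred[truth_edge]])
    hd95 = float(np.percentile(surface_dists, 95))
    return {"sensitivity": sens, "specificity": spec, "precision": prec,
            "dice": dice, "balanced_accuracy": (sens + spec) / 2, "hd95_mm": hd95}
```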

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 31 patients for both Non-Clinical and Clinical Performance Testing.
    • Data Provenance: The data used in the Testing Datasets was obtained from clinical sites that are independent from those included in the Development dataset. While the specific countries are not mentioned, the gender, age, BMI, and ethnicity demographics suggest a diverse patient population, likely from multiple regions. The data is retrospective, as it involves recorded ultrasound sequences that were subsequently analyzed.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: At least two clinical experts for the Segmentation AI outputs.
    • Qualifications of Experts: Described as "trained clinical experts." No further specific qualifications (e.g., years of experience, specialty) are provided in the document for the test set ground truth.

    4. Adjudication Method for the Test Set

    • "At least two clinical experts" annotated the Segmentation AI outputs.
    • "All annotations were reviewed by a separate annotator." This suggests a form of 2+1 or similar adjudication, where two experts make initial annotations, and a third (or a different "separate annotator") reviews them, potentially resolving disagreements or confirming consistency.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was explicitly mentioned for AI-assisted versus without AI assistance.
    • The clinical performance testing involved "three different observers measuring scoliosis angle on the SpineUs™ system images and X-ray images of the same patients." This was a comparison between the SpineUs™ system measurements and X-ray measurements, essentially using X-ray as a reference standard, not a comparison of human readers with and without AI assistance on the ultrasound images. Therefore, no effect size of human readers improving with AI vs. without AI assistance is provided.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    • Yes, standalone performance testing was done for the Segmentation AI.
    • The section "Standalone Segmentation AI performance testing" explicitly states its purpose was "to assess the ability of the Segmentation AI to delineate between bone surfaces and background on ultrasound images from a recorded ultrasound sequence." The Non-Clinical Performance Testing summary directly reports the AI's performance on its own outputs against the established ground truth.

    7. Type of Ground Truth Used

    • Expert Consensus: For the Segmentation AI in both development and testing, image-level annotations were performed by trained clinical experts to label bone surface structures. These annotations, reviewed by a separate annotator, served as the ground truth for pixel-based metrics.
    • X-ray Measurements: For the clinical performance testing assessing scoliosis angle, X-ray images of the same patients were used as the reference standard for comparison with SpineUs™ system measurements.

    8. Sample Size for the Training Set

    • Training Set Sample Size (Development Data): 81 ultrasound image sequences from 45 patients, totaling 17,684 images.

    9. How the Ground Truth for the Training Set Was Established

    • Expert Consensus with CT Confirmation: Image-level annotations were performed on a per-frame basis by a team of trained clinical experts, who labeled bone surface structures.
    • Radiological Confirmation: "When available, corresponding thoracic CT imaging served as a ground truth to assist in the annotation process." This indicates that CT scans were used as a definitive reference to guide and confirm the expert annotations where possible.
    • Adjudication: "All annotations were reviewed by a separate annotator." This ensures consistency and quality of the ground truth labels.

    K Number: K233955
    Device Name: Clarius OB AI
    Date Cleared: 2024-06-14 (182 days)
    Product Code:
    Regulation Number: 892.1550
    Predicate For: N/A
    Reference Devices: K213436

    Intended Use

    Clarius OB AI is intended to assist in measurements of fetal biometric parameters (i.e., head circumference, abdominal circumference, femur length, bi-parietal diameter, crown rump length) on ultrasound data acquired by the Clarius Ultrasound Scanner (i.e., curvilinear scanner). The user shall be a healthcare professional trained and qualified in ultrasound. The user retains the responsibility of confirming the validity of the measurements based on standard practices and clinical judgment. Clarius OB AI is indicated for use in adult patients only.

    Device Description

    Clarius OB AI is a machine learning algorithm that is incorporated into the Clarius App software as part of the complete Clarius Ultrasound Scanner system for use in obstetric (OB) ultrasound imaging applications. Clarius OB AI is intended for use by trained healthcare practitioners for non-invasive measurements of fetal biometric parameters on ultrasound data acquired by the Clarius Ultrasound Scanner system (i.e., curvilinear scanner) using a deep learning image segmentation algorithm.

    During the ultrasound imaging procedure, the anatomical site is selected through a preset software selection (i.e., OB, Early OB) within the Clarius App, in which Clarius OB AI will engage to segment the fetal anatomy and place calipers for measurement of fetal biometric parameters.

    Clarius OB AI operates by performing the following tasks:

    • Automatic detection and measurement of head circumference (HC)
    • Automatic detection and measurement of abdominal circumference (AC)
    • Automatic detection and measurement of femur length (FL)
    • Automatic detection and measurement of bi-parietal diameter (BPD)
    • Automatic detection and measurement of crown rump length (CRL)

    Clarius OB AI operates by performing automatic measurements of fetal biometric parameters. The user has the option to manually adjust the measurements made by Clarius OB AI by moving the caliper crosshairs. Clarius OB AI does not perform any functions that could not be accomplished manually by a trained and qualified user. Clarius OB AI is intended for use in B-Mode only.

    Clarius OB AI is an assistive tool intended to inform clinical management and is not intended to replace clinical decision-making. The clinician retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius OB AI is indicated for use in adult patients only.

    Clarius OB AI is incorporated into the Clarius App software, which is compatible with iOS and Android operating systems two versions prior to the latest iOS or Android stable release build and is intended for use with the following Clarius Ultrasound Scanner system transducer (previously 510(k)-cleared in K213436). Clarius OB AI is not a stand-alone software device.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the Clarius OB AI device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Fetal biometric measurements (HC, AC, FL, BPD, CRL):
    • Acceptance criteria (implicit): Non-inferiority to manual measurements performed by qualified experts.
    • Reported performance: Clarius OB AI was found to be non-inferior to human experts, with statistically significant p-values (< 2.2e-16) for all fetal biometric measurements.

    Agreement with expert measurements:
    • Acceptance criteria (implicit): Strong agreement.
    • Reported performance: Strong agreement between Clarius OB AI measurements and the mean of the expert clinicians' measurements for all fetal biometrics; strong agreement was also shown with individual expert measurements.

    Inter-rater reliability (ICC):
    • Acceptance criteria (implicit): High correlation (implied for both device-expert and expert-expert comparisons).
    • Reported performance: ICC across all fetal biometrics between Clarius OB AI and the reviewers was 0.99 (95% CI 0.994-0.997).

    Segmentation performance (Dice score):
    • Acceptance criteria (implicit): High score.
    • Reported performance: Average Dice scores (across all anatomical structures) between Clarius OB AI and reviewers ranged from 0.84 (95% CI 0.83-0.87) to 0.97 (95% CI 0.96-0.97).

    Segmentation performance (Jaccard score):
    • Acceptance criteria (implicit): High score.
    • Reported performance: Average Jaccard scores (across all anatomical structures) between Clarius OB AI and reviewers ranged from 0.73 (95% CI 0.72-0.74) to 0.94 (95% CI 0.93-0.94).

    Clinical usability / performance as intended:
    • Acceptance criteria: Device performs as intended in a representative user environment, meets product requirements, is clinically usable, and meets users' needs for semi-automated fetal biometric measurements.
    • Reported performance: The validation study showed consistent results among all users, meeting the pre-defined acceptance criteria. Users successfully activated Clarius OB AI, obtained images, performed live segmentation and automatic measurements, made manual adjustments, and saved measurements.
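    The ICC reported above is an agreement statistic computed from the same measurements produced by multiple raters (here, the AI output and the human reviewers). Below is a minimal two-way random-effects, absolute-agreement, single-rater ICC(2,1) sketch, assuming a matrix with one row per image and one column per rater; the data layout and function are assumptions, and the summary does not state which ICC variant was used:

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    ratings -- array of shape (n_subjects, n_raters), e.g. one biometric
               measurement per image with one column per rater (AI or reviewer).
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between-subject
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between-rater
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Illustrative use: column 0 = AI measurement, columns 1-3 = the three reviewers
demo = np.array([[10.1, 10.0, 10.2,  9.9],
                 [12.4, 12.5, 12.3, 12.6],
                 [ 8.7,  8.9,  8.8,  8.6]])
print(round(icc2_1(demo), 3))
```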

    2. Sample Size for Test Set and Data Provenance

    • Sample Size for Test Set: 347 subjects
    • Data Provenance: Retrospective analysis of anonymized ultrasound images from 25 clinical sites in the United States, Canada, Philippines, Australia, Kenya, Belgium, and Malaysia. The data represented different ethnic groups and ages (15-45 years). The test data was explicitly stated to be independent from the training and validation (tuning) datasets.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: 3 reviewers (clinical truthers) per image.
    • Qualifications of Experts: Qualified experts with relevant (i.e., OB/fetal) ultrasound experience.

    4. Adjudication Method for Test Set

    • Adjudication Method: Each image had fetal biometric measurements performed by 3 reviewers. Each reviewer was blinded to the Clarius OB AI output and the other reviewers' annotations. The reported performance metrics (e.g., ICC, Dice, Jaccard) compare the Clarius OB AI output against the mean of the expert clinicians' measurements, indicating that the mean of the three expert measurements served as the ground truth. This is a form of consensus, where the average of multiple independent readings establishes the reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    The provided text describes a study where device performance was compared to human experts, but it does not describe a comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance (i.e., human-in-the-loop performance). The study focuses on the standalone performance of the AI compared to human experts.

    6. If a Standalone (Algorithm Only) Performance Study was done

    • Yes, a standalone performance study was done. The "Summary of the Verification Study" specifically states that the primary objective was to verify that Clarius OB AI automeasurements are non-inferior to manual measurements performed by expert clinicians, and each reviewer was blinded to the Clarius OB AI output. This indicates an evaluation of the algorithm's performance independent of human interaction.

    7. Type of Ground Truth Used

    • Expert Consensus: The ground truth for the test set was established by manual measurements and boundary outlines (segmentation) performed by 3 qualified expert clinicians, with the mean of these measurements serving as the reference for comparison with the AI.

    8. Sample Size for the Training Set

    The document mentions that the Clarius OB AI deep neural network (DNN) model was trained using three data sets: training, validation (tuning), and testing, and that the validation (tuning) data was 10% of the training data. However, the exact sample size for the training set is not explicitly provided; only the 10% tuning split and the independence of the test set are stated.

    9. How the Ground Truth for the Training Set was Established

    The document states: "...anonymized ultrasound images from 25 clinical sites in the United States, Philippines, Australia, Kenya, Belgium, Malaysia, and Canada, representing various ethnicities and ages." And: "The Clarius OB AI deep neural network (DNN) model was trained using three data sets: training, validation (tuning), and testing. The validation (tuning) data was 10% of the training data, while the test data was independent and labelled by experts."

    While it confirms that experts labeled the test data, it does not explicitly describe how the ground truth for the training set was established. It only generally mentions "clinical and/or artificial data intended for non-invasive analysis (i.e., quantitative and/or qualitative) of ultrasound data" and that the data came from an "anonymized multi-center database". It is implied that the training data would similarly be labeled for the AI to learn from, but the specific process (e.g., number of experts, qualifications, adjudication for training data) is not detailed.


    K Number: K232257
    Device Name: Clarius Bladder AI
    Date Cleared: 2023-11-13 (108 days)
    Product Code:
    Regulation Number: 892.2050
    Predicate For:
    Reference Devices: K213436, K200232

    Intended Use

    Clarius Bladder AI is intended for semi-automatic non-invasive measurements of bladder volume on ultrasound data acquired by the Clarius Ultrasound Scanner (i.e., curvilinear and phased array scanners). The user shall be a healthcare professional trained and qualified in ultrasound. The user retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius Bladder AI is indicated for use in adult patients only.

    Device Description

    Clarius Bladder AI is a radiological (ultrasound) image processing software application which implements artificial intelligence (AI), utilizing non-adaptive machine learning algorithms, and is incorporated into the Clarius App software for use as part of the complete Clarius Ultrasound Scanner system product offering in bladder ultrasound imaging applications. Clarius Bladder AI is intended for use by trained healthcare practitioners for non-invasive measurements of bladder volume on ultrasound data acquired by the Clarius Ultrasound Scanner system (i.e., curvilinear and phased array scanners) using an artificial intelligence (AI) image segmentation algorithm.

    During the ultrasound imaging procedure, the anatomical site (bladder) is selected through a preset software selection (i.e., bladder) within the Clarius App, in which Clarius Bladder AI will engage to segment the bladder and place calipers for calculation of bladder volume.

    Clarius Bladder AI operates by performing the following automations:

    • Automatic detection and measurement of bladder depth
    • Automatic detection and measurement of bladder width
    • Automatic detection and measurement of bladder height
    • Automatic detection of the corresponding image view (sagittal vs. transverse)

    Clarius Bladder AI operates by performing automatic measurements of bladder height, width, and length, and calculates bladder volume. The user has the option to manually adjust the measurements made by Clarius Bladder AI by moving the caliper crosshairs. Clarius Bladder AI does not perform any functions that could not be accomplished manually by a trained and qualified user. Clarius Bladder AI is intended for use in B-Mode only.
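    The summary does not state the formula used to convert the three orthogonal measurements into a volume. A common clinical approximation is the prolate-ellipsoid estimate (roughly 0.52 x depth x width x height); the sketch below shows that approximation purely as a hedged illustration, not as the device's confirmed method:

```python
def bladder_volume_ml(depth_cm, width_cm, height_cm, coefficient=0.52):
    """Ellipsoid-style bladder volume estimate in millilitres (1 cm^3 == 1 mL).

    The 0.52 coefficient (~pi/6) is a widely used correction factor; the actual
    coefficient/formula used by Clarius Bladder AI is not disclosed in the summary.
    """
    return coefficient * depth_cm * width_cm * height_cm

# Example: an 8 cm x 7 cm x 6 cm bladder -> ~175 mL
print(round(bladder_volume_ml(8, 7, 6)))
```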

    Clarius Bladder AI is an assistive tool intended to inform clinical management and is not intended to replace clinical decision-making. The clinician retains the ultimate responsibility of ascertaining the measurements based on standard practices and clinical judgment. Clarius Bladder AI is indicated for use in adult patients only.

    Clarius Bladder AI is incorporated into the Clarius App software, which is compatible with iOS and Android operating systems two versions prior to the latest iOS or Android stable release build and is intended for use with the following Clarius Ultrasound Scanner system transducers (previously 510(k)-cleared in K213436). Clarius Bladder AI is not a stand-alone software device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Clarius Bladder AI device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The core acceptance criterion for Clarius Bladder AI's automated measurements was non-inferiority to manual measurements performed by qualified experts, with an equivalence margin of 25% for the mean difference between percentage differences of bladder volume measurements.

    Quantitative performance:
    • Acceptance criteria: Automatic bladder volume measurement found to be non-inferior to manual measurements by expert clinicians, with a mean difference between percentage differences no greater than 25% of the measured bladder volume.
    • Reported performance (retrospective study): p-value of 1.87e-22 (confirming non-inferiority); mean difference between the percent differences of the clinical expert mean and the Bladder AI mean was 0.0548 (95% CI 0.010, 0.099).
    • Reported performance (prospective study): p-value of 1.36e-14 (confirming non-inferiority); mean difference between the percent differences of the clinical expert mean and the Bladder AI mean was -0.0228 (95% CI -0.074, 0.028).

    Agreement with experts:
    • Acceptance criteria: Strong agreement between Clarius Bladder AI measurements and the mean of expert clinicians' measurements, and with individual expert measurements.
    • Reported performance: Both the retrospective and prospective studies reported strong agreement between Clarius Bladder AI and expert measurements, as well as high inter-rater reliability (intraclass correlation coefficients for inter-rater reliability were calculated and found to be strong). Average Dice scores and Jaccard indices were also calculated, indicating good segmentation agreement.

    Clinical usability:
    • Acceptance criteria: Performs as intended in a representative user environment, meets product requirements, is clinically usable, and meets user needs for semi-automated bladder volume measurements.
    • Reported performance: The clinical validation study showed consistent results among all users, meeting the pre-defined acceptance criteria and demonstrating that Clarius Bladder AI performs as intended and meets user needs. Users were able to activate the AI, image, perform live segmentation and automatic measurements, make manual adjustments, and save measurements.
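    As with the other studies summarized here, only the resulting statistics are reported. One plausible, hedged reading of the 25%-margin endpoint is a one-sided test on paired absolute percentage differences, sketched below (the exact endpoint definition and analysis code are not disclosed in the summary; the function and data layout are assumptions):

```python
import numpy as np
from scipy import stats

def volume_noninferiority_p(ai_volume_ml, expert_mean_volume_ml, margin=0.25):
    """One-sided test that the mean |percentage difference| between the AI volume
    and the expert-mean volume stays below `margin` (25%).
    Small p-values support non-inferiority."""
    ai = np.asarray(ai_volume_ml, dtype=float)
    expert = np.asarray(expert_mean_volume_ml, dtype=float)
    pct_diff = np.abs(ai - expert) / expert
    n = pct_diff.size
    t_stat = (pct_diff.mean() - margin) / (pct_diff.std(ddof=1) / np.sqrt(n))
    return float(stats.t.cdf(t_stat, df=n - 1))
```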

    2. Sample Size and Data Provenance for Test Set

    Retrospective Study:

    • Sample Size: 66 subjects (10 female, 38 male, gender of remaining unknown)
    • Data Provenance: Anonymized multi-center database of images, predominantly from the United States. Institutions included in the model training and tuning datasets were excluded from this study. Retrospective.

    Prospective Study:

    • Sample Size: 58 subjects (40 female, 18 male)
    • Data Provenance: Conducted at a healthcare institution in the United States. Images were obtained prospectively.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: 3 reviewers (referred to as "clinical truthers" or "clinical experts") for both the retrospective and prospective studies.
    • Qualifications of Experts: Described as "qualified experts with relevant (i.e., bladder) ultrasound experience."
      • For the retrospective study: "qualified experts with relevant (i.e., bladder) ultrasound experience."
      • For the prospective study: "qualified experts with clinical experience in bladder ultrasound."

    4. Adjudication Method for the Test Set

    The ground truth for bladder volume in both retrospective and prospective studies was established as the mean bladder volume measurement among the three clinical experts. Each reviewer was blinded to the Clarius Bladder AI output and the other reviewers' annotations.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided information does not explicitly describe a traditional MRMC comparative effectiveness study designed to measure the effect size of human readers improving with AI vs. without AI assistance.

    Instead, the studies focused on demonstrating the non-inferiority of the AI device's standalone measurements compared to the mean of multiple human expert measurements. While comparisons were made between reviewer pairs (inter-rater reliability), and between the AI output and individual/mean expert measurements, the studies did not seem to directly evaluate human performance with the AI assistance versus human performance without it in a controlled MRMC setting to quantify a "human improvement" effect size.

    6. Standalone Performance Study (Algorithm Only)

    Yes, a standalone (algorithm only) performance study was done. The core of both the retrospective and prospective verification studies was to evaluate the Clarius Bladder AI's automated measurements directly against expert manual measurements, demonstrating its performance without human intervention (other than initial image acquisition and potential later manual adjustment by the user, which was a separate feature). The non-inferiority claims are based on this standalone performance.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus, specifically defined as the mean bladder volume measurement among three clinical experts.

    8. Sample Size for the Training Set

    • Training Dataset: 1352 subjects (353 female, 999 male).
      • Note: This also includes a validation (tuning) dataset which was 10% of the training data.

    9. How the Ground Truth for the Training Set Was Established

    The deep neural network (DNN) model was trained using the raw training dataset. The summary states that the independent test data was "labelled by experts." While it does not explicitly detail how the ground truth for the training set itself was established, it can be inferred that similar expert labeling or a comparable annotation process was used for the training images. The overall context points to expert-derived ground truth for model development.
