
510(k) Data Aggregation

    K Number
    K241582
    Date Cleared
    2024-09-12

    (101 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Diagnostic Ultrasound System Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800 and Aplio i700 Model TUS-AI700 are indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs (thyroid, breast and testicle), trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, musculo-skeletal (both conventional and superficial), laparoscopic and thoracic/pleural. This system provides high-quality ultrasound images in the following modes: B mode, M mode, Continuous Wave, Color Doppler, Pulsed Wave Doppler and Combination Doppler, as well as Speckle-tracking, Tissue Harmonic Imaging, Combined Modes, Shear wave, Elastography, and Acoustic attenuation mapping. This system is suitable for use in hospital and clinical settings by physicians or legally qualified persons who have received the appropriate training.

    Device Description

    The Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800 and Aplio i700 Model TUS-AI700, V7.0 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide range of probes, including flat linear array, convex array, and sector array transducers, with frequencies ranging from approximately 2 MHz to 33 MHz.

    AI/ML Overview

    The document describes the validation of several AI/ML-based features within the Aplio i900/i800/i700 Diagnostic Ultrasound System, Software V7.0. The studies aim to demonstrate that these new features are substantially equivalent to existing functionality and improve workflow.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes several AI/ML-based features. While the format isn't a single table, I can synthesize the information for each feature:

    Feature: Auto Plane Detection

    • Acceptance criterion: > 90% agreement with sonographer-selected cardiac chamber views for A4C/A3C/A2C/SAX (a minimal sketch of this agreement check follows this table).
      Reported performance: Achieved a 97% average pass rate across the four views.
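
    The >90% agreement criterion is effectively a per-view classification check of the algorithm's detected plane against the sonographer-consensus label. The sketch below is a minimal, hypothetical Python illustration of that check; the data layout, function names, and pass logic are assumptions, since the 510(k) summary does not publish its analysis code or raw labels.

```python
# Hypothetical sketch of the per-view agreement check (not the vendor's code).
from collections import defaultdict

VIEWS = ["A4C", "A3C", "A2C", "SAX"]

def per_view_agreement(predicted, consensus):
    """Fraction of cases, per view, where the algorithm's detected plane
    matches the sonographer-consensus label."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, truth in zip(predicted, consensus):
        totals[truth] += 1
        hits[truth] += int(pred == truth)
    return {v: hits[v] / totals[v] for v in VIEWS if totals[v]}

def passes_criterion(agreement, threshold=0.90):
    # Acceptance criterion: every evaluated view exceeds 90% agreement.
    return all(rate > threshold for rate in agreement.values())
```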

    Feature: Quick Strain

    • Acceptance criterion: Reduced operation time at the 5% significance level (a sketch of such a significance test follows this table).
      Reported performance: Achieved an average 68% reduction in operation time.
    • Acceptance criterion: All ICC(2,1) values > 0.75, indicating minimal inter-operator variability for EDV, ESV, EF, and GLS.
      Reported performance: Demonstrated minimal inter-operator variability using a two-way random-effects, absolute-agreement, single-rater/measurement ICC model; the exact ICC values are not given, but the criterion is stated to have been met.
    • Acceptance criterion: Calculated NRMSE for EDV, ESV, EF, and GLS < 10% compared to the conventional workflow.
      Reported performance: NRMSE results for EDV, ESV, EF, and GLS were within 10% of the results using the existing workflow.
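
    The operation-time criterion is a statistical one: the reduction must be significant at the 5% level. The summary does not name the test that was used; purely as an illustration, the sketch below applies a one-sided paired t-test to hypothetical per-case timings from the manual and Quick Strain workflows.

```python
# Hedged sketch of a 5%-level test for reduced operation time on paired
# per-case timings. The choice of a one-sided paired t-test is an
# assumption; the 510(k) summary does not state the specific test used.
import numpy as np
from scipy import stats

def time_reduction_significant(manual_sec, assisted_sec, alpha=0.05):
    manual = np.asarray(manual_sec, dtype=float)
    assisted = np.asarray(assisted_sec, dtype=float)
    # One-sided test: the assisted workflow is faster than the manual one.
    result = stats.ttest_rel(manual, assisted, alternative="greater")
    mean_reduction = 1.0 - assisted.mean() / manual.mean()
    return result.pvalue < alpha, mean_reduction
```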

    Feature: Auto LVOT

    • Acceptance criterion: Reduced operation time at the 5% significance level.
      Reported performance: Demonstrated an average 78% reduction in operation time (for 3 consecutive heart cycles).
    • Acceptance criterion: All ICC(2,1) values > 0.75, indicating minimal inter-operator variability.
      Reported performance: Demonstrated minimal inter-operator variability using a two-way random-effects, absolute-agreement, single-rater/measurement ICC model; the exact ICC values are not given, but the criterion is stated to have been met.
    • Acceptance criterion: Calculated NRMSE results by three clinical sonographers < 10% compared to manual tracing.
      Reported performance: NRMSE results for each of the three sonographers were within 10% of the results using the existing workflow.

    Feature: Auto AoV

    • Acceptance criterion: Reduced operation time at the 5% significance level.
      Reported performance: Demonstrated an average 71% reduction in operation time (for 3 consecutive heart cycles).
    • Acceptance criterion: All ICC(2,1) values > 0.75, indicating minimal inter-operator variability (a sketch of the ICC(2,1) and NRMSE computations follows this table).
      Reported performance: Demonstrated minimal inter-operator variability using a two-way random-effects, absolute-agreement, single-rater/measurement ICC model; the exact ICC values are not given, but the criterion is stated to have been met.
    • Acceptance criterion: Calculated NRMSE of Doppler trace measurements by three clinical sonographers < 10% compared to manual tracing.
      Reported performance: NRMSE results for each of the three sonographers were within 10% of the results using the existing workflow.
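
    Quick Strain, Auto LVOT, and Auto AoV share the same two quantitative criteria, so one sketch can cover both: ICC(2,1) in the Shrout & Fleiss sense (two-way random effects, absolute agreement, single rater/measurement) and NRMSE relative to the conventional-workflow results. This is a hypothetical numpy illustration; the RMSE normalization (here, the mean of the reference values) and all names are assumptions, as the summary reports only the acceptance thresholds.

```python
# Minimal sketch of the ICC(2,1) and NRMSE criteria (illustrative only).
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1), Shrout & Fleiss: two-way random effects, absolute
    agreement, single rater/measurement.
    ratings: (n_subjects, k_raters) array of a measurement such as EF."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                                    # subjects
    col_means = x.mean(axis=0)                                    # raters
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def nrmse(device, reference):
    """RMSE of device output vs. the conventional workflow, normalized by
    the mean of the reference values (assumed normalization)."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((device - reference) ** 2))
    return rmse / np.abs(reference.mean())

# Acceptance checks as described in the tables above:
#   icc_2_1(per_rater_measurements) > 0.75 for each of EDV, ESV, EF, GLS
#   nrmse(auto_results, manual_results) < 0.10
```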

    2. Sample Size Used for the Test Set and Data Provenance

    • Data Provenance: All data used for performance testing was "entirely independent and sequestered from the data used for training and was acquired from U.S. clinical patients with the predicate device, identical to the subject device in terms of data acquisition functionality." Images were acquired over a two-month period at a U.S. clinical site, and subsets were then selected for each validation study, so the selection was retrospective with respect to that acquisition. All data originated in the USA.
    • Sample Sizes for Test Sets:
      • Auto Plane Detection: 50 patients (images from 239 demographically diverse patients were acquired over a two-month period, and 50 were selected for this specific study).
      • Quick Strain: 50 patients (same data acquisition pool of 239 patients, 50 selected for this study).
      • Auto LVOT: 45 patients (same data acquisition pool of 239 patients, 45 selected for this study).
      • Auto AoV: 45 patients (same data acquisition pool of 239 patients, 45 selected for this study).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • For Auto Plane Detection: Three clinical sonographers. Their qualifications are described as "qualifications and expertise representative of U.S. intended users."
    • For Quick Strain: Three licensed sonographers.
    • For Auto LVOT: Three licensed sonographers.
    • For Auto AoV: Three licensed sonographers.

    The document states generally for all features that "Ground Truth was established by three clinical sonographers with qualifications and clinical experience representative of intended users of these features in the U.S." This implies they are experienced and licensed professionals in sonography.

    4. Adjudication Method for the Test Set

    • Auto Plane Detection: "A licensed sonographer selected representative images for each of the four evaluated chamber views (A4C/A3C/A2C/SAX) and two different licensed sonographers independently identified the cardiac view for all selected images, with any discrepancies resolved by consensus among the three." This is a 2+1 consensus method.
    • Quick Strain, Auto LVOT, Auto AoV: For these features, ground truth was established as the "median of manual measurement results taken by three licensed sonographers." This is a 3-expert median method. There is no explicit mention of an adjudication process for outliers, but the median inherently provides a robust central estimate (see the sketch below).
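
    The two ground-truth strategies can be summarized in a few lines of code. The sketch below is purely illustrative: the 2+1 rule for categorical view labels and the median-of-three rule for continuous measurements follow the descriptions above, but the function names and the way the adjudicated consensus label is supplied are assumptions.

```python
# Illustrative sketch of the two ground-truth strategies described above.
import statistics

def consensus_view_label(reader_a, reader_b, adjudicated=None):
    """2+1 method: two readers label independently; an agreed label stands,
    otherwise the discrepancy is resolved by the three readers' consensus
    (supplied here as `adjudicated`)."""
    if reader_a == reader_b:
        return reader_a
    return adjudicated

def median_ground_truth(measurements):
    """Median of the three sonographers' manual measurements (e.g., EF)."""
    return statistics.median(measurements)
```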

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    No. An MRMC comparative effectiveness study, in the traditional sense of comparing human readers with versus without AI assistance, is not described. The studies instead compared the performance of the AI/ML-based features (e.g., accuracy, time savings, inter-operator variability) against the existing predicate functionality (manual methods).

    The benefit derived is a workflow improvement (time savings) while maintaining equivalent performance, rather than an explicit improvement in diagnostic accuracy of the human reader with AI assistance. The AI features are described as automating or assisting parts of the process that were previously manual measurements or selections.

    6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes. The studies evaluate the standalone performance of the AI algorithms for Auto Plane Detection, Quick Strain, Auto LVOT, and Auto AoV. While the ground truth is established by human experts, the algorithms themselves perform the tasks (e.g., selecting views, tracing waveforms, calculating metrics), and their output is compared against expert-derived ground truth or the existing manual workflows.

    7. The Type of Ground Truth Used

    • Expert Consensus:
      • Auto Plane Detection: Expert consensus (3 sonographers) on the correct cardiac views.
      • Quick Strain, Auto LVOT, Auto AoV: Expert consensus measurements (median of 3 sonographers' manual measurements using the predicate method). This acts as the "gold standard" for quantitative comparison.

    No pathology or outcomes data was used for ground truth.

    8. The Sample Size for the Training Set

    The document explicitly states: "The data used for the performance testing of these improved features was entirely independent and sequestered from the data used for training..." However, the sample size for the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the testing data was sequestered from the training data, but it does not provide information on how the ground truth for the training set was established.
