
510(k) Data Aggregation

    K Number: K121076
    Date Cleared: 2012-10-09 (183 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: ULTRAEXTEND FX, ULTRASOUND WORKSTATION PACKAGE

    Intended Use

    The UltraExtend FX (TUW-U001S) is designed to allow the user to observe images and perform analysis using the examination data acquired with specified diagnostic ultrasound systems Aplio 500, Aplio 400 and Aplio 300.

    Device Description

    UltraExtend FX is a software package that can be installed on a general-purpose personal computer (PC), enabling data acquired from the Aplio diagnostic ultrasound systems (Aplio 300, Aplio 400, and Aplio 500) to be loaded onto the PC for image processing alongside other application software. UltraExtend FX is post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system, providing a seamless image-reading environment from examination on the diagnostic ultrasound system through diagnosis on the PC.

    AI/ML Overview

    The provided document does not contain details about specific acceptance criteria, a study proving the device meets those criteria, or quantitative performance metrics. It primarily focuses on the device description, regulatory information, and a substantial equivalence determination to a predicate device.

    Therefore, many of the requested sections (Table of acceptance criteria and reported device performance, Sample size used for the test set, Number of experts used, Adjudication method, MRMC study details, Standalone performance, Type of ground truth, Training set sample size, and Ground truth establishment for training set) cannot be extracted from this document.

    However, based on the provided text, here's what can be inferred or stated:

    • Device Name: UltraExtend FX, TUW-U001S
    • Device Type: Software package for post-processing ultrasound images.
    • Purpose: To enable observation and analysis of images acquired from specified diagnostic ultrasound systems (Aplio 300, Aplio 400, and Aplio 500).

    Based on the provided K121076 510(k) summary for the Toshiba America Medical Systems UltraExtend FX, TUW-U001S:

    This submission focuses on establishing substantial equivalence to a predicate device rather than presenting a detailed study with specific quantitative acceptance criteria and performance data for a novel algorithm or diagnostic aid. The device is a post-processing software for ultrasound images, and the validation activities appear to relate to ensuring the software functions as intended and is safe and effective in its role as an image viewer and analysis tool, in line with its predicate.

    1. A table of acceptance criteria and the reported device performance:

    Specific quantitative acceptance criteria and corresponding reported device performance metrics (e.g., sensitivity, specificity, accuracy, precision) for a diagnostic output are not provided in this document. The document primarily describes the functionality and comparative aspects for substantial equivalence.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    Details regarding the sample size, type, or provenance of any test set (e.g., number of images or patient cases) used for a formal performance evaluation are not explicitly mentioned in this document. The document states that "Verification and validation tests were conducted on the subject device through bench testing to confirm device safety and effectiveness," which generally refers to software functionality testing, not the clinical image-based performance studies by which a diagnostic algorithm would typically be evaluated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

    Information regarding experts, ground truth establishment, or their qualifications for any clinical test set is not present in the provided text.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    No information on adjudication methods for a test set is available in this document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study and any reported effect sizes for human reader improvement are not mentioned in this document. This submission pertains to an image processing workstation, not a device providing diagnostic assistance via AI.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    The document describes the UltraExtend FX as a "post-processing software that implements functionality and operability equivalent to that of the diagnostic ultrasound system, providing a seamless image reading environment from examination using the diagnostic ultrasound system to diagnosis using the PC." It's an analysis tool for images with human interpretation, not a standalone algorithm making diagnoses. Therefore, a standalone performance evaluation in the context of an autonomous AI algorithm is not applicable and not reported.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    Given the nature of the device as an image processing workstation, the concept of "ground truth" for diagnostic accuracy (e.g., pathology) is not directly applicable to its evaluation in the manner typically seen for AI diagnostic algorithms. For functional validation, the "ground truth" would likely relate to the correct display and measurement capabilities compared to the original system or expected outputs. Specifics are not detailed.

    8. The sample size for the training set:

    As this is an image processing software and not an AI/machine learning model, the concept of a "training set" in that context is not applicable, and no information is provided.

    9. How the ground truth for the training set was established:

    Not applicable, as there is no mention of an AI/machine learning model or training set.
