
510(k) Data Aggregation

    K Number
    K201632


    Device Name
    TOMTEC-ARENA
    Date Cleared
    2020-08-14 (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
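Record fields such as the K number, device name, and clearance date can be cross-checked against FDA's public openFDA API, which exposes 510(k) clearances at the `device/510k` endpoint. A minimal sketch is below; the endpoint, query syntax, and field names follow openFDA's published 510(k) schema, while the helper functions themselves are our own illustration, not part of this site:

```python
"""Cross-check a 510(k) record against FDA's public openFDA database."""
import json
import urllib.parse
import urllib.request

OPENFDA_510K = "https://api.fda.gov/device/510k.json"


def build_query_url(k_number: str) -> str:
    """Build an openFDA query URL matching a single K number exactly."""
    query = urllib.parse.urlencode(
        {"search": f'k_number:"{k_number}"', "limit": "1"}
    )
    return f"{OPENFDA_510K}?{query}"


def summarize(record: dict) -> dict:
    """Extract the fields shown on this page from one openFDA result."""
    return {
        "k_number": record.get("k_number"),
        "device_name": record.get("device_name"),
        "date_cleared": record.get("decision_date"),
        "product_code": record.get("product_code"),
    }


def validate_live(k_number: str) -> dict:
    """Fetch and summarize the live record (requires network access)."""
    with urllib.request.urlopen(build_query_url(k_number)) as resp:
        payload = json.load(resp)
    return summarize(payload["results"][0])


if __name__ == "__main__":
    print(build_query_url("K201632"))
```

Comparing `summarize(...)` output against the values displayed here (K201632, TOMTEC-ARENA, 2020-08-14) is what a "live validation" of this record would amount to.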
    Intended Use

    Indications for use of the TOMTEC-ARENA software are the quantification and reporting of cardiovascular, fetal, and abdominal structures and function in patients with suspected disease, to support physicians in diagnosis.

    Device Description

    TOMTEC-ARENA (TTA2) is a clinical software package for reviewing, quantifying, and reporting digital medical data. The software is compatible with different IMAGE-ARENA platforms and third-party platforms.

    Platforms enhance the workflow by providing database, import, export, and other services. All analyzed data and images are transferred to the platform for archiving, reporting, and statistical quantification purposes.

    TTA2 consists of the following optional modules:

    • TOMTEC-ARENA SERVER & CLIENT
    • IMAGE-COM/ECHO-COM
    • REPORTING
    • AutoStrain (LV, LA, RV)
    • 2D CARDIAC-PERFORMANCE ANALYSIS (Adult and Fetal)
    • 4D LV-ANALYSIS
    • 4D RV-FUNCTION
    • 4D CARDIO-VIEW
    • 4D MV-ASSESSMENT
    • 4D SONO-SCAN
    AI/ML Overview

    The provided text is a 510(k) summary for the TOMTEC-ARENA software. It details the device's substantial equivalence to predicate devices and outlines non-clinical performance data. However, it explicitly states "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices."

    Therefore, I cannot provide acceptance criteria or a study proving the device meets those criteria from the given text, as no clinical study was performed.

    Here's a breakdown of what can be extracted or inferred based on the document's content:

    1. A table of acceptance criteria and the reported device performance:
    Not applicable, as no clinical performance data or acceptance criteria for clinical performance are reported in this document. The document states that the device was tested to meet design and performance requirements through non-clinical methods.

    2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
    Not applicable, as no clinical test set was used. Non-clinical software verification was performed.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
    Not applicable, as no clinical test set requiring expert ground truth was mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
    Not applicable, as no clinical test set requiring adjudication was mentioned.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
    No MRMC comparative effectiveness study was done, as explicitly stated: "No clinical testing conducted in support of substantial equivalence". The device is a "Picture archiving and communications system" with advanced analysis tools; the document does not indicate it is an AI-assisted diagnostic tool that would typically undergo such a study.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
    While the document describes various "Auto" modules (AutoStrain, Auto LV, Auto LA) which imply algorithmic processing, it does not detail standalone performance studies for these algorithms. The context is generally about reviewing, quantifying, and reporting digital medical data to support physicians, not to replace interpretation. The comparison tables highlight that for certain features (e.g., 4D RV-Function, 4D MV-Assessment), the subject device uses machine learning algorithms for 3D surface model creation, with the user able to edit, accept, or reject the contours/landmarks. This indicates a human-in-the-loop design rather than a standalone algorithm for final diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
    Not applicable for clinical ground truth, as no clinical studies were performed. For the software verification, the "ground truth" would be the predefined design and performance requirements.

    8. The sample size for the training set:
    Not applicable for clinical training data, as no clinical studies were performed. While some modules utilize "machine learning algorithms" (e.g., for 3D surface model creation), the document does not disclose the training set size or its characteristics.

    9. How the ground truth for the training set was established:
    Not applicable for clinical training data. The document mentions machine learning algorithms are used (e.g., in 4D RV-FUNCTION and 4D MV-ASSESSMENT for creating 3D surface models), but it does not describe how the training data for these algorithms, or their ground truth, was established.
