
510(k) Data Aggregation

    K Number
    K213544
    Device Name
    TOMTEC-ARENA
    Date Cleared
    2022-01-06

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC Imaging Systems GmbH

    Intended Use

    Indications for use of TOMTEC-ARENA software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physician in the diagnosis.

    Device Description

    TOMTEC-ARENA is a clinical software package for reviewing, quantifying and reporting digital medical data. The software can be integrated into third party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.
    TTA2 consists of the following optional modules:

    • IMAGE-COM
    • REPORTING
    • AutoStrain LV / SAX / RV / LA
    • 2D CPA
    • FETAL 2D CPA
    • 4D LV-ANALYSIS
    • 4D RV-FUNCTION
    • 4D CARDIO-VIEW
    • 4D MV-ASSESSMENT
    • 4D SONO-SCAN
    • TOMTEC DATACENTER (incl. STUDY LIST, DATA MAINTENANCE, WEB REVIEW)
    The purpose of this traditional 510(k) pre-market notification is to introduce semi-automated cardiac measurements based on an artificial intelligence and machine learning (AI/ML) algorithm. The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a supervised learning approach. It enables TOMTEC-ARENA to produce semi-automated, editable echocardiographic measurements on BMODE and DOPPLER datasets.

    The algorithm was developed using a controlled internal process that defines activities from the inspection of input data to the training and deployment of the algorithm. The training process begins with the model observing and optimizing its parameters based on the training pool data; the model's predictions and performance are then evaluated against the test pool, whose data is set aside at the beginning of the project. During training, the AI/ML algorithm learned to predict measurements by being presented with a large number of echocardiographic data manually generated by qualified healthcare professionals. The echocardiographic studies were randomly assigned to either training (approx. 2,800 studies) or testing (approx. 500 studies).

    A semi-automated measurement consists of a cascade of detection steps: it starts with a rough geometric estimate, which is subsequently refined. The user selects a frame in TOMTEC-ARENA on which the semi-automated measurements shall be performed. Image data and metadata (e.g. pixel spacing) are transferred to the semi-automated measurement detector, which predicts the positions of the start and end calipers in the pixel coordinate system. These coordinates are transferred back to the CalcEngine, which converts them into real-world coordinates (e.g. mm) and creates the graphical overlay. The superimposed line can be edited by the user, who can edit, accept, or reject the measurement(s).

    This feature does not introduce any new measurements; it allows the end user to perform existing measurements semi-automatically. The end user can still perform manual measurements, and it is not mandatory to use the semi-automated measurements. The semi-automated measurements are licensed separately.
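The coordinate conversion step described above (detector output in pixel coordinates, converted back to real-world units using the pixel spacing from the image metadata) can be illustrated with a minimal sketch. The function and parameter names below are hypothetical, not TOMTEC's CalcEngine API.

```python
# Minimal sketch of the caliper conversion step described above.
# Assumes the detector returns start/end caliper positions in pixel
# coordinates and that pixel spacing (mm per pixel) comes from the
# image metadata; names are illustrative, not TOMTEC's actual API.
import math

def caliper_to_mm(start_px, end_px, pixel_spacing_mm):
    """Convert a predicted caliper pair from pixel to real-world (mm) units
    and return the resulting measurement length."""
    (row0, col0), (row1, col1) = start_px, end_px
    row_spacing, col_spacing = pixel_spacing_mm  # mm per pixel (row, column)
    dy_mm = (row1 - row0) * row_spacing
    dx_mm = (col1 - col0) * col_spacing
    return math.hypot(dx_mm, dy_mm)

# Example: detector predicts calipers at (120, 80) and (260, 85) on an image
# with 0.3 mm pixel spacing in both directions.
length_mm = caliper_to_mm((120, 80), (260, 85), (0.3, 0.3))
print(f"Suggested measurement: {length_mm:.1f} mm")  # user may edit, accept, or reject
```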
    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the TOMTEC-ARENA device, based on the provided FDA 510(k) summary:

    The 510(k) summary describes the TOMTEC-ARENA software, which introduces semi-automated cardiac measurements based on an AI/ML algorithm. The primary focus of the non-clinical performance data is on software verification, risk analysis, and usability evaluation, as no clinical testing was conducted.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly list quantitative acceptance criteria for the AI/ML algorithm's performance in terms of accuracy or precision of the semi-automated measurements. Instead, it states that "Completion of all verification activities demonstrated that the subject device meets all design and performance requirements." and "Testing performed demonstrated that the proposed TOMTEC-ARENA (TTA2.50) meets defined requirements and performance claims." These are general statements rather than specific, measurable performance metrics.

    Similarly, there are no reported quantitative device performance metrics (e.g., accuracy, sensitivity, specificity, or error rates) for the AI/ML algorithm's measurements mentioned in this summary. The summary focuses on the functional equivalence and safety of the AI-powered feature compared to existing manual measurements and predicate devices.

    However, the document does imply a core "acceptance criterion":

    • Functional Equivalence/Accuracy (implied criterion): The semi-automated measurements (BMODE and DOPPLER) should provide measurement suggestions that are comparable in principle/technology to those included in the reference device and can be edited, accepted, or rejected by the user.
      Reported performance: "Support of additional semi-automated measurements compared to reference device. Additional measurements rely on same principle/technology (e.g. line detection, single-point) as those included in reference device." "The measurement suggestion can be edited. Manual measurements as with TTA2.40.00 are still possible."
    • Safety and Effectiveness (implied criterion): The introduction of semi-automated measurements should not adversely affect the safety and effectiveness of the device.
      Reported performance: "No impact to the safety or effectiveness of the device." "Verification activities performed confirmed that the differences in the design did not adversely affect the safety and effectiveness of the subject device."
    • Usability (implied criterion): The device is safe and effective for intended users, uses, and environments.
      Reported performance: "TOMTEC-ARENA has been found to be safe and effective for the intended users, uses, and use environments."
    • Compliance (implied criterion): Adherence to relevant standards (IEC 62304, IEC 62366-1) and internal processes.
      Reported performance: "Software verification was performed according to the standard IEC 62304..." "A Summative Usability Evaluation was performed... according to the standard IEC 62366-1..." "The proposed modifications were tested in accordance with TOMTEC's internal processes."

    Without specific quantitative metrics for the AI's measurement accuracy, it's challenging to provide a detailed performance table. The provided information focuses on the design validation process rather than specific benchmark results for the AI's performance.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Approximately 500 studies.
    • Data Provenance: The document does not specify the country of origin of the data. It states, "The echocardiographic studies were randomly assigned to be either used for training (approx. 2,800 studies) or testing (approx. 500 studies)." It does not explicitly state if the data was retrospective or prospective. Given that these are "studies" used for training and testing an algorithm, it is highly probable that they are retrospective data sets, collected prior to the algorithm's deployment.
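For illustration, a study-level random split along the lines described above (whole studies randomly assigned to the training or test pool) could look like the sketch below. The ratio and the function are assumptions for illustration only, not a description of TOMTEC's actual procedure.

```python
# Illustrative study-level random split, roughly matching the ~2,800 / ~500
# proportions stated above. Splitting by study (not by frame) keeps frames
# from one study out of both pools at once. Not TOMTEC's actual procedure.
import random

def split_studies(study_ids, test_fraction=0.15, seed=42):
    rng = random.Random(seed)
    ids = list(study_ids)
    rng.shuffle(ids)
    n_test = round(len(ids) * test_fraction)
    return ids[n_test:], ids[:n_test]  # (training pool, test pool)

train_ids, test_ids = split_studies([f"study_{i:04d}" for i in range(3300)])
print(len(train_ids), len(test_ids))  # roughly 2,800 / 500
```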

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. The document states "a large number of echocardiographic data manually generated by qualified healthcare professionals." This implies multiple professionals but does not quantify them.
    • Qualifications of Experts: "qualified healthcare professionals." Specific qualifications (e.g., radiologist with X years of experience, sonographer, cardiologist) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. The ground truth was "manually generated by qualified healthcare professionals," but the process for resolving discrepancies among multiple professionals (if multiple were involved per case) is not described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The summary explicitly states: "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices." The nature of the AI algorithm as providing semi-automated, editable measurements, rather than a diagnostic output, likely informed this decision. The user is always in the loop and can accept, edit, or reject the AI's suggestions.
    • Effect size of human readers improvement with AI vs. without AI assistance: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? Not explicitly detailed in terms of quantitative performance metrics. While the algorithm "predicts the position of start and end caliper in the pixel coordinate system" and this prediction is mentioned as being evaluated against the test pool, the results are not presented as a standalone performance metric. The nature of the device, where the user can "edit, accept, or reject the measurement(s)", strongly implies that standalone performance is not the primary focus for regulatory purposes, as it is always intended to be used with human oversight. The comparison is generally with the predicate device's manual measurement workflow and a reference device's semi-automated features.

    7. Type of Ground Truth Used

    • Type of Ground Truth: "manually generated by qualified healthcare professionals." This suggests expert consensus or expert-derived measurements serving as the reference standard for the algorithm's training and testing.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Approximately 2,800 studies.

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established: "The Al/ML algorithm learned to predict measurements by being presented with a large number of echocardiographic data manually generated by qualified healthcare professionals." This indicates that human experts manually performed the measurements on the training data, and these manual measurements served as the ground truth for the supervised learning model.

    K Number
    K201632
    Device Name
    TOMTEC-ARENA
    Date Cleared
    2020-08-14

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC Imaging Systems GmbH

    Intended Use

    Indications for use of TOMTEC-ARENA software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physicians in the diagnosis.

    Device Description

    TOMTEC-ARENA (TTA2) is a clinical software package for reviewing, quantifying and reporting digital medical data. The software is compatible with different IMAGE-ARENA platforms and third party platforms.

    Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.

    TTA2 consists of the following optional modules:

    • TOMTEC-ARENA SERVER & CLIENT
    • IMAGE-COM/ECHO-COM
    • REPORTING
    • AutoStrain (LV, LA, RV)
    • 2D CARDIAC-PERFORMANCE ANALYSIS (Adult and Fetal)
    • 4D LV-ANALYSIS
    • 4D RV-FUNCTION
    • 4D CARDIO-VIEW
    • 4D MV-ASSESSMENT
    • 4D SONO-SCAN
    AI/ML Overview

    The provided text is a 510(k) summary for the TOMTEC-ARENA software. It details the device's substantial equivalence to predicate devices and outlines non-clinical performance data. However, it explicitly states "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices."

    Therefore, I cannot provide information on acceptance criteria or a study that proves the device meets those criteria from the given text as no clinical study was performed.

    Here's a breakdown of what can be extracted or inferred based on the document's content:

    1. A table of acceptance criteria and the reported device performance:
    Not applicable, as no clinical performance data or acceptance criteria for clinical performance are reported in this document. The document states that the device was tested to meet design and performance requirements through non-clinical methods.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
    Not applicable, as no clinical test set was used. Non-clinical software verification was performed.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
    Not applicable, as no clinical test set requiring expert ground truth was mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
    Not applicable, as no clinical test set requiring adjudication was mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
    No MRMC comparative effectiveness study was done, as explicitly stated: "No clinical testing conducted in support of substantial equivalence". The device is a "Picture archiving and communications system" with advanced analysis tools; the document does not indicate it is an AI-assisted diagnostic tool that would typically undergo such a study.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
    While the document describes various "Auto" modules (AutoStrain, Auto LV, Auto LA) which imply algorithmic processing, it does not detail standalone performance studies for these algorithms. The context is generally about reviewing, quantifying, and reporting digital medical data to support physicians, not to replace interpretation. The comparison tables highlight that for certain features (e.g., 4D RV-Function, 4D MV-Assessment), the subject device uses machine learning algorithms for 3D surface model creation, with the user able to edit, accept, or reject the contours/landmarks. This indicates a human-in-the-loop design rather than a standalone algorithm for final diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
    Not applicable for clinical ground truth, as no clinical studies were performed. For the software verification, the "ground truth" would be the predefined design and performance requirements.

    8. The sample size for the training set:
    Not applicable for clinical training data, as no clinical studies were performed. While some modules utilize "machine learning algorithms" (e.g., for 3D surface model creation), the document does not disclose the training set size or its characteristics.

    9. How the ground truth for the training set was established:
    Not applicable for clinical training data. The document mentions machine learning algorithms are used (e.g., in 4D RV-FUNCTION and 4D MV-ASSESSMENT for creating 3D surface models), but it does not describe how the training data for these algorithms, or their ground truth, was established.


    K Number
    K150122
    Date Cleared
    2015-02-13

    (24 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    Indications for use of TomTec-Arena software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physicians in the diagnosis.

    Device Description

    TomTec-Arena™ is a clinical software package for reviewing, quantifying and reporting digital medical data. The software is compatible with different TomTec Image-Arena™ platforms and TomTec-Arena Server®, their derivatives or third party platforms.

    Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.

    TomTec-Arena™ TTA2 consists of the following optional modules:

    • Image-Com
    • 4D LV-Analysis and 4D LV-Function
    • 4D RV-Function
    • 4D Cardio-View
    • 4D MV-Assessment
    • Echo-Com
    • 2D Cardiac-Performance Analysis
    • 2D Cardiac-Performance Analysis MR
    • 4D Sono-Scan
    • Reporting
    • Worksheet
    • TomTec-Arena Client
    AI/ML Overview

    The provided text does not contain detailed acceptance criteria or a study that explicitly proves the device meets those criteria. Instead, it describes a substantial equivalence submission for the TomTec Arena TTA2, a picture archiving and communications system.

    The document focuses on demonstrating that the new device is substantially equivalent to previously marketed predicate devices (TomTec-Arena 1.0 and Image-Arena 4.5). It outlines changes made to the device, primarily bug fixes, operability enhancements, and feature changes (repackaging or new appearance of existing technology).

    It explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." This means that a detailed clinical performance study with defined acceptance criteria for the device's diagnostic performance was not conducted as part of this submission for determining substantial equivalence.

    While non-clinical performance data (software testing and validation) was performed according to internal company procedures, the acceptance criteria for this testing are not explicitly stated in a quantifiable manner within the provided text, beyond "expected results and acceptance (pass/fail) criteria have been defined in all test protocols."

    Therefore, most of the requested information regarding acceptance criteria, study details, sample sizes, expert qualifications, and ground truth establishment cannot be extracted from the provided text.

    Here is a summary of what can be extracted:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria: Not explicitly stated in quantifiable terms for the device's diagnostic performance. The document mentions "expected results and acceptance (pass/fail) criteria have been defined in all test protocols" for internal software testing.
      • Reported Device Performance:
        • All automated tests were reviewed and passed.
        • Feature complete test completed without deviations.
        • Functional tests are completed.
        • Measurement verification is completed without deviations.
        • All non-verified bugs have been evaluated and are rated as minor deviations and deferred to the next release.
        • The overall product concept was clinically accepted and supports the conclusion that the device is as safe and as effective as, and performs as well as or better than, the predicate device.
        • The Risk-Benefit Assessment concludes that the benefit is superior to the risk, and the risk is low.
        • The data are sufficient to demonstrate compliance with essential requirements covering safety and performance.
        • The claims made in the device labeling are substantiated by clinical data (via literature review).
    2. Sample size used for the test set and the data provenance: Not applicable, as no clinical study with a test set was detailed. Non-clinical software testing involved various test cases but the sample size (number of test cases) and their provenance are not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no clinical study with a test set requiring expert ground truth was detailed.

    4. Adjudication method for the test set: Not applicable.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. The device is a "Picture archiving and communications system" and "Image Review and Quantification Software," not explicitly an AI-assisted diagnostic device, and no MRMC study was mentioned.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. The focus is on software functionality and equivalence to predicate devices, not AI algorithm performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): For non-clinical software testing, the "ground truth" would be the expected output of the software functions based on established specifications and requirements. For the "clinical acceptance" mentioned, it refers to a literature review, implying published clinical data served as the basis for concluding safety and effectiveness relative to predicate devices and general medical standards.

    8. The sample size for the training set: Not applicable, as this is not an AI/ML device with a training set.

    9. How the ground truth for the training set was established: Not applicable.


    K Number
    K132544
    Date Cleared
    2013-11-25

    (104 days)

    Product Code
    Regulation Number
    892.2050
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    Indications for use of TomTec-Arena software are diagnostic review, quantification and reporting of cardiovascular, fetal and abdominal structures and function of patients with suspected disease.

    Device Description

    TomTec-Arena is a clinical software package for reviewing, quantifying and reporting digital medical data. TomTec-Arena runs on high performance PC platforms based on Microsoft Windows operating system standards. The software is compatible with different TomTec Image-Arena™ platforms, their derivatives or third party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes. Tom Tec-Arena consists of the following optional clinical application packages: Image-Com, 4D LV-Analysis/Function, 4D RV-Function, 4D Cardio-View, 4D MV-Assessment, Echo-Com, 2D Cardiac-Performance Analysis, 2D Cardiac-Performance Analysis MR, 4D Sono-Scan.

    AI/ML Overview

    The provided document does not contain detailed acceptance criteria or a study proving the device meets specific performance criteria. Instead, it is a 510(k) summary for a software package, TomTec-Arena 1.0, and focuses on demonstrating substantial equivalence to predicate devices.

    Here's a breakdown of what is and is not available in the provided text, in response to your requested information:

    1. A table of acceptance criteria and the reported device performance

    • Not available. The document states that "Testing was performed according to internal company procedures. Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted." However, it does not provide the specific acceptance criteria or the quantitative reported device performance against those criteria. It only provides a high-level summary of tests passed.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Not available. The document explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." Therefore, there is no test set in the sense of a clinical performance study with patient data. The "tests" mentioned are software validation and verification.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Not applicable. As indicated above, no clinical test set with patient data was used for substantial equivalence determination. Ground truth establishment by experts for clinical performance is not mentioned.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Not applicable. No clinical test set.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • Not applicable. No MRMC study was conducted or mentioned. The device is a software package for review, quantification, and reporting, and its substantial equivalence was not based on clinical performance data demonstrating impact on human readers.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Not explicitly detailed as a standalone performance study in the context of clinical accuracy. The document confirms that "measurement verification is completed without deviations" as part of non-clinical performance testing. This suggests that the algorithm's measurements were verified, but the specifics of this verification (e.g., what measurements, against what standard, sample size, etc.) are not provided. It's a software verification, not a clinical standalone performance study.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not applicable for clinical ground truth. For the non-clinical "measurement verification," the ground truth would likely be a known or calculated value for the data being measured, but the specific type of ground truth against which software measurements were verified is not described.

    8. The sample size for the training set

    • Not applicable. The document describes "TomTec-Arena" as a clinical software package for reviewing, quantifying, and reporting existing digital medical data. It is not an AI/ML device that requires a training set in the typical sense for learning patterns. Its functionality is based on established algorithms for image analysis and quantification.

    9. How the ground truth for the training set was established

    • Not applicable. No training set for an AI/ML model is mentioned.

    Summary of available information regarding performance:

    The document states that:

    • "Testing was performed according to internal company procedures."
    • "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted."
    • "Test results were reviewed by designated technical professionals before software proceeded to release."
    • "All requirements have been verified by tests or other appropriate methods."
    • "The incorporated OTS Software is considered validated either by particular tests or implied by the absence of OTS SW related abnormalities during all other V&V activities."
    • The summary conclusions indicate:
      • "all automated tests were reviewed and passed"
      • "feature complete test completed without deviations"
      • "functional tests are completed"
      • "measurement verification is completed without deviations"
      • "multilanguage tests are completed without deviations"
    • "Substantial equivalence determination of this subject device was not based on clinical data or studies."
    • A "clinical evaluation following the literature route based on the assessment of benefits, associated with the use of the device, was performed." This literature review supported the conclusion that the device is "as safe as effective, and performs as well as or better than the predicate devices."

    In essence, TomTec-Arena 1.0 was cleared based on non-clinical software verification and validation, comparison to predicate devices, and a literature review, rather than a prospective clinical performance study with explicit acceptance criteria for diagnostic accuracy.


    K Number
    K120135
    Date Cleared
    2012-04-13

    (87 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    The clinical application package 2D Cardiac Performance Analysis MR is indicated for cardiac quantification based on digital magnetic resonance images. It provides measurements of myocardial function (displacement, velocity and strain) that is used for clinical diagnosis purposes of patients with suspected heart disease.

    Device Description

    2D Cardiac Performance Analysis MR (=2D CPA MR) is a clinical application package for high performance PC platforms based on Microsoft® Windows® operating system standards. 2D CPA MR is a software for the analysis, storage and retrieval of digitized magnetic resonance (MR) images.
    The data can be acquired by cardiac MR machines. The digital 2D data can be used for comprehensive functional assessment of the myocardial function.
    2D CPA MR is designed to run with a TomTec Data Management Platform (Image-Arena™, their derivatives or any other platform that provides and supports the Generic CAP Interface. The Generic CAP Interface is used to connect clinical application packages (=CAPs) to platforms to exchange digital medical data.
    The TomTec Data Management Platform enhances the workflow by providing the database, import, export and other advanced high-level research functionalities.
    2D CPA MR is designed for the 2-dimensional functional analysis of myocardial deformation. Based on two dimensional datasets a feature tracking algorithm supports the calculation of a 2D contour model that represents the endocardial and epicardial border. From these contours the corresponding velocities, displacement and strain can be derived.
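The quantities named above follow from the tracked contours in a straightforward way: displacement is the change in point position over time, velocity is its time derivative, and (Lagrangian) strain is the relative change in contour length. The sketch below illustrates these relationships only; array shapes and names are assumptions, not the product's actual code.

```python
# Sketch of how displacement, velocity, and (Lagrangian) strain can be derived
# from a tracked contour, as described above. Assumes `contours` is an array of
# shape (frames, points, 2) in mm produced by a feature-tracking step; this is
# an illustration of the general relationships, not TOMTEC's algorithm.
import numpy as np

def contour_length(points):
    """Total arc length (mm) of an open contour given as (N, 2) points."""
    return float(np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1)))

def myocardial_metrics(contours, frame_interval_s):
    lengths = np.array([contour_length(c) for c in contours])
    strain = (lengths - lengths[0]) / lengths[0]                    # Lagrangian strain per frame
    displacement = np.linalg.norm(contours - contours[0], axis=2)   # mm, per point and frame
    velocity = np.gradient(displacement, frame_interval_s, axis=0)  # mm/s
    return strain, displacement, velocity

# Example with synthetic data: 30 frames, 48 contour points, 20 ms frame interval.
rng = np.random.default_rng(0)
contours = rng.normal(size=(30, 48, 2)).cumsum(axis=1)
strain, disp, vel = myocardial_metrics(contours, frame_interval_s=0.02)
print(strain.shape, disp.shape, vel.shape)  # (30,), (30, 48), (30, 48)
```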

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the 2D Cardiac Performance Analysis MR 1.0 device, based on the provided text:

    1. Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state quantitative acceptance criteria (e.g., specific accuracy thresholds, sensitivity, or specificity values). Instead, the acceptance criteria appear to be qualitative, focusing on equivalence to predicate devices and confirmation through internal testing and literature review.

    Acceptance Criteria (Implicit from the document):

    • Safety and Effectiveness: The device must be as safe and effective as the predicate devices.
    • Performance Equivalence: The device must perform as well as or better than the predicate devices regarding myocardial function analysis (displacement, velocity, strain, strain rate).
    • Clinical Acceptance: The overall product concept must be clinically accepted.
    • Risk-Benefit Assessment: The benefit of using the device must be superior to the risk (with risk being low).
    • Published Data Relevance: Published data must be relevant and applicable to the device characteristics and intended medical procedure.
    • Claim Substantiation: Claims made in the device labeling must be substantiated by clinical data.
    • Software Verification: All software requirements are tested and meet required pass/fail criteria.

    Reported Device Performance:

    • As safe and effective as predicate devices.
      Reported performance: "The conclusion states that: ... The overall product concept was clinically accepted and the clinical test results support the conclusion that the Subject Device is as safe as effective, and performs as well as the Predicate Devices." "Test results support the conclusion, that the Subject Device is as safe as effective, and performs as well as or better than the Predicate Devices."
    • Performs as well as or better than predicate devices regarding myocardial function analysis.
      Reported performance: "The Subject Device provides measurements to analyze the myocardial function on cardiac magnetic resonance images like Predicate Device 1 (K090461) and Predicate Device 2 (K100352)." "The tracking technology of the Subject Device is sensitive enough to track the grey value patterns of regular MRI, thus eliminating the need of additional acquisition of tagged images, which are usually the basis for Predicate Device 2 (K100352)." "The tracking of the Subject Device delivers contours of different regions of the myocardium like in Predicate Device 1 (K090461) and Predicate Device 2 (K100352)." "Based on the tracking results regional measurements like strain can be derived like in Predicate Device 1 (K090461) and Predicate Device 2 (K100352)." "The 2D feature tracking method based on 2D MR image data is already published. The use is as accurate as standard procedures such as HARP (for MR) or 2D speckle tracking (for echo) and it is feasible for clinical practice."
    • Overall product concept is clinically accepted.
      Reported performance: "The overall product concept was clinically accepted and the clinical test results support the conclusion that the Subject Device is as safe as effective, and performs as well as the Predicate Devices."
    • Benefits superior to risks.
      Reported performance: "The Risk-Benefit Assessment shows that the benefit is superior to the risk (whereas the risk is low)."
    • Published data is relevant and applicable.
      Reported performance: "The clinical evaluation shows that the published data are relevant and applicable to the relevant characteristics of the device under assessment and the medical procedure for which the device is intended."
    • Claims made in device labeling are substantiated.
      Reported performance: "The claims made in the device labelling are substantiated by the clinical data."
    • All software requirements are tested and meet required pass/fail criteria (non-clinical performance).
      Reported performance: "Test results meet the required pass/fail criteria." "All software requirements are tested or otherwise verified."

    2. Sample Size Used for the Test Set and Data Provenance

    The document refers to "clinical performance data testing" and a "clinical evaluation following the literature route." However, it does not specify a sample size for a dedicated test set in the traditional sense of a clinical trial. Instead, it relies on a review of published literature and a comparison to predicate devices.

    • Sample Size: Not specified for a dedicated test set. The clinical evaluation was based on a "literature route."
    • Data Provenance: Not explicitly stated (e.g., country of origin). The data provenance is implied to be from published literature. The study is retrospective in the sense that it reviews existing published data.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not specify the number or qualifications of experts used to establish ground truth for a test set. Since the clinical evaluation relied on a literature review, the "ground truth" would implicitly come from the studies and methods described in the published literature, which would have their own expert-derived ground truths.

    4. Adjudication Method for the Test Set

    As there is no described dedicated "test set" with expert review for adjudication, no adjudication method is mentioned or applicable in the context of this 510(k) summary.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study is described that measures how much human readers improve with AI vs. without AI assistance. The submission focuses on the standalone performance of the device and its comparability to predicate devices and established techniques, not on human-in-the-loop performance.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance assessment was done. The document states:

    • "The 2D feature tracking method based on 2D MR image data is already published. The use is as accurate as standard procedures such as HARP (for MR) or 2D speckle tracking (for echo) and it is feasible for clinical practice."
    • This implies that the algorithm's performance (its accuracy) was evaluated against established "standard procedures" (like HARP for MR) in published literature. While no specific study details are given within this document, the reliance on a "literature route" for clinical evaluation suggests that earlier standalone performance studies of this 2D feature tracking method were reviewed and deemed acceptable.

    7. Type of Ground Truth Used

    The type of ground truth is indirectly pathology or expert consensus, derived from the "standard procedures" used in the referenced literature. The document notes that the device is "as accurate as standard procedures such as HARP (for MR)." The ground truth for these standard procedures typically comes from:

    • Expert Consensus/Manual Contours: For imaging-based measurements, manual contouring by expert clinicians is often the gold standard. The device itself requires a manually drawn contour as a prerequisite for its calculations.
    • Pathology/Outcomes Data: While not directly mentioned for the device's validation, the "diagnostic purposes of patients with suspected heart disease" implies that, ultimately, the clinical utility of the measurements would relate to patient outcomes or confirmed pathology.

    8. Sample Size for the Training Set

    The document does not specify a sample size for any training set. Given that the core technology (2D feature tracking) is described as "already published," it's highly probable that such a method would have been developed and trained using various datasets. However, these details are not provided in this 510(k) summary.

    9. How Ground Truth for the Training Set Was Established

    Since no training set is described, the method for establishing its ground truth is also not provided in this document.


    K Number
    K110746
    Date Cleared
    2011-05-24

    (68 days)

    Product Code
    Regulation Number
    870.1425
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    4D LV-Analysis is intended to retrieve, analyze, and store digital ultrasound images for computerized dynamic 3-dimensional image analysis. 4D LV-Analysis reads certain digital 3D/4D image file formats for reprocessing to a proprietary 3D/4D image file format for analysis. It is intended as a digital 4D ultrasound image processing tool for cardiology.

    4D LV-Analysis 3.0 is intended as software for analysis of the left ventricle in heart failure patients.

    Device Description

    The 4D LV-Analysis® 3.0 software is a clinical application package for high performance PC platforms based on Microsoft Windows operating system standards. 4D LV-Analysis is software for the retrieval, reconstruction, rendering and analysis of digitized ultrasound B-mode images. 4D LV-Analysis is compatible with different TomTec Image-Arena™ platforms, their derivatives or any other platform that provides and supports the Generic CAP Interface. Platforms enhance the workflow by providing the database, import, export and other functionalities. All analyzed data and images will be transferred to the platform for reporting and statistical quantification purposes. 4D LV-Analysis is designed for 2- and 3-dimensional morphological and functional analyses of the left ventricle. Based on three dimensional datasets a semi-automatic 3D surface model finding algorithm supports the calculation of a 4D model that represents the cavity of the LV.

    From that model, global as well as regional volumetric changes can be derived. By looking at the timing of regional contractions, dyssynchrony of a ventricle can be quantified and visualized. For visualization, parametric maps are used that indicate areas with a delayed contraction.

    Thus 4D LV-Analysis improves the functional analysis of the LV and presentation of findings to cardiologists and electro-physiologists and visualizes the contraction pattern of the LV to assess dyssynchrony.
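The timing analysis described above can be sketched as follows: find the time at which each regional volume curve reaches its minimum and measure the dispersion of those times. The specific index used here (standard deviation of time-to-minimum-volume as a percentage of the cycle, similar to a systolic dyssynchrony index) is an assumption for illustration; the 510(k) summary does not state how dyssynchrony is quantified, and all names are hypothetical.

```python
# Sketch of quantifying dyssynchrony from regional volume curves, in the spirit
# of the description above. The formula (SD of regional time-to-minimum-volume,
# expressed as % of the cardiac cycle) is an illustrative assumption, not the
# product's documented method.
import numpy as np

def dyssynchrony_index(regional_volumes, frame_interval_s):
    """regional_volumes: array (regions, frames) of segmental LV volumes (ml)."""
    n_regions, n_frames = regional_volumes.shape
    cycle_s = n_frames * frame_interval_s
    t_min = np.argmin(regional_volumes, axis=1) * frame_interval_s  # s, per region
    return float(np.std(t_min) / cycle_s * 100.0)  # % of cardiac cycle

# Example: 16 regions, 40 frames, 25 ms apart, with jittered contraction timing.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 40)
volumes = 5 + 2 * np.cos(t + rng.normal(scale=0.3, size=(16, 1)))
print(f"Dyssynchrony index: {dyssynchrony_index(volumes, 0.025):.1f}% of cycle")
```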

    AI/ML Overview

    The submission K110746 for TomTec Imaging Systems GmbH's "4D LV-Analysis 3.0" device provides minimal details regarding specific acceptance criteria and detailed study results. The document largely defers to internal company procedures and a general statement of clinical acceptance.

    Here's an analysis based on the provided text, highlighting what is presented and what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Acceptance criterion (inferred): Device is safe and effective.
      Reported performance: "safe as effective, and performs as well as the predicate devices."
      Comment: Vague and lacking specific metrics or thresholds.
    • Acceptance criterion (inferred): Performs as well as or better than predicate devices.
      Reported performance: "performs as well as or better than the predicate devices."
      Comment: No specific performance metrics or statistical comparisons are provided.
    • Acceptance criterion (inferred): Compliance with internal software testing and validation protocols.
      Reported performance: "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted."
      Comment: This describes the process, not the outcome or specific acceptance criteria met.
    • Acceptance criterion (inferred): Clinical acceptance of the overall product concept.
      Reported performance: "The overall product concept was clinically accepted"
      Comment: This indicates a general positive reception but lacks quantifiable clinical endpoints or acceptance thresholds.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified. The document states "clinical test results support the conclusion," but no details about the number of cases or patients in the clinical performance testing are provided.
    • Data Provenance: Not specified. There is no information regarding the country of origin of the data or whether the clinical data was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, not explicitly mentioned. The document does not describe any study comparing human readers' performance with and without AI assistance.
    • Effect Size of Improvement: Not applicable, as no such study is described.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes, implied. The device itself is software for analysis, and the claims about its performance relative to predicate devices would inherently involve evaluating its algorithmic output. However, no specific standalone performance metrics (e.g., accuracy, sensitivity, specificity for specific clinical endpoints) are provided.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Not specified. The document refers to "overall product concept was clinically accepted" and "clinical test results," suggesting some form of clinical ground truth, but the specific nature (e.g., expert consensus, pathology, long-term outcomes, invasive measurements) is not detailed.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not specified. The document mentions the device uses a "semi-automatic 3D surface model finding algorithm," which implies machine learning or model training, but gives no details about the training data.

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established: Not specified.

    Summary of Deficiencies in Reporting:

    The 510(k) summary for K110746, typical for submissions from that era and device type, is high-level and defers significant detail to internal documentation (Chapter 16: Software, Verification and Validation Documentation). It lacks specific, quantifiable acceptance criteria and detailed reporting of clinical performance data. Key information that is common in more recent AI/ML device submissions, such as sample sizes, expert qualifications, ground truth methods, and statistical performance metrics, is not present in this public summary. The claims are generalized statements about safety and effectiveness in comparison to predicate devices, without providing the underlying evidence in this document.


    K Number
    K110667
    Date Cleared
    2011-04-08

    (30 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    The Image-Arena software platform is intended to import, export, store, retrieve and report digital studies. The Image-Arena software is based on an SQL database and is intended as an image management system. The Image-Arena software can import certain digital 2D or 3D image file formats of different modalities.

    Image-Arena offers a Generic Clinical Application Package interface in order to connect TomTec applications as well as commercially available analysis and quantification tools to the Image-Arena platform.

    The software is suited for stand-alone workstations as well as for networked multisystem installations and therefore is an image management system for physician practices and hospitals. It is intended as a general purpose digital medical image processing tool.

    Image-Arena is not intended to be used for reading of mammography images.

    Image-Com software is intended for reviewing and measuring of digital medical data of different modalities. It can be driven by Image-Arena or other third party platforms and is intended to launch other commercially available analysis and quantification tools.

    Echo-Com software is intended to serve as a versatile solution for Stress echo examinations in patients who may not be receiving enough oxygen because of blocked arteries. Echo-Com software is intended for reviewing, wall motion scoring and reporting of stress echo studies.

    Device Description

    Image-Arena is an SQL database based image management system that provides the capability to import, export, store, retrieve and report digital studies.

    Image-Arena is developed as a common interface platform for TomTec and commercially available analysis and quantification tools (= clinical application packages) that can be connected to Image-Arena through the Generic Clinical Application Package interface (= Generic CAP Interface).

    Image-Arena manages different digital medical data from different modalities except digital mammography.

    Image-Arena is suited as stand-alone workstation as well as networked multisystem server / client installations.

    Image-Arena runs on an integrated Intel Pentium high performance computer system based on Microsoft™ Windows standards. Communication and data exchange are done using standard TCP/IP, DICOM and HL7 protocols.

    Image-Arena provides the possibility to create user defined medical reports.

    The system does not produce any original medical images.

    Image-Com is a clinical application package software for reviewing and measuring of digital medical data. Image-Com is either embedded in Image-Arena platform or can be integrated into Third Party platforms, such as PACS or CVIS.

    Echo-Com is a clinical application package software for reviewing and reporting of digital stress echo data. Echo-Com is either embedded in Image-Arena Platform or can be integrated into Third Party platforms, such as PACS or CVIS.
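As a rough sketch of the import/registration path implied by the description above (DICOM data flowing into an SQL-backed image management layer), the following uses pydicom and sqlite3 as generic stand-ins. The table layout and function names are invented for illustration and are not Image-Arena's actual schema or interface.

```python
# Minimal sketch of registering a DICOM study's metadata in an SQL database,
# illustrating the import path described above. Generic stand-ins only; the
# schema is invented for this example.
import sqlite3
import pydicom

def register_study(db_path, dicom_path):
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)  # metadata only
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS studies (
               study_uid TEXT PRIMARY KEY, patient_id TEXT,
               modality TEXT, study_date TEXT, file_path TEXT)"""
    )
    con.execute(
        "INSERT OR REPLACE INTO studies VALUES (?, ?, ?, ?, ?)",
        (ds.StudyInstanceUID, ds.get("PatientID", ""),
         ds.get("Modality", ""), ds.get("StudyDate", ""), dicom_path),
    )
    con.commit()
    con.close()

# Usage (hypothetical paths):
# register_study("image_manager_demo.db", "/path/to/study/image0001.dcm")
```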

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Image-Arena and Image-Arena Applications (Image-Arena 4.5, Echo-Com 4.5, Image-Com 4.5) device:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The provided document does not explicitly state specific numerical acceptance criteria for performance metrics (e.g., accuracy, sensitivity, specificity). Instead, it relies on a qualitative comparison to predicate devices and general statements about safety and effectiveness.

    • Device is as safe as the predicate device.
      Reported performance: "The clinical test results support the conclusion that the device is as safe as effective..."
    • Device is as effective as the predicate device.
      Reported performance: "...and performs as well as or better than the predicate device."
    • Device performs as well as or better than the predicate device.
      Reported performance: "...and performs as well as or better than the predicate device."
    • Software testing and validation completed successfully.
      Reported performance: "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted. Test results were reviewed by designated technical professionals before software proceeded to release."
    • Overall product concept is clinically accepted.
      Reported performance: "The overall product concept was clinically accepted..."

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not specify the sample size for any clinical test set, nor does it provide details on the data provenance (e.g., country of origin, retrospective or prospective). It simply states "clinical test results."

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    The document does not mention the number of experts used to establish ground truth or their specific qualifications.

    4. Adjudication Method for the Test Set:

    The document does not describe any adjudication method used for a test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not mention if an MRMC comparative effectiveness study was done, or any effect size of human readers improving with AI vs. without AI assistance. The device described is an image management and analysis system, not an AI-assisted diagnostic tool in the sense of directly improving human reader performance on a task. Its role is to provide tools for reviewing, measuring, and reporting, which could indirectly improve efficiency or consistency but this is not quantifiable in the provided text as an "effect size" of AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance):

    The document does not present any standalone (algorithm only) performance data. The device is described as an image management and analysis system, implying human interaction for reviewing, measuring, and reporting.

    7. Type of Ground Truth Used:

    The document does not specify the type of ground truth used for any clinical testing. Given that it's an image management and analysis system, it's likely that if any ground truth was established for "clinical test results," it would be based on expert clinical interpretation or existing patient records, but this is not explicitly stated.

    8. Sample Size for the Training Set:

    The document does not mention a training set sample size. This type of device is an image management and analysis platform, not a machine learning model that typically involves distinct training sets for algorithm development in the way that, for example, a CAD system would.

    9. How Ground Truth for the Training Set Was Established:

    As there is no mention of a training set, there is no information on how its ground truth might have been established.


    K Number
    K110595
    Device Name
    4D SONO-SCAN 1.0
    Date Cleared
    2011-04-07

    (36 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    4D Sono-Scan 1.0 is intended to analyze digital ultrasound images for computerized 3-dimensional and 4-dimensional (dynamic 3D) image processing.

    The 4D Sono-Scan 1.0 reads certain digital 3D/4D image file formats for reprocessing to a proprietary 3D/4D image file format for subsequent 3D/4D tomographic reconstruction and rendering. It is intended as a general purpose digital 3D/4D ultrasound image processing tool.

    4D Sono-Scan 1.0 is intended as software for reviewing 3D/4D data sets and performing basic measurements in 3D.

    Device Description

    The 4D Sono-Scan 1.0 is a clinical application package for high performance PC platforms based on Microsoft® Windows® operating system standards. 4D Sono-Scan 1.0 is proprietary software for the analysis, storage, retrieval, reconstruction and rendering of digitized ultrasound B-mode images. The data can be acquired by ultrasound machines that are able to acquire and store 4D datasets (i.e. Toshiba Aplio XG or Zonare Z.ONE). The digital 3D/4D data can be used for basic measurements like areas, distances and volumes.

    4D Sono-Scan 1.0 is compatible to different TomTec Image-Arena platforms and their derivatives (i.e. Zonare IQ Workstation) for offline analysis. The platform enhances the workflow by providing the database, import, export and other advanced high-level research functionalities. All analyzed data and images will be transferred to the platform for reporting and statistical quantification purposes via the Generic CAP Interface.

    The Generic CAP (= clinical application packages) Interface is used to connect clinical application packages (=CAPs) to platforms to exchange digital medical data.
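The "basic measurements like areas, distances and volumes" mentioned above reduce to simple geometry on the reconstructed 3D data. A minimal sketch follows, assuming voxel spacing supplied by the dataset and a pre-existing segmentation mask; it is not the product's actual measurement implementation.

```python
# Sketch of basic 3D measurements (distance and volume) on a reconstructed
# 3D ultrasound dataset, as mentioned above. Voxel spacing and the segmented
# mask are assumed inputs; this illustrates the geometry only.
import numpy as np

def distance_mm(p0_vox, p1_vox, spacing_mm):
    """Euclidean distance between two voxel-index points, scaled to mm."""
    return float(np.linalg.norm((np.asarray(p1_vox) - np.asarray(p0_vox)) * spacing_mm))

def volume_ml(mask, spacing_mm):
    """Volume of a boolean segmentation mask, in millilitres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3

spacing = np.array([0.5, 0.5, 0.5])           # mm per voxel along z, y, x
mask = np.zeros((100, 100, 100), dtype=bool)
mask[20:60, 30:70, 30:70] = True              # toy "structure"
print(distance_mm((20, 30, 30), (60, 30, 30), spacing))  # 20.0 mm
print(volume_ml(mask, spacing))                          # 8.0 ml
```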

    AI/ML Overview

    The provided text describes the 4D Sono-Scan 1.0 device, its intended use, and a general statement about its testing. However, it does not explicitly define acceptance criteria in a quantifiable table format, nor does it detail a specific study with statistical results to prove the device meets such criteria.

    The document refers to verification and validation documentation (Chapter 16), which would typically contain such information, but these chapters are not included in the provided text.

    Based on the information available:

    1. Table of Acceptance Criteria and Reported Device Performance:

    No explicit table of acceptance criteria or specific quantifiable performance metrics are provided in the given text. The document generally states that "The overall product concept was clinically accepted and the clinical test results support the conclusion that the subject device is as safe as effective, and performs as well as the predicate devices."

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: Not specified.
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    Not specified.

    4. Adjudication Method for the Test Set:

    Not specified.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No mention of an MRMC study or any effect size comparing human readers with and without AI assistance is provided. The device is software for analyzing existing ultrasound images and performing measurements; it is not an AI-assisted reading tool intended to improve human readers' diagnostic accuracy.

    6. Standalone (Algorithm Only) Performance:

    The document states, "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted." This implies standalone testing of the software's functionality (e.g., ability to reprocess, reconstruct, render, and perform basic measurements), rather than a clinical performance study with human subjects. However, no specific performance metrics for this standalone testing are provided beyond the general statement that test results support its safety and effectiveness compared to predicates.

    7. Type of Ground Truth Used:

    While not explicitly stated, for a device performing basic measurements on ultrasound images, the "ground truth" for testing would likely involve the following (a minimal comparison sketch follows the list):

    • Comparison of software measurements against manually performed measurements (e.g., by experts) on the same images.
    • Comparison of software-generated 3D/4D reconstructions against expected anatomical structures or other validated reconstruction methods.
    • Verification of the software's ability to accurately read and reprocess different 3D/4D image file formats.
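
    The summary does not describe how such comparisons would be scored. Purely as a hedged illustration of a common way to quantify agreement between software measurements and manual reference measurements, here is a minimal Bland-Altman-style sketch; all values and variable names are invented for the example.

    ```python
    import numpy as np

    # Invented example values (mm): paired automated vs. manual measurements.
    automated = np.array([41.2, 38.5, 45.0, 50.3, 36.8, 42.1])
    manual = np.array([40.8, 39.0, 44.1, 51.0, 37.5, 41.6])

    diff = automated - manual
    bias = diff.mean()                      # systematic offset
    loa = 1.96 * diff.std(ddof=1)           # half-width of 95% limits of agreement
    mae = np.abs(diff).mean()               # mean absolute error

    print(f"bias = {bias:+.2f} mm")
    print(f"95% limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
    print(f"mean absolute error = {mae:.2f} mm")
    ```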

    8. Sample Size for the Training Set:

    Not applicable. The 4D Sono-Scan 1.0 is described as a "clinical application package" and "proprietary software for the analysis, storage, retrieval, reconstruction and rendering of digitized ultrasound B-mode images." It performs "basic measurements like areas, distances and volumes." There is no indication that this device uses machine learning or AI models that require a separate "training set" in the conventional sense. It appears to be rule-based or algorithmic software for image processing and quantification.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as there is no indication of a training set for machine learning. The software's functionality would have been developed and verified against known principles of image processing, geometry, and engineering specifications.

    K Number
    K103782
    Date Cleared
    2011-01-27

    (31 days)

    Product Code
    Regulation Number
    870.1425
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    4D MV Assessment is intended to retrieve, analyze and store digital ultrasound images for computerized 3-dimensional and 4-dimensional (dynamic 3D) image processing.

    4D MV Assessment reads certain digital 3D/4D image file formats for reprocessing to a proprietary 3D/4D image file format for subsequent 3D/4D tomographic reconstruction and rendering. It is intended as a general purpose digital 3D/4D ultrasound image processing tool for cardiology.

    4D MV-Assessment 2.0 is intended as software to analyze pathologies related to the mitral valve.

    Device Description

    4D MV-Assessment® is a clinical application package for high performance PC platforms based on Microsoft® Windows® operating system standards. 4D MV-Assessment is software for the retrieval, reconstruction, rendering and analysis of digitized ultrasound B-mode images and Color Doppler images. The data is acquired by ultrasound machines that are able to store compatible 3D/4D datasets. The digital 3D/4D data can be used for comprehensive morphological and functional assessment of the mitral valve.

    4D MV-Assessment is compatible with different TomTec Image-Arena™ platforms, their derivatives or any other platform that provides and supports the Generic CAP Interface. Platforms enhance the workflow by providing the database, import, export and other functionalities. All analyzed data and images will be transferred to the platform for reporting and statistical quantification purposes.

    4D MV-Assessment is designed for 2-, 3- and 4-dimensional morphological and functional analysis of mitral valves (MV). Based on an easy and intuitive workflow the application package generates models of anatomical structures such as MV annulus, leaflet and the closure line. Automatically derived parameters allow quantification of pre- and post-operative valvular function and comparison of morphology.
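
    The summary names the modeled structures (annulus, leaflets, closure line) and refers to "automatically derived parameters" without specifying them. Purely as an illustration of deriving simple quantitative parameters from such a model, the sketch below approximates the perimeter and maximum diameter of a saddle-shaped annulus from sampled 3D points; the point data and function names are invented and do not reflect the actual 4D MV-Assessment algorithms.

    ```python
    import numpy as np

    # Invented example: 3D points (mm) sampled in order around a saddle-shaped
    # mitral annulus. Real models are fitted to the 4D dataset, not generated
    # from a parametric curve like this.
    t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    annulus = np.column_stack([
        19.0 * np.cos(t),       # x (mm)
        15.0 * np.sin(t),       # y (mm)
        3.0 * np.cos(2.0 * t),  # z (mm): out-of-plane saddle bend
    ])

    def ring_perimeter_mm(points):
        """Sum of segment lengths around a closed 3D contour."""
        closed = np.vstack([points, points[:1]])
        return float(np.linalg.norm(np.diff(closed, axis=0), axis=1).sum())

    def max_diameter_mm(points):
        """Largest pairwise point distance, a simple diameter-style parameter."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        return float(d.max())

    print(f"annulus perimeter ~ {ring_perimeter_mm(annulus):.1f} mm")
    print(f"max annulus diameter ~ {max_diameter_mm(annulus):.1f} mm")
    ```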

    4D MV-Assessment improves the presentation of anatomy and findings to surgeons and cardiologists and visualizes the complex morphology and dynamics of the mitral valve.

    The Generic CAP Interface is used to connect clinical application packages (=CAPs) to platforms to exchange digital medical data.

    AI/ML Overview

    The provided text from the 510(k) summary for the "4D MV-Assessment 2.0" device does not contain detailed information regarding specific acceptance criteria, study methodologies, or performance results in the way requested by the prompt.

    The document states:

    • "Software testing and validation were done at the module and system level according to written test protocols established before testing, was conducted. Test results were reviewed by designated technical professionals before software proceeded to release."
    • "The overall product concept was clinically accepted and the clinical test results support the conclusion that the subject device is as safe as effective, and performs as well as the predicate devices."
    • "Test results support the conclusion, that the subject device is as safe as effective, and performs as well as or better than the predicate devices."

    However, it explicitly refers to "Chapter 16: Software, Verification and Validation Documentation" for details, which is not included in the provided text. Therefore, I cannot extract the specific information requested in the table and detailed study aspects.

    Based only on the provided text, I can state that:

    1. Acceptance Criteria and Reported Device Performance: This information is not provided in the given summary. The document generally states that the device was found to be "as safe as effective, and performs as well as or better than the predicate devices" based on non-clinical and clinical performance data, but specific metrics or criteria are not detailed.

    2. Sample size for the test set and data provenance: Not mentioned.

    3. Number of experts used to establish ground truth and qualifications: Not mentioned.

    4. Adjudication method: Not mentioned.

    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study: Not mentioned. The document focuses on the device's standalone capabilities and its comparison to predicate devices, but no human-in-the-loop study with effect sizes is described.

    6. Standalone (algorithm-only) performance: The device is described as "software for the retrieval, reconstruction, rendering and analysis of digitized ultrasound B-mode images and Color Doppler images" and performs "3- and 4-dimensional morphological and functional analysis of mitral valves." This implies a standalone algorithmic function, but specific performance metrics for this standalone mode are not provided in the summary.

    7. Type of ground truth used: Not explicitly stated. The document refers to "clinical test results" and "morphological and functional assessment," which implies comparison against clinical observations or established diagnostic criteria, but the precise nature of the ground truth (e.g., expert consensus, pathology, outcome data) is not specified.

    8. Sample size for the training set: Not mentioned.

    9. How the ground truth for the training set was established: Not mentioned.

    K Number
    K090461
    Date Cleared
    2009-05-22

    (88 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TOMTEC IMAGING SYSTEMS, GMBH

    Intended Use

    The Image-Arena Platform Software is intended to serve as a data management platform for clinical application packages. It provides information that is used for clinical diagnosis purposes.

    The software is suited for stand-alone workstations as well as for networked multisystem installations and therefore is an image management system for research and routine use in both physician practices and hospitals. It is intended as a general purpose digital medical image processing tool for cardiology.

    As the Image-Arena Applications software tool package has a modular structure, clinical application packages with different indications for use can be connected.

    Echo-Com software is intended to serve as a versatile solution for Stress Echo examinations in patients who may not be receiving enough blood or oxygen because of blocked arteries.

    Image-Com software is intended for reviewing, measuring and reporting of DICOM data of the cardiac modalities US and XA. It can be driven by Image-Arena or other third party platforms and is intended to launch other clinical applications.

    The clinical application package 2D Cardiac Performance Analysis is indicated for cardiac quantification based on echocardiographic data. It provides measurements of myocardial function (displacement, velocity and strain) that is used for clinical diagnosis purposes of patients with suspected heart disease.

    Device Description

    The hardware requirements are based on an Intel Pentium high performance computer system and Microsoft® Windows XP Professional™ or Microsoft® Vista™ Operating System standards.

    Image-Arena is suited for use as a stand-alone workstation as well as in networked multisystem installations. Image-Arena is developed as a common interface platform for TomTec and 3rd party clinical application packages that can be connected to Image-Arena through the 3rd party Interface. The different application packages all have access to the central database and can be enabled on a modular basis, thus allowing custom-tailored solutions of Image-Arena.

    The Image-Arena Application is a software tool package designed for analysis, documentation and archiving of ultrasound studies in multiple dimensions and X-ray angiography studies.

    The Image-Arena Application software tools have a modular structure and consist of different software modules. The different modules can be combined according to user demand to fulfil the requirements of a clinical researcher or a routine-oriented physician.

    The Image-Arena Application offers features to import different digital 2D, 3D and 4D (dynamic 3D) image formats based on defined file format standards (DICOM-, HPSONOS-, GE-, TomTec-file formats) in one system, thus making image analysis independent of the ultrasound device or other imaging devices used.
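
    As a hedged illustration of the kind of header inspection involved in importing DICOM data (not the Image-Arena importer itself), the sketch below uses the pydicom library; the file path is a placeholder, and field availability varies by vendor, modality and imaging mode.

    ```python
    import pydicom

    # Illustrative only: inspect a single DICOM file before importing it.
    # The path is a placeholder; header fields vary by vendor, modality and mode.
    ds = pydicom.dcmread("example_ultrasound_file.dcm")

    print("Modality:", ds.get("Modality"))                     # e.g. 'US' or 'XA'
    print("SOP Class UID:", ds.get("SOPClassUID"))
    print("Rows x Columns:", ds.get("Rows"), "x", ds.get("Columns"))
    print("Number of frames:", ds.get("NumberOfFrames", 1))    # multi-frame loops
    print("Pixel spacing:", ds.get("PixelSpacing"))            # often absent on US

    pixels = ds.pixel_array  # decoded image data (needs numpy and a pixel handler)
    print("Pixel array shape:", pixels.shape)
    ```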

    Offline measurements, documentation in standard report forms, the possibility to implement user-defined report templates and instant access to the stored data through digital archiving make it a flexible tool for image analysis and storage of data from different imaging modalities, including 2D, M-Mode, Pulsed Wave (PW) Doppler Mode, Continuous Wave (CW) Doppler Mode, Power Amplitude Doppler Mode, Color Doppler Mode, Doppler Tissue Imaging and 3D/4D imaging modes.

    2D Cardiac Performance Analysis 1.0 is an additional clinical application package for high performance PC platforms based on Microsoft Windows™ operating system standards. 2D Cardiac Performance Analysis 1.0 is software for the analysis, storage and retrieval of digitized ultrasound B-mode images. The data can be acquired by ultrasound machines that are able to acquire and store 2D ultrasound datasets. The digital 2D data can be used for comprehensive assessment of myocardial function.

    2D Cardiac Performance Analysis 1.0 is designed to run with a TomTec Data Management Platform for offline analysis. The TomTec Data Management Platform enhances the workflow by providing the database, import, export and other advanced high-level research functionalities. All analyzed data and images will be transferred to the platform for reporting and statistical quantification purposes.

    2D Cardiac Performance Analysis 1.0 is designed for the 2-dimensional functional analysis of myocardial function. Based on two-dimensional datasets, a speckle-tracking algorithm supports the calculation of a 2D contour model that represents the endocardial and epicardial borders. From these contours, the corresponding velocities, displacement and strain can be derived.
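
    The summary does not disclose the speckle-tracking implementation. Purely as a hedged sketch of how displacement, velocity and a Lagrangian-style strain can be derived once a contour has been tracked across frames, the following example uses invented tracked coordinates and frame timing; it is not the 2D Cardiac Performance Analysis algorithm.

    ```python
    import numpy as np

    # Invented example: an endocardial contour of N points tracked over T frames,
    # as if produced by a speckle-tracking step (which is not shown here).
    T, N = 5, 48
    rng = np.random.default_rng(0)
    arc = np.linspace(0.0, np.pi, N)
    base = np.column_stack([30.0 * np.cos(arc), 50.0 * np.sin(arc)])   # (N, 2), mm
    contours = np.stack([base * (1.0 - 0.03 * t) + rng.normal(0.0, 0.05, base.shape)
                         for t in range(T)])                           # (T, N, 2)
    frame_dt_s = 0.02                                                  # 50 fps

    def contour_length_mm(contour):
        """Total length of an open 2D contour (mm)."""
        return float(np.linalg.norm(np.diff(contour, axis=0), axis=1).sum())

    displacement = np.linalg.norm(contours - contours[0], axis=-1)     # (T, N), mm
    velocity = np.gradient(contours, frame_dt_s, axis=0)               # (T, N, 2), mm/s
    L0 = contour_length_mm(contours[0])
    strain = np.array([(contour_length_mm(c) - L0) / L0 for c in contours])

    print("peak point displacement (mm):", float(displacement.max()))
    print("contour (Lagrangian) strain per frame (%):", (100.0 * strain).round(2))
    ```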

    The 2D Cardiac Performance Analysis 1.0 application is a visual and quantitative method for assessing cardiac mechanics and the dynamics of cardiac motion.

    AI/ML Overview

    The document lacks specific acceptance criteria (performance metrics with thresholds) and detailed study results to confirm the device meets these criteria. The approval is based on substantial equivalence to predicate devices, not on a detailed comparative effectiveness study with specific performance outcomes.

    Here's an attempt to answer the questions based on the provided text, while highlighting the missing information:

    1. A table of acceptance criteria and the reported device performance

    No explicit acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) or detailed performance metrics are provided in the document. The submission states that "Test results support the conclusion, that the subject device is as safe as effective, and performs as well as or better than the predicate devices," but no specific data is given.

    Acceptance Criteria             Reported Device Performance
    Not specified                   Not specified

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document mentions "clinical test results" but does not provide any details regarding the sample size, data provenance, or whether the study was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document mentions that "The overall product concept was clinically accepted and the clinical test results support the conclusion that the subject device is as safe as effective, and performs as well as the predicate devices." However, it does not describe a multi-reader, multi-case (MRMC) comparative effectiveness study, nor does it provide an effect size for human reader improvement with or without AI assistance. The focus seems to be on demonstrating equivalence to predicate devices rather than measuring improvement.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    The document describes the "2D Cardiac Performance Analysis 1.0" software as calculating parameters based on a "manual drawn contour" by a prerequisite user, suggesting a human-in-the-loop component for defining initial contours. Therefore, a purely standalone algorithm performance without human interaction is not explicitly described or implied.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The document does not specify the type of ground truth used for evaluating the device's performance.

    8. The sample size for the training set

    The document does not provide any information about a training set or its sample size.

    9. How the ground truth for the training set was established

    The document does not provide any information about a training set or how its ground truth was established.
