K Number
K132544
Date Cleared
2013-11-25

(104 days)

Product Code
Regulation Number
892.2050
Panel
RA
Intended Use

Indications for use of TomTec-Arena software are diagnostic review, quantification and reporting of cardiovascular, fetal and abdominal structures and function of patients with suspected disease.

Device Description

TomTec-Arena is a clinical software package for reviewing, quantifying and reporting digital medical data. TomTec-Arena runs on high-performance PC platforms based on Microsoft Windows operating system standards. The software is compatible with different TomTec Image-Arena™ platforms, their derivatives, or third-party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images are transferred to the platform for archiving, reporting and statistical quantification purposes. TomTec-Arena consists of the following optional clinical application packages: Image-Com, 4D LV-Analysis/Function, 4D RV-Function, 4D Cardio-View, 4D MV-Assessment, Echo-Com, 2D Cardiac-Performance Analysis, 2D Cardiac-Performance Analysis MR, 4D Sono-Scan.
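
For context only, the cleared functionality centers on quantifying structures from digital medical images. Below is a minimal, hypothetical sketch (not taken from the submission and not TomTec's method) of how a simple area measurement could be derived from a single-frame DICOM image using the open-source pydicom library; the file name, threshold, and measurement are illustrative assumptions.

```python
# Illustrative sketch only -- not TomTec code. Assumes pydicom and NumPy are
# installed and that "example_frame.dcm" is a hypothetical single-frame image
# that contains the PixelSpacing attribute.
import numpy as np
import pydicom

ds = pydicom.dcmread("example_frame.dcm")      # hypothetical file path
pixels = ds.pixel_array.astype(np.float32)     # pixel data as a NumPy array

# Pixel spacing (mm per pixel) converts a pixel count into physical area.
row_mm, col_mm = (float(v) for v in ds.PixelSpacing)

# Crude example "quantification": area of pixels above an arbitrary threshold.
threshold = pixels.mean() + pixels.std()       # illustrative assumption
mask = pixels > threshold
area_mm2 = mask.sum() * row_mm * col_mm

print(f"Segmented area: {area_mm2:.1f} mm^2")
```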

AI/ML Overview

The provided document does not contain detailed acceptance criteria or a study proving the device meets specific performance criteria. Instead, it is a 510(k) summary for a software package, TomTec-Arena 1.0, and focuses on demonstrating substantial equivalence to predicate devices.

Here's a breakdown of what is and is not available in the provided text for each of the requested items:

1. A table of acceptance criteria and the reported device performance

  • Not available. The document states that "Testing was performed according to internal company procedures. Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted." However, it does not provide the specific acceptance criteria or the quantitative reported device performance against those criteria. It only provides a high-level summary of tests passed.

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Not available. The document explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." Therefore, there is no test set in the sense of a clinical performance study with patient data. The "tests" mentioned are software validation and verification.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

  • Not applicable. As indicated above, no clinical test set with patient data was used for substantial equivalence determination. Ground truth establishment by experts for clinical performance is not mentioned.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Not applicable. No clinical test set.

5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

  • Not applicable. No MRMC study was conducted or mentioned. The device is a software package for review, quantification, and reporting, and its substantial equivalence was not based on clinical performance data demonstrating impact on human readers.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Not explicitly detailed as a standalone performance study in the context of clinical accuracy. The document confirms that "measurement verification is completed without deviations" as part of non-clinical performance testing. This suggests that the algorithm's measurements were verified, but the specifics of this verification (e.g., what measurements, against what standard, sample size, etc.) are not provided. This reflects software verification, not a clinical standalone performance study.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Not applicable for clinical ground truth. For the non-clinical "measurement verification," the ground truth would likely be a known or calculated value for the data being measured, but the specific type of ground truth against which software measurements were verified is not described.

8. The sample size for the training set

  • Not applicable. The document describes "TomTec-Arena" as a clinical software package for reviewing, quantifying, and reporting existing digital medical data. It is not an AI/ML device that requires a training set in the typical sense for learning patterns. Its functionality is based on established algorithms for image analysis and quantification.

9. How the ground truth for the training set was established

  • Not applicable. No training set for an AI/ML model is mentioned.

Summary of available information regarding performance:

The document states that:

  • "Testing was performed according to internal company procedures."
  • "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted."
  • "Test results were reviewed by designated technical professionals before software proceeded to release."
  • "All requirements have been verified by tests or other appropriate methods."
  • "The incorporated OTS Software is considered validated either by particular tests or implied by the absence of OTS SW related abnormalities during all other V&V activities."
  • The summary conclusions indicate:
    • "all automated tests were reviewed and passed"
    • "feature complete test completed without deviations"
    • "functional tests are completed"
    • "measurement verification is completed without deviations"
    • "multilanguage tests are completed without deviations"
  • "Substantial equivalence determination of this subject device was not based on clinical data or studies."
  • A "clinical evaluation following the literature route based on the assessment of benefits, associated with the use of the device, was performed." This literature review supported the conclusion that the device is "as safe as effective, and performs as well as or better than the predicate devices."

In essence, TomTec-Arena 1.0 was cleared based on non-clinical software verification and validation, comparison to predicate devices, and a literature review, rather than a prospective clinical performance study with explicit acceptance criteria for diagnostic accuracy.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).