Indications for use of the TOMTEC-ARENA software are the quantification and reporting of cardiovascular, fetal, and abdominal structures and function in patients with suspected disease, to support physicians in diagnosis.
TOMTEC-ARENA (TTA2) is a clinical software package for reviewing, quantifying, and reporting digital medical data. The software is compatible with different IMAGE-ARENA platforms and third-party platforms.
The platforms enhance the workflow by providing database, import, export, and other services. All analyzed data and images are transferred to the platform for archiving, reporting, and statistical quantification.
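The summary describes this platform hand-off only at a high level. As a minimal, purely illustrative sketch, the Python below mocks the pattern described: an analysis module returning its results to a platform service for archiving and later reporting. The names (AnalysisResult, PlatformArchive) and values are hypothetical assumptions and do not come from TOMTEC's actual API.

```python
from dataclasses import dataclass

@dataclass
class AnalysisResult:
    study_uid: str        # DICOM StudyInstanceUID of the source exam (illustrative value below)
    measurements: dict    # e.g. {"LV_EF_pct": 58.0}
    report_text: str = ""

class PlatformArchive:
    """Stand-in for the platform's database/import/export services."""
    def __init__(self):
        self._store = {}

    def archive(self, result: AnalysisResult) -> None:
        # Persist the analyzed data for later reporting and statistical quantification.
        self._store[result.study_uid] = result

    def query(self, study_uid: str) -> AnalysisResult:
        return self._store[study_uid]

platform = PlatformArchive()
platform.archive(AnalysisResult("1.2.3.4.5.6", {"LV_EF_pct": 58.0}, "EF within normal range"))
print(platform.query("1.2.3.4.5.6").measurements)
```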
TTA2 consists of the following optional modules:
- TOMTEC-ARENA SERVER & CLIENT
- IMAGE-COM/ECHO-COM
- REPORTING
- AutoStrain (LV, LA, RV)
- 2D CARDIAC-PERFORMANCE ANALYSIS (Adult and Fetal)
- 4D LV-ANALYSIS
- 4D RV-FUNCTION
- 4D CARDIO-VIEW
- 4D MV-ASSESSMENT
- 4D SONO-SCAN
The provided text is a 510(k) summary for the TOMTEC-ARENA software. It details the device's substantial equivalence to predicate devices and outlines non-clinical performance data. However, it explicitly states "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices."
Therefore, I cannot provide information from the given text on acceptance criteria or on a study proving that the device meets such criteria, as no clinical study was performed.
Here's a breakdown of what can be extracted or inferred based on the document's content:
1. A table of acceptance criteria and the reported device performance:
Not applicable, as no clinical performance data or acceptance criteria for clinical performance are reported in this document. The document states that the device was tested to meet design and performance requirements through non-clinical methods.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
Not applicable, as no clinical test set was used. Non-clinical software verification was performed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
Not applicable, as no clinical test set requiring expert ground truth was mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not applicable, as no clinical test set requiring adjudication was mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC comparative effectiveness study was done, as explicitly stated: "No clinical testing conducted in support of substantial equivalence". The device is a "Picture archiving and communications system" with advanced analysis tools; the document does not indicate that it is an AI-assisted diagnostic tool of the kind that would typically undergo such a study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
While the document describes various "Auto" modules (AutoStrain, Auto LV, Auto LA) which imply algorithmic processing, it does not detail standalone performance studies for these algorithms. The context is generally about reviewing, quantifying, and reporting digital medical data to support physicians, not to replace interpretation. The comparison tables highlight that for certain features (e.g., 4D RV-Function, 4D MV-Assessment), the subject device uses machine learning algorithms for 3D surface model creation, with the user able to edit, accept, or reject the contours/landmarks. This indicates a human-in-the-loop design rather than a standalone algorithm for final diagnosis.
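To make that distinction concrete, here is a minimal, hypothetical Python sketch of the human-in-the-loop pattern the summary describes: an algorithm proposes a contour and the user may accept, edit, or reject it before it contributes to any result. All names (ProposedContour, review_contour, manual_edit) are illustrative assumptions, not part of the TOMTEC software.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class ProposedContour:
    points: List[Tuple[float, float, float]]   # algorithm-generated 3D surface points
    accepted: bool = False

def manual_edit(points):
    # Placeholder for interactive contour/landmark editing in a real UI.
    return points

def review_contour(contour: ProposedContour,
                   user_decision: Callable[[ProposedContour], str]) -> Optional[ProposedContour]:
    """Route an algorithm proposal through the user: 'accept', 'edit', or 'reject'."""
    decision = user_decision(contour)
    if decision == "reject":
        return None                             # discarded; no automated final result
    if decision == "edit":
        contour.points = manual_edit(contour.points)
    contour.accepted = True
    return contour

# Example: the clinician accepts the proposed contour unchanged.
result = review_contour(ProposedContour([(0.0, 0.0, 0.0)]), lambda c: "accept")
print(result.accepted)  # True
```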
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
Not applicable for clinical ground truth, as no clinical studies were performed. For the software verification, the "ground truth" would be the predefined design and performance requirements.
8. The sample size for the training set:
Not applicable for clinical training data, as no clinical studies were performed. While some modules utilize "machine learning algorithms" (e.g., for 3D surface model creation), the document does not disclose the training set size or its characteristics.
9. How the ground truth for the training set was established:
Not applicable for clinical training data. The document mentions machine learning algorithms are used (e.g., in 4D RV-FUNCTION and 4D MV-ASSESSMENT for creating 3D surface models), but it does not describe how the training data for these algorithms, or their ground truth, was established.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).