Search Results
Found 4 results
510(k) Data Aggregation
(59 days)
Indications for use of TOMTEC-ARENA software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physician in the diagnosis.
TOMTEC-ARENA is a clinical software package for reviewing, quantifying and reporting digital medical data. The software can be integrated into third party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.
TTA2 consists of the following optional modules:
- IMAGE-COM
- REPORTING
- AutoStrain LV / SAX / RV / LA
- 2D CPA
- FETAL 2D CPA
- 4D LV-ANALYSIS
- 4D RV-FUNCTION
- 4D CARDIO-VIEW
- 4D MV-ASSESSMENT
- 4D SONO-SCAN
- TOMTEC DATACENTER (incl. STUDY LIST, DATA MAINTENANCE, WEB REVIEW)
The purpose of this traditional 510(k) pre-market notification is to introduce semi-automated cardiac measurements based on an artificial intelligence and machine learning (AI/ML) algorithm. The AI/ML algorithm is a Convolutional Neural Network (CNN) developed using a Supervised Learning approach. It enables TOMTEC-ARENA to produce semi-automated, editable echocardiographic measurements on BMODE and DOPPLER datasets.

The algorithm was developed using a controlled internal process that defines activities from the inspection of input data through the training and deployment of the algorithm. The training process begins with the model observing, and optimizing its parameters against, the training pool data; the model's predictions and performance are then evaluated against the test pool, which is set aside at the beginning of the project. During training, the AI/ML algorithm learned to predict measurements by being presented with a large number of echocardiographic measurements manually generated by qualified healthcare professionals. The echocardiographic studies were randomly assigned either to training (approx. 2,800 studies) or to testing (approx. 500 studies).

A semi-automated measurement consists of a cascade of detection steps, starting with a rough geometric estimate that is progressively refined: the user selects a frame in TOMTEC-ARENA on which the semi-automated measurements are to be performed. Image and metadata (e.g. pixel spacing) are transferred to the semi-automated measurement detector, which predicts the positions of the start and end calipers in the pixel coordinate system. These coordinates are transferred back to the CalcEngine, which converts them into real-world coordinates (e.g. mm) and creates the graphical overlay. The superimposed line can be edited by the user, who can edit, accept, or reject the measurement(s).
This feature does not introduce any new measurements, but allows the end user to perform semi-automated measurements. The end user can also still perform manual measurements and it is not mandatory to use the semi-automated measurements. The semi-automated measurements are licensed separately.
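The caliper-to-measurement step described above (pixel coordinates in, real-world length out via the pixel spacing metadata) can be sketched as follows. This is only an illustration of the conversion principle, not TOMTEC's implementation; the function name and the `(row, column)` coordinate convention are assumptions.

```python
import math

def caliper_length_mm(start_px, end_px, pixel_spacing_mm):
    """Convert a predicted start/end caliper pair (pixel coordinates,
    as (row, col)) into a real-world length in mm, using the
    DICOM-style pixel spacing (row_spacing_mm, col_spacing_mm)
    carried in the image metadata."""
    dy_mm = (end_px[0] - start_px[0]) * pixel_spacing_mm[0]
    dx_mm = (end_px[1] - start_px[1]) * pixel_spacing_mm[1]
    return math.hypot(dx_mm, dy_mm)

# A predicted caliper pair 100 px apart vertically at 0.3 mm/px:
length = caliper_length_mm((50, 120), (150, 120), (0.3, 0.3))
# → 30.0 mm
```

In the workflow described above, the resulting line is drawn as an editable overlay; any user edit of the calipers would simply re-run this conversion.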
Here's an analysis of the acceptance criteria and study details for the TOMTEC-ARENA device, based on the provided FDA 510(k) summary:
The 510(k) summary describes the TOMTEC-ARENA software, which introduces semi-automated cardiac measurements based on an AI/ML algorithm. The primary focus of the non-clinical performance data is on software verification, risk analysis, and usability evaluation, as no clinical testing was conducted.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria for the AI/ML algorithm's performance in terms of the accuracy or precision of the semi-automated measurements. Instead, it states that "Completion of all verification activities demonstrated that the subject device meets all design and performance requirements" and that "Testing performed demonstrated that the proposed TOMTEC-ARENA (TTA2.50) meets defined requirements and performance claims." These are general statements rather than specific, measurable performance metrics.
Similarly, there are no reported quantitative device performance metrics (e.g., accuracy, sensitivity, specificity, or error rates) for the AI/ML algorithm's measurements mentioned in this summary. The summary focuses on the functional equivalence and safety of the AI-powered feature compared to existing manual measurements and predicate devices.
However, the document does imply a core "acceptance criterion":
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Functional Equivalence/Accuracy: The semi-automated measurements (BMODE and DOPPLER) should provide measurement suggestions that are comparable in principle/technology to those included in the reference device and can be edited, accepted, or rejected by the user. | "Support of additional semi-automated measurements compared to reference device. Additional measurements rely on same principle/technology (e.g. line detection, single-point) as those included in reference device." "The measurement suggestion can be edited. Manual measurements as with TTA2.40.00 are still possible." |
| Safety and Effectiveness: The introduction of semi-automated measurements should not adversely affect the safety and effectiveness of the device. | "No impact to the safety or effectiveness of the device." "Verification activities performed confirmed that the differences in the design did not adversely affect the safety and effectiveness of the subject device." |
| Usability: The device is safe and effective for intended users, uses, and environments. | "TOMTEC-ARENA has been found to be safe and effective for the intended users, uses, and use environments." |
| Compliance: Adherence to relevant standards (IEC 62304, IEC 62366-1) and internal processes. | "Software verification was performed according to the standard IEC 62304..." "A Summative Usability Evaluation was performed... according to the standard IEC 62366-1..." "The proposed modifications were tested in accordance with TOMTEC's internal processes." |
Without specific quantitative metrics for the AI's measurement accuracy, it's challenging to provide a detailed performance table. The provided information focuses on the design validation process rather than specific benchmark results for the AI's performance.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Approximately 500 studies.
- Data Provenance: The document does not specify the country of origin of the data. It states, "The echocardiographic studies were randomly assigned to be either used for training (approx. 2,800 studies) or testing (approx. 500 studies)." It does not explicitly state if the data was retrospective or prospective. Given that these are "studies" used for training and testing an algorithm, it is highly probable that they are retrospective data sets, collected prior to the algorithm's deployment.
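The summary's random assignment of whole studies into training (~2,800) and held-out testing (~500) pools can be sketched as a study-level split. This is a generic illustration of the stated design, not TOMTEC's actual procedure; the function name, pool sizes, and seed are assumptions.

```python
import random

def split_studies(study_ids, n_test=500, seed=42):
    """Randomly assign whole studies to a held-out test pool (set
    aside at the start of the project) and a training pool.
    Splitting at the study level, rather than the image level,
    prevents frames from one study leaking into both pools."""
    rng = random.Random(seed)
    ids = list(study_ids)
    rng.shuffle(ids)
    return ids[n_test:], ids[:n_test]  # (train_pool, test_pool)

train_pool, test_pool = split_studies(range(3300), n_test=500)
# len(train_pool) == 2800, len(test_pool) == 500, pools disjoint
```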
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified. The document states "a large number of echocardiographic data manually generated by qualified healthcare professionals." This implies multiple professionals but does not quantify them.
- Qualifications of Experts: "qualified healthcare professionals." Specific qualifications (e.g., radiologist with X years of experience, sonographer, cardiologist) are not provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. The ground truth was "manually generated by qualified healthcare professionals," but the process for resolving discrepancies among multiple professionals (if multiple were involved per case) is not described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The summary explicitly states: "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices." The nature of the AI algorithm as providing semi-automated, editable measurements, rather than a diagnostic output, likely informed this decision. The user is always in the loop and can accept, edit, or reject the AI's suggestions.
- Effect size of human readers improvement with AI vs. without AI assistance: Not applicable, as no MRMC study was performed.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? Not explicitly detailed in terms of quantitative performance metrics. While the algorithm "predicts the position of start and end caliper in the pixel coordinate system" and this prediction is mentioned as being evaluated against the test pool, the results are not presented as a standalone performance metric. The nature of the device, where the user can "edit, accept, or reject the measurement(s)", strongly implies that standalone performance is not the primary focus for regulatory purposes, as it is always intended to be used with human oversight. The comparison is generally with the predicate device's manual measurement workflow and a reference device's semi-automated features.
7. Type of Ground Truth Used
- Type of Ground Truth: "manually generated by qualified healthcare professionals." This suggests expert consensus or expert-derived measurements serving as the reference standard for the algorithm's training and testing.
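With expert-derived measurements serving as the reference standard, agreement between the algorithm's predicted measurements and the experts' values could be quantified with simple error statistics. The document reports no such metrics; this sketch only illustrates what such an evaluation might compute, and all names and values are hypothetical.

```python
def agreement_stats(predicted_mm, expert_mm):
    """Bias (mean signed error) and mean absolute error between
    algorithm measurements and expert reference measurements,
    both given in mm."""
    errors = [p - e for p, e in zip(predicted_mm, expert_mm)]
    n = len(errors)
    bias = sum(errors) / n
    mae = sum(abs(d) for d in errors) / n
    return bias, mae

# Hypothetical values for three measurements:
bias, mae = agreement_stats([42.0, 51.5, 38.0], [41.0, 52.0, 39.0])
# bias ≈ -0.167 mm, mae ≈ 0.833 mm
```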
8. Sample Size for the Training Set
- Sample Size for Training Set: Approximately 2,800 studies.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth Was Established: "The Al/ML algorithm learned to predict measurements by being presented with a large number of echocardiographic data manually generated by qualified healthcare professionals." This indicates that human experts manually performed the measurements on the training data, and these manual measurements served as the ground truth for the supervised learning model.
(59 days)
Indications for use of TOMTEC-ARENA software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physicians in the diagnosis.
TOMTEC-ARENA (TTA2) is a clinical software package for reviewing, quantifying and reporting digital medical data. The software is compatible with different IMAGE-ARENA platforms and third party platforms.
Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.
TTA2 consists of the following optional modules:
- TOMTEC-ARENA SERVER & CLIENT
- IMAGE-COM/ECHO-COM
- REPORTING
- AutoStrain (LV, LA, RV)
- 2D CARDIAC-PERFORMANCE ANALYSIS (Adult and Fetal)
- 4D LV-ANALYSIS
- 4D RV-FUNCTION
- 4D CARDIO-VIEW
- 4D MV-ASSESSMENT
- 4D SONO-SCAN
The provided text is a 510(k) summary for the TOMTEC-ARENA software. It details the device's substantial equivalence to predicate devices and outlines non-clinical performance data. However, it explicitly states "No clinical testing conducted in support of substantial equivalence when compared to the predicate devices."
Therefore, I cannot provide information on acceptance criteria or a study that proves the device meets those criteria from the given text as no clinical study was performed.
Here's a breakdown of what can be extracted or inferred based on the document's content:
1. A table of acceptance criteria and the reported device performance:
Not applicable, as no clinical performance data or acceptance criteria for clinical performance are reported in this document. The document states that the device was tested to meet design and performance requirements through non-clinical methods.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
Not applicable, as no clinical test set was used. Non-clinical software verification was performed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
Not applicable, as no clinical test set requiring expert ground truth was mentioned.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not applicable, as no clinical test set requiring adjudication was mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC comparative effectiveness study was done, as explicitly stated: "No clinical testing conducted in support of substantial equivalence". The device is a "Picture archiving and communications system" and advanced analysis tools; the document does not indicate it's an AI-assisted diagnostic tool that would typically undergo such a study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
While the document describes various "Auto" modules (AutoStrain, Auto LV, Auto LA) which imply algorithmic processing, it does not detail standalone performance studies for these algorithms. The context is generally about reviewing, quantifying, and reporting digital medical data to support physicians, not to replace interpretation. The comparison tables highlight that for certain features (e.g., 4D RV-Function, 4D MV-Assessment), the subject device uses machine learning algorithms for 3D surface model creation, with the user able to edit, accept, or reject the contours/landmarks. This indicates a human-in-the-loop design rather than a standalone algorithm for final diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
Not applicable for clinical ground truth, as no clinical studies were performed. For the software verification, the "ground truth" would be the predefined design and performance requirements.
8. The sample size for the training set:
Not applicable for clinical training data, as no clinical studies were performed. While some modules utilize "machine learning algorithms" (e.g., for 3D surface model creation), the document does not disclose the training set size or its characteristics.
9. How the ground truth for the training set was established:
Not applicable for clinical training data. The document mentions machine learning algorithms are used (e.g., in 4D RV-FUNCTION and 4D MV-ASSESSMENT for creating 3D surface models), but it does not describe how the training data for these algorithms, or their ground truth, was established.
(24 days)
Indications for use of TomTec-Arena software are quantification and reporting of cardiovascular, fetal, and abdominal structures and function of patients with suspected disease to support the physicians in the diagnosis.
TomTec-Arena™ is a clinical software package for reviewing, quantifying and reporting digital medical data. The software is compatible with different TomTec Image-Arena™ platforms and TomTec-Arena Server®, their derivatives or third party platforms.
Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes.
TomTec-Arena™ TTA2 consists of the following optional modules:
- Image-Com
- 4D LV-Analysis and 4D LV-Function
- 4D RV-Function
- 4D Cardio-View
- 4D MV-Assessment
- Echo-Com
- 2D Cardiac-Performance Analysis
- 2D Cardiac-Performance Analysis MR
- 4D Sono-Scan
- Reporting
- Worksheet
- TomTec-Arena Client
The provided text does not contain detailed acceptance criteria or a study that explicitly proves the device meets those criteria. Instead, it describes a substantial equivalence submission for the TomTec Arena TTA2, a picture archiving and communications system.
The document focuses on demonstrating that the new device is substantially equivalent to previously marketed predicate devices (TomTec-Arena 1.0 and Image-Arena 4.5). It outlines changes made to the device, primarily bug fixes, operability enhancements, and feature changes (repackaging or new appearance of existing technology).
It explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." This means that a detailed clinical performance study with defined acceptance criteria for the device's diagnostic performance was not conducted as part of this submission for determining substantial equivalence.
While non-clinical performance data (software testing and validation) was performed according to internal company procedures, the acceptance criteria for this testing are not explicitly stated in a quantifiable manner within the provided text, beyond "expected results and acceptance (pass/fail) criteria have been defined in all test protocols."
Therefore, most of the requested information regarding acceptance criteria, study details, sample sizes, expert qualifications, and ground truth establishment cannot be extracted from the provided text.
Here is a summary of what can be extracted:
- A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated in quantifiable terms for the device's diagnostic performance. The document mentions "expected results and acceptance (pass/fail) criteria have been defined in all test protocols" for internal software testing.
- Reported Device Performance:
- All automated tests were reviewed and passed.
- Feature complete test completed without deviations.
- Functional tests are completed.
- Measurement verification is completed without deviations.
- All non-verified bugs have been evaluated and are rated as minor deviations and deferred to the next release.
- The overall product concept was clinically accepted, supporting the conclusion that the device is as safe and as effective as, and performs as well as or better than, the predicate device.
- The Risk-Benefit Assessment concludes that the benefit is superior to the risk, and the risk is low.
- The data are sufficient to demonstrate compliance with essential requirements covering safety and performance.
- The claims made in the device labeling are substantiated by clinical data (via literature review).
- Sample size used for the test set and the data provenance: Not applicable, as no clinical study with a test set was detailed. Non-clinical software testing involved various test cases, but the number of test cases and their provenance are not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no clinical study with a test set requiring expert ground truth was detailed.
- Adjudication method for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. The device is a "Picture archiving and communications system" and "Image Review and Quantification Software," not explicitly an AI-assisted diagnostic device, and no MRMC study was mentioned.
- If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done: Not applicable. The focus is on software functionality and equivalence to predicate devices, not AI algorithm performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): For non-clinical software testing, the "ground truth" would be the expected output of the software functions based on established specifications and requirements. The "clinical acceptance" mentioned refers to a literature review, implying that published clinical data served as the basis for concluding safety and effectiveness relative to predicate devices and general medical standards.
- The sample size for the training set: Not applicable, as this is not an AI/ML device with a training set.
- How the ground truth for the training set was established: Not applicable.
(104 days)
Indications for use of TomTec-Arena software are diagnostic review, quantification and reporting of cardiovascular, fetal and abdominal structures and function of patients with suspected disease.
TomTec-Arena is a clinical software package for reviewing, quantifying and reporting digital medical data. TomTec-Arena runs on high performance PC platforms based on Microsoft Windows operating system standards. The software is compatible with different TomTec Image-Arena™ platforms, their derivatives or third party platforms. Platforms enhance the workflow by providing the database, import, export and other services. All analyzed data and images will be transferred to the platform for archiving, reporting and statistical quantification purposes. TomTec-Arena consists of the following optional clinical application packages: Image-Com, 4D LV-Analysis/Function, 4D RV-Function, 4D Cardio-View, 4D MV-Assessment, Echo-Com, 2D Cardiac-Performance Analysis, 2D Cardiac-Performance Analysis MR, 4D Sono-Scan.
The provided document does not contain detailed acceptance criteria or a study proving the device meets specific performance criteria. Instead, it is a 510(k) summary for a software package, TomTec-Arena 1.0, and focuses on demonstrating substantial equivalence to predicate devices.
Here's a breakdown of what is and is not available in the provided text, in response to your requested information:
1. A table of acceptance criteria and the reported device performance
- Not available. The document states that "Testing was performed according to internal company procedures. Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted." However, it does not provide the specific acceptance criteria or the quantitative reported device performance against those criteria. It only provides a high-level summary of tests passed.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not available. The document explicitly states: "Substantial equivalence determination of this subject device was not based on clinical data or studies." Therefore, there is no test set in the sense of a clinical performance study with patient data. The "tests" mentioned are software validation and verification.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. As indicated above, no clinical test set with patient data was used for substantial equivalence determination. Ground truth establishment by experts for clinical performance is not mentioned.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable. No clinical test set.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- Not applicable. No MRMC study was conducted or mentioned. The device is a software package for review, quantification, and reporting, and its substantial equivalence was not based on clinical performance data demonstrating impact on human readers.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Not explicitly detailed as a standalone performance study in the context of clinical accuracy. The document confirms that "measurement verification is completed without deviations" as part of non-clinical performance testing. This suggests that the algorithm's measurements were verified, but the specifics of this verification (e.g., what measurements, against what standard, sample size, etc.) are not provided. It's a software verification, not a clinical standalone performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for clinical ground truth. For the non-clinical "measurement verification," the ground truth would likely be a known or calculated value for the data being measured, but the specific type of ground truth against which software measurements were verified is not described.
8. The sample size for the training set
- Not applicable. The document describes "TomTec-Arena" as a clinical software package for reviewing, quantifying, and reporting existing digital medical data. It is not an AI/ML device that requires a training set in the typical sense for learning patterns. Its functionality is based on established algorithms for image analysis and quantification.
9. How the ground truth for the training set was established
- Not applicable. No training set for an AI/ML model is mentioned.
Summary of available information regarding performance:
The document states that:
- "Testing was performed according to internal company procedures."
- "Software testing and validation were done at the module and system level according to written test protocols established before testing was conducted."
- "Test results were reviewed by designated technical professionals before software proceeded to release."
- "All requirements have been verified by tests or other appropriate methods."
- "The incorporated OTS Software is considered validated either by particular tests or implied by the absence of OTS SW related abnormalities during all other V&V activities."
- The summary conclusions indicate:
- "all automated tests were reviewed and passed"
- "feature complete test completed without deviations"
- "functional tests are completed"
- "measurement verification is completed without deviations"
- "multilanguage tests are completed without deviations"
- "Substantial equivalence determination of this subject device was not based on clinical data or studies."
- A "clinical evaluation following the literature route based on the assessment of benefits, associated with the use of the device, was performed." This literature review supported the conclusion that the device is "as safe and effective, and performs as well as or better than the predicate devices."
In essence, TomTec-Arena 1.0 was cleared based on non-clinical software verification and validation, comparison to predicate devices, and a literature review, rather than a prospective clinical performance study with explicit acceptance criteria for diagnostic accuracy.