K Number
K122289
Device Name
IMAGE-COM 5.0
Date Cleared
2012-10-24

(86 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Image-Com software is intended for reviewing and measuring of digital medical data of different modalities. It can be driven by Image-Arena or other third party platforms and is intended to launch other commercially available analysis and quantification tools.

Device Description

Image-Com is a clinical application package software for reviewing and measuring of digital medical data. Image-Com is either embedded in Image-Arena™ platform or can be integrated into Third Party platforms, such as PACS or CVIS.

AI/ML Overview

The provided document is a 510(k) summary for the TomTec Imaging Systems Image-Com 5.0 software. This device is classified as a Picture Archiving and Communications System (PACS) and is intended for reviewing and measuring digital medical data. The submission focuses on demonstrating substantial equivalence to predicate devices rather than proving a new clinical claim with specific performance metrics.

Therefore, the document does not contain the detailed information typically found in a study designed to establish new acceptance criteria and prove a device meets them through clinical performance data. Instead, it relies on non-clinical software testing and a literature review to demonstrate safety and effectiveness comparable to predicate devices.

Here's an analysis based on the information available:

1. Table of Acceptance Criteria and Reported Device Performance:

The document does not specify quantitative acceptance criteria in terms of clinical performance metrics (e.g., sensitivity, specificity, accuracy) for Image-Com 5.0. It primarily focuses on demonstrating that the software functions as intended and is as safe and effective as its predicate devices.

Acceptance Criteria Category → Reported Device Performance

Non-Clinical Performance
  • Automated Tests: All reviewed and passed
  • Feature Complete Test: Completed without deviations
  • Functional Tests: Completed
  • Measurement Verification: Completed without deviations
  • Multilanguage Tests: Completed without deviations
  • Bug Evaluation: Non-verified bugs rated as minor deviations and deferred to the next release
  • OTS Software Validation: Considered validated by tests or by the absence of abnormalities during V&V

Clinical Performance
  • Clinical Acceptance: Overall product concept clinically accepted
  • Safety & Effectiveness: "as safe, as effective, and performs as well as or better than the predicate devices" (clinical evaluation based on a literature review)
  • Risk-Benefit Assessment: Benefit rated superior to risk (risk rated as low)
  • Compliance with Essential Requirements: Data sufficient to demonstrate compliance with the essential requirements for safety and performance
  • Labeling Substantiation: Claims made in the device labeling substantiated by clinical data (via literature review)

2. Sample size used for the test set and the data provenance:

  • Test Set Sample Size: The document does not specify a distinct "test set" in the context of clinical performance data with a defined sample size (e.g., number of patient cases or images). The "testing" mentioned pertains to internal software verification and validation activities.
  • Data Provenance: The clinical performance data assessment was primarily based on a literature review. This means the data was retrospective, gathered from published studies related to the concepts and techniques employed by the device, or similar devices. The country of origin of this literature is not specified, but the applicant company is German.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

This information is not provided as the device's clinical performance was assessed via a literature review rather than a de novo clinical study with a new dataset requiring independent expert ground truthing. The "ground truth" implicitly relies on established medical knowledge and clinical consensus as presented in the reviewed literature.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

No explicit adjudication method is mentioned for establishing ground truth, as no specific clinical test set was created for this 510(k) submission requiring such a process. The reliance on a literature review means that the adjudication (or consensus) would have occurred within the original studies being reviewed.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly conducted or described in this 510(k) summary. The Image-Com 5.0 is a software package for review and measurement, not an AI-assisted diagnostic tool in the typical sense that would necessitate an MRMC study to show improvement over human readers. The document states its purpose is to demonstrate substantial equivalence to existing PACS and image analysis software.

6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

The document discusses "automated tests" as part of the non-clinical performance data, which implies standalone testing of the software. However, this refers to verification of software functionality and measurements, not the clinical diagnostic performance of an AI algorithm in isolation. Image-Com 5.0 aids in reviewing and measuring, implying a human-in-the-loop workflow, and the submission does not contain the kind of standalone clinical performance study typical for an AI-powered diagnostic device.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

For the clinical evaluation, the "ground truth" was established through existing medical literature and accepted clinical standards for reviewing and measuring digital medical data, as referenced in the clinical evaluation. This would encompass expert consensus and established diagnostic criteria reflected in published studies. No new pathology or outcomes data was generated for this submission.

8. The sample size for the training set:

Image-Com 5.0 is a software for reviewing and measuring medical data, not a machine learning model that undergoes a "training" phase with a specific dataset. Therefore, there is no "training set" sample size mentioned or applicable in the context of this submission. The software's algorithms and functionalities are presumably developed based on established image processing techniques and clinical guidelines.

9. How the ground truth for the training set was established:

As there is no "training set" in the machine learning sense, this question is not applicable. The underlying principles and measurement methodologies within the software would be based on established medical and engineering knowledge.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).