K Number
K150843
Manufacturer
Date Cleared
2015-04-24

(25 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

syngo.via is a software solution intended to be used for viewing, manipulation, and storage of medical images.

It can be used as a stand-alone device or together with a variety of cleared and unmodified syngo-based software options.

syngo.via supports interpretation of examinations within healthcare institutions, for example in Radiology, Nuclear Medicine, and Cardiology environments.

The system is not intended for the display of digital mammography images for diagnosis in the U.S.

Device Description

Siemens AG intends to market the Picture Archiving and Communications System, syngo.via, software version VB10A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via, software version VA20A.

syngo.via is a software-only medical device, delivered on DVD and installed on common IT hardware. The hardware must fulfill the defined minimum requirements: any hardware platform that complies with the specified minimum hardware and software requirements, and that passes installation verification and validation activities, can be supported. The hardware itself is not considered part of the medical device syngo.via and is therefore not in the scope of this 510(k) submission.
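The pre-installation compatibility check described above can be sketched as follows. The field names and minimum values here are illustrative assumptions for the sketch, not Siemens' published syngo.via hardware specification.

```python
# Illustrative sketch of a pre-installation requirements check.
# All minimum values below are hypothetical placeholders, not the
# actual syngo.via hardware specification.

MINIMUM_REQUIREMENTS = {
    "ram_gb": 32,          # hypothetical minimum server RAM
    "cpu_cores": 8,        # hypothetical minimum core count
    "free_disk_gb": 500,   # hypothetical temporary-storage headroom
    "os": "Windows Server",
}

def check_host(host: dict) -> list:
    """Return a list of requirement violations (empty list = compliant)."""
    problems = []
    for key in ("ram_gb", "cpu_cores", "free_disk_gb"):
        if host.get(key, 0) < MINIMUM_REQUIREMENTS[key]:
            problems.append(f"{key}: {host.get(key)} < {MINIMUM_REQUIREMENTS[key]}")
    if MINIMUM_REQUIREMENTS["os"] not in host.get("os", ""):
        problems.append(f"os: {host.get('os')!r} is not a supported platform")
    return problems

host = {"ram_gb": 64, "cpu_cores": 16, "free_disk_gb": 1000,
        "os": "Windows Server 2012 R2"}
print(check_host(host))  # an empty list means the host can be supported
```

A deployment would run such a check during the installation verification step, refusing to proceed if any violation is reported.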

syngo.via provides tools and features to cover the radiological tasks of reading images and reporting. syngo.via supports DICOM-formatted images and objects, and also supports storage of DICOM Structured Reports. In a comprehensive imaging suite, syngo.via interoperates with a Radiology Information System (RIS) to enable customer-specific workflows.
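The DICOM support described above can be illustrated with a minimal, standard-library-only check for the DICOM Part 10 file signature: per the DICOM standard (PS3.10), a file begins with a 128-byte preamble followed by the four ASCII bytes "DICM". A real PACS would use a full DICOM toolkit rather than this sketch.

```python
# Minimal check for the DICOM Part 10 file signature: a 128-byte
# preamble followed by the ASCII marker "DICM" (DICOM PS3.10).
import os
import tempfile

def looks_like_dicom(path: str) -> bool:
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Demo with a synthetic file (only the signature, not a real image):
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x00" * 128 + b"DICM" + b"...rest of data set...")

print(looks_like_dicom(path))  # True
os.remove(path)
```

Beyond the file signature, real interoperability also requires parsing the File Meta Information and the data set's tagged elements, which is what dedicated DICOM libraries handle.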

syngo.via is based on a client-server architecture. The server processes and renders the data from the connected modalities. The server provides central services including image processing and temporary storage, and incorporates the local database. The client provides the user interface for interactive image viewing and processing and can be installed and started on each workplace that has a network connection to the server.

The server's backend communication and storage solution is based on Microsoft Windows server operating systems. The client machines are based on Microsoft Windows operating systems.

syngo.via supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.

The subject device and the predicate device share the same fundamental scientific technology. This device description holds true for the subject device, syngo.via, software version VB10A; as well as the predicate device, syngo.via, software version VA20A.

AI/ML Overview

The provided text describes a 510(k) submission for "syngo.via (version VB10A)", a Picture Archiving and Communications System, and its substantial equivalence to a predicate device. However, the document does not contain specific acceptance criteria, reported device performance data, details of a specific study proving it meets acceptance criteria, or information on sample sizes, ground truth establishment, or expert involvement in the way a clinical performance study report would.

The document focuses on:

  • Regulatory Clearance: FDA's clearance of the device (K150843) as substantially equivalent to a predicate device (K123920).
  • Device Description: General features, software architecture, operating systems, and some high-level feature differences from the predicate.
  • Non-clinical Performance Testing: Stating that tests were conducted for verification and validation and that the device conforms to certain standards and cybersecurity requirements.
  • Software Verification and Validation: Asserts that software documentation for a "Moderate Level of Concern" software was included, and testing results support that "all the software specifications have met the acceptance criteria."

Therefore, I cannot populate the table or answer most of the questions as the specific details are not present in the provided text.

Here's what can be inferred or explicitly stated based only on the provided text:

1. A table of acceptance criteria and the reported device performance

  • Acceptance Criteria/Performance: The document states that "all the software specifications have met the acceptance criteria" through non-clinical verification and validation testing. However, the specific acceptance criteria (e.g., minimum accuracy rates, latency thresholds) and the numerical results for these criteria are not provided.
    The focus is on comparing features and ensuring the new version doesn't introduce new safety risks.
Acceptance Criteria (not explicitly stated): Specific performance metrics are not detailed in this document. The submission focuses on software functionality and safety.
Reported Device Performance (not explicitly stated): The document states: "The testing results support that all the software specifications have met the acceptance criteria."

2. Sample size used for the test set and the data provenance

  • Sample Size: Not provided. The document refers to "non-clinical tests" and "verification and validation testing" but does not specify sample sizes for any test sets.
  • Data Provenance: Not provided. As it's non-clinical testing, it likely refers to internal testing data. No information on country of origin or retrospective/prospective nature is given.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not provided.
  • Qualifications of Experts: Not provided.
    Since the document describes non-clinical software verification and validation, it's unlikely external medical experts were used to establish "ground truth" in the clinical sense. The "ground truth" here would likely be defined by internal software requirements and specifications.

4. Adjudication method for the test set

  • Adjudication Method: Not provided.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done. If so, what was the effect size of how much human readers improve with AI vs. without AI assistance

  • MRMC Study: No. The document describes a "Picture Archiving and Communications System," which is a viewing and manipulation software, not an AI-powered diagnostic aide in the traditional sense that would warrant an MRMC study comparing human reader performance with and without AI assistance. This submission focuses on software updates and substantial equivalence, not a clinical efficacy claim for a new AI algorithm.

6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

  • Standalone Performance: The device is a "software only medical device" intended for viewing, manipulation, and storage of medical images, and it supports interpretation of examinations within healthcare institutions. In this context, "standalone performance" usually refers to the diagnostic accuracy of an AI algorithm making diagnoses without human intervention. This document does not claim diagnostic capabilities for the software itself; it describes tools that aid human interpretation. A standalone diagnostic performance study was therefore not done and would not be relevant given the stated intended use.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of Ground Truth: The document refers to "software specifications" and "hazard analysis." For non-clinical software testing, the "ground truth" would be the expected behavior and output defined by the software's functional and performance requirements. This is not clinical ground truth like pathology or expert consensus on patient cases.

8. The sample size for the training set

  • Training Set Sample Size: Not applicable/Not provided. This document describes a traditional software system, not a machine learning or AI algorithm that requires a "training set" in the common sense.

9. How the ground truth for the training set was established

  • Training Set Ground Truth Establishment: Not applicable/Not provided. As above, this is not a machine learning model requiring a training set with established ground truth.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards: Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).