syngo.via View&GO is indicated for image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology.
syngo.via View&GO is a software-only medical device, delivered by download and installed on common IT hardware. This hardware has to fulfil the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and for which installation verification and validation activities have been completed successfully, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
syngo.via View&GO provides tools and features to cover the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace capable of rendering data from the connected modalities for post-processing activities. syngo.via View&GO provides the user interface for interactive image viewing and processing with a limited short-term storage, which can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
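The summary describes this DICOM interface to long-term storage only at the level above. Purely as an illustration, a minimal sketch of querying a long-term archive (e.g., a PACS) for studies, using the open-source pynetdicom library, might look like the following; the host name, port, patient ID, and AE titles are placeholder assumptions, not values from the submission.

```python
# Minimal sketch (not from the 510(k) summary): a STUDY-level C-FIND query
# against a long-term DICOM archive, as a viewing workplace might issue.
# Host, port, patient ID, and AE titles are placeholder assumptions.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="VIEWER")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# Query identifier: all CT studies for a given patient ID.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"
query.ModalitiesInStudy = "CT"
query.StudyInstanceUID = ""  # requested return attribute

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
        query, StudyRootQueryRetrieveInformationModelFind
    ):
        # 0xFF00 / 0xFF01 are the DICOM "pending" statuses carrying matches.
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```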
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The provided text is a 510(k) Summary for the Siemens Healthcare GmbH device "syngo.via View&GO" (Version VA30A). This document focuses on demonstrating substantial equivalence to a predicate device (syngo.via View&GO, Version VA20A) rather than presenting a detailed study of the device's performance against specific acceptance criteria for a novel algorithm.
The document states that the software is a Medical Image Management and Processing System whose purpose is "image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology." It specifically states, "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel."
Therefore, the provided text does not contain the information requested regarding acceptance criteria and a study proving an algorithm meets those criteria for diagnostic performance. It does not describe an AI/ML algorithm or its associated performance metrics.
However, based on the provided text, I can infer some aspects and highlight what information is missing if this were an AI-driven diagnostic device.
Here's an analysis based on the assumption that if this were an AI-based device, these fields would typically be addressed:
Summary of Device Performance (Based on provided text's limited scope for a general medical image processing system):
Since "syngo.via View&GO" is a medical image management and processing system without automated diagnostic interpretation capabilities, the acceptance criteria and performance data would revolve around its functionality, usability, and safety in handling and presenting medical images. The provided text primarily establishes substantial equivalence based on the lack of significant changes in core functionality and the adherence to relevant standards for medical software and imaging.
1. Table of acceptance criteria and the reported device performance:
The document doesn't provide a table of performance metrics for an AI algorithm. Instead, it describes "Non-clinical Performance Testing" focused on:
- Conformance to standards (DICOM, JPEG, ISO 14971, IEC 62304, IEC 82304-1, IEC 62366-1, IEEE Std 3333.2.1-2015).
- Software verification and validation (demonstrating continued conformance with special controls for medical devices containing software).
- Risk analysis and mitigation.
- Cybersecurity requirements.
- Functionality of the device (as outlined in the comparison table between subject and predicate device).
Reported Performance/Findings (General):
- "The testing results support that all the software specifications have met the acceptance criteria."
- "Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
- "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence."
- The device "does not introduce any new significant potential safety risks and is substantially equivalent to and performs as well as the predicate device."
Example of what a table might look like if this were an AI algorithm, along with why it's not present:
| Acceptance Criterion (Hypothetical for AI) | Reported Device Performance (Hypothetical for AI) |
|---|---|
| Primary Endpoint: Sensitivity for detecting X > Y% | Not applicable - device has no diagnostic AI. |
| Secondary Endpoint: Specificity for detecting X > Z% | Not applicable - device has no diagnostic AI. |
| Image Rendering Accuracy (e.g., visual fidelity compared to ground truth) | "All the software specifications have met the acceptance criteria." (general statement) |
| DICOM Conformance | Conforms to NEMA PS 3.1-3.20 (2016a) |
| User Interface Usability (e.g., according to human factors testing) | Changes are "limited to the common look and feel based on Siemens Healthineers User Interface Style Guide." "The changes... doesn't impact the safety and effectiveness... of the subject device." |
| Feature Functionality (e.g., MPR, MIP/MinIP, VRT, measurements; see the sketch after this table) | "Algorithms underwent bug-fixing and minor improvements. No re-training or change in algorithm models was performed." "The changes... doesn't impact the safety and effectiveness... of the subject device." |
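For context on the classical rendering features named in the last row, here is a minimal, illustrative sketch of MIP/MinIP as plain intensity projections over a volume, using numpy on synthetic data. It is not Siemens' implementation and is not drawn from the submission.

```python
# Minimal sketch (illustrative only): maximum and minimum intensity
# projections (MIP / MinIP) over a CT-like volume.
import numpy as np

def intensity_projection(volume: np.ndarray, axis: int = 0, minimum: bool = False) -> np.ndarray:
    """Project a 3D volume (slices, rows, cols) onto a 2D image."""
    return volume.min(axis=axis) if minimum else volume.max(axis=axis)

# Example on synthetic data: a 64-slice volume of 256x256 pixels.
volume = np.random.randint(-1000, 3000, size=(64, 256, 256), dtype=np.int16)
mip = intensity_projection(volume, axis=0)                   # MIP along the slice axis
minip = intensity_projection(volume, axis=0, minimum=True)   # MinIP along the slice axis
print(mip.shape, minip.shape)  # (256, 256) (256, 256)
```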
2. Sample size used for the test set and the data provenance:
- Not explicitly stated for diagnostic performance, as the device does not have automated diagnostic capabilities.
- The software verification and validation activities would involve testing with various DICOM images to ensure proper rendering and processing; a hypothetical sketch of such a specification-based check follows this list. The exact number of images or datasets used for these software tests is not detailed.
- Data Provenance: Not specified, as it's not a clinical performance study.
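Purely as a hypothetical illustration of what specification-based software verification (as opposed to clinical ground-truth evaluation) can look like, the sketch below checks a simplified window/level mapping against its written specification. The function, values, and tolerances are assumptions, and the exact DICOM VOI LUT formula differs slightly from this simplified version; the tests are runnable with pytest.

```python
# Hypothetical illustration (not from the submission) of specification-based
# verification: expected outputs come from the written specification of a
# simplified window/level transform, not from clinical ground truth.
import numpy as np

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Simplified linear window/level mapping to 8-bit display values."""
    low = center - width / 2.0
    out = (pixels.astype(np.float64) - low) / width
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

def test_window_center_maps_to_mid_gray():
    # Specification: a pixel at the window center maps to mid-gray.
    pixels = np.array([40.0])  # e.g., a soft-tissue CT value in HU
    assert apply_window(pixels, center=40.0, width=400.0)[0] in (127, 128)

def test_values_below_window_clip_to_black():
    pixels = np.array([-1000.0])  # air, far below the window
    assert apply_window(pixels, center=40.0, width=400.0)[0] == 0
```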
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not stated. Ground truth for diagnostic accuracy is not established for this device, as it does not perform automated diagnosis. The ground truth for software functionality would be the expected behavior of the software according to its specifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable. No clinical adjudication method is described, as this is neither a clinical study nor an AI diagnostic device.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC study was NOT done/described. The submission explicitly states, "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel." Therefore, the device does not offer AI assistance for diagnosis.
6. If a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop) was done:
- Not applicable. The device is a "Medical Image Management and Processing System" that provides tools for human interpretation; it is not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable for diagnostic purposes. For software functionality, the ground truth is the defined behavior as per the software specifications and design.
8. The sample size for the training set:
- Not applicable/Not stated. The document explicitly states in the "Imaging algorithms" section that "No re-training or change in algorithm models was performed." This implies that the algorithms are traditional image processing algorithms rather than machine learning models that require training data in the context of diagnostic AI; such classical algorithms are defined by their mathematical formulation rather than trained on a dataset of clinical cases.
9. How the ground truth for the training set was established:
- Not applicable. As indicated above, there is no mention of "training" in the context of machine learning. The algorithms are described as undergoing "bug-fixing and minor improvements" but no "re-training or change in algorithm models."
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
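As an illustration of one of the "complex quantitative functions" the regulation names (semi-automated measurements), a minimal sketch of a two-point distance measurement in physical units is shown below. The coordinates and pixel spacing are hypothetical; the only DICOM concept assumed is the PixelSpacing attribute (row spacing, column spacing, in mm).

```python
# Hypothetical sketch (not part of the regulation or the submission) of a
# two-point distance measurement converted to millimetres using PixelSpacing.
import math

def distance_mm(p1, p2, pixel_spacing):
    """p1, p2: (row, col) pixel coordinates; pixel_spacing: (row_mm, col_mm)."""
    d_row = (p1[0] - p2[0]) * pixel_spacing[0]
    d_col = (p1[1] - p2[1]) * pixel_spacing[1]
    return math.hypot(d_row, d_col)

# Example: points 100 rows / 50 columns apart on a 0.7 mm x 0.7 mm grid.
print(round(distance_mm((10, 20), (110, 70), (0.7, 0.7)), 2))  # 78.26
```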