syngo.via WebViewer is a software-only device indicated for reviewing medical images from syngo.via. It supports interpretation and evaluation within healthcare institutions, for example, in Radiology, Nuclear Medicine, and Cardiology environments (supported image types: CT, MR, CR, DR, DX, PET). It is not intended for storage or distribution of medical images.
syngo.via WebViewer is an option for the syngo.via system and cannot be run without it. It uses a client-server architecture, and the client is intended to run on web clients connected to the healthcare institution's IT infrastructure, where the customer will ensure HIPAA compliance. syngo.via WebViewer communicates with connected medical IT systems via standard interfaces such as, but not limited to, DICOM.
The system is not intended for the display of digital mammography images for diagnosis.
The syngo.via WebViewer is a software-based Picture Archiving and Communications System (PACS) used with the syngo.via system. The syngo.via WebViewer provides secure access to rendered medical image data and basic image manipulation through web browsers and mobile devices within the hospital network.
It extends the syngo.via WebViewer software application previously cleared under K111079. Newly supported image types are PET and X-ray images. It also provides functionality for displaying images via a mobile application on an iPad.
The Siemens syngo.via WebViewer (K130998) is PACS viewing software. The provided document does not contain acceptance criteria or a study that directly proves the device meets specific performance criteria through metrics like sensitivity, specificity, or accuracy for diagnostic tasks. Instead, the submission focuses on establishing substantial equivalence to a predicate device (syngo.via WebViewer, K111079) based on its intended use, technical characteristics, and the results of non-clinical software verification and validation.
1. Table of Acceptance Criteria and Reported Device Performance:
As the device is PACS viewing software, the acceptance criteria are not typically expressed in terms of diagnostic performance metrics (e.g., sensitivity, specificity) but rather in terms of functional performance, adherence to standards, and safety. The document states that "software verification and validation (Unit Test Level, Integration Test Level and System Test Level) was performed for all newly developed components and the complete system according to the following standards." The table below summarizes the implied acceptance criteria from the non-clinical tests and the device's adherence:
Acceptance Criterion (Implied from Standards & V&V) | Reported Device Performance
---|---
Adherence to DICOM Standard | Software verification and validation performed |
Adherence to ISO/IEC 15444-1:2005+TC 1:2007 (JPEG 2000) | Software verification and validation performed |
Adherence to ISO/IEC 10918-1:1994 + TC 1:2005 (JPEG) | Software verification and validation performed |
Adherence to HL7 [2006] | Software verification and validation performed |
Adherence to IEC 62304:2006 (Medical device software) | Software verification and validation performed |
Adherence to IEC 62366:2007 (Usability) | Software verification and validation performed |
Adherence to ISO 14971:2007 (Risk Management) | Software verification and validation performed; Risk analysis performed to identify potential hazards |
Adherence to IEC 60601-1-4:2000 (Safety) | Software verification and validation performed |
Secure access to rendered medical image data | Ensured via syngo.via WebViewer Data Management |
Basic image manipulation functionality | Provided via web browsers and mobile devices |
Compatibility with supported image types (CT, MR, CR, DR, DX, PET) | DICOM formatted images supported |
Compatibility with connected medical IT systems via standard interfaces (e.g., DICOM, HL7) | Communication via standard interfaces |
System safety and effectiveness | Instructions for use, cautions, and warnings in labeling; Risk management process followed |
Substantial equivalence to predicate device | Confirmed through comparison of intended use and technical characteristics |
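The DICOM, JPEG, and JPEG 2000 rows above refer to published format standards with well-defined file signatures. As an illustration only (not part of the submission, and not the device's actual implementation), a minimal standard-library Python sketch of how conformance to these formats can be detected at the byte level:

```python
def sniff_image_format(data: bytes) -> str:
    """Identify a byte stream by the magic numbers the cited standards define:
    DICOM PS3.10 (media storage file format), ISO/IEC 10918-1 (JPEG), and
    ISO/IEC 15444-1 (JPEG 2000). Illustrative sketch only.
    """
    # DICOM Part 10: a 128-byte preamble followed by the 4-byte prefix 'DICM'.
    if len(data) >= 132 and data[128:132] == b"DICM":
        return "DICOM"
    # JPEG: SOI marker FF D8 followed by another marker byte FF.
    if data[:3] == b"\xff\xd8\xff":
        return "JPEG"
    # JPEG 2000: the 12-byte JP2 signature box (00 00 00 0C 'jP  ' 0D 0A 87 0A).
    if data[:12] == b"\x00\x00\x00\x0cjP  \r\n\x87\n":
        return "JPEG 2000"
    return "unknown"

# Example: a DICOM Part 10 stream is recognized by its preamble and prefix.
print(sniff_image_format(b"\x00" * 128 + b"DICM"))  # → DICOM
```

Real viewers delegate this to a DICOM toolkit; the point here is only that the standards named in the table specify machine-checkable formats, which is what the cited verification and validation exercises.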
2. Sample Size Used for the Test Set and Data Provenance:
The document does not describe a clinical study or a test set of medical images with a specific sample size used to evaluate diagnostic performance. The validation mentioned is "software verification and validation (Unit Test Level, Integration Test Level and System Test Level)", which refers to engineering and software quality assurance testing rather than a clinical performance study using patient data. Therefore, there is no information on sample size or data provenance (e.g., country of origin, retrospective/prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
This level of detail is not provided as the submission focuses on software validation and substantial equivalence, not a clinical study involving ground truth establishment by experts for diagnostic performance.
4. Adjudication Method for the Test Set:
Not applicable, as no clinical test set for diagnostic performance requiring expert adjudication is described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
No MRMC comparative effectiveness study is mentioned or appears to have been performed for this 510(k) submission. The document focuses on showing substantial equivalence to a predicate device rather than demonstrating a performance improvement with or without AI assistance.
6. Standalone Performance Study:
No standalone (algorithm only without human-in-the-loop performance) study is described, as the device is a medical image viewing software, not an AI diagnostic algorithm. The "software-only device" refers to its deployment model, not its functionality as an autonomous diagnostic tool.
7. Type of Ground Truth Used:
Ground truth as understood in the context of diagnostic accuracy studies (e.g., pathology, expert consensus) is not mentioned. The "ground truth" for the software validation activities would be the expected functional behavior and adherence to standards, checked against specified requirements.
8. Sample Size for the Training Set:
Not applicable. The device is a viewing software, not an AI/ML algorithm that requires a training set of data.
9. How the Ground Truth for the Training Set Was Established:
Not applicable, as no training set is relevant for this type of device.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).