510(k) Data Aggregation (27 days)
syngo.via (Version VB40A)
syngo.via is a software solution intended to be used for viewing, manipulation, and storage of medical images.
It can be used as a stand-alone device or together with a variety of cleared and unmodified syngo based software options. syngo.via supports interpretation and evaluation of examinations within healthcare institutions, for example, in Radiology, Nuclear Medicine and Cardiology environments.
The system is not intended for the displaying of digital mammography images for diagnosis in the U.S.
Siemens Healthcare GmbH intends to market the Picture Archiving and Communications System syngo.via, software version VB40A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via, software version VB10A.
syngo.via is a software-only medical device, delivered by download and installed on common IT hardware, which must fulfill defined minimum requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and that passes installation verification and validation activities, can be supported. The hardware itself is not considered part of the medical device syngo.via and is therefore not in the scope of this 510(k) submission.
syngo.via provides tools and features to cover the radiological tasks of reading images and reporting. syngo.via supports DICOM-formatted images and objects, and also supports storage of DICOM Structured Reports. In a comprehensive imaging suite, syngo.via interoperates with a Radiology Information System (RIS) to enable customer-specific workflows.
syngo.via is based on a client-server architecture. The server processes and renders the data from the connected modalities. The server provides central services including image processing and temporary storage, and incorporates the local database. The client provides the user interface for interactive image viewing and can be installed and started on each workplace that has a network connection to the server.
The server's backend communication and storage solution is based on Microsoft Windows server operating systems. The client machines are based on Microsoft Windows operating systems.
syngo.via supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The subject device and the predicate device share fundamental scientific technology. This device description holds true for the subject device, syngo.via, software version VB40A, as well as the predicate device, syngo.via, software version VB10A.
This document describes the 510(k) summary for syngo.via (Version VB40A), a Picture Archiving and Communications System developed by Siemens Healthcare GmbH. The submission asserts substantial equivalence to the predicate device, syngo.via (Version VB10A).
The document states that a study was conducted to demonstrate the substantial equivalence of the new device by focusing on verification and validation testing of the modifications. However, it does not provide detailed acceptance criteria and reported device performance in a table format. Nor does it detail a specific study with sample sizes, data provenance, expert qualifications, or adjudication methods for proving the device meets acceptance criteria.
Here's an attempt to extract and synthesize the information based on the provided text, recognizing the limitations of the submission's detail:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not provide a table of acceptance criteria or specific reported device performance in a quantitative manner. Instead, it makes a qualitative claim about the modifications:
"The software specifications have met the acceptance criteria. Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
And for specific features compared against the predicate:
"The changes between the predicate device and the subject device doesn't impact the safety and effectiveness of the subject device as the necessary measures taken for the safety and effectiveness of the subject device."
The document primarily focuses on verifying that new features and system updates (like operating system support, cyber security enhancements, and improvements to existing algorithms) do not negatively impact safety and effectiveness and perform comparably to the predicate device. The acceptance criteria would broadly relate to demonstrating functionality, safety, and effectiveness equivalent to the cleared predicate device.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "Non-clinical tests were conducted for the device syngo.via during product development. The modifications described in this Premarket Notification were supported with verification and validation testing." And "Performance tests were conducted to test the functionality of the device syngo.via."
However, no specific information is provided regarding:
- The sample size used for the test set (number of images, cases, or tests).
- The data provenance (e.g., country of origin, whether the data was retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The submission describes software verification and validation, but not clinical performance studies involving expert readers to establish ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not mentioned or indicated in this submission. The device (syngo.via) is described primarily as a viewing, manipulation, communication, and storage system for medical images, with enhancements to existing imaging algorithms (e.g., Cinematic VRT, Organ Segmentation) and user interface improvements. It explicitly states: "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel." Therefore, it is unlikely an MRMC study comparing human readers with and without AI assistance was conducted for this particular clearance, as the device itself does not offer AI-driven diagnostic assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document describes "Imaging algorithms" such as:
- Multiplanar reconstruction (MPR)
- Maximum and Minimum Intensity Projection (MIP/MinIP)
- Volume Rendering Technique (VRT) with additional technique Cinematic VRT
- Organ segmentation based on existing ALPHA technology
- Change Visualization
- Automatic Spine Labeling, also for ribs in CT thorax scans (“Rib labeling”)
It states that "There are enhancements to the existing algorithms in the subject device compared to the predicate device." While these are algorithms, the submission treats them as enhancements to an imaging workstation rather than standalone diagnostic algorithms requiring separate "standalone performance" metrics in the way a CAD system might. The performance of these algorithms would have been assessed during the "Non-clinical Performance Testing" and "Software Verification and Validation" as mentioned, to ensure they function as intended and do not impact safety or effectiveness. However, no specific metrics for standalone performance (e.g., sensitivity, specificity for organ segmentation outside the context of human interpretation) are provided.
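The MPR and intensity-projection techniques named above are standard volume-reformatting operations rather than diagnostic algorithms. As a rough illustration only (this is not Siemens' implementation; the function names and array shapes are invented for this sketch), orthogonal MPR, MIP, and MinIP reduce to simple slicing and reductions over a 3-D voxel array:

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum Intensity Projection: keep the brightest voxel along the projection axis."""
    return volume.max(axis=axis)

def minip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Minimum Intensity Projection: keep the darkest voxel along the projection axis."""
    return volume.min(axis=axis)

def mpr_slice(volume: np.ndarray, axis: int, index: int) -> np.ndarray:
    """Orthogonal multiplanar reconstruction: extract one plane from the volume."""
    return np.take(volume, index, axis=axis)

# Tiny synthetic volume (depth x height x width); real CT volumes are far larger.
vol = np.arange(24).reshape(2, 3, 4)
axial_mip = mip(vol, axis=0)               # shape (3, 4)
coronal = mpr_slice(vol, axis=1, index=1)  # shape (2, 4)
```

Oblique MPR and rendering techniques such as Cinematic VRT involve resampling and light-transport models well beyond this sketch; the point is only that these are image-reformatting operations, not classifiers with sensitivity/specificity endpoints.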
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not specify the type of ground truth used, as it does not detail specific performance studies involving diagnostic accuracy or clinical outcomes. The "ground truth" for software verification and validation would typically be established based on functional requirements, design specifications, and successful execution of tests.
8. The sample size for the training set
The document does not provide information about a training set size. This submission is for an updated version of a Picture Archiving and Communications System (PACS) and does not describe the development or training of new AI/ML models in a way that would typically involve a separate "training set." The mention of "Organ segmentation based on existing ALPHA technology" suggests it might leverage pre-existing technology, but no details on its training are given in this submission.
9. How the ground truth for the training set was established
Since no training set information is provided (as per point 8), details on how its ground truth was established are also not available in the provided text.