Search Results
Found 4 results
510(k) Data Aggregation
(19 days)
syngo.via View&GO VA40A
syngo.via View&GO is indicated for image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology.
Siemens Healthcare GmbH intends to market the Medical Image Management and Processing System, syngo.via View&GO, software version VA40A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via View&GO, software version VA30A.
syngo.via View&GO is a software-only medical device, delivered by download to be installed on common IT hardware that fulfils the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and on which installation verification and validation activities succeed, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The subject device and the predicate device share fundamental scientific technology. This device description holds true for the subject device, syngo.via View&GO, software version VA40A, as well as for the predicate device, syngo.via View&GO, software version VA30A.
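To make the DICOM interfacing concrete: the sketch below (illustrative only, not the device's actual implementation) uses the open-source pynetdicom library to push an image from a workstation's short-term storage to a PACS via a DICOM C-STORE. The host name, port, AE titles, and file path are hypothetical placeholders.

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

# Hypothetical workstation and PACS identifiers; real values come from
# the site's DICOM configuration.
ae = AE(ae_title="VIEWGO")
ae.add_requested_context(CTImageStorage)

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    ds = dcmread("ct_slice.dcm")     # hypothetical local DICOM file
    status = assoc.send_c_store(ds)  # C-STORE: push the object to the PACS
    if status:
        print(f"C-STORE completed, status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Could not establish DICOM association")
```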
The provided text is a 510(k) summary for the syngo.via View&GO VA40A software, seeking substantial equivalence to a predicate device (syngo.via View&GO VA30A). While it details the device, its intended use, and comparisons to the predicate, it does not contain information about specific acceptance criteria or the details of a study proving the device meets those criteria.
The document states:
- "Non-clinical tests were conducted for the device syngo.via View&GO during product development. The modifications described in this Premarket Notification were supported with verification and validation testing." (Page 14, Section 8)
- "The testing results support that all the software specifications have met the acceptance criteria. Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence." (Page 14, Section 9)
- "Performance tests were conducted to test the functionality of the device syngo.via View&GO. These tests have been performed to assess the functionality of the subject device. Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (Page 14, Section 10)
However, it does not provide the specific "acceptance criteria" themselves, nor does it describe the details of the "study" (beyond mentioning "non-clinical tests" and "verification and validation testing") that would demonstrate performance against these criteria.
Therefore, I cannot fulfill your request for the following information based solely on the provided text:
- A table of acceptance criteria and the reported device performance: The acceptance criteria are not explicitly listed, nor are the specific performance results against them. The document only generally states that "all software specifications have met the acceptance criteria."
- Sample sizes used for the test set and the data provenance: No information on sample sizes or data provenance (country, retrospective/prospective) for the test set is provided.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As no specific study details are given, this information is not present.
- Adjudication method for the test set: No information on adjudication is provided.
- If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: The document states the device is a "Medical Image Management and Processing System" and explicitly says "No automated diagnostic interpretation capabilities like CAD are included." (Page 9, CAD Functionalities table row). It is post-processing and viewing software, not an AI/CAD system designed to directly improve human diagnostic performance via AI assistance. Therefore, an MRMC study of AI assistance would likely not be relevant or performed for this device category.
- If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done: The provided information hints at functional and software verification/validation, which are forms of standalone testing, but no specific performance metrics are given.
- The type of ground truth used: Not specified.
- The sample size for the training set: The document implies this is not an AI/ML algorithm that requires a "training set" in the typical sense for clinical performance. The "Imaging algorithms" section (Page 7-8) mentions "bug-fixing and minor improvements" and "No re-training or change in algorithm models was performed," suggesting that existing, validated algorithms were refined.
- How the ground truth for the training set was established: Not applicable, as detailed above.
In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence to a predicate device through software verification and validation and functional performance tests, rather than a detailed clinical study with specific acceptance criteria and performance metrics typically seen for AI/ML diagnostic aids. The changes introduced in VA40A compared to VA30A are primarily related to software architecture, operating system support (Windows 11), minor algorithm bug fixes, user interface improvements, and the inclusion of a "Cinematic VRT" algorithm that was previously cleared. The "Imaging algorithms" section explicitly states: "The changes between the predicate device and the subject device doesn't impact the safety and effectiveness of the subject device as the necessary measures were taken for the safety and effectiveness of the subject device." This implies the focus was on ensuring the new version maintained the safety and effectiveness of the predicate, rather than proving a statistically significant improvement via a new clinical study.
(144 days)
syngo.via View&GO
syngo.via View&GO is indicated for image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology.
syngo.via View&GO is a software-only medical device, delivered by download to be installed on common IT hardware that fulfils the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and on which installation verification and validation activities succeed, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The provided text is a 510(k) Summary for the Siemens Healthcare GmbH device "syngo.via View&GO" (Version VA30A). This document focuses on demonstrating substantial equivalence to a predicate device (syngo.via View&GO, Version VA20A) rather than presenting a detailed study of the device's performance against specific acceptance criteria for a novel algorithm.
The document states that the software is a Medical Image Management and Processing System, and its purpose is for "image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology." It specifically states, "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel."
Therefore, the provided text does not contain the information requested regarding acceptance criteria and a study proving an algorithm meets those criteria for diagnostic performance. It does not describe an AI/ML algorithm or its associated performance metrics.
However, based on the provided text, I can infer some aspects and highlight what information is missing if this were an AI-driven diagnostic device.
Here's an analysis based on the assumption that if this were an AI-based device, these fields would typically be addressed:
Summary of Device Performance (Based on provided text's limited scope for a general medical image processing system):
Since "syngo.via View&GO" is a medical image management and processing system without automated diagnostic interpretation capabilities, the acceptance criteria and performance data would revolve around its functionality, usability, and safety in handling and presenting medical images. The provided text primarily establishes substantial equivalence based on the lack of significant changes in core functionality and the adherence to relevant standards for medical software and imaging.
1. Table of acceptance criteria and the reported device performance:
The document doesn't provide a table of performance metrics for an AI algorithm. Instead, it describes "Non-clinical Performance Testing" focused on:
- Conformance to standards (DICOM, JPEG, ISO 14971, IEC 62304, IEC 82304-1, IEC 62366-1, IEEE Std 3333.2.1-2015).
- Software verification and validation (demonstrating continued conformance with special controls for medical devices containing software).
- Risk analysis and mitigation.
- Cybersecurity requirements.
- Functionality of the device (as outlined in the comparison table between subject and predicate device).
Reported Performance/Findings (General):
- "The testing results support that all the software specifications have met the acceptance criteria."
- "Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence."
- "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence."
- The device "does not introduce any new significant potential safety risks and is substantially equivalent to and performs as well as the predicate device."
Example of what a table might look like if this were an AI algorithm, along with why it's not present:
| Acceptance Criterion (Hypothetical for AI) | Reported Device Performance (Hypothetical for AI) |
| --- | --- |
| Primary Endpoint: Sensitivity for detecting X > Y% | Not applicable - device has no diagnostic AI. |
| Secondary Endpoint: Specificity for detecting X > Z% | Not applicable - device has no diagnostic AI. |
| Image Rendering Accuracy (e.g., visual fidelity compared to ground truth) | "All the software specifications have met the acceptance criteria." (general statement) |
| DICOM Conformance | Conforms to NEMA PS 3.1-3.20 (2016a). |
| User Interface Usability (e.g., according to human factors testing) | Changes are "limited to the common look and feel based on Siemens Healthineers User Interface Style Guide." "The changes... doesn't impact the safety and effectiveness... of the subject device." |
| Feature Functionality (e.g., MPR, MIP/MinIP, VRT, measurements) | "Algorithms underwent bug-fixing and minor improvements. No re-training or change in algorithm models was performed." "The changes... doesn't impact the safety and effectiveness... of the subject device." |
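For orientation on the feature-functionality row above: MIP and MinIP are conventional projection renderings, not machine-learned models. A minimal NumPy sketch of the idea follows (illustrative only; the random volume is a stand-in for a real CT series, and this is not the vendor's implementation):

```python
import numpy as np

# Stand-in CT volume: (slices, rows, cols) of Hounsfield-like values.
volume = np.random.randint(-1000, 3000, size=(120, 256, 256)).astype(np.int16)

# Maximum intensity projection (MIP): along the viewing axis, keep the
# brightest voxel on each ray (highlights contrast-filled vessels, bone).
mip = volume.max(axis=0)

# Minimum intensity projection (MinIP): keep the darkest voxel instead
# (highlights low-attenuation structures such as airways).
minip = volume.min(axis=0)

print(mip.shape, minip.shape)  # both (256, 256)
```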
2. Sample size used for the test set and the data provenance:
- Not explicitly stated for diagnostic performance, as the device does not have automated diagnostic capabilities.
- The software verification and validation activities would involve testing with various DICOM images to ensure proper rendering and processing. The exact number of images or datasets used for these software tests is not detailed.
- Data Provenance: Not specified, as it's not a clinical performance study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not stated. Ground truth for diagnostic accuracy is not established for this device, as it does not perform automated diagnosis. The ground truth for software functionality would be the expected behavior of the software according to its specifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable. No clinical adjudication method is described, as this is neither a clinical study nor an AI diagnostic device.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC study was not done or described. The device summary explicitly states: "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel." The device therefore does not offer AI assistance for diagnosis.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable. The device is a "Medical Image Management and Processing System" that provides tools for human interpretation; it is not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable for diagnostic purposes. For software functionality, the ground truth is the defined behavior as per the software specifications and design.
8. The sample size for the training set:
- Not applicable/Not stated. The document explicitly states for the "Imaging algorithms" section that "No re-training or change in algorithm models was performed." This implies that the algorithms are traditional image processing algorithms, not machine learning models that require training data in the context of diagnostic AI. If there were any minor algorithmic adjustments, the training data for such classical algorithms is typically the mathematical formulation itself rather than a dataset of clinical cases for machine learning.
9. How the ground truth for the training set was established:
- Not applicable. As indicated above, there is no mention of "training" in the context of machine learning. The algorithms are described as undergoing "bug-fixing and minor improvements" but no "re-training or change in algorithm models."
(28 days)
syngo.via View&GO
syngo.via View&GO is a software solution intended to be used for viewing, communication, and storage of medical images. It can be used as a stand-alone device or together with a variety of cleared and unmodified syngo based software options.
syngo.via View&GO supports interpretation of examinations within healthcare institutions, for example, in Radiology, Nuclear Medicine and Cardiology environments. The system is not intended for the displaying of digital mammography images for diagnosis in the U.S.
Siemens Healthcare GmbH intends to market the Picture Archiving and Communications System, syngo.via View&GO, software version VA20A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via View&GO, software version VA10A.
syngo.via View&GO is a software-only medical device, delivered by download to be installed on common IT hardware that fulfils the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and on which installation verification and validation activities succeed, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
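Since the hardware is out of scope, installation starts with a prerequisite check against the manufacturer-defined minimums. A toy sketch of such a check follows; the disk threshold and the Windows requirement are hypothetical placeholders, as the summary does not state the actual values:

```python
import platform
import shutil

# Hypothetical minimum; the real requirements are defined by the
# manufacturer and are not stated in this 510(k) summary.
MIN_FREE_DISK_GB = 100

def check_install_prerequisites() -> list[str]:
    """Return a list of unmet prerequisites (empty means OK to proceed)."""
    problems = []
    if platform.system() != "Windows":
        problems.append("a Microsoft Windows operating system is required")
    free_gb = shutil.disk_usage(".").free / 1e9
    if free_gb < MIN_FREE_DISK_GB:
        problems.append(f"{MIN_FREE_DISK_GB} GB free disk required, "
                        f"only {free_gb:.0f} GB available")
    return problems

if __name__ == "__main__":
    issues = check_install_prerequisites()
    print("ready to install" if not issues else "; ".join(issues))
```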
syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The subject device and the predicate device share the same fundamental scientific technology. This device description holds true for the subject device, syngo.via View&GO, software version VA20A, as well as the predicate device, syngo.via View&GO, software version VA10A.
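To make "interactive image viewing and processing" concrete: the most basic display operation such a workstation performs is the DICOM window/level (center/width) grayscale mapping. A minimal sketch using the open-source pydicom library follows; the file path and the soft-tissue window values are assumptions for illustration, not the device's code:

```python
import numpy as np
from pydicom import dcmread

ds = dcmread("ct_slice.dcm")  # hypothetical local DICOM file
pixels = ds.pixel_array.astype(np.float32)

# Map stored pixel values to modality units (Hounsfield units for CT)
# using the rescale slope/intercept, when present.
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
hu = pixels * slope + intercept

# Window/level: clip the value range of interest and stretch it to the
# display range. 40/400 is a common soft-tissue window (an assumption).
center, width = 40.0, 400.0
lo, hi = center - width / 2.0, center + width / 2.0
display = (np.clip(hu, lo, hi) - lo) / (hi - lo) * 255.0
display = display.astype(np.uint8)  # 8-bit grayscale, ready for a monitor
```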
The provided text is a 510(k) summary for the medical device syngo.via View&GO (Version VA20A). It compares this device to a predicate device (syngo.via View&GO, Version VA10A) and outlines the testing performed to demonstrate substantial equivalence.
Here's a breakdown of the requested information based on the provided document:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics in a quantitative manner. Instead, it describes "performance tests" that were conducted to test the "functionality of the device." The summary states:
Acceptance Criteria (Implicit from the document's claims):
- Continued conformance with special controls for medical devices containing software.
- All software specifications have met the acceptance criteria.
- Acceptable verification and validation for the device to support claims of substantial equivalence.
- The device performs comparably to and is as safe and effective as the predicate device.
- The device does not introduce any new significant potential safety risks.
Reported Device Performance:
The document states that the "results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." This implies that the device met all implicit acceptance criteria by successfully passing the functionality tests.
The document highlights the following general performance findings:
- The subject device (VA20A) and the predicate device (VA10A) share the same fundamental scientific technology.
- The changes between the predicate and subject device (e.g., added reprocessing algorithms, anatomical range presets, DICOM printer support) do not impact the safety and effectiveness of the subject device.
- The software documentation is in conformance with FDA's Guidance Document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
- Risk analysis was completed, and risk control was implemented to mitigate identified hazards.
- Cybersecurity requirements were met by implementing processes for preventing unauthorized access, modifications, misuse, or denial of use.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide details on the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective nature of the data). It broadly mentions "non-clinical tests were conducted for the device syngo.via View&GO during product development" and "testing for verification and validation for the device was found acceptable." This suggests internal testing by the manufacturer rather than a study with a specific patient dataset described for a test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The testing described is verification and validation of software functionality, not a clinical study involving human interpretation of images and ground truth established by experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. As mentioned above, the testing appears to be centered on software functionality verification and validation, not a clinical study requiring adjudication of expert readings.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader, multi-case (MRMC) comparative effectiveness study was not done or at least not described in this 510(k) summary. The device is a "Picture Archiving and Communications System" and a "software solution intended to be used for viewing, manipulation, communication, and storage of medical images." It is not described as an AI-powered diagnostic aid meant to directly improve human reader performance in the context of an MRMC study. The document explicitly states: "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel."
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
A standalone performance study focused on algorithmic diagnostic accuracy was not done or at least not described in this 510(k) summary for diagnostic purposes. The device is a viewing and processing software, not an algorithm providing diagnostic interpretations. The testing focused on technical functionality, safety, and equivalence to a predicate device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Since this submission pertains to a picture archiving and communication system (PACS) for viewing and manipulation, not a diagnostic AI algorithm, the concept of "ground truth" as typically used in diagnostic performance studies (e.g., pathology, clinical outcomes) for the entire device as a whole is not applicable or described. The "ground truth" for the verification and validation would likely be based on technical specifications, expected software behavior, and known standards (e.g., DICOM compliance).
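As a deliberately tiny example of what specification-based "ground truth" looks like for such software, the sketch below spot-checks a handful of required DICOM attributes with the open-source pydicom library. Real conformance testing against NEMA PS 3.x covers far more than this, and the file path is hypothetical:

```python
from pydicom import dcmread

# A few attributes every composite DICOM image object must carry;
# full conformance testing is far broader than this spot check.
REQUIRED_KEYWORDS = [
    "SOPClassUID",
    "SOPInstanceUID",
    "StudyInstanceUID",
    "SeriesInstanceUID",
    "Modality",
]

ds = dcmread("ct_slice.dcm")  # hypothetical local DICOM file
missing = [kw for kw in REQUIRED_KEYWORDS if kw not in ds]
print("missing required attributes:", missing if missing else "none")
```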
8. The sample size for the training set
This information is not provided and is not applicable as the device is not described as an AI model developed through machine learning with a distinct training set. It's a software solution with defined functionalities.
9. How the ground truth for the training set was established
This information is not provided and is not applicable as the device is not an AI model that undergoes a training phase requiring a "training set" with established "ground truth" in the traditional sense of machine learning for diagnostic tasks.
(23 days)
syngo.via View&GO (Version VA10A)
syngo.via View&GO is a software solution intended to be used for viewing, communication, and storage of medical images. It can be used as a stand-alone device or together with a variety of cleared and unmodified syngo based software options.
syngo.via View&GO supports interpretation of examinations within healthcare institutions, for example, in Radiology, Nuclear Medicine and Cardiology environments. The system is not intended for the displaying of digital mammography images for diagnosis in the U.S.
Siemens Healthcare GmbH intends to market the Picture Archiving and Communications System, syngo.via View&GO, software version VA10A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via, software version VB10A.
syngo.via View&GO is a software-only medical device, delivered by download to be installed on common IT hardware that fulfils the defined requirements. Any hardware platform that complies with the specified minimum hardware and software requirements, and on which installation verification and validation activities succeed, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.
syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.
syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., PACS) via DICOM.
syngo.via View&GO is based on Microsoft Windows operating systems.
syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.
The subject device and the predicate device share fundamental scientific technology.
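The installation verification these summaries reference typically begins with basic connectivity checks. As an illustrative sketch (not the vendor's test procedure), the following issues a DICOM C-ECHO, the "DICOM ping", against a PACS using the open-source pynetdicom library; the host, port, and AE title are hypothetical:

```python
from pynetdicom import AE
from pynetdicom.sop_class import Verification

# C-ECHO verifies that two DICOM nodes can associate and exchange a
# trivial message; it is the usual first step when validating a new
# workstation installation against the site's PACS.
ae = AE(ae_title="VIEWGO")
ae.add_requested_context(Verification)

assoc = ae.associate("pacs.example.org", 104)  # hypothetical PACS address
if assoc.is_established:
    status = assoc.send_c_echo()
    print(f"C-ECHO status 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association rejected, aborted, or never connected")
```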
The provided text describes a 510(k) premarket notification for syngo.via View&GO (Version VA10A), a Picture Archiving and Communications System (PACS). However, it does not contain the detailed information required to answer all parts of your request, specifically regarding a clinical study with detailed acceptance criteria, sample sizes, expert involvement, and ground truth establishment.
The document primarily focuses on demonstrating substantial equivalence to a predicate device (syngo.via VB10A) through comparisons of intended use, technological characteristics, and non-clinical performance testing. It highlights that syngo.via View&GO is a simplified version of the predicate device, with some functionalities removed.
Here's a breakdown of what can be extracted and what is missing:
1. A table of acceptance criteria and the reported device performance
The document states: "The testing results support that all the software specifications have met the acceptance criteria." However, it does not provide a specific table of acceptance criteria or quantitative performance metrics for the device’s functionality. It mentions "non-clinical tests were conducted for the device syngo.via View&GO during product development" to assess functionality, but the results themselves are not detailed in terms of specific performance against defined criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The testing described is "non-clinical" and focuses on software verification and validation, not clinical performance using a specific test set of patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided as the document does not describe a clinical study with expert-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided as the document does not describe a clinical study.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
This information is not provided. The device is a PACS system for viewing, manipulation, communication, and storage of medical images, and "no automated diagnostic interpretation capabilities like CAD are included." Therefore, an MRMC study assessing AI assistance is not applicable to this device based on the provided information.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
The document primarily describes a "software only medical device" that is a "standalone viewing and reading workplace" intended for use by medical professionals. The software itself is a standalone system in terms of its architecture (compared to the client-server predecessor), but its intended use involves human interpretation. It does not describe a standalone algorithmic performance study in the context of diagnostic interpretation, as it explicitly states it has no CAD functionalities.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided as there is no description of a clinical study that required ground truth establishment. The testing mentioned is "non-clinical" for software functionality.
8. The sample size for the training set
This information is not provided. The document describes a PACS system which displays existing medical images. It does not mention any machine learning or AI components that would require a "training set" of data in the typical sense for diagnostic algorithms.
9. How the ground truth for the training set was established
This information is not provided, as there is no mention of a training set or ground truth establishment for such a set.
Summary of what is available from the document:
The provided document describes a non-clinical performance testing approach for a PACS system, syngo.via View&GO. The "study" (non-clinical testing) aims to demonstrate that modifications to a previously cleared predicate device (syngo.via VB10A) do not introduce new safety risks and that the new device remains substantially equivalent for its intended use.
- Acceptance Criteria & Performance: The document states that "all the software specifications have met the acceptance criteria" based on non-clinical verification and validation testing. However, specific quantitative acceptance criteria and detailed performance metrics are not provided.
- Study Type: Non-clinical software verification and validation testing.
- Sample Size/Data Provenance: Not applicable for a typical clinical test set as described. The testing focuses on software functionality, not clinical performance on a dataset of patient cases.
- Expert/Ground Truth/Adjudication: Not applicable, as it's a non-clinical software validation without diagnostic AI features.
- MRMC Study: Not applicable, as the device does not have AI diagnostic interpretation capabilities.
- Standalone Performance: The device is described as a "standalone viewing and reading workplace" in terms of its architecture, but not in the context of an algorithm performing diagnoses without human involvement.
- Training Set/Ground Truth for Training: Not applicable/not mentioned, as there are no AI components requiring training.