
510(k) Data Aggregation

    K Number: K201477
    Date Cleared: 2020-07-01 (28 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    syngo.via View&GO is a software solution intended to be used for viewing, communication, and storage of medical images. It can be used as a stand-alone device or together with a variety of cleared and unmodified syngo-based software options.

    syngo.via View&GO supports interpretation of examinations within healthcare institutions, for example, in Radiology, Nuclear Medicine, and Cardiology environments. The system is not intended for displaying digital mammography images for diagnosis in the U.S.

    Device Description

    Siemens Healthcare GmbH intends to market the Picture Archiving and Communications System, syngo.via View&GO, software version VA20A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via View&GO, software version VA10A.

    syngo.via View&GO is a software-only medical device, delivered by download and installed on common IT hardware. Any hardware platform that complies with the specified minimum hardware and software requirements, and for which installation verification and validation activities have been completed successfully, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not within the scope of this 510(k) submission.
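    The submission does not state the actual minimum platform specification. As a minimal sketch of the kind of pre-install requirements check described above, assuming entirely hypothetical threshold values and OS names:

    ```python
    # Hypothetical minimum requirements, for illustration only; the real
    # values are defined in the vendor's platform specification.
    MIN_RAM_GB = 16
    MIN_FREE_DISK_GB = 100
    SUPPORTED_OS = ("Windows-10", "Windows-11")

    def meets_requirements(ram_gb: int, free_disk_gb: int, os_name: str) -> bool:
        """Return True if a host satisfies the (assumed) minimum platform spec."""
        return (
            ram_gb >= MIN_RAM_GB
            and free_disk_gb >= MIN_FREE_DISK_GB
            and any(os_name.startswith(prefix) for prefix in SUPPORTED_OS)
        )
    ```

    In practice such a check would be one input to the "successful installation verification" the submission refers to, alongside validation of the installed software itself.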

    syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.

    syngo.via View&GO is a standalone viewing and reading workplace. It is capable of rendering data from the connected modalities for post-processing activities. syngo.via View&GO provides the user interface for interactive image viewing and processing, with limited short-term storage that can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.
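    As background on the DICOM interoperability mentioned above (not from the submission itself): a DICOM Part 10 file begins with a 128-byte preamble followed by the 4-byte marker "DICM", which is how software typically recognizes DICOM-formatted objects before parsing them. A minimal sketch of that check:

    ```python
    def looks_like_dicom(data: bytes) -> bool:
        """Check for the DICOM file-format magic: a 128-byte preamble
        followed by the marker b"DICM" (per DICOM PS3.10)."""
        return len(data) >= 132 and data[128:132] == b"DICM"
    ```

    Real DICOM handling (e.g., parsing data elements or network exchange via C-STORE) goes well beyond this, but the magic-byte check is the standard first step.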

    syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.

    The subject device and the predicate device share the same fundamental scientific technology. This device description holds true for the subject device, syngo.via View&GO, software version VA20A, as well as the predicate device, syngo.via View&GO, software version VA10A.

    AI/ML Overview

    The provided text is a 510(k) summary for the medical device syngo.via View&GO (Version VA20A). It compares this device to a predicate device (syngo.via View&GO, Version VA10A) and outlines the testing performed to demonstrate substantial equivalence.

    Here's a breakdown of the requested information based on the provided document:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of acceptance criteria with corresponding quantitative performance metrics. Instead, it describes "performance tests" conducted to verify the "functionality of the device."

    Acceptance Criteria (Implicit from the document's claims):

    • Continued conformance with special controls for medical devices containing software.
    • All software specifications have met the acceptance criteria.
    • Acceptable verification and validation for the device to support claims of substantial equivalence.
    • The device performs comparably to and is as safe and effective as the predicate device.
    • The device does not introduce any new significant potential safety risks.

    Reported Device Performance:
    The document states that the "results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." This implies that the device met all implicit acceptance criteria by successfully passing the functionality tests.

    The document highlights the following general performance findings:

    • The subject device (VA20A) and the predicate device (VA10A) share the same fundamental scientific technology.
    • The changes between the predicate and subject device (e.g., added reprocessing algorithms, anatomical range presets, DICOM printer support) do not impact the safety and effectiveness of the subject device.
    • The software documentation is in conformance with FDA's Guidance Document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
    • Risk analysis was completed, and risk control was implemented to mitigate identified hazards.
    • Cybersecurity requirements were met by implementing processes for preventing unauthorized access, modifications, misuse, or denial of use.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not provide details on the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective nature of the data). It broadly mentions that "non-clinical tests were conducted for the device syngo.via View&GO during product development" and that "testing for verification and validation for the device was found acceptable." This suggests internal testing by the manufacturer rather than a study using a specifically described patient dataset as a test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. The testing described is verification and validation of software functionality, not a clinical study involving human interpretation of images and ground truth established by experts.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document. As mentioned above, the testing appears to be centered on software functionality verification and validation, not a clinical study requiring adjudication of expert readings.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    A multi-reader, multi-case (MRMC) comparative effectiveness study was not done or at least not described in this 510(k) summary. The device is a "Picture Archiving and Communications System" and a "software solution intended to be used for viewing, manipulation, communication, and storage of medical images." It is not described as an AI-powered diagnostic aid meant to directly improve human reader performance in the context of an MRMC study. The document explicitly states: "No automated diagnostic interpretation capabilities like CAD are included. All image data are to be interpreted by trained personnel."

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    A standalone performance study focused on algorithmic diagnostic accuracy was not done or at least not described in this 510(k) summary for diagnostic purposes. The device is a viewing and processing software, not an algorithm providing diagnostic interpretations. The testing focused on technical functionality, safety, and equivalence to a predicate device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Since this submission pertains to a picture archiving and communication system (PACS) for viewing and manipulation, not a diagnostic AI algorithm, the concept of "ground truth" as typically used in diagnostic performance studies (e.g., pathology, clinical outcomes) is not applicable to, or described for, the device as a whole. The "ground truth" for verification and validation would likely be based on technical specifications, expected software behavior, and known standards (e.g., DICOM compliance).

    8. The sample size for the training set

    This information is not provided and is not applicable as the device is not described as an AI model developed through machine learning with a distinct training set. It's a software solution with defined functionalities.

    9. How the ground truth for the training set was established

    This information is not provided and is not applicable as the device is not an AI model that undergoes a training phase requiring a "training set" with established "ground truth" in the traditional sense of machine learning for diagnostic tasks.
