
510(k) Data Aggregation

    K Number: K230196
    Date Cleared: 2023-02-13 (19 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K213665

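    Records like the one above can also be pulled programmatically. As a minimal sketch (assuming FDA's public openFDA device/510k endpoint and its documented field names, which should be verified against the current API docs), the following builds the query URL for this K number and flattens the fields shown above:

```python
# Sketch (assumptions flagged): the openFDA device/510k endpoint and the field
# names (k_number, decision_date, regulation_number) follow the public openFDA
# schema; confirm against current API documentation before relying on this.
from urllib.parse import quote

OPENFDA_510K = "https://api.fda.gov/device/510k.json"

def build_510k_query(k_number: str) -> str:
    """Build an openFDA query URL for a single K number."""
    return f'{OPENFDA_510K}?search=k_number:"{quote(k_number)}"&limit=1'

def summarize(record: dict) -> str:
    """Flatten the fields shown in the record above into one line."""
    return (f'{record.get("k_number", "?")} | '
            f'cleared {record.get("decision_date", "?")} | '
            f'regulation {record.get("regulation_number", "?")}')

# Offline example using the values from the record above; a live fetch of
# build_510k_query("K230196") with urllib.request.urlopen would return JSON
# whose "results" list holds records of this shape.
sample = {"k_number": "K230196", "decision_date": "2023-02-13",
          "regulation_number": "892.2050"}
print(summarize(sample))  # K230196 | cleared 2023-02-13 | regulation 892.2050
```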
    Intended Use

    syngo.via View&GO is indicated for image rendering and post-processing of DICOM images to support the interpretation in the field of radiology, nuclear medicine and cardiology.

    Device Description

    Siemens Healthcare GmbH intends to market the Medical Image Management and Processing System, syngo.via View&GO, software version VA40A. This 510(k) submission describes several modifications to the previously cleared predicate device, syngo.via View&GO, software version VA30A.

    syngo.via View&GO is a software-only medical device, delivered by download and installed on common IT hardware. Any hardware platform that complies with the specified minimum hardware and software requirements, and for which installation verification and validation activities complete successfully, can be supported. The hardware itself is not considered part of the medical device syngo.via View&GO and is therefore not in the scope of this 510(k) submission.

    syngo.via View&GO provides tools and features covering the radiological tasks of preparing for reading, reading images, and supporting reporting. syngo.via View&GO supports DICOM-formatted images and objects.

    syngo.via View&GO is a standalone viewing and reading workplace, capable of rendering data from the connected modalities for post-processing activities. It provides the user interface for interactive image viewing and processing with limited short-term storage, which can be interfaced with any long-term storage (e.g., PACS) via DICOM. syngo.via View&GO is based on Microsoft Windows operating systems.

    syngo.via View&GO supports various monitor setups and can be adapted to a range of image types by connecting different monitor types.

    The subject device and the predicate device share fundamental scientific technology. This device description holds true for the subject device, syngo.via View&GO, software version VA40A, as well as for the predicate device, syngo.via View&GO, software version VA30A.

    AI/ML Overview

    The provided text is a 510(k) summary for the syngo.via View&GO VA40A software, seeking substantial equivalence to a predicate device (syngo.via View&GO VA30A). While it details the device, its intended use, and comparisons to the predicate, it does not contain information about specific acceptance criteria or the details of a study proving the device meets those criteria.

    The document states:

    • "Non-clinical tests were conducted for the device syngo.via View&GO during product development. The modifications described in this Premarket Notification were supported with verification and validation testing." (Page 14, Section 8)
    • "The testing results support that all the software specifications have met the acceptance criteria. Testing for verification and validation for the device was found acceptable to support the claims of substantial equivalence." (Page 14, Section 9)
    • "Performance tests were conducted to test the functionality of the device syngo.via View&GO. These tests have been performed to assess the functionality of the subject device. Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (Page 14, Section 10)

    However, it does not provide the specific "acceptance criteria" themselves, nor does it describe the details of the "study" (beyond mentioning "non-clinical tests" and "verification and validation testing") that would demonstrate performance against these criteria.

    Therefore, I cannot fulfill your request for the following information based solely on the provided text:

    1. A table of acceptance criteria and the reported device performance: The acceptance criteria are not explicitly listed, nor are the specific performance results against them. The document only generally states that "all software specifications have met the acceptance criteria."
    2. Sample sizes used for the test set and the data provenance: No information on sample sizes or data provenance (country, retrospective/prospective) for the test set is provided.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As no specific study details are given, this information is not present.
    4. Adjudication method for the test set: No information on adjudication is provided.
    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance: The document states the device is a "Medical Image Management and Processing System" and explicitly says "No automated diagnostic interpretation capabilities like CAD are included." (Page 9, CAD Functionalities table row). It is post-processing and viewing software, not an AI/CAD system designed to directly improve human diagnostic performance via AI assistance. An MRMC study of AI assistance would therefore likely not be relevant or performed for this device category.
    6. Whether standalone testing (i.e., algorithm-only, without human-in-the-loop performance) was done: The provided information points to functional and software verification/validation, which are forms of standalone testing, but no specific performance metrics are given.
    7. The type of ground truth used: Not specified.
    8. The sample size for the training set: The document implies this is not an AI/ML algorithm that requires a "training set" in the typical sense for clinical performance. The "Imaging algorithms" section (Page 7-8) mentions "bug-fixing and minor improvements" and "No re-training or change in algorithm models was performed," suggesting that existing, validated algorithms were refined.
    9. How the ground truth for the training set was established: Not applicable, as detailed above.

    In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence to a predicate device through software verification and validation and functional performance testing, rather than through a detailed clinical study with the specific acceptance criteria and performance metrics typically seen for AI/ML diagnostic aids. The changes introduced in VA40A compared to VA30A primarily concern software architecture, operating system support (Windows 11), minor algorithm bug fixes, user interface improvements, and the inclusion of a "Cinematic VRT" algorithm that was previously cleared.

    The "Imaging algorithms" section explicitly states: "The changes between the predicate device and the subject device doesn't impact the safety and effectiveness of the subject device as the necessary measures were taken for the safety and effectiveness of the subject device." This implies the focus was on ensuring the new version maintained the safety and effectiveness of the predicate, rather than on proving a statistically significant improvement via a new clinical study.
