
510(k) Data Aggregation

    K Number: K213665
    Date Cleared: 2022-06-21 (211 days)
    Regulation Number: 892.2050
    Device Name: Syngo Carbon Space VA20A

    Intended Use

    Syngo Carbon Space is a software intended to display medical data and to support the review and analysis of medical images by trained medical professionals.

    Syngo Carbon Space "Diagnostic Workspace" is indicated for the display, rendering, and post-processing of medical data (mostly medical images) within healthcare institutions, for example in the fields of Radiology, Nuclear Medicine, and Cardiology.

    Syngo Carbon Space "Physician Access" is indicated for display and rendering of medical data within healthcare institutions.

    Device Description

    Syngo Carbon Space is a software-only medical device intended to be installed on recommended common IT hardware; the hardware is not considered part of the medical device. Syngo Carbon Space is intended to support the review and analysis of medical images by trained medical practitioners. The software is used in Radiology for reading images and throughout healthcare institutions for image and result distribution.

    Syngo Carbon Space is a medical device provided in two variants:

    • Diagnostic Workspace (Fat/Thick Client)
    • Physician Access (Thin/Web Client)

    Both options can be installed and run on the same machine and used simultaneously.

    Syngo Carbon Space Diagnostic Workspace provides a reading workspace for Radiology that supports the display of medical image data and documents, and connects intelligent work tools (diagnostic and non-diagnostic software elements) to enable easy access to the needed data, easy access to external tools, and the creation of actionable results.

    Syngo Carbon Space Physician Access provides a zero-footprint web application for enterprise-wide viewing of DICOM, non-DICOM, multimedia data and clinical documents to facilitate image and result distribution in the healthcare institution.

    Syngo Carbon Space is based on a client-server architecture. The server processes and renders the data from the connected modalities. The server provides central services including image processing and temporary storage. The client provides the user interface for interactive image viewing and processing and can be installed and started on each workplace that has a network connection to the server.
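    The server-renders/client-displays split described above can be sketched in miniature. This is purely illustrative and not taken from the submission: the window/level transform shown is a standard grayscale display operation, and all values here are hypothetical.

```python
# Illustrative sketch only: the server side performs image processing
# (here, a window/level transform mapping raw values to 0-255 display
# values), while the thin client only presents the rendered result.

def server_render(pixels, window_center, window_width):
    """Map raw pixel values to 0-255 display values (window/level)."""
    lo = window_center - window_width / 2
    return [
        [max(0, min(255, round((px - lo) / window_width * 255))) for px in row]
        for row in pixels
    ]

def client_display(rendered):
    """Thin client: just presents what the server already rendered."""
    for row in rendered:
        print(" ".join(f"{v:3d}" for v in row))

# A one-row "image" spanning the full window.
client_display(server_render([[0, 512, 1024]], window_center=512, window_width=1024))
# prints "  0 128 255"
```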

    AI/ML Overview

    This overview addresses the acceptance criteria and the supporting study information for "Syngo Carbon Space VA20A", as described in the 510(k) submission.

    Based on the provided text, the device is software-only and intended for the display, review, and analysis of medical images. It does not perform diagnostic interpretation (no CAD functionality); instead, it supports trained medical professionals. The submission emphasizes non-clinical testing and comparison to a predicate device ("syngo.via VB40A").

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of acceptance criteria with corresponding reported device performance values in a quantitative manner. Instead, the performance is described in terms of verification and validation testing, and conformance to standards to demonstrate substantial equivalence to the predicate device.

    The "Comparison" table in Section 9 highlights several differences and states that the "Impact to Safety & Effectiveness" for these differences is "NA" or that "the necessary measures have been taken." This implies that the acceptance criteria for these aspects are met by demonstrating that the changes do not negatively impact safety and effectiveness or that the subject device performs at least as well as the predicate for enhanced features.

    For example, for "Measurement, Evaluation/Interpretation Tools," the subject device has "enhanced" functionalities like "Lesion Quantification." The impact is stated as "This differences between the predicate device and the subject device doesn't impact the safety and effectiveness of the subject device as the necessary measures taken." This suggests that the acceptance criteria for these enhanced tools were met by confirming their safe and effective operation, likely through internal validation.

    Similarly, for "Cyber Security," the enhancement of "System Hardening" is noted, and the impact is "The improved security function doesn't impact the safety and effectiveness of the subject device as the necessary measures taken." This implies the security performance meets enhanced criteria.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document states, "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." Therefore, there is no information about sample size for a clinical test set or data provenance.

    However, for the "Lesion Segmentation Algorithm," it mentions "phantom and reader studies." It does not specify the sample size, data provenance, or whether the study was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Since no clinical studies were performed and the device does not provide automated diagnostic interpretation, the document does not specify the number or qualifications of experts used to establish ground truth for a test set in the conventional sense of diagnostic accuracy studies. For the "Lesion Segmentation Algorithm" (which is not a diagnostic interpretation tool but a measurement tool), reader studies were conducted, implying expert involvement, but details are not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Given the absence of clinical studies and detailed information on the "phantom and reader studies" for the Lesion Segmentation Algorithm, no information is provided regarding an adjudication method for a test set.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with vs. without AI assistance?

    No MRMC comparative effectiveness study is mentioned. The device is not a CAD or AI assistance tool for diagnosis but an image management and processing system. The "Lesion Quantification" feature is stated to be "a non-Deep learning algorithm."
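    As a point of reference for what a "non-deep-learning algorithm" for lesion quantification can look like, here is a toy threshold-based sketch. This is an assumption-laden illustration of the general technique class, not the device's actual algorithm; the threshold, pixel spacing, and image are all hypothetical.

```python
# Illustrative only: a classical (non-deep-learning) measurement tool,
# segmenting by intensity threshold and reporting area from pixel count.

def segment_lesion(image, threshold):
    """Return a binary mask: True where intensity exceeds the threshold."""
    return [[px > threshold for px in row] for row in image]

def lesion_area_mm2(mask, pixel_spacing_mm):
    """Area = number of segmented pixels x area of one pixel."""
    n_pixels = sum(px for row in mask for px in row)
    return n_pixels * pixel_spacing_mm ** 2

# Toy 4x4 "image" with a bright 2x2 region standing in for a lesion.
image = [
    [10, 12, 11, 10],
    [11, 90, 95, 12],
    [10, 88, 92, 11],
    [12, 11, 10, 10],
]
mask = segment_lesion(image, threshold=50)
print(lesion_area_mm2(mask, pixel_spacing_mm=0.5))  # 4 pixels x 0.25 mm^2 = 1.0
```

    Algorithms of this kind are specified analytically, which is why their validation paradigm (discussed below) differs from that of trained models.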

    6. Whether a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop) was done

    The "Lesion Segmentation Algorithm" was evaluated in "phantom and reader studies." The fact that reader studies were mentioned suggests that human reading was involved in evaluating the algorithm's performance, but the document does not definitively state that a standalone performance evaluation of the algorithm itself (without human-in-the-loop) was conducted, nor does it provide performance metrics for such a standalone evaluation. The device as a whole is intended to "support the review and analysis of medical images by trained medical professionals," indicating a human-in-the-loop design.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the "Lesion Segmentation Algorithm," it is implied that expert readers were involved in the "reader studies." For the overall device, which is an image display and processing system, the ground truth for validation would likely involve adherence to technical specifications, image quality standards (e.g., DICOM conformance), and verification that processing functions yield expected results, rather than a clinical ground truth for disease presence.
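    Verification against technical specifications, as opposed to a clinical ground truth, can be sketched as a phantom-style check: a synthetic object with a known true size is measured, and the result must fall within a tolerance of that known value. The phantom, threshold, and tolerance below are hypothetical, in the spirit of the "phantom studies" the summary mentions.

```python
# Hedged sketch: verifying a measurement function against a synthetic
# phantom whose ground-truth lesion size is known by construction.

def make_phantom(size, lesion_side):
    """Square phantom with a bright square lesion of known side length."""
    return [
        [100 if (r < lesion_side and c < lesion_side) else 0 for c in range(size)]
        for r in range(size)
    ]

def measured_area(image, threshold=50):
    """Count pixels brighter than the threshold."""
    return sum(1 for row in image for px in row if px > threshold)

phantom = make_phantom(size=16, lesion_side=4)
expected = 4 * 4      # ground-truth lesion area in pixels, by construction
measured = measured_area(phantom)
tolerance = 0.05      # accept results within 5% of ground truth
assert abs(measured - expected) <= tolerance * expected
print(measured)  # 16
```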

    8. The sample size for the training set

    No information is provided about a training set size. The "Lesion Quantification" is described as a "non-Deep learning algorithm," which generally doesn't require a large training set in the same way as deep learning models.

    9. How the ground truth for the training set was established

    No information is provided about a training set or how its ground truth would have been established. As mentioned, the Lesion Quantification tool is a non-deep learning algorithm, suggesting a different development and validation paradigm than AI tools that rely heavily on large, expertly annotated training datasets.
