
510(k) Data Aggregation

    K Number
    K252608
    Date Cleared
    2025-09-09

    (22 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    N/A
    Applicant Name (Manufacturer): Siemens Healthcare GmbH
    Intended Use

    The MAGNETOM system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal and oblique cross sectional images, spectroscopic images and/or spectra, and that displays the internal structure and/or function of the head, body, or extremities. Other physical parameters derived from the images and/or spectra may also be produced. Depending on the region of interest, contrast agents may be used. These images and/or spectra and the physical parameters derived from the images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.

    The MAGNETOM system may also be used for imaging during interventional procedures when performed with MR compatible devices such as in-room displays and MR Safe biopsy needles.

    Device Description

    The subject device, MAGNETOM Avanto Fit with software syngo MR XA70A, consists of new and modified software and hardware that are similar to what is currently offered on the predicate device, MAGNETOM Avanto Fit with syngo MR XA50A (K220151).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Avanto Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera
    BM Head/Neck 20

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    myExam Autopilot Brain
    myExam Autopilot Knee
    3D Whole Heart
    HASTE_interactive
    GRE_PC
    Open Recon
    Deep Resolve Gain
    Fleet Reference Scan
    Physio logging
    complex averaging
    AutoMate Cardiac
    Ghost Reduction
    BLADE diffusion
    Beat Sensor
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode
    myExam Angio Advanced Assist (Test Bolus)

    The subject device, MAGNETOM Skyra Fit with software syngo MR XA70A, consists of new and modified software and hardware that are similar to what is currently offered on the predicate device, MAGNETOM Skyra Fit with syngo MR XA50A (K220589).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Skyra Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    Beat Sensor
    HASTE_interactive
    GRE_PC
    3D Whole Heart
    Deep Resolve Gain
    Open Recon
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    HASTE diffusion
    Physio logging
    complex averaging
    Deep Resolve Swift Brain
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    SVS_EDIT

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode
    myExam Angio Advanced Assist (Test Bolus)

    The subject device, MAGNETOM Sola Fit with software syngo MR XA70A, consists of new and modified software and hardware that are similar to what is currently offered on the predicate device, MAGNETOM Sola Fit with syngo MR XA51A (K221733).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Sola Fit with syngo MR XA70:

    Hardware

    New Hardware:
    myExam 3D Camera

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    GRE_PC
    3D Whole Heart
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    Physio logging
    Open Recon
    Complex averaging
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    Implant suite

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode

    The subject device, MAGNETOM Viato.Mobile with software syngo MR XA70A, consists of new and modified software and hardware that are similar to what is currently offered on the predicate device, MAGNETOM Viato.Mobile with syngo MR XA51A (K240608).

    A high-level summary of the new and modified hardware and software is provided below:

    For MAGNETOM Viato.Mobile with syngo MR XA70:

    Hardware

    New Hardware:
    N/A

    Modified Hardware:
    Sanaflex (cushions for patient positioning)

    Software

    New Features and Applications:
    GRE_PC
    3D Whole Heart
    Ghost Reduction
    Fleet Reference Scan
    BLADE diffusion
    Physio logging
    Open Recon
    Complex averaging
    Deep Resolve Sharp
    Deep Resolve Boost and Deep Resolve Boost (TSE)
    Deep Resolve Boost HASTE
    Deep Resolve Boost EPI Diffusion
    AutoMate Cardiac
    Implant suite

    Modified Features and Applications:
    SPACE improvement (high band)
    SPACE improvement (incr grad)
    Brain Assist
    Eco power mode

    Furthermore, the following minor updates and changes were conducted for the subject devices:

    Low SAR Protocol minor update (for all subject devices except MAGNETOM Skyra Fit): the goal of the SAR-adaptive protocols was to enable knee, spine, heart, and brain examinations at 50% of the maximum allowed SAR values in normal mode for head and whole-body SAR. The SAR reduction was achieved by parameter adaptations such as flip angle, TR, RF pulse type, turbo factor, and concatenations. For cardiac imaging, clinically accepted alternative imaging contrasts are used (submitted with K232494).
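As a rough illustration of why these parameter adaptations lower SAR (this is a generic first-order approximation, not the vendor's actual SAR model): RF power deposition scales approximately with the square of the flip angle and with the number of RF pulses per unit time.

```python
# Illustrative first-order estimate of how protocol changes scale SAR.
# Assumption: SAR is roughly proportional to flip angle squared (B1^2)
# and to RF pulses per unit time. This is NOT the vendor's SAR model.
def relative_sar(flip_deg, tr_ms, turbo_factor, *,
                 ref_flip_deg, ref_tr_ms, ref_turbo_factor):
    """Return estimated SAR of a protocol relative to a reference protocol."""
    rf_power = (flip_deg / ref_flip_deg) ** 2                       # B1^2 scaling
    duty = (turbo_factor / tr_ms) / (ref_turbo_factor / ref_tr_ms)  # pulses per ms
    return rf_power * duty

# Halving the flip angle at fixed TR and turbo factor quarters the estimate.
print(relative_sar(90, 4000, 16,
                   ref_flip_deg=180, ref_tr_ms=4000, ref_turbo_factor=16))  # → 0.25
```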

    Implementation of image sorting preparation for PACS (submitted with K231560).

    Implementation of improved DICOM color support (submitted with K232494).

    The Needle intervention AddIn was added to all subject devices (submitted with K232494).

    Inline Image Filter switchable for users: in the subject device, users can switch the "Inline image filter" (implicit filter) on or off. This filter is an image-based filter that can be applied to specific pulse sequence types. The function of the filter remains unchanged from the previous device MAGNETOM Sola with syngo MR XA61A (K232535).

    SVS_EDIT is newly added for MAGNETOM Skyra Fit, but without any changes (submitted with K203443).

    Brain Assist received an improvement and is identical to that of syngo MR XA61A (K232535).

    Open Recon is introduced for all systems. The function of Open Recon remains unchanged from the previous submissions (submitted with K221733).

    Lock TR and FA in BOLD received a minor UI update.

    Implant Suite is newly introduced for MAGNETOM Sola Fit and MAGNETOM Viato.Mobile, but without any changes (submitted with K232535).

    myExam Autopilot Brain and myExam Autopilot Knee are newly introduced for the subject device MAGNETOM Avanto Fit and are unchanged from previous submissions (submitted with K221733).

    myExam Angio Advanced Assist (Test Bolus) received bug fixes and minimal UI improvements.

    AI/ML Overview

    The provided text is an FDA 510(k) clearance letter for various MAGNETOM MRI Systems. While it details new and modified software and hardware features, it does not include specific acceptance criteria or a study that "proves the device meets the acceptance criteria" in terms of performance metrics like sensitivity, specificity, or accuracy for a diagnostic task.

    Instead, the document focuses on demonstrating substantial equivalence to predicate devices. This is achieved by:

    • Stating that the indications for use are the same.
    • Listing numerous predicate and reference devices.
    • Detailing hardware and software changes.
    • Mentioning non-clinical tests like software verification and validation, sample clinical images, and image quality assessment to show that the new features maintain an "equivalent safety and performance profile" to the predicate devices.
    • Referencing scientific publications for certain features to support their underlying principles and utility.
    • Briefly describing the training and validation data for two AI features: Deep Resolve Boost and Deep Resolve Sharp, but without performance acceptance criteria or detailed results.

    Therefore, much of the requested information cannot be extracted from this document because it is not a study report detailing clinical performance against predefined acceptance criteria for a specific diagnostic outcome.

    However, I can extract the information related to the AI features as best as possible from the "AI Features/Applications training and validation" section (Page 16).


    Acceptance Criteria and Study Details (Limited to AI Features)

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature: Deep Resolve Boost
    Acceptance Criteria: Not explicitly stated in the provided document as specific numerical thresholds, but implied through evaluation metrics.
    Reported Device Performance: "The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Most importantly, the performance was evaluated by visual comparisons to evaluate e.g., aliasing artifacts, image sharpness and denoising levels." (Exact numerical results not provided.)

    Feature: Deep Resolve Sharp
    Acceptance Criteria: Not explicitly stated in the provided document as specific numerical thresholds, but implied through evaluation metrics and verification activities.
    Reported Device Performance: "The impact of the network has been characterized by several quality metrics such as peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and perceptual loss. In addition, the feature has been verified and validated by inhouse tests. These tests include visual rating and an evaluation of image sharpness by intensity profile comparisons of reconstructions with and without Deep Resolve Sharp." (Exact numerical results not provided.)
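PSNR and SSIM are standard image-quality metrics. A minimal sketch of both (the SSIM here is a simplified single-window version computed over the whole image; the standard metric averages over local windows):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Simplified SSIM over the whole image (single window, standard constants)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                                   # stand-in image
noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)
print(f"PSNR: {psnr(clean, noisy):.1f} dB, SSIM: {ssim_global(clean, noisy):.3f}")
```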

    2. Sample size used for the test set and the data provenance

    • Deep Resolve Boost:
      • Test Set Sample Size: Not explicitly stated as a separate "test set" size. The document mentions "training and validation data" for over 25,000 TSE slices, over 10,000 HASTE slices (for refinement), and over 1,000,000 EPI Diffusion slices. It's unclear what proportion of this was used specifically for final testing, or if the "validation" mentioned includes the final performance evaluation.
      • Data Provenance: Retrospective, described as "Input data was retrospectively created from the ground truth by data manipulation and augmentation." Country of origin is not specified.
    • Deep Resolve Sharp:
      • Test Set Sample Size: Not explicitly stated as a separate "test set" size. The document mentions "training and validation" on more than 10,000 high resolution 2D images. Similar to Deep Resolve Boost, it's unclear what proportion was specifically for final testing.
      • Data Provenance: Retrospective, described as "Input data was retrospectively created from the ground truth by data manipulation." Country of origin is not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. The definition of "ground truth" for the AI features refers to the acquired datasets themselves rather than expert-labeled annotations. Visual comparisons are mentioned as part of the evaluation, but without details on expert involvement or qualifications.

    4. Adjudication method for the test set

    This information is not provided in the document. While "visual comparisons" and "visual rating" are mentioned, no specific adjudication method (e.g., 2+1, 3+1) is described.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No, an MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance is not described in this document. The focus of the AI features (Deep Resolve Boost and Deep Resolve Sharp) is on image quality enhancement (denoising, sharpness) and reconstruction rather than assisting human readers in a diagnostic task that can be quantified by an effect size.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, the evaluation of Deep Resolve Boost and Deep Resolve Sharp, based on metrics like PSNR, SSIM, and perceptual loss, and "visual comparisons" or "visual rating" appears to be an assessment of the algorithm's performance in enhancing image quality in a standalone capacity, without direct human-in-the-loop interaction for diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Deep Resolve Boost: "The acquired datasets (as described above) represent the ground truth for the training and validation." This implies the original, full-quality, unaltered MRI scan data. Further, "Input data was retrospectively created from the ground truth by data manipulation and augmentation. This process includes further under-sampling of the data by discarding k-space lines, lowering of the SNR level by addition of noise and mirroring of k-space data."
    • Deep Resolve Sharp: "The acquired datasets represent the ground truth for the training and validation." Similar to Boost, this refers to original, high-resolution MRI scan data. For training, "k-space data has been cropped such that only the center part of the data was used as input. With this method corresponding low-resolution data as input and high-resolution data as output / ground truth were created for training and validation."
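The described data-degradation pipelines can be sketched generically with NumPy FFTs. The array sizes, crop factor, and noise level below are arbitrary illustrations, not the manufacturer's values:

```python
import numpy as np

rng = np.random.default_rng(42)
ground_truth = rng.random((128, 128))      # stands in for a full-quality slice
n = ground_truth.shape[0]

# Sharp-style pairing: crop k-space to its centre to create a low-resolution
# input whose training target is the original high-resolution image.
k = np.fft.fftshift(np.fft.fft2(ground_truth))
c = n // 4                                  # keep the central half in each axis
low_res_k = np.zeros_like(k)
low_res_k[n//2 - c:n//2 + c, n//2 - c:n//2 + c] = k[n//2 - c:n//2 + c, n//2 - c:n//2 + c]
low_res_input = np.abs(np.fft.ifft2(np.fft.ifftshift(low_res_k)))

# Boost-style degradation: discard k-space lines (under-sampling) and add
# noise (real-valued here for simplicity) to lower the SNR.
undersampled_k = k.copy()
undersampled_k[::2, :] = 0                  # drop every other phase-encode line
undersampled_k += rng.normal(0, k.std() * 0.01, k.shape)
noisy_input = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled_k)))

print(low_res_input.shape, noisy_input.shape)
```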

    8. The sample size for the training set

    • Deep Resolve Boost:
      • TSE: more than 25,000 slices
      • HASTE (for refinement): more than 10,000 HASTE slices
      • EPI Diffusion: more than 1,000,000 slices
    • Deep Resolve Sharp: more than 10,000 high resolution 2D images.

    9. How the ground truth for the training set was established

    • Deep Resolve Boost: The ground truth was established by the "acquired datasets" themselves (full-quality MRI scans). The training input data was then derived from this ground truth by simulating degraded images (e.g., under-sampling, adding noise).
    • Deep Resolve Sharp: Similarly, the ground truth was the "acquired datasets" (high-resolution MRI scans). The training input data was derived by cropping k-space data to create corresponding low-resolution inputs.

    K Number
    K242551
    Date Cleared
    2025-04-03

    (219 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer): Siemens Healthcare GmbH
    Intended Use

    syngo Dynamics is a multimodality, vendor agnostic Cardiology image and information system intended for medical image management and processing that provides capabilities relating to the review and digital processing of medical images.

    syngo Dynamics supports clinicians by providing post image processing functions for image manipulation, and/or quantification that are intended for use in the interpretation and analysis of medical images for disease detection, diagnosis, and/or patient management within the healthcare institution's network.

    syngo Dynamics is not intended to be used for display or diagnosis of digital mammography images in the U.S.

    Device Description

    syngo Dynamics is a software only medical device which is used with common IT hardware. Recommended configurations are defined for the hardware required to run the device, and hardware is not considered as part of the medical device.

    syngo Dynamics is intended to be used by trained healthcare professionals in a professional healthcare facility to review, edit, and manipulate image data, as well as to generate quantitative data, qualitative data, and diagnostic reports.

    syngo Dynamics is a digital image display and reporting system with flexible deployment – it can function as a standalone medical device that includes a DICOM Server or as an integrated module within an Electronic Health Record (EHR) System with a DICOM Archive that receives images from digital image acquisition devices such as ultrasound and x-ray angiography machines. There are three deployments: Standalone, EHR/EHS Integrated, and Multi-Modality Cardiovascular (MMCV). The MMCV deployment functions as a standalone medical device with the capability to natively support 2D and 3D CT and MR image types.

    The use of syngo Dynamics is focused on cardiac ultrasound (echocardiography), angiography (x-ray), cardiac nuclear medicine (NM), CT and MR studies that cover both adult and pediatric medicine. Also supported is vascular ultrasound and ultrasound in Obstetrics/Gynecology and Maternal Fetal Medicine (fetal echocardiography during pregnancy).

    syngo Dynamics is based on a client-server architecture. The syngo Dynamics server processes the data from the connected imaging modalities, and stores data and images to a DICOM server and routes them for permanent storage, printing, and review. The client provides the user interface for interactive image viewing, reporting, and processing; and can be installed on network connected workstations.

    syngo Dynamics provides various semi-automated anatomical visualization tools.

    syngo Dynamics offers multiple access strategies: A Workplace that provides full functionality for reading and reporting; A Remote Workplace that provides additionally compressed images with access to full fidelity images for reading and reporting; and a browser based WebViewer that provides access to additionally compressed images and reports from compatible devices (including mobile devices).

    In the United States, monitors (displays) should not be used for diagnosis, unless the monitor (display) has specifically received 510(k) clearance for this purpose.

    AI/ML Overview

    This FDA 510(k) clearance letter pertains to syngo Dynamics (Version VA41D), a Medical Image Management and Processing System (MIMPS). While the document broadly discusses the device's substantial equivalence to a predicate device (syngo Dynamics VA40F) and its general functionalities, the only specific AI/ML-enabled function for which performance data and acceptance criteria are detailed is the Auto EF algorithm for calculating left ventricular ejection fraction from ultrasound images.

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based specifically on the Auto EF algorithm information provided:


    1. Table of Acceptance Criteria and Reported Device Performance (Auto EF Algorithm)

    The document states that "Additional acceptance criteria were defined with a total of 12 predetermined acceptance criteria," but only explicitly details one primary statistical criterion and provides summarized performance for a few other aspects.

    Acceptance Criterion: Pearson's correlation coefficient (r) between biplane EF generated by Auto EF and ground truth ≥ 0.800
    Reported Device Performance: 0.822 (compared to 0.826 for predicate VA40F)

    Acceptance Criterion: Increased percentage of cases with biplane EF results
    Reported Device Performance: 93.3% (140 of 150 cases, compared to 92.0% for predicate VA40F)

    Acceptance Criterion: Bias of absolute EF
    Reported Device Performance: Minimal, -0.2% (unchanged from predicate VA40F)

    Acceptance Criterion: Percentage of cases where absolute biplane EF delta between Auto EF and GT ≤ 10%
    Reported Device Performance: 87.9% (compared to 83.7% for predicate VA40F)

    Acceptance Criterion: All 12 predetermined acceptance criteria
    Reported Device Performance: Exceeded all 12 defined acceptance criteria.
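The primary statistical criterion is an ordinary Pearson correlation between paired EF values. A minimal sketch with hypothetical paired values (not the study data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

# Hypothetical pairs: algorithm biplane EF vs. ground-truth EF (percent).
auto_ef = [55, 62, 40, 35, 68, 50, 58, 45]
gt_ef   = [57, 60, 42, 33, 70, 48, 60, 47]

r = pearson_r(auto_ef, gt_ef)
print(f"r = {r:.3f}, meets criterion (>= 0.800): {r >= 0.800}")
```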

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: n = 150 cases.
    • Data Provenance: The test data originated from 3 sites in the U.S., representing geographic diversity from 2 different regions. The data was collected retrospectively, as it was independent of the training data. The document states it is "representative of the intended use population for Auto EF" and balanced for gender, covering ages 21-93 years and BMIs 16.5-48.8. It also included data from three ultrasound manufacturers (Philips, GE, and Siemens).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 2 experienced sonographers.
    • Qualifications: "experienced sonographers." Specific details regarding their years of experience or board certifications are not provided in the document.

    4. Adjudication Method for the Test Set

    • Adjudication Method: The two sonographers worked independently to establish the ground truth. There is no mention of a formal adjudication process (e.g., 2+1, 3+1), arbitration by a third expert, or a consensus meeting after independent readings. They "did not have access to Auto EF when establishing the ground truth."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No. The document explicitly states, "No clinical studies were carried out for syngo Dynamics (Version VA41D). All performance testing was conducted in a non-clinical fashion as part of the verification and validation activities for the medical device." The evaluation focused on the algorithm's performance against ground truth, not on human reader performance with or without AI assistance.
    • Effect Size of Human Reader Improvement: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm-Only) Performance Study

    • Standalone Study: Yes. The performance validation of the Auto EF algorithm was conducted in a standalone manner. The "Auto EF results with the subject device" were compared directly against the established ground truth. The algorithm processed the images and generated biplane EF values without human intervention in the calculation process, although the system allows users to "review, edit or reject the results."

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus with a conventional manual method based on the "Method of Disks" (MOD), also known as the Modified Simpson's Rule. The ground truth was established by two independent sonographers calculating left ventricular volumes and ejection fraction.
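The Method of Disks can be sketched as follows: the ventricle is sliced into N disks, each treated as an ellipse whose two orthogonal diameters come from the 4-chamber and 2-chamber views. The diameters and lengths below are hypothetical; real implementations derive them from traced endocardial contours.

```python
import math

def biplane_volume(diam_4ch, diam_2ch, length):
    """LV volume by the Method of Disks (modified Simpson's rule):
    sum of elliptical disk volumes, V = sum(pi/4 * a_i * b_i * L/N)."""
    n = len(diam_4ch)
    assert n == len(diam_2ch)
    return sum(math.pi / 4 * a * b * (length / n)
               for a, b in zip(diam_4ch, diam_2ch))

# Hypothetical contours: 20 disk diameters (cm) at end-diastole and end-systole.
ed_4ch = [4.0] * 20; ed_2ch = [4.2] * 20
es_4ch = [3.0] * 20; es_2ch = [3.1] * 20
edv = biplane_volume(ed_4ch, ed_2ch, length=9.0)   # end-diastolic volume (mL)
esv = biplane_volume(es_4ch, es_2ch, length=7.5)   # end-systolic volume (mL)
ef = (edv - esv) / edv * 100                       # ejection fraction (%)
print(f"EDV={edv:.0f} mL, ESV={esv:.0f} mL, EF={ef:.1f}%")
```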

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The document mentions the algorithm was "re-trained with more training data" compared to the predicate device, but does not provide a specific number.

    9. How the Ground Truth for the Training Set Was Established

    • Training Set Ground Truth Establishment: Not explicitly detailed. The document only states that the "LV auto contouring algorithm has been updated with pre-training and additional annotated training data." It does not specify the method (e.g., expert consensus, manual contouring) or the number/qualifications of experts involved in annotating the training data. However, given that the test set ground truth was established by sonographers using the Method of Disks, it is highly probable that a similar methodology was used for the training data annotation.

    K Number
    K242745
    Date Cleared
    2025-03-27

    (197 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer): Siemens Healthcare GmbH
    Intended Use

    AI-Rad Companion Organs RT is a post-processing software intended to automatically contour DICOM CT and MR pre-defined structures using deep-learning-based algorithms.

    Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.

    The outputs of AI-Rad Companion Organs RT are intended to be used by trained medical professionals.

    The software is not intended to automatically detect or contour lesions.

    Device Description

    AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.

    CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, generated contours in DICOM RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before sending them to the target system. Optionally, the user may select to directly transfer the contours to a configurable DICOM node (e.g., the Treatment Planning System (TPS), which is the standard location for the planning of radiation therapy).

    AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept the automatically generated contours. Then the output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals, who can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance Study for AI-Rad Companion Organs RT

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the AI-Rad Companion Organs RT device, particularly for the enhanced CT contouring algorithm, are based on comparing its performance to the predicate device and relevant literature/cleared devices. The primary metrics used are Dice coefficient and Absolute Symmetric Surface Distance (ASSD).

    Table 3: Acceptance Criteria of AIRC Organs RT VA50

    Validation Testing Subject: Organs in Predicate Device
    Acceptance Criteria: All organs segmented in the predicate device are also segmented in the subject device.
    Reported Device Performance (Summary): Confirmed. The device continued to segment all organs previously handled by the predicate.

    The average (AVG) Dice score difference between the subject and predicate device is
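Both metrics can be sketched in a few lines. Assumption: ASSD here is taken as the commonly used average symmetric surface distance; the masks and boundary points are toy examples, and real implementations use distance transforms rather than brute force.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def assd(pts_a, pts_b):
    """Average symmetric surface distance between two boundary point sets
    (brute-force pairwise distances; illustrative only)."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2

pred = np.zeros((32, 32), bool); pred[8:24, 8:24] = True   # toy prediction
gt   = np.zeros((32, 32), bool); gt[9:24, 8:24]  = True    # toy ground truth
print(f"Dice = {dice(pred, gt):.3f}")
print(f"ASSD = {assd([(8, 8), (8, 24)], [(9, 8), (9, 24)]):.2f}")
```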

    K Number
    K241770
    Date Cleared
    2025-03-05

    (258 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Applicant Name (Manufacturer): Siemens Healthcare GmbH
    Intended Use

    Prostate MR AI is a plug-in Radiological Computer Assisted Detection and Diagnosis Software device intended to be used:

    • with a separate hosting application
    • as a concurrent reading aid to assist radiologists in the interpretation of a prostate MRI examination acquired according to the PI-RADS standard
    • in adult men (40 years and older) with suspected cancer in treatment-naïve prostate glands

    The plug-in software analyzes non-contrast T2 weighted (T2W) and diffusion weighted image (DWI) series to segment the prostate gland and to provide an automatic detection and segmentation of regions suspicious for cancer. For each suspicious region detected, the algorithm moreover provides a lesion Score, by way of PI-RADS interpretation suggestion. Outputs of the device should be interpreted consistently with ACR recommendations using all available MR data (e.g., dynamic contrast enhanced images [if available]). Patient management decisions should not be made solely based on analysis by the Prostate MR AI algorithm.

    Device Description

    This premarket notification addresses the Siemens Healthineers Prostate MR AI (VA10A) Radiological Computer Assisted Detection and Diagnosis Software (CADe/CADx). Prostate MR AI is a Computer Assisted Detection and Diagnosis algorithm designed to plug into a hosting workflow that assists radiologists in the detection of suspicious lesions and their classification. It is used as a concurrent reading aid to assist radiologists in the interpretation of a prostate MRI examination acquired according to the PI-RADS standard. The automatic lesion detection requires transversal T2W and DWI series as inputs. The device automatically exports a list of detected prostate regions that are suspicious for cancer (each list entry consists of contours and a classification by Score and Level of Suspicion (LoS)), a computed suspicion map, and a per-case LoS. The results of the Prostate MR AI plug-in (with the case-level LoS, lesion center points, lesion diameters, lesion ADC median, lesion 10th percentile, suspicion map, and non-PZ segmentation considered optional) are to be shown in a hosting application that allows the radiologist to view the original case, as well as confirm, reject, or edit lesion candidates with their contours and Scores as generated by the Prostate MR AI plug-in. Moreover, the radiologist can add lesions with contours and PI-RADS scores and finalize the case. In addition, the outputs include an automatically computed prostate segmentation, as well as sub-segmentations of the peripheral zone and the rest of the prostate (non-PZ). The algorithm will augment the prostate workflow of currently cleared syngo.MR General Engine if activated via a separate license on the General Engine.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Automatic Prostate Segmentation

    • Criterion: The median Dice score between the AI algorithm results and the ground truth masks exceeds 0.9. Reported: The median Dice score between the AI algorithm results and the corresponding ground truth masks exceeds the threshold of 0.9.
    • Criterion: The median normalized volume difference between the algorithm results and the ground truth masks is within ±5%. Reported: The median normalized volume difference between the algorithm results and the corresponding ground truth masks is within a ±5% range.
    • Criterion: The AI algorithm results are statistically non-inferior to individual reader variability (5% margin of error, 5% significance level). Reported: Compared to any individual reader, the AI algorithm results are statistically non-inferior based on the variability that existed among the individual readers, within the 5% margin of error and 5% significance level.

    Prostate Lesion Detection and Classification

    • Criterion: Case-level sensitivity of lesion detection ≥ 0.80 for both radiology and pathology ground truth. Reported: The case-level sensitivity of lesion detection is equal to or greater than 0.80 for both radiology and pathology ground truth.
    • Criterion: False positive rate per case of lesion detection
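The segmentation criteria above (median Dice exceeding 0.9, median normalized volume difference within ±5%) can be computed directly from binary masks. A minimal sketch, assuming the masks are NumPy boolean arrays of equal shape:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def normalized_volume_difference(pred, truth):
    """Signed volume difference of the prediction relative to ground truth."""
    v_pred = np.asarray(pred, dtype=bool).sum()
    v_truth = np.asarray(truth, dtype=bool).sum()
    return (v_pred - v_truth) / v_truth

def segmentation_criteria_met(dice_scores, volume_diffs):
    """Median Dice > 0.9 and median normalized volume difference within ±5%."""
    return bool(np.median(dice_scores) > 0.9
                and abs(np.median(volume_diffs)) <= 0.05)
```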

    K Number
    K240796
    Date Cleared
    2024-08-06

    (137 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer) :

    Siemens Healthcare GmbH

    Intended Use

    MyAblation Guide is a software application for image processing, 2D/3D visualization, and comparison of medical images imported from multiple imaging modalities.

    The software is controlled by the end user interface on a workstation with DICOM connectivity or as an integrated version on a Siemens CT scanner workstation.

    The application is used to assist in the preparation and performance of ablative procedures, including ablation targets, virtual ablation probe placement, and contouring of ablated areas, as well as supporting the user in their assessment of the treatment. The application can only be used by trained users.

    The software is not intended for diagnosis and is not intended to predict ablation volumes or predict ablation success.

    Device Description

    myAblation Guide is a software medical device used in the context of percutaneous ablative procedures with straight instruments. It is used by clinical professionals on hospital premises and can be deployed either on compatible CT scanners or on a computer workstation.

    The application is operated by medical professionals, such as interventional radiologists and medical technologists, holding a current license and/or certification as required by the regional authority. myAblation Guide allows its functions to be operated in an arbitrary sequence; in addition, it includes a structured sequence of steps for ease of use.

    The application supports anatomical datasets from CT, MR, CBCT, as well as PET/CT.

    The application includes means and functionalities to support in:

    • Multimodality viewing and contouring of anatomical and multi-parametric images such as CT, CBCT, PET/CT, and MRI
    • Multiplanar reconstruction (MPR) thin/thick, minimum intensity projection (MIP), volume rendering technique (VRT)
    • Freehand and semi-automatic contouring of regions of interest in any orientation, including oblique
    • Manual and semi-automatic registration using rigid and deformable registration
    • Expansion of created contour structures to visualize a safety margin
    • Functionality to support the user in creating virtual ablation needle paths and associated virtual ablation zones derived from manufacturer data
    • Export of virtual needle paths in the DICOM SSO format
    • Supports the user in comparing, contouring, and ablation needle planning based on datasets acquired with different imaging modalities
    • Supports multimodality image fusion
    • Supports the user's procedure flow via a task stepper

    Thermal ablation cannot be triggered from myAblation Guide.
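One of the listed functions, expansion of created contour structures to visualize a safety margin, amounts to morphological dilation of the contoured region. A minimal 2D sketch in pure NumPy; this is an illustration of the concept, not the device's algorithm:

```python
import numpy as np

def expand_margin(mask, voxels):
    """Grow a binary 2D mask by `voxels` steps of 4-connected dilation.

    A clinical implementation would work on the 3D volume and convert a
    margin given in millimetres to voxels using the DICOM pixel spacing.
    """
    out = np.asarray(mask, dtype=bool).copy()
    for _ in range(voxels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # neighbour above
        grown[:-1, :] |= out[1:, :]   # neighbour below
        grown[:, 1:] |= out[:, :-1]   # neighbour left
        grown[:, :-1] |= out[:, 1:]   # neighbour right
        out = grown
    return out
```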

    AI/ML Overview

    The provided text details the 510(k) submission for the myAblation Guide (VB80A) device. It includes information on non-clinical testing performed to demonstrate the device meets established design criteria.

    Here's an organized breakdown of the acceptance criteria and study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    • Lesion segmentation — Acceptance criterion (implied benchmark): Dice score 0.82 (from the Moltz et al. study); no sensitivity target stated. Reported: Dice score 0.65 (all lesion types); sensitivity 0.82 (all lesion types).
    • Ablation zone segmentation — Acceptance criterion: no specific numerical target stated. Reported: Dice score 0.65; sensitivity 0.95.

    Note on Acceptance Criteria: The document implies the Moltz et al. study's Dice coefficient of 0.82 on liver metastases as a benchmark, stating "the algorithm effectively demonstrated the segmentation of both hyperdense and hypodense lesions... With a Dice coefficient (Dice similarity index) of 0.82". For the internal study, the reported Dice scores and sensitivities appear to be the performance metrics being presented to demonstrate functionality rather than explicitly stated "acceptance criteria" that must be met. However, for the purpose of this exercise, we can infer that these reported values demonstrate the device's acceptable performance.

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Lesion Segmentation (Moltz et al. study): 5 different datasets comprising 10 liver metastases. Data provenance is not specified (e.g., country of origin, retrospective/prospective).
      • Lesion Segmentation (Internal Study): 50 patients. Data provenance is not specified (e.g., country of origin, retrospective/prospective), but it is referred to as an "internal study," suggesting it was conducted by the manufacturer or an affiliated entity.
      • Ablation Zone Segmentation: 33 patients with 41 available ablation zones. Data provenance is not specified.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • The document does not specify the number of experts or their qualifications for establishing ground truth for the test sets in either the Moltz et al. study or the internal studies.
    3. Adjudication Method for the Test Set:

      • The document does not provide details on any adjudication method (e.g., 2+1, 3+1, none) used for the test sets. It only mentions the comparison of algorithm performance against a reference.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No MRMC comparative effectiveness study was done. The document explicitly states: "No clinical studies were carried out for the subject device, and therefore, no such clinical data is provided within this submission." The study focuses on "algorithm's performance" and "semi-automatic liver ablation zone segmentation."
    5. Standalone (Algorithm Only Without Human-in-the-Loop) Performance:

      • Yes, the performance data presented (Dice scores, Sensitivity) are indicative of standalone (algorithm only) performance for the semi-automatic segmentation algorithms. The phrasing "To assess the algorithm's performance" and "The internal analysis of the lesion segmentation" supports this. The device is a "software application for image processing," and the described tests evaluate the segmentation algorithms within this software.
    6. Type of Ground Truth Used:

      • The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for segmentation tasks, ground truth is typically established by expert manual annotation or referencing pathology for pathological confirmation. Given the context of "assessed" cases and "segmentation," it is highly probable that the ground truth was established by expert review/annotation of the medical images.
    7. Sample Size for the Training Set:

      • The document does not specify the sample size for the training set. The provided information relates only to the test sets used for evaluating the semi-automatic segmentation algorithms.
    8. How the Ground Truth for the Training Set Was Established:

      • The document does not provide information on how the ground truth for the training set was established, as the size and specifics of the training set are not mentioned.

    K Number
    K240294
    Date Cleared
    2024-05-23

    (112 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer) :

    Siemens Healthcare GmbH

    Intended Use

    Syngo Carbon Enterprise Access is indicated for display and rendering of medical data within healthcare institutions.

    Device Description

    Syngo Carbon Enterprise Access is a software only medical device which is intended to be installed on recommended common IT Hardware. The hardware is not seen as part of the medical device. Syngo Carbon Enterprise Access is intended to be used in clinical image and result distribution for diagnostic purposes by trained medical professionals and provides standardized generic interfaces to connect to medical devices without controlling or altering their functions.

    Syngo Carbon Enterprise Access provides an enterprise-wide web application for viewing DICOM, non-DICOM, multimedia data and clinical documents to facilitate image and result distribution.

    AI/ML Overview

    The provided text is a 510(k) summary for the Syngo Carbon Enterprise Access (VA40A) device. It describes the device, its intended use, and compares it to a predicate device (Syngo Carbon Space VA30A). However, it explicitly states:

    "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device."

    Therefore, I cannot provide the information requested in your prompt regarding acceptance criteria, sample sizes, expert involvement, adjudication, MRMC studies, standalone performance, and ground truth establishment for a clinical study.

    The document outlines "Non-clinical Performance Testing" and "Software Verification and Validation." These sections describe how the device's functionality was tested and validated to ensure it meets specifications and is substantially equivalent to the predicate device.

    Here's what can be extracted about the "study" that proves the device meets "acceptance criteria" from a non-clinical performance testing and software verification/validation perspective:

    1. Table of acceptance criteria and reported device performance:

    The document broadly states that "The testing results support that all the software specifications have met the acceptance criteria" and "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." However, it does not provide a specific, detailed table of acceptance criteria and quantitative performance metrics for each criterion. It rather focuses on demonstrating equivalence through feature comparison and general assertions of testing success.

    2. Sample size used for the test set and the data provenance:

    The document mentions "non-clinical tests" and "software verification and validation testing." It does not specify the sample size for the test set or the provenance of the data used for this non-clinical testing (e.g., country of origin, retrospective or prospective). This testing would typically involve various types of simulated data, pre-recorded medical images, and functional tests rather than patient studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    Since no clinical studies were performed and the testing was non-clinical and technical, there's no mention of "experts" establishing ground truth in the way it would be done for a clinical performance study. The "ground truth" for non-clinical software testing would be derived from the product's functional specifications and expected outputs.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    Not applicable, as no clinical study with human interpretation/adjudication was conducted.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    Not applicable, as no clinical studies, especially MRMC studies, were conducted. The device is a medical image management and processing system, not an AI-assisted diagnostic tool.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    The testing was on the device's functionality as a standalone software, but this refers to its technical performance in image management and display, not a diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    For this type of device and testing, the "ground truth" would be the functional specifications and expected behavior of the software based on its design documents. For example, if a function is to display a DICOM image, the ground truth is that the image should be displayed correctly according to DICOM standards. It's not based on clinical or pathological findings.
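As an illustration of "functional specification as ground truth": a verification test of DICOM display could assert that grayscale windowing follows the linear VOI (window center/width) function defined in DICOM PS3.3 C.11.2.1.2. A sketch of that transform; the test values are illustrative, not from the submission:

```python
import numpy as np

def apply_linear_window(pixels, center, width, y_min=0.0, y_max=255.0):
    """Linear VOI (window center/width) mapping per DICOM PS3.3 C.11.2.1.2.

    Values below the window map to y_min, above it to y_max, and values
    inside the window are mapped linearly.
    """
    x = np.asarray(pixels, dtype=np.float64)
    y = ((x - (center - 0.5)) / (width - 1.0) + 0.5) * (y_max - y_min) + y_min
    return np.clip(y, y_min, y_max)
```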

    8. The sample size for the training set:

    Not applicable. This device is a medical image management and processing system, not an AI/ML diagnostic algorithm that requires a "training set."

    9. How the ground truth for the training set was established:

    Not applicable, as there is no training set for this type of device.

    In summary, the provided document focuses on demonstrating substantial equivalence through a comparison of features with a predicate device and general non-clinical verification and validation activities, rather than a clinical performance study with defined acceptance criteria and statistical analysis of human or AI diagnostic performance.


    K Number
    K233753
    Date Cleared
    2024-03-21

    (120 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Applicant Name (Manufacturer) :

    Siemens Healthcare GmbH

    Intended Use

    AI-Rad Companion (Pulmonary) is image processing software that provides quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support radiologists and physicians from specialty care and general practice in the evaluation and assessment of disease of the lungs.

    It provides the following functionality:

    • Segmentation and measurements of complete lung and lung lobes
    • Identification of areas with lower Hounsfield values in comparison to a predefined threshold for complete lung and lung lobes
    • Providing an interface to the external medical device syngo.CT Lung CAD
    • Segmentation and measurements of solid and sub-solid lung nodules
    • Dedication of found lung nodules to the corresponding lung lobe
    • Correlation of segmented lung nodules of the current scan with known priors and quantitative assessment of changes of the correlated data
    • Identification of areas with elevated Hounsfield values, where areas with elevated versus high opacities are distinguished

    The software has been validated for data from Siemens Healthineers (filtered backprojection and iterative reconstruction), GE Healthcare (filtered backprojection reconstruction), and Philips (filtered backprojection reconstruction).

    Only DICOM images of adult patients are considered to be valid input.
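The "areas with lower Hounsfield values in comparison to a predefined threshold" listed above can be summarized as a low-attenuation fraction over the lung mask. A sketch; the -950 HU default below is a commonly used emphysema threshold, not a value stated for this device:

```python
import numpy as np

def low_attenuation_fraction(hu_volume, lung_mask, threshold_hu=-950.0):
    """Fraction of lung voxels whose CT number lies below the threshold.

    The -950 HU default is a commonly used emphysema threshold; the
    device's actual predefined threshold is not stated in the summary.
    """
    lung = np.asarray(lung_mask, dtype=bool)
    n_lung = lung.sum()
    if n_lung == 0:
        return 0.0
    hu = np.asarray(hu_volume, dtype=np.float64)
    return float((hu[lung] < threshold_hu).sum()) / float(n_lung)
```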

    Device Description

    The subject device AI-Rad Companion (Pulmonary) is an image processing software that utilizes machine learning and deep learning algorithms to provide quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support qualified clinicians in the evaluation and assessment of disease of the thorax. AI-Rad Companion (Pulmonary) builds on platform functionality provided by the AI-Rad Companion Engine and cloud/edge functionality provided by the Siemens Healthineers teamplay digital platform. AI-Rad Companion (Pulmonary) is an adjunct tool and does not replace the role of a qualified medical professional. AI-Rad Companion (Pulmonary) is also not designed to detect the presence of radiographic findings other than the prespecified list. Qualified medical professionals should review original images for all suspected pathologies.

    AI-Rad Companion (Pulmonary) offers:

    • Segmentation of lungs
    • Segmentation of lung lobes
    • Parenchyma evaluation
    • Parenchyma ranges
    • Pulmonary density
    • Visualization of segmentation and parenchyma results
    • Interface to LungCAD
    • Lesion segmentation
    • Visualization of lesion segmentation results
    • Lesion follow-up

    AI-Rad Companion (Pulmonary) requires images of patients of 22 years and older.

    AI-Rad Companion (Pulmonary) SW version VA40 is an enhancement to the previously cleared device AI-Rad Companion (Pulmonary) (K213713) that utilizes machine and deep learning algorithms to provide quantitative and qualitative analysis to computed tomography DICOM images to support qualified clinicians in the evaluation and assessment of disease of the thorax.

    As an update to the previously cleared device, the following modifications have been made:

    • Sub-solid Lung Nodule Segmentation

    This feature provides the ability to segment and measure all subtypes of lesions, including solid and sub-solid lesions.

    • Modified Indications for Use Statement

    The indications for use statement was updated to include descriptive text for the sub-solid lung nodule addition.

    • Updated Subject Device Claims List

    The claims list was updated to reflect the new device functionality.

    • Updated Limitations for Use

    Additional limitations for use have been added to the subject device.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Failure Rate — Target: average DICE for predicate solid nodules. Reported: The average DICE coefficient for sub-solid nodules was superior to the average DICE coefficient of the predicate device for solid nodules.
    • Consistency of Subgroup Results — Target: average DICE not smaller than the DICE of the overall cohort minus 1 STD; bias of the three metrics not exceeding ±1 STD; RMSE of the three metrics not exceeding the RMSE of the overall cohort plus 1 STD each. Reported: The subject device met its individual subgroup-analysis acceptance criterion for all subgroups.
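The subgroup-consistency criteria can be expressed as simple checks against the overall cohort statistics. A sketch, under the assumption that each "1 STD" refers to the standard deviation of the corresponding per-case metric over the overall cohort (the summary does not spell this out):

```python
import numpy as np

def rmse(errors):
    """Root-mean-square error of a per-case error array."""
    e = np.asarray(errors, dtype=np.float64)
    return float(np.sqrt(np.mean(np.square(e))))

def subgroup_consistent(sub_dice, cohort_dice, sub_err, cohort_err):
    """Check one subgroup against the overall cohort per the stated criteria.

    Assumption: each "1 STD" is the standard deviation of the corresponding
    per-case metric over the overall cohort; the summary does not specify
    which spread is meant.
    """
    sub_dice = np.asarray(sub_dice, dtype=np.float64)
    cohort_dice = np.asarray(cohort_dice, dtype=np.float64)
    sub_err = np.asarray(sub_err, dtype=np.float64)
    cohort_err = np.asarray(cohort_err, dtype=np.float64)

    dice_ok = sub_dice.mean() >= cohort_dice.mean() - cohort_dice.std()
    bias_ok = abs(sub_err.mean()) <= cohort_err.std()
    rmse_ok = rmse(sub_err) <= rmse(cohort_err) + cohort_err.std()
    return bool(dice_ok and bias_ok and rmse_ok)
```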

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 273 subjects from the United States and 254 subjects from Germany, for a total of 527 subjects.
    • Data Provenance: The data originated from the United States (69% of cases) and Germany (31% of cases). The data was retrospective, as it refers to "previously acquired Computed Tomography DICOM images."
    • Imaging Vendors: The test data included images from Canon/Toshiba (18%), GE (35%), Philips (15%), and Siemens (32%).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Two board-certified radiologists, with a third radiologist for adjudication.
    • Qualifications:
      • Radiologist 1: 10 years of experience (board-certified)
      • Radiologist 2: 7 years of experience (board-certified)
      • Adjudicating Radiologist 3: 9 years of experience

    4. Adjudication Method for the Test Set

    • Method: 2+1 (Two experts independently established ground truth, and in case of disagreement, a third expert served as an adjudicator.)

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • Was it done?: No, a traditional MRMC comparative effectiveness study involving human readers was not performed in the context of this specific submission. The study focuses on the standalone performance of the AI algorithm in comparison to the predicate device's performance, particularly for the new sub-solid nodule segmentation feature. The device is described as an "adjunct tool," but the presented study validates the algorithm's performance against expert consensus, not against human readers with and without AI.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Was it done?: Yes, the performance testing described directly evaluates the AI-Rad Companion (Pulmonary) lesion segmentation algorithm's accuracy (measured by DICE score, bias, and RMSE) against established ground truth. This is a standalone performance evaluation of the algorithm.

    7. The Type of Ground Truth Used

    • Type: Expert Consensus. The ground truth annotations for the test data were established independently by two board-certified radiologists, with a third radiologist serving as an adjudicator in cases of disagreement.

    8. The Sample Size for the Training Set

    • The sample size for the training set is not explicitly stated in the provided document. However, it is mentioned that "None of the clinical sites providing the test data provided data for training of any of the algorithms. Therefore there is a clear independence on site level between training and test data." This indicates that a distinct training set (or sets) was used.

    9. How the Ground Truth for the Training Set was Established

    • The document does not explicitly state how the ground truth for the training set was established. It only emphasizes the independence of the training and test data sites.

    K Number
    K232744
    Date Cleared
    2023-12-21

    (105 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Applicant Name (Manufacturer) :

    Siemens Healthcare GmbH

    Intended Use

    syngo Virtual Cockpit is a software application intended for remote operation, assistance, review, monitoring, and standardization of medical imaging devices. It is a vendor neutral solution allowing read-only or full access control to connected devices. syngo Virtual Cockpit is also intended for training of medical personnel working on the medical imaging devices.

    Device Description

    syngo Virtual Cockpit (sVC) is a software solution that enables geographically distant technologists or radiologists to remotely assist with operating imaging equipment and radiotherapeutic devices. sVC provides a private, secure communication platform for real-time image visualization and cross-organizational collaboration between healthcare professionals across multiple sites. It enables remote access to modality consoles and enhances communication between healthcare professionals in different locations. It is vendor-neutral and applicable to existing multi-modality equipment in a healthcare network, allowing healthcare professionals to share expertise and increase productivity even when they are not physically present in the same location.

    sVC is based on a client-server architecture, with the sVC server as the backbone and three client variants based on user roles: the Modality client, the Steering client, and the Physician client. The Modality client comes in two flavors, a Windows-based client and a web client that can be hosted in a web browser. The Steering client establishes a remote connection to a modality console / modality acquisition workplace through a KVM (keyboard, video, and mouse) switch or the Siemens proprietary access tool syngo Expert-i, and can establish connections to more than one (up to 3) modality console applications. The Physician client is used by a physician who can be contacted either by the steering technologist or by the modality technologist for assistance with scanning in more complex cases, or to provide expert radiologist knowledge.

    The connection is possible in full-control or read-only mode. Full-control access to CT scanners is limited to the software associated with the modality workplace and does not extend to the physical switches controlling the equipment operation. The connection to radiotherapeutic equipment is limited to read-only.

    In addition to enabling remote access and control of the modality scanners, sVC also supports common communication methods, including live video at the modality site, audio calls, and text chats among users.

    AI/ML Overview

    The provided FDA 510(k) summary for syngo Virtual Cockpit (VB10A) focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed clinical study for novel performance claims. As such, it does not contain the typical elements of an acceptance criteria table or a comparative effectiveness study (such as MRMC) that would be expected for a diagnostic AI device requiring a clinical performance evaluation.

    The device, "syngo Virtual Cockpit (VB10A)," is categorized as a "Medical Image Management And Processing System" (MIMPS) with product code LLZ and is a software-only solution intended for remote operation, assistance, review, monitoring, and standardization of medical imaging devices, and for training medical personnel. It is explicitly stated that "Images reviewed remotely are not for diagnostic use." This indicates it is not a diagnostic AI device that would produce specific diagnostic outputs requiring performance metrics like sensitivity, specificity, or AUC against a ground truth.

    Therefore, the information typically requested in your prompt (e.g., sample size for test/training sets, number of experts for ground truth, adjudication methods, MRMC study effect sizes, standalone performance, type of ground truth) is not present in this document because the device's function does not necessitate such a clinical performance evaluation.

    Here's an analysis based on the information available in the document:


    Acceptance Criteria and Device Performance (as inferred from the document's V&V approach):

    Instead of clinical performance metrics, the acceptance criteria are focused on the functional validity, safety, and effectiveness of the remote management system compared to its predicate and the general requirements for medical device software.

    Category — Description (inferred from document) — Reported performance / evidence:

    • Functional Equivalence & Intended Use: Demonstrate that syngo Virtual Cockpit (sVC) enables remote access, assistance, review, monitoring, and standardization of medical imaging devices, similar to or expanded from the predicates, without raising new questions of safety or effectiveness. Evidence: "The intended use... is equivalent in that they all enable remote access to medical imaging devices and provide assistance... The subject, predicate and reference devices all allow remote users to help and assist modality technologist to display and review scanning protocols, observe and monitor the image acquisition and therefore help standardize scans in one institution."

    • Technical Performance (Latency): For remote operation, the system should demonstrate acceptable delay/latency for real-time interaction. Evidence: Response time ≤ 30 ms (Expert-i method) / 60 ms (KVM method with one modality), stated as "Equivalent to reference device."

    • Connectivity & Compatibility: Ensure stable and secure connections to supported medical imaging devices (including third-party devices via KVM switch) over a secured clinical network. Evidence: "The verification testing demonstrates that the sVC connection to modality scanners via KVM switch can perform as intended, meeting all of the design inputs." Supported modalities include medical imaging devices and radiotherapeutic devices (read-only), across multiple vendors.

    • Communication Features: Support essential communication methods for remote collaboration. Evidence: Includes "Screen sharing, IP cameras for live video, Audio calls and chat." Stated as "Equivalent. sVC has improved communication features."

    • Software Verification & Validation (V&V): All software specifications must be met, and risks associated with clinically relevant functions assessed in a simulated clinical environment, with conformity to relevant consensus standards. Evidence: "Software validation is performed using externally sourced representative modality scanners... at the Siemens Healthineers training center and at the clinical collaborating site. The system configuration, connectivity, compatibility and operation... are assessed to validate the safety and effectiveness of the system in the simulated clinical environment." "All the software specifications have met the acceptance criteria."

    • Cybersecurity: Address cybersecurity considerations, including prevention of unauthorized access, modification, misuse, denial of use, or unauthorized use of information. Evidence: "Siemens Healthineers conforms to cybersecurity requirements by implementing a means to prevent unauthorized access..."

    • Risk Management: Identification and mitigation of identified hazards in compliance with ISO 14971. Evidence: "Risk Analysis... was completed and risk control implemented to mitigate identified hazards."

    • Safety and Effectiveness: The device should be safe and effective, performing as well as the predicate device without introducing new safety and effectiveness concerns; output is evaluated by clinicians to identify and intervene in case of malfunction. Evidence: "Results of all testing conducted were found acceptable in support to determine similarities to the predicate /previously cleared device." "The device does not come in contact with the patient and is only used by trained professionals. The output of the device is evaluated by clinicians, providing for sufficient review to identify and intervene in the event of a malfunction."
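The stated latency targets (≤ 30 ms for the Expert-i method, 60 ms for the KVM method with one modality) could be verified by timing command round trips. A sketch; `send_and_wait_ack` is a hypothetical transport hook used for illustration, not an sVC API:

```python
import time

def round_trip_ms(send_and_wait_ack, payload=b"ping"):
    """Time one remote-control round trip in milliseconds.

    `send_and_wait_ack` is a hypothetical transport hook that sends a
    control event and blocks until the remote console acknowledges it;
    it is not an sVC API.
    """
    t0 = time.perf_counter()
    send_and_wait_ack(payload)
    return (time.perf_counter() - t0) * 1000.0

def meets_latency_target(samples_ms, limit_ms=30.0):
    """Check all measured round trips against a latency limit.

    The 30 ms default matches the stated Expert-i figure; use 60 ms for
    the KVM method with one modality.
    """
    return max(samples_ms) <= limit_ms
```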

    Details Regarding the Study (as per provided text):

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated as a number of "cases" or "patients" in the traditional sense of a clinical study for diagnostic performance. The testing involved "externally sourced representative modality scanners" and "the entire system."
      • Data Provenance: The testing was conducted "in the Siemens Healthineers training center and at the clinical collaborating site." This suggests controlled, simulated environments and potentially real clinical settings, but the origin of any "data" would be the performance of the system itself during validation, not pre-existing patient data. It was effectively a prospective functional validation.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not Applicable in the traditional sense. There is no "ground truth" established by experts for diagnostic purposes because the device does not make diagnostic claims or generate diagnostic outputs. The "ground truth" for this device would be its ability to correctly perform its intended functions (remote control, video, audio, chat, latency, etc.) and comply with safety and cybersecurity standards. The "evaluation by clinicians" refers to their use of the system and their ability to detect malfunctions during its operation, not a judgmental "ground truth" on diagnoses.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not Applicable. Adjudication methods are typically used to resolve discrepancies in expert readings or interpretations when establishing ground truth for diagnostic studies. This device's validation focused on functional performance and compliance.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "No clinical studies were carried out for syngo Virtual Cockpit (Version VB10A). All performance testing was conducted in a non-clinical fashion as part of the verification and validation activities for the medical device." This device is not designed to assist with human reading of images but rather with operating and managing the imaging process remotely.
    5. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:

      • Not Applicable. The device, by its nature, is a human-in-the-loop system for remote operation. Its functionality is the human-in-the-loop performance (e.g., remote control inputs, video/audio communication). "Stand-alone" performance would refer to diagnostic output generation without human intervention, which is not what this device does.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Not Applicable in the traditional sense for diagnostic performance. The "ground truth" relevant to this device's validation is the successful execution of its defined functional requirements and user stories, compliance with design inputs, and adherence to relevant safety and regulatory standards (e.g., IEC 62366-1, ISO 14971, IEC 62304). This is assessed through structured verification and validation testing rather than comparison to a clinical gold standard for disease.
    7. The sample size for the training set:

      • Not Applicable. This device is a software application for remote operation and management, not a machine learning or AI algorithm that trains on a dataset to learn patterns or make predictions. Therefore, there is no "training set" in the context of an AI model.
    8. How the ground truth for the training set was established:

      • Not Applicable. As there is no training set for an AI model, there is no ground truth associated with it.

    In summary: The K232744 submission for syngo Virtual Cockpit (VB10A) is a 510(k) for a Medical Image Management and Processing System, not a diagnostic AI device. Its validation focuses on functional performance, safety, cybersecurity, and substantial equivalence to a predicate device based on those characteristics, rather than clinical performance metrics like accuracy, sensitivity, or specificity. Therefore, many of the questions regarding clinical study design, ground truth, and AI model training are not relevant to this specific device submission.


    K Number
    K232856
    Date Cleared
    2023-12-01

    (77 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Siemens Healthcare GmbH

    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up via standard interfaces from any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process the medical image for evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be used. Individual tools can be enabled or disabled through licenses.
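    The deployment scenario above (a hosting application exposing only a licensed subset of a plug-in's tools) can be sketched as a generic registry-plus-license-gate pattern. This is an illustrative sketch only; the class and method names (LicenseManager, ToolRegistry, etc.) are hypothetical and do not reflect any Siemens API.

    ```python
    class LicenseManager:
        """Tracks which tool licenses the site has activated (illustrative)."""
        def __init__(self, licensed_tools):
            self._licensed = set(licensed_tools)

        def is_licensed(self, tool_name):
            return tool_name in self._licensed


    class ToolRegistry:
        """The plug-in package: tools register here; the host picks a subset."""
        def __init__(self, license_manager):
            self._tools = {}
            self._licenses = license_manager

        def register(self, name, handler):
            self._tools[name] = handler

        def available_tools(self, requested):
            # The hosting application requests a desired subset; only tools
            # that are both registered and licensed are actually exposed.
            return [n for n in requested
                    if n in self._tools and self._licenses.is_licensed(n)]


    licenses = LicenseManager(["lesion_quantification"])
    registry = ToolRegistry(licenses)
    registry.register("lesion_quantification", lambda img: "quantified")
    registry.register("organ_segmentation", lambda img: "segmented")

    # Host requests both tools, but only the licensed one is exposed.
    print(registry.available_tools(["lesion_quantification", "organ_segmentation"]))
    # → ['lesion_quantification']
    ```

    The point of the pattern is that enabling or disabling a tool is a configuration (license) change, not a code change in either the host or the plug-in.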

    AI/ML Overview

    The provided text is a 510(k) summary for Syngo Carbon Clinicals (K232856). It focuses on demonstrating substantial equivalence to a predicate device through comparison of technological characteristics and non-clinical performance testing. The document does not describe acceptance criteria for specific device performance metrics or a study that definitively proves the device meets those criteria through clinical trials or quantitative bench testing with specific reported performance values.

    Instead, it relies heavily on evaluating the fit-for-use of algorithms (Lesion Quantification and Organ Segmentation) that were previously studied and cleared as part of predicate or reference devices, and ensuring their integration into the new device without modification to the core algorithms. The non-clinical performance testing for Syngo Carbon Clinicals focuses on verification and validation of changes/integrations, and conformance to relevant standards.

    Therefore, many of the requested details about acceptance criteria and reported device performance cannot be extracted directly from this document. However, I can provide information based on what is available.


    Acceptance Criteria and Study for Syngo Carbon Clinicals

    Based on the provided 510(k) summary, formal acceptance criteria with specific reported performance metrics for the Syngo Carbon Clinicals device itself are not explicitly detailed in a comparative table against a clinical study's results. The submission primarily relies on the equivalency of its components to previously cleared devices and non-clinical verification and validation.

    The "study" proving the device meets acceptance criteria is fundamentally a non-clinical performance testing, verification, and validation process, along with an evaluation of previously cleared algorithms from predicate/reference devices for "fit for use" in the subject device.

    Here's a breakdown of the requested information based on the document:

    1. Table of Acceptance Criteria and Reported Device Performance

    As mentioned, a direct table of specific numerical acceptance criteria and a corresponding reported device performance from a clinical study is not present. The document describes acceptance in terms of:

    Feature/Algorithm: Lesion Quantification Algorithm
      • Acceptance Criteria (Implicit): "Fit for use" in the subject device, with design mitigations for drawbacks/limitations identified in previous studies of the predicate device.
      • Reported Device Performance: "The results of phantom and reader studies conducted on the Lesion Quantification Algorithm, in the predicate device, were evaluated for fit for use in the subject device and it was concluded that the Algorithm can be integrated in the subject device with few design mitigations to overcome the drawbacks/limitations specified in these studies. These design mitigations were validated by non-Clinical performance testing and were found acceptable." (No new specific performance metrics are reported for Syngo Carbon Clinicals, only acceptance of the mitigations.)

    Feature/Algorithm: Organ Segmentation Algorithm
      • Acceptance Criteria (Implicit): "Fit for use" in the subject device without any modifications, based on previous studies of the reference device.
      • Reported Device Performance: "The results of phantom and reader studies conducted on the Organ Segmentation Algorithm, in the reference device, were evaluated for fit for use in the subject device. And it was concluded that the Algorithm can be integrated in the subject device without any modifications." (No new specific performance metrics are reported for Syngo Carbon Clinicals.)

    Feature/Algorithm: Overall Device Functionality
      • Acceptance Criteria (Implicit): Conformance to specifications, safety, and effectiveness comparable to the predicate device.
      • Reported Device Performance: "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (General statement, no specific performance metrics.) Consistent with "Moderate Level of Concern" software.

    Feature/Algorithm: Software Verification & Validation
      • Acceptance Criteria (Implicit): All software specifications met the acceptance criteria.
      • Reported Device Performance: "The testing results support that all the software specifications have met the acceptance criteria." (General statement.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated for specific test sets in this document for Syngo Carbon Clinicals. The evaluations of the Lesion Quantification and Organ Segmentation algorithms refer to "phantom and reader studies" from their respective predicate/reference devices, but details on the sample sizes of those original studies are not provided here.
    • Data Provenance: Not specified. The original "phantom and reader studies" for the algorithms were likely internal to the manufacturers or collaborators, but this document does not detail their origin (e.g., country, specific institutions). The text indicates these were retrospective studies (referring to prior evaluations).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • Number of Experts: Not specified. The document mentions "reader studies" were conducted for the predicate/reference devices' algorithms, implying involvement of human readers/experts, but the number is not stated.
    • Qualifications of Experts: Not specified. It can be inferred that these would be "trained medical professionals" as per the intended user for the device, but specific qualifications (e.g., radiologist with X years of experience) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified for the historical "reader studies" referenced. This document does not detail the methodology for establishing ground truth or resolving discrepancies among readers if multiple readers were involved.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Comparative Effectiveness Study: The document itself states, "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." Therefore, no MRMC comparative effectiveness study for human readers with and without AI assistance for Syngo Carbon Clinicals was performed or reported in this submission. The device is a set of advanced visualization tools, not an AI-assisted diagnostic aid that directly impacts reader performance in a comparative study mentioned here.

    6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done

    • Standalone Performance: The core algorithms (Lesion Quantification and Organ Segmentation) were evaluated in "phantom and reader studies" as part of their previous clearances (predicate/reference devices). While specific standalone numerical performance metrics for these algorithms (e.g., sensitivity, specificity, accuracy) are not reported in this document, the mention of "phantom" studies suggests a standalone evaluation component. The current submission, however, evaluates these previously cleared algorithms for "fit for use" within the new Syngo Carbon Clinicals device, implying their standalone performance was considered acceptable from their original clearances.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Not explicitly detailed. The referenced "phantom and reader studies" imply that for phantoms, the ground truth would be known (e.g., physical measurements), and for reader studies, it would likely involve expert consensus or established clinical benchmarks. However, the exact method for establishing ground truth in those original studies is not provided here.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not specified in this 510(k) summary. The document mentions that the deep learning algorithm for organ segmentation was "cleared as part of the reference device syngo.via RT Image suite (K220783)." This implies that any training data for this algorithm would have been part of the K220783 submission, not detailed here for Syngo Carbon Clinicals.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set: Not specified in this 510(k) summary. As with the training set size, this information would have been part of the original K220783 submission for the organ segmentation algorithm and is not detailed in this document.
