
Found 2 results

510(k) Data Aggregation

    K Number
    K251059
    Date Cleared
    2025-10-24

    (203 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    OrthoMatic Spine provides the means to perform musculoskeletal measurements of the whole spine, in particular spine curve angle measurements.

    The TimeLens provides the means to compare a region of interest between multiple time points.

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up, via standard interfaces, by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. These tools help prepare and process the medical image for evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT, etc.).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be used. These tools can be enabled or disabled through licenses.

    When preparing the radiologist's reading workflow on a dedicated workplace or workstation, Syngo Carbon Clinicals can be called to generate additional results or renderings according to the user needs using the tools available.

    AI/ML Overview

    This document describes performance evaluation for two specific tools within Syngo Carbon Clinicals (VA41): OrthoMatic Spine and TimeLens.

    1. Table of Acceptance Criteria and Reported Device Performance

    • OrthoMatic Spine
      • Acceptance Criteria: The algorithm's measurement deviations for major spinal measurements (Cobb angles, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment) must fall within the range of inter-reader variability.
      • Reported Device Performance: Cumulative Distribution Functions (CDFs) demonstrated that the algorithm's measurement deviations fell within the range of inter-reader variability for the major Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment. This indicates the algorithm replicates average rater performance and meets the clinical reliability acceptance criteria.
    • TimeLens
      • Acceptance Criteria: Not specified; a reader study/bench test was not required because the tool is a simple workflow enhancement algorithm.
      • Reported Device Performance: No quantitative performance metrics are provided, as clinical performance evaluation methods (reader studies) were deemed unnecessary; the tool is described as a "simple workflow enhancement algorithm".
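    The CDF-based criterion above can be illustrated with a short sketch (all numbers here are hypothetical; the actual study data are not public): the algorithm's per-case deviations from the reader mean are compared, at every deviation threshold, against individual readers' deviations from that same mean.

```python
import numpy as np

def empirical_cdf(deviations, grid):
    """Fraction of absolute deviations at or below each grid threshold."""
    dev = np.abs(np.asarray(deviations))
    return np.array([(dev <= g).mean() for g in grid])

# Hypothetical per-case deviations (degrees) for one spinal measurement:
rng = np.random.default_rng(0)
reader_dev = rng.normal(0.0, 3.0, 150)  # individual reader vs. mean of readers
algo_dev = rng.normal(0.0, 2.5, 150)    # algorithm vs. mean of readers

grid = np.linspace(0.0, 10.0, 101)
cdf_readers = empirical_cdf(reader_dev, grid)
cdf_algo = empirical_cdf(algo_dev, grid)

# Illustrative acceptance check: at every threshold, the algorithm is at
# least as close to the reader mean as an average reader (small tolerance).
meets_criterion = bool(np.all(cdf_algo >= cdf_readers - 0.05))
```

    A higher algorithm CDF at a given threshold means a larger share of cases deviate from the reader mean by no more than that amount, which is one way to operationalize "within the range of inter-reader variability."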

    2. Sample Size Used for the Test Set and Data Provenance

    • OrthoMatic Spine:

      • Test Set Sample Size: 150 spine X-ray images (75 frontal views, 75 lateral views) were used in a reader study.
      • Data Provenance: The document states that the main dataset for training includes data from USA, Germany, Ukraine, Austria, and Canada. While this specifies the training data provenance, the provenance of the specific 150 images used for the reader study (test set) is not explicitly segregated or stated here. The study involved US board-certified radiologists, implying the test set images are relevant to US clinical practice.
      • Retrospective/Prospective: Not explicitly stated, but the description of "collected" images and patients with various spinal conditions suggests a retrospective collection of existing exams.
    • TimeLens: No specific test set details are provided as a reader study/bench test was not required.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • OrthoMatic Spine:

      • Number of Experts: Five US board-certified radiologists.
      • Qualifications: US board-certified radiologists. No specific years of experience are mentioned.
      • Ground Truth for Reader Study: The "mean values obtained from the radiologists' assessments" for the 150 spine X-ray images served as the reference for comparison against the algorithm's output.
    • TimeLens: Not applicable, as no reader study was conducted.

    4. Adjudication Method for the Test Set

    • OrthoMatic Spine: The algorithm's output was assessed against the mean values obtained from the five radiologists' assessments. This implies a form of consensus or average from multiple readers rather than a strict 2+1 or 3+1 adjudication.
    • TimeLens: Not applicable.
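    A minimal sketch of that averaging scheme (all values hypothetical; the real measurements are not published): the per-case reference is simply the mean across the five readers, with no 2+1 or 3+1 adjudication step.

```python
import numpy as np

# Hypothetical matrix: 150 cases x 5 readers' measurements of one angle.
rng = np.random.default_rng(1)
readings = rng.uniform(10.0, 60.0, size=(150, 1)) + rng.normal(0.0, 3.0, size=(150, 5))

# Reference per case = mean of the five radiologists' measurements.
reference = readings.mean(axis=1)

# Hypothetical algorithm output, assessed as deviation from that reference.
algo_output = reference + rng.normal(0.0, 2.5, size=150)
deviation = algo_output - reference
```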

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with vs. without AI Assistance

    • OrthoMatic Spine: A reader study was performed, which is a type of MRMC study. However, this was a standalone performance evaluation of the algorithm against human reader consensus, not a comparative effectiveness study with and without AI assistance for human readers. Therefore, there is no reported "effect size of how much human readers improve with AI vs without AI assistance." The study aimed to show the algorithm replicates average human rater performance.
    • TimeLens: Not applicable.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    • OrthoMatic Spine: Yes, a standalone performance evaluation of the OrthoMatic Spine algorithm (without human-in-the-loop assistance) was conducted. The algorithm's measurements were compared against the mean values derived from five human radiologists.
    • TimeLens: The description suggests the TimeLens tool itself is a "simple workflow enhancement algorithm" and its performance was evaluated through non-clinical verification and validation activities rather than a specific standalone clinical study with an AI algorithm providing measurements.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • OrthoMatic Spine:
      • For the reader study (test set performance evaluation): Expert consensus (mean of five US board-certified radiologists' measurements) was used to assess the algorithm's performance.
      • For the training set: The initial annotations were performed by trained non-radiologists and then reviewed by board-certified radiologists. This can be considered a form of expert-verified annotation.
    • TimeLens: Not specified, as no clinical ground truth assessment was required.

    8. The Sample Size for the Training Set

    • OrthoMatic Spine:
      • Number of Individual Patients (Training Data): 6,135 unique patients.
      • Number of Images (Training Data): A total of 23,464 images were collected within the entire dataset, which was split 60% for training, 20% for validation, and 20% for model selection. Therefore, the training set would comprise approximately 60% of both the patient count and image count. So, roughly 3,681 patients and 14,078 images.
    • TimeLens: Not specified.
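    The 60/20/20 split described above can be sketched as a patient-level partition, so that no patient's images span subsets (the function and record format are illustrative, not from the submission):

```python
import random

def patient_level_split(records, seed=0):
    """Partition (patient_id, image_id) records 60/20/20 by patient,
    so every image of a given patient lands in exactly one subset."""
    patients = sorted({pid for pid, _ in records})
    random.Random(seed).shuffle(patients)
    n = len(patients)
    cut1, cut2 = int(0.6 * n), int(0.8 * n)
    buckets = (set(patients[:cut1]), set(patients[cut1:cut2]), set(patients[cut2:]))
    return tuple([r for r in records if r[0] in b] for b in buckets)

# Example: 10 patients with 2 images each -> 12 / 4 / 4 images.
records = [(p, f"{p}-img{i}") for p in range(10) for i in range(2)]
train, validation, selection = patient_level_split(records)
```

    Splitting by patient rather than by image is the standard way to avoid leakage when one patient contributes multiple images, which is consistent with the patient/image counts quoted above.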

    9. How the Ground Truth for the Training Set Was Established

    • OrthoMatic Spine: Most images in the dataset (used for training, validation, and model selection) were annotated using a dedicated annotation tool (Darwin, V7 Labs) by a US-based medical data labeling company (Cogito Tech LLC). Initial annotations were performed by trained non-radiologists and subsequently reviewed by board-certified radiologists. This process was guided by written guidelines and automated workflows to ensure quality and consistency, with annotations including vertebral landmarks and key vertebrae (C7, L1, S1).
    • TimeLens: Not specified.
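    Given endplate landmarks of the kind described above, a Cobb angle reduces to the angle between two endplate lines; a minimal sketch, with a hypothetical point format:

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate line segments,
    each given as a pair of (x, y) landmark points."""
    (ax, ay), (bx, by) = upper_endplate
    (cx, cy), (dx, dy) = lower_endplate
    u = (bx - ax, by - ay)
    v = (dx - cx, dy - cy)
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    # abs() folds the angle into [0, 90] degrees, the usual Cobb convention.
    return math.degrees(math.acos(max(-1.0, min(1.0, abs(dot) / norm))))
```

    For example, a horizontal upper endplate against a lower endplate tilted at 45 degrees yields a 45-degree Cobb angle. The actual algorithm's measurement method is not detailed in the submission; this only shows the underlying geometry.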

    K Number
    K232856
    Date Cleared
    2023-12-01

    (77 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up, via standard interfaces, by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. These tools help prepare and process the medical image for evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT, etc.).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be used. These tools can be enabled or disabled through licenses.

    AI/ML Overview

    The provided text is a 510(k) summary for Syngo Carbon Clinicals (K232856). It focuses on demonstrating substantial equivalence to a predicate device through comparison of technological characteristics and non-clinical performance testing. The document does not describe acceptance criteria for specific device performance metrics or a study that definitively proves the device meets those criteria through clinical trials or quantitative bench testing with specific reported performance values.

    Instead, it relies heavily on evaluating the fit-for-use of algorithms (Lesion Quantification and Organ Segmentation) that were previously studied and cleared as part of predicate or reference devices, and ensuring their integration into the new device without modification to the core algorithms. The non-clinical performance testing for Syngo Carbon Clinicals focuses on verification and validation of changes/integrations, and conformance to relevant standards.

    Therefore, many of the requested details about acceptance criteria and reported device performance cannot be extracted directly from this document. However, I can provide information based on what is available.


    Acceptance Criteria and Study for Syngo Carbon Clinicals

    Based on the provided 510(k) summary, formal acceptance criteria with specific reported performance metrics for the Syngo Carbon Clinicals device itself are not explicitly detailed in a comparative table against a clinical study's results. The submission primarily relies on the equivalency of its components to previously cleared devices and non-clinical verification and validation.

    The "study" proving the device meets acceptance criteria is fundamentally a non-clinical performance testing, verification, and validation process, along with an evaluation of previously cleared algorithms from predicate/reference devices for "fit for use" in the subject device.

    Here's a breakdown of the requested information based on the document:

    1. Table of Acceptance Criteria and Reported Device Performance

    As mentioned, a direct table of specific numerical acceptance criteria and a corresponding reported device performance from a clinical study is not present. The document describes acceptance in terms of:

    • Lesion Quantification Algorithm
      • Acceptance Criteria (implicit): "Fit for use" in the subject device, with design mitigations for drawbacks/limitations identified in previous studies of the predicate device.
      • Reported Device Performance: "The results of phantom and reader studies conducted on the Lesion Quantification Algorithm, in the predicate device, were evaluated for fit for use in the subject device and it was concluded that the Algorithm can be integrated in the subject device with few design mitigations to overcome the drawbacks/limitations specified in these studies. These design mitigations were validated by non-Clinical performance testing and were found acceptable." (No new specific performance metrics are reported for Syngo Carbon Clinicals; rather, the mitigations were accepted.)
    • Organ Segmentation Algorithm
      • Acceptance Criteria (implicit): "Fit for use" in the subject device without any modifications, based on previous studies of the reference device.
      • Reported Device Performance: "The results of phantom and reader studies conducted on the Organ Segmentation Algorithm, in the reference device, were evaluated for fit for use in the subject device. And it was concluded that the Algorithm can be integrated in the subject device without any modifications." (No new specific performance metrics are reported for Syngo Carbon Clinicals.)
    • Overall Device Functionality
      • Acceptance Criteria (implicit): Conformance to specifications; safety and effectiveness comparable to the predicate device.
      • Reported Device Performance: "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (General statement, no specific performance metrics.) Consistent with "Moderate Level of Concern" software.
    • Software Verification & Validation
      • Acceptance Criteria (implicit): All software specifications met the acceptance criteria.
      • Reported Device Performance: "The testing results support that all the software specifications have met the acceptance criteria." (General statement.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated for specific test sets in this document for Syngo Carbon Clinicals. The evaluations of the Lesion Quantification and Organ Segmentation algorithms refer to "phantom and reader studies" from their respective predicate/reference devices, but details on the sample sizes of those original studies are not provided here.
    • Data Provenance: Not specified. The original "phantom and reader studies" for the algorithms were likely internal to the manufacturers or collaborators, but this document does not detail their origin (e.g., country, specific institutions). The text indicates these were retrospective studies (referring to prior evaluations).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    • Number of Experts: Not specified. The document mentions "reader studies" were conducted for the predicate/reference devices' algorithms, implying involvement of human readers/experts, but the number is not stated.
    • Qualifications of Experts: Not specified. It can be inferred that these would be "trained medical professionals" as per the intended user for the device, but specific qualifications (e.g., radiologist with X years of experience) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified for the historical "reader studies" referenced. This document does not detail the methodology for establishing ground truth or resolving discrepancies among readers if multiple readers were involved.

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with vs. without AI Assistance

    • MRMC Comparative Effectiveness Study: The document itself states, "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." Therefore, no MRMC comparative effectiveness study for human readers with and without AI assistance for Syngo Carbon Clinicals was performed or reported in this submission. The device is a set of advanced visualization tools, not an AI-assisted diagnostic aid that directly impacts reader performance in a comparative study mentioned here.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    • Standalone Performance: The core algorithms (Lesion Quantification and Organ Segmentation) were evaluated in "phantom and reader studies" as part of their previous clearances (predicate/reference devices). While specific standalone numerical performance metrics for these algorithms (e.g., sensitivity, specificity, accuracy) are not reported in this document, the mention of "phantom" studies suggests a standalone evaluation component. The current submission, however, evaluates these previously cleared algorithms for "fit for use" within the new Syngo Carbon Clinicals device, implying their standalone performance was considered acceptable from their original clearances.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Not explicitly detailed. The referenced "phantom and reader studies" imply that for phantoms, the ground truth would be known (e.g., physical measurements), and for reader studies, it would likely involve expert consensus or established clinical benchmarks. However, the exact method for establishing ground truth in those original studies is not provided here.
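    For the phantom component, the ground truth is the phantom's known physical geometry, so a performance check reduces to a tolerance comparison (the function, values, and tolerance here are illustrative only, not from either submission):

```python
def phantom_check(measured_mm, nominal_mm, tolerance_mm=0.5):
    """Pass/fail: a measured lesion/organ dimension vs. the phantom's
    known (machined) dimension, within an allowed tolerance."""
    return abs(measured_mm - nominal_mm) <= tolerance_mm

# Illustrative: a 20.0 mm phantom sphere measured as 20.3 mm passes at 0.5 mm.
result = phantom_check(20.3, 20.0)
```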

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not specified in this 510(k) summary. The document mentions that the deep learning algorithm for organ segmentation was "cleared as part of the reference device syngo.via RT Image suite (K220783)." This implies that any training data for this algorithm would have been part of the K220783 submission, not detailed here for Syngo Carbon Clinicals.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set: Not specified in this 510(k) summary. As with the training set size, this information would have been part of the original K220783 submission for the organ segmentation algorithm and is not detailed in this document.
