Search Results

Found 2 results

510(k) Data Aggregation
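
    The two records below can also be retrieved programmatically. The following is a minimal sketch, assuming the public openFDA device/510k endpoint and the Python requests library; the field names used (k_number, device_name, decision_date, date_received) follow the openFDA 510(k) schema, and the review-time calculation (days from receipt to decision) is an assumption about how the "(N days)" figures shown with each result are derived.

    from datetime import date, datetime

    import requests

    OPENFDA_510K = "https://api.fda.gov/device/510k.json"

    def fetch_510k(k_number: str) -> dict:
        """Return the openFDA record for a single K number (exact match)."""
        params = {"search": f'k_number:"{k_number}"', "limit": 1}
        resp = requests.get(OPENFDA_510K, params=params, timeout=30)
        resp.raise_for_status()
        return resp.json()["results"][0]

    def parse_fda_date(value: str) -> date:
        """openFDA dates are typically ISO 'YYYY-MM-DD'; fall back to 'YYYYMMDD'."""
        try:
            return date.fromisoformat(value)
        except ValueError:
            return datetime.strptime(value, "%Y%m%d").date()

    def review_days(record: dict) -> int:
        """Days from date_received to decision_date (assumed basis of the '(N days)' figures)."""
        received = parse_fda_date(record["date_received"])
        decided = parse_fda_date(record["decision_date"])
        return (decided - received).days

    for k in ("K232136", "K220450"):
        rec = fetch_510k(k)
        print(k, rec["device_name"], rec["decision_date"], f"({review_days(rec)} days)")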

    K Number: K232136
    Date Cleared: 2024-01-04 (170 days)
    Product Code:
    Regulation Number: 892.1750
    Why did this record match?
    Device Name: syngo.CT Applications

    Intended Use

    syngo.CT Applications is a set of software applications for advanced visualization, measurement, and evaluation for specific body regions.

    This software package is designed to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice, e.g., in the:

    • Evaluation of perfusion of organs and tumors and myocardial tissue perfusion
    • Evaluation of bone structures and detection of bone lesions
    • Evaluation of CT images of the heart
    • Evaluation of the coronary lesions
    • Evaluation of the mandible and maxilla
    • Evaluation of dynamic vessels and extended phase handling
    • Evaluation of the liver and its intrahepatic vessel structures to identify the vascular territories of sub-vessel systems in the liver
    • Evaluation of neurovascular structures
    • Evaluation of the lung parenchyma
    • Evaluation of non-enhanced Head CT images
    • Evaluation of vascular lesions
    Device Description

    The syngo.CT Applications are syngo-based post-processing software applications used for viewing and evaluating CT images provided by a CT diagnostic device, enabling structured evaluation of CT images.

    syngo.CT Applications is a combination of fourteen (14) medical devices that are handled as features/functionalities within syngo.CT Applications.

    AI/ML Overview

    The provided text is a 510(k) summary for the device "syngo.CT Applications." It describes the device, its indications for use, and a comparison to a predicate device. However, it does not explicitly detail the acceptance criteria for the device's performance, nor does it present a study that proves the device meets specific performance metrics.

    Instead, the document primarily focuses on:

    • Substantial Equivalence: Arguing that the new version of syngo.CT Applications is substantially equivalent to a previously cleared version and other reference devices.
    • Software Verification and Validation: Stating that V&V activities were performed and that the device conforms to special controls and standards.
    • Risk Analysis: Confirming risk analysis was completed and controls implemented.
    • Compliance with Standards: Listing recognized consensus standards the device meets.

    Therefore, many of the requested details about acceptance criteria and a specific performance study are not available in the provided text.

    Here's an attempt to answer based on the information available and what can be inferred:

    Acceptance Criteria and Device Performance Study Information

    Disclaimer: The provided document is a 510(k) summary, primarily focused on demonstrating substantial equivalence to a predicate device, rather than a detailed report of a performance study with specific acceptance criteria and results. Therefore, much of the requested information regarding specific quantitative acceptance criteria and the details of a primary performance study is not explicitly stated in the text. The document states that "the testing supports that all software specifications have met the acceptance criteria," but does not list these criteria or detailed results.


    1. Table of Acceptance Criteria and Reported Device Performance

    As mentioned above, specific quantitative acceptance criteria and their corresponding reported performance values are not detailed in the provided 510(k) summary. The document broadly states:

    • "The testing supports that all software specifications have met the acceptance criteria."
    • "The testing results support that all the software specifications have met the acceptance criteria."
    • "The result of all testing conducted was found acceptable to support the claim of substantial equivalence."

    This indicates that internal acceptance criteria were established and met, but their specifics are not published here.


    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify the sample size used for any test set or the provenance (e.g., country of origin, retrospective/prospective) of data used for testing. It refers to "testing for verification and validation" but does not provide these details.


    3. Number of Experts and Qualifications for Ground Truth

    The document does not provide any information regarding the number of experts used to establish ground truth for a test set, nor their qualifications. Given the nature of the device (advanced visualization and measurement tools), it is likely that such evaluation involved clinical experts, but this is not stated.


    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for establishing ground truth or evaluating the test set.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size

    The document does not mention that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, nor does it provide any effect size regarding human reader improvement with or without AI assistance. The submission focuses on software changes and bundling previously cleared functionalities.


    6. Standalone (Algorithm Only) Performance Study

    The document primarily describes a software application that assists radiologists and physicians. While it refers to "software verification and validation," it does not explicitly describe a standalone (algorithm only without human-in-the-loop performance) study for any specific AI or image processing component with quantitative results. The functions described are tools for evaluation by a human user.


    7. Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any testing. Given the nature of medical imaging software, potential ground truth sources could include:

    • Expert consensus (e.g., radiologists, cardiologists)
    • Pathology reports
    • Clinical outcomes
    • Reference standards or phantoms

    However, the document does not specify which, if any, were used.

    8. Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set. This information is typically relevant for machine-learning-based algorithms; while the document mentions a "lung lobe segmentation algorithm" for syngo.CT Pulmo 3D and calculations performed by syngo.CT CaScoring, it does not delve into the specifics of their underlying models or training.


    9. How Ground Truth for the Training Set Was Established

    The document does not provide any information on how ground truth for the training set was established.

    K Number: K220450
    Date Cleared: 2022-03-07 (18 days)
    Product Code:
    Regulation Number: 892.1750
    Why did this record match?
    Device Name: syngo.CT Applications

    Intended Use

    syngo.CT Applications is a set of software applications for advanced visualization, measurement, and evaluation for specific body regions.

    This software package is designed to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice, e.g., in the:

    • Evaluation of perfusion of organs and tumors and myocardial tissue perfusion
    • Evaluation of bone structures and detection of bone lesions
    • Evaluation of CT images of the heart
    • Evaluation of the coronary lesions
    • Evaluation of the mandible and maxilla
    • Evaluation of dynamic vessels and extended phase handling
    • Evaluation of the liver and its intrahepatic vessel structures to identify the vascular territories of sub-vessel systems in the liver
    • Evaluation of neurovascular structures
    • Evaluation of the lung parenchyma
    • Evaluation of non-enhanced Head CT images
    • Evaluation of vascular lesions
    Device Description

    The syngo.CT Applications are syngo-based post-processing software applications used for viewing and evaluating CT images provided by a CT diagnostic device, enabling structured evaluation of CT images.

    syngo.CT Applications is a combination of thirteen (13) formerly separately cleared medical devices that are now handled as features/functionalities within syngo.CT Applications. These functionalities are combined unchanged compared to their former cleared descriptions; however, some minor enhancements and improvements were made to the syngo.CT Pulmo 3D application only.

    AI/ML Overview

    The provided document is a 510(k) summary for syngo.CT Applications, which is a consolidation of thirteen previously cleared medical devices. The document explicitly states that "The testing supports that all software specifications have met the acceptance criteria" and "The result of all testing conducted was found acceptable to support the claim of substantial equivalence." However, it does not explicitly define specific acceptance criteria (e.g., target accuracy, sensitivity, specificity values) for the device's performance or detail the specific studies that prove these criteria are met. Instead, it relies on the premise that the functionalities remain unchanged from the previously cleared predicate devices, with only minor enhancements to one application (syngo.CT Pulmo 3D).

    Therefore, based on the provided text, I cannot fill in precise quantitative values for acceptance criteria or specific study results for accuracy, sensitivity, or specificity. The information provided heavily emphasizes software verification and validation, risk analysis, and adherence to consensus standards, rather than detailing a comparative effectiveness study or standalone performance metrics against a defined ground truth.

    Here's a breakdown of the available information and what is missing:


    1. Table of acceptance criteria and the reported device performance:

    • Acceptance Criteria (specific metrics, e.g., sensitivity, specificity, accuracy targets): Not explicitly stated in the document. The document indicates that all software specifications met acceptance criteria, but these criteria are not detailed.
    • Reported Device Performance (specific values achieved in studies): Not explicitly stated in the document. The document refers to the device's functionality remaining unchanged from previously cleared predicate devices.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    • Sample Size for Test Set: Not specified in the document.
    • Data Provenance: Not specified in the document.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

    • Number of Experts: Not specified in the document.
    • Qualifications of Experts: Not specified in the document.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: Not specified in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • MRMC Study Done: No. The document does not mention any MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The submission focuses on the consolidation of existing, cleared applications.
    • Effect Size of Improvement: Not applicable, as no MRMC study is reported.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Standalone Study Done: Yes, implicitly. The document states, "The testing supports that all software specifications have met the acceptance criteria," suggesting that the software's performance was verified and validated independent of human interpretation to ensure its functionalities (visualization, measurement, evaluation) behave as intended. However, specific metrics (e.g., accuracy of a measurement tool compared to a gold standard) are not provided. The phrase "algorithm only" might not be fully accurate here given the device is a visualization and evaluation tool for human use, not an autonomous diagnostic AI.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Type of Ground Truth: Not explicitly specified. Given the nature of visualization and evaluation tools, it would likely involve comparisons to known values, measurements, or expert-reviewed datasets, but the document does not detail this.

    8. The sample size for the training set:

    • Training Set Sample Size: Not applicable/Not specified. The document describes the device as a consolidation of existing, cleared software applications with "minor enhancements and improvements" only to syngo.CT Pulmo 3D. It does not indicate that new machine learning models requiring large training sets were developed for this specific submission; rather, it refers to the performance of existing, cleared applications.

    9. How the ground truth for the training set was established:

    • How Ground Truth for Training Set was Established: Not applicable/Not specified, for the same reasons as point 8. The document does not describe a new AI model training process for this submission.

    Summary of Device Rationale:

    The core of this 510(k) submission is the consolidation of thirteen previously cleared syngo.CT applications into a single "syngo.CT Applications" product. The applicant, Siemens Medical Solutions USA, Inc., states that the functionalities within this combined product are "unchanged compared to their former cleared descriptions" with only "minor enhancements and improvements" in syngo.CT Pulmo 3D (specifically regarding color assignments for lobe borders).

    The document asserts that "The performance data demonstrates continued conformance with special controls for medical devices containing software." It also states, "The risk analysis was completed, and risk control implemented to mitigate identified hazards. The testing results support that all the software specifications have met the acceptance criteria. Testing for verification and validation of the device was found acceptable to support the claims of substantial equivalence."

    This implies that the "acceptance criteria" largely revolve around the continued functional performance and adherence to specifications of the already cleared individual applications, plus verification of the minor changes to syngo.CT Pulmo 3D, and the successful integration into a single software package. However, quantitative performance metrics for the device against specific clinical tasks are not provided in this 510(k) summary document, as the submission focuses on the substantial equivalence of the consolidated product to its predicate devices, rather than presenting new clinical efficacy data.
