Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K200515
    Date Cleared
    2020-03-25

    (23 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    syngo.CT Cardiac Planning

    Intended Use

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    Device Description

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    syngo.CT Cardiac Planning includes tools that support the clinician at different steps during diagnosis, including reading and reporting. The user has full control of the reported measurements and images and is able to choose the appropriate function suited for their clinical need. Features included in this software that aid in diagnosis can be grouped in the following categories:

    • Basic reading: commodity features that are commonly available on CT cardiac post-processing workstations.
    • Advanced reading: additional features for increased user support during CT cardiac post-processing.
    AI/ML Overview

    This document, K200515, describes Siemens' syngo.CT Cardiac Planning software. It states that the submission aims to clear an "error correction that return the Cardiac Planning software to its original specifications" and mentions that there are "no differences between the subject device and the predicate device" and no "new features or modification to already cleared features." Based on this, a full comparative effectiveness study with human readers (MRMC) or a standalone (algorithm only) performance study against clinical ground truth is not expected or provided. The document focuses on verification and validation demonstrating that the software performs as intended after the error correction, aligning with the original cleared specifications.

    Given the nature of this 510(k) submission, the provided text does not contain information about specific acceptance criteria related to clinical performance metrics (like sensitivity, specificity, accuracy) or a study proving the device meets these criteria in the typical sense for a new AI/ML device. Instead, the "acceptance criteria" discussed are largely related to software verification and validation, ensuring the corrected software meets its design specifications and maintains substantial equivalence to the predicate device.

    Here's an analysis based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not provide a table of performance acceptance criteria (e.g., sensitivity, specificity, or specific measurement accuracy thresholds) for the device's diagnostic capabilities. Instead, it refers to:

    Acceptance Criteria Type  | Reported Device Performance (Summary)
    Software Specifications   | All software specifications met.
    Corrective Measures       | Corrective measures implemented meet predetermined acceptance values.
    Verification & Validation | Functions work as designed; performance requirements and specifications met. All hazard mitigations fully implemented.
    Risk Management           | Risk analysis performed (ISO 14971 compliant); risk control implemented to mitigate identified hazards.

    The "Correction of the measurement algorithm" for "Measurement Tools" within the TAVI Feature is the specific area where a change was made and subsequently verified. The performance reported is that this correction brings the software back to its "original specifications" and achieves "substantially equivalent" performance to the predicate.
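    The submission does not disclose how the correction was verified in code, but the specification-based verification described above can be illustrated with a minimal, hypothetical regression-style test: a corrected measurement routine is checked against a value from the original cleared specification within a predefined tolerance. The function names, phantom geometry, and tolerance below are illustrative assumptions, not details from the 510(k).

    ```python
    import math

    def measure_annulus_diameter_mm(contour_points):
        """Toy stand-in for a corrected measurement algorithm:
        twice the mean distance of contour points from their centroid."""
        n = len(contour_points)
        cx = sum(p[0] for p in contour_points) / n
        cy = sum(p[1] for p in contour_points) / n
        mean_radius = sum(math.hypot(p[0] - cx, p[1] - cy)
                          for p in contour_points) / n
        return 2.0 * mean_radius

    def verify_against_spec(measured_mm, spec_mm, tolerance_mm=0.1):
        """Pass/fail check against the originally cleared specification."""
        return abs(measured_mm - spec_mm) <= tolerance_mm

    # Synthetic circular phantom of known 25.0 mm diameter (hypothetical test input).
    phantom = [(12.5 * math.cos(t), 12.5 * math.sin(t))
               for t in (2 * math.pi * k / 360 for k in range(360))]
    result = measure_annulus_diameter_mm(phantom)
    print(verify_against_spec(result, spec_mm=25.0))  # True for this phantom
    ```

    A real V&V suite would run many such checks (one per specified measurement and tolerance) and record the results as objective evidence that the corrected software matches its original specifications.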

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not specify the sample size of a test set (e.g., number of patient cases) used for clinical performance evaluation. The testing described is primarily focused on software verification and validation, rather than a clinical performance study with patient data. Therefore, data provenance (country of origin, retrospective/prospective) is not applicable in the context of clinical performance evaluation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    Not applicable, as a clinical performance study with a test set requiring expert ground truth is not detailed in this submission. The focus is on software function and correction verification.

    4. Adjudication Method for the Test Set:

    Not applicable for the same reasons as above.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    A multi-reader multi-case (MRMC) comparative effectiveness study was not conducted and is not mentioned. The submission's core purpose is to demonstrate that an error correction maintains the device's original cleared specifications, not to show improvement over human readers.

    6. Standalone (Algorithm Only) Performance Study:

    A standalone (algorithm only) performance study (e.g., measuring diagnostic accuracy independent of a human user) was not conducted and is not described. The document pertains to an error correction in existing, cleared software.

    7. Type of Ground Truth Used:

    The document implies a ground truth based on the original design specifications and expected behavior of the syngo.CT Cardiac Planning software (K170221). The testing confirmed that the corrected measurement algorithm performs according to these original specifications, which serve as the implicit "ground truth" for the verification activities. There is no mention of external clinical ground truth (e.g., pathology, outcomes data) for validating a diagnostic claim in this submission.

    8. Sample Size for the Training Set:

    Not applicable. This submission is for an error correction to an existing software product, not the development of a new algorithm that would involve a training set.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set is mentioned or implied for this submission.


    K Number
    K170221
    Date Cleared
    2017-04-21

    (86 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    syngo.CT Cardiac Planning

    Intended Use

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    Device Description

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    syngo.CT Cardiac Planning includes tools that support the clinician at different steps during diagnosis, including reading and reporting. The user has full control of the reported measurements and is able to choose the appropriate function suited for their clinical need. Features included in this software that aid in diagnosis can be grouped in the following categories:

    • Basic reading: commodity features that are commonly available on CT cardiac post-processing workstations.
    • Advanced reading: additional features for increased user support during CT cardiac post-processing.

    The user can operate the application in basic reading mode only, or advanced reading if deemed appropriate for the clinical task.

    If results are not as expected by the user (e.g. due to poor image quality caused by image artifacts such as noise, pacemaker artifacts, stair-step artifacts, or incorrect contrast timing), the user can modify the computations or discard them and perform a manual diagnosis. The corresponding information is kept in the reporting object, which is stored in the syngo.via database.

    As syngo.CT Cardiac Planning is designed for cardiovascular analysis, there are minimal requirements regarding the loaded data. The application requires contrast-enhanced CT data in order to delineate cardiac vasculature and valvular apparatuses and/or properly segment the blood pool in the heart chambers. If the user loads data without contrast agent, the algorithms will not work properly; clear visual feedback is then provided via a message box, no further measurements are produced, and the algorithm stops the calculation. The user is asked to manually define the location of the annular plane and continue working from there. If that is not possible, only an axial slice-based manual reading of the case can be performed in the application.
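    The fallback workflow described above (automatic processing on contrast-enhanced data, otherwise a prompt for a user-defined annular plane, and manual axial reading as a last resort) can be sketched as simple branching logic. All function and field names here are hypothetical illustrations; none come from the actual product.

    ```python
    # Hedged sketch of the no-contrast fallback behavior described above.
    # "volume", "has_contrast", and "user_defined_plane" are assumed inputs.

    def plan_cardiac_case(volume, has_contrast, user_defined_plane=None):
        """Return a dict describing which reading mode is available."""
        if not has_contrast:
            if user_defined_plane is None:
                # Automatic segmentation would fail: stop and inform the user.
                return {"mode": "manual_axial_reading",
                        "message": "Non-contrast data: please define the "
                                   "annular plane manually."}
            # User supplied the annular plane: continue from there.
            return {"mode": "semi_automatic", "plane": user_defined_plane}
        # Contrast-enhanced data: full automatic delineation can proceed.
        return {"mode": "automatic"}

    print(plan_cardiac_case(volume=None, has_contrast=False)["mode"])
    # -> manual_axial_reading
    ```

    The key design point mirrored here is that the software never silently produces measurements on unsuitable data; it degrades explicitly, keeping the trained user in control at every step.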

    AI/ML Overview

    Based on the provided text, the "syngo.CT Cardiac Planning" device is a post-processing software application. The document primarily focuses on demonstrating substantial equivalence to a predicate device (syngo.CT Cardiac Function) rather than presenting a standalone clinical study with detailed acceptance criteria and performance metrics.

    Therefore, many of the requested details related to clinical performance, ground truth, multi-reader studies, and specific acceptance criteria for diagnostic accuracy are not explicitly present in this 510(k) summary. The summary emphasizes non-clinical testing, software verification/validation, and comparison of technological characteristics.

    Here's an attempt to answer your questions based only on the provided text, highlighting what is available and what is not:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a table of acceptance criteria with specific performance metrics (e.g., sensitivity, specificity, accuracy) for a clinical study comparing the device's output against a ground truth. Instead, it states:

    "Performance tests were conducted to test the functionality of syngo.CT Cardiac Planning. These tests have been performed to test the ability of the included features[, and] the results of these tests demonstrate that the subject device performs as intended. The result of all conducted testing was found acceptable to support the claim of substantial equivalence."

    This indicates functional and technical performance acceptance, but not clinical diagnostic performance criteria.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document mentions "Performance tests were conducted" and "non-clinical as well as bench-level tests have been conducted." It does not specify a sample size for a test set of medical images (i.e., patient cases) used to evaluate the device's clinical performance. It does not mention data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable, as no clinical test set for diagnostic accuracy with expert ground truth is described in the provided text. The document states: "all image data are to be interpreted by trained personnel" and "the trained user has full control on any conducted step during the whole process," implying human oversight and interpretation are still required.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as no clinical test set requiring adjudication is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study is mentioned. The device is a "post-processing software package designed to support the physician," implying it's a tool for assistance, but no study on human performance improvement with or without the tool is presented.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    The document states: "both subject and predicate devices do neither provide primarily diagnosis nor automated diagnostics interpretation capabilities." This implies it's not a standalone diagnostic algorithm. Its purpose is to "support the physician in the qualitative and quantitative analysis" and serve as "input for planning," so standalone performance in a diagnostic capacity is not presented or claimed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable, as no clinical performance study requiring external ground truth is described. The "ground truth" for the non-clinical and bench-level tests would be the expected functional outcome of the software or adherence to technical specifications.

    8. The sample size for the training set

    Not applicable. The document describes a "post-processing software application" that "reuses the algorithms and technology as provided in the predicate device." It does not mention Machine Learning (ML) training or a training set. This appears to be a software update or re-design of existing algorithms integrated into a new product, rather than a new AI/ML model requiring a training/validation paradigm.

    9. How the ground truth for the training set was established

    Not applicable, as no training set is mentioned for an ML algorithm.

