Found 2 results

510(k) Data Aggregation

    K Number
    K233209
    Device Name
    uOmnispace.CT
    Date Cleared
    2024-05-17

    (232 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices: K230162, K230039, K170221, K133643, K182130

    Intended Use

    uOmnispace.CT is a software for viewing, manipulating, evaluating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additions:

    • The uOmnispace.CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
    • The uOmnispace.CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of the jaw.
    • The uOmnispace.CT Lung Density Analysis application is intended to segment pulmonary lobes and airway, providing the user quantitative parameters and structure information to evaluate the lung and airway.
    • The uOmnispace.CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
    • The uOmnispace.CT Vessel Analysis application is intended to provide a tool for viewing and evaluating CT vascular images.
    • The uOmnispace.CT Brain Perfusion application is intended to calculate parameters such as CBV, CBF, etc. in order to analyze functional blood flow information about a region of interest (ROI) in the brain.
    • The uOmnispace.CT Heart application is intended to segment the heart and extract the coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
    • The uOmnispace.CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
    • The uOmnispace.CT Dynamic Analysis application is intended to support visualization of the CT datasets over time with the 3D/4D display modes.
    • The uOmnispace.CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and support a batch function for the intervertebral disk.
    • The uOmnispace.CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segment evaluation.
    • The uOmnispace.CT Dual Energy application is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. It is intended to provide information on the chemical composition of the scanned body materials and/or contrast agents. Additionally, it enables images to be generated at multiple energies within the available spectrum.
    • The uOmnispace.CT Cardiovascular Combined Analysis application is an image analysis software package for evaluating contrast-enhanced CT images. It is intended to analyze vascular and cardiac structures. It can be used for the qualitative and quantitative analysis of head-neck, abdomen, multi-body-part combined, and TAVR (Transcatheter Aortic Valve Replacement) CT data as input for the planning of cardiovascular procedures.

    Device Description

    The uOmnispace.CT is a post-processing software based on the uOmnispace platform for viewing, manipulating, evaluating and analyzing medical images. It can run alone or together with other advanced commercially cleared applications.

    AI/ML Overview

    The provided text describes the performance data for three AI/ML algorithms integrated into the uOmnispace.CT software: Spine Labeling Algorithm, Rib Labeling Algorithm, and TAVR Analysis Algorithm.

    Here's a breakdown of the acceptance criteria and study details for each:


    1. Spine Labeling Algorithm

    Acceptance Criteria Table:

    | Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
    | --- | --- | --- | --- |
    | Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |

    Study Proving Device Meets Acceptance Criteria:

    • Sample Size for Test Set: 120 subjects.
    • Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups included U.S. (90 subjects) and Asia (30 subjects) for ethnicity.
    • Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
    • Qualifications of Experts: Licensed physicians with U.S. credentials.
    • Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to set annotation points and assign anatomical labels. All ground truth was finally evaluated by two licensed physicians with U.S. credentials. This suggests a post-annotation review/adjudication by experts.
    • MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
    • Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
    • Type of Ground Truth Used: Expert consensus (annotators + review by licensed physicians).
    • Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm."
    • How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
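
    The score-based acceptance test described above reduces to a simple check: average the graders' per-case scores and compare the overall mean to the 4-point threshold. A minimal sketch for illustration only (the function name and example scores are hypothetical; the submission does not disclose the actual scoring code):

    ```python
    # Hypothetical sketch of a score-based acceptance check. Each test case
    # receives a score (e.g., on a 1-5 scale) from the graders; the criterion
    # passes if the mean score across all cases exceeds the threshold.

    def passes_score_criterion(case_scores, threshold=4.0):
        """Return (mean_score, passed) for a list of per-case scores."""
        if not case_scores:
            raise ValueError("no scores provided")
        mean_score = sum(case_scores) / len(case_scores)
        return mean_score, mean_score > threshold

    # Made-up scores for illustration only:
    scores = [5, 5, 4, 5, 5]
    mean_score, passed = passes_score_criterion(scores)
    print(f"mean={mean_score:.2f}, passed={passed}")  # mean=4.80, passed=True
    ```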

    2. Rib Labeling Algorithm

    Acceptance Criteria Table:

    | Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
    | --- | --- | --- | --- |
    | Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |

    Study Proving Device Meets Acceptance Criteria:

    • Sample Size for Test Set: 120 subjects.
    • Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups included U.S. (80 subjects) and Asia (40 subjects) for ethnicity.
    • Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
    • Qualifications of Experts: Licensed physicians with U.S. credentials.
    • Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to generate initial rib masks, which were then refined, and anatomical labels assigned. After the first round, annotators "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
    • MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
    • Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
    • Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
    • Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."
    • How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.

    3. TAVR Analysis Algorithm

    Acceptance Criteria Table:

    | Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
    | --- | --- | --- | --- |
    | Verify the consistency with ground truth (Mean Landmark Error) | The mean landmark error between the proposed device results and ground truth is less than the threshold, 1 mm. | 0.86 mm | Yes |
    | Subjective scoring by doctors with U.S. professional qualifications | The average score of the evaluation criteria is higher than 2. | 3 points | Yes |

    Study Proving Device Meets Acceptance Criteria:

    • Sample Size for Test Set: 60 subjects.
    • Data Provenance: Retrospective. Clinical subgroups included Asia (25 subjects) and U.S. (35 subjects) for ethnicity, including data from U.S. Facility 1 (25 subjects) and U.S. Facility 2 (10 subjects).
    • Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials for the final evaluation of the ground truth.
    • Qualifications of Experts: Licensed physicians with U.S. credentials (specifically, "two MD with the American Board of Radiology Qualification" for the subjective scoring).
    • Adjudication Method: Ground truth annotations were made by "well-trained annotators." After the first round of annotation, they "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
    • MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth and subjective expert scoring.
    • Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on landmark error and subjective expert scoring.
    • Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
    • Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the post-processing algorithm is independent of the data used to test the algorithm."
    • How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
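
    The mean-landmark-error criterion above is typically computed as the average Euclidean distance between predicted and ground-truth landmark coordinates. A hedged sketch with made-up coordinates (the function name and data are hypothetical, not taken from the submission):

    ```python
    import math

    # Hypothetical sketch of a mean-landmark-error check. Each landmark is a
    # 3D point in mm; the per-landmark error is the Euclidean distance between
    # the algorithm's prediction and the ground-truth annotation.

    def mean_landmark_error(predicted, ground_truth):
        """Mean Euclidean distance (mm) between paired 3D landmark lists."""
        if not predicted or len(predicted) != len(ground_truth):
            raise ValueError("landmark lists must be non-empty and paired")
        total = sum(math.dist(p, g) for p, g in zip(predicted, ground_truth))
        return total / len(predicted)

    # Made-up coordinates (mm) for illustration only:
    pred = [(10.0, 20.0, 30.0), (11.0, 21.0, 31.5)]
    truth = [(10.5, 20.0, 30.0), (11.0, 22.0, 31.5)]
    err = mean_landmark_error(pred, truth)
    print(f"mean landmark error = {err:.2f} mm, passes (< 1 mm): {err < 1.0}")
    ```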

    K Number
    K200515
    Date Cleared
    2020-03-25

    (23 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Reference Devices: K170221

    Intended Use

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    Device Description

    syngo.CT Cardiac Planning is an image analysis software package for evaluating contrast enhanced CT images. The software package is designed to support the physician in the qualitative and quantitative analysis of morphology and pathology of vascular and cardiac structures, with the overarching purpose of serving as input for planning of cardiovascular procedures.

    syngo.CT Cardiac Planning includes tools that support the clinician at different steps during diagnosis, including reading and reporting. The user has full control of the reported measurements and images and is able to choose the appropriate function suited for their clinical need. Features included in this software that aid in diagnosis can be grouped in the following categories:

    • Basic reading: commodity features that are commonly available on CT cardiac postprocessing workstations.
    • Advanced reading: additional features for increased user support during CT cardiac postprocessing.
    AI/ML Overview

    This document, K200515, describes Siemens' syngo.CT Cardiac Planning software. It states that the submission aims to clear an "error correction that return the Cardiac Planning software to its original specifications" and mentions that there are "no differences between the subject device and the predicate device" and no "new features or modification to already cleared features." Based on this, a full comparative effectiveness study with human readers (MRMC) or a standalone (algorithm only) performance study against clinical ground truth is not expected or provided. The document focuses on verification and validation demonstrating that the software performs as intended after the error correction, aligning with the original cleared specifications.

    Given the nature of this 510(k) submission, the provided text does not contain information about specific acceptance criteria related to clinical performance metrics (like sensitivity, specificity, accuracy) or a study proving the device meets these criteria in the typical sense for a new AI/ML device. Instead, the "acceptance criteria" discussed are largely related to software verification and validation, ensuring the corrected software meets its design specifications and maintains substantial equivalence to the predicate device.

    Here's an analysis based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not provide a table of performance acceptance criteria (e.g., sensitivity, specificity, or specific measurement accuracy thresholds) for the device's diagnostic capabilities. Instead, it refers to:

    | Acceptance Criteria Type | Reported Device Performance (Summary) |
    | --- | --- |
    | Software Specifications | All software specifications met. |
    | Corrective Measures | Corrective measures implemented meet predetermined acceptance values. |
    | Verification & Validation | Functions work as designed; performance requirements and specifications met. All hazard mitigations fully implemented. |
    | Risk Management | Risk analysis performed (ISO 14971 compliant). Risk control implemented to mitigate identified hazards. |

    The "Correction of the measurement algorithm" for "Measurement Tools" within the TAVI Feature is the specific area where a change was made and subsequently verified. The performance reported is that this correction brings the software back to its "original specifications" and achieves "substantially equivalent" performance to the predicate.
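
    Verification of an error correction of this kind typically amounts to regression-style testing: re-running the corrected measurement algorithm on fixed inputs and checking each output against the originally specified value within a predetermined tolerance. A hypothetical sketch (the measurement names, values and tolerance below are invented for illustration; the submission does not disclose the actual acceptance values):

    ```python
    # Hypothetical regression-style check: compare corrected-algorithm outputs
    # against the originally specified expected values within a tolerance.

    def verify_against_spec(results, expected, tolerance):
        """Return a list of (name, measured, target, passed) tuples."""
        report = []
        for name, measured in results.items():
            target = expected[name]
            report.append((name, measured, target, abs(measured - target) <= tolerance))
        return report

    # Made-up TAVI-style annulus measurements (mm) for illustration only:
    corrected = {"annulus_diameter": 24.1, "annulus_perimeter": 76.0}
    spec = {"annulus_diameter": 24.0, "annulus_perimeter": 75.8}
    report = verify_against_spec(corrected, spec, tolerance=0.5)
    all_passed = all(passed for _, _, _, passed in report)
    print(f"all measurements within tolerance: {all_passed}")
    ```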

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not specify the sample size of a test set (e.g., number of patient cases) used for clinical performance evaluation. The testing described is primarily focused on software verification and validation, rather than a clinical performance study with patient data. Therefore, data provenance (country of origin, retrospective/prospective) is not applicable in the context of clinical performance evaluation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    Not applicable, as a clinical performance study with a test set requiring expert ground truth is not detailed in this submission. The focus is on software function and correction verification.

    4. Adjudication Method for the Test Set:

    Not applicable for the same reasons as above.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    A multi-reader multi-case (MRMC) comparative effectiveness study was not conducted and is not mentioned. The submission's core purpose is to demonstrate that an error correction maintains the device's original cleared specifications, not to show improvement over human readers.

    6. Standalone (Algorithm Only) Performance Study:

    A standalone (algorithm only) performance study (e.g., measuring diagnostic accuracy independent of a human user) was not conducted and is not described. The document pertains to an error correction in existing, cleared software.

    7. Type of Ground Truth Used:

    The document implies a ground truth based on the original design specifications and expected behavior of the syngo.CT Cardiac Planning software (K170221). The testing confirmed that the corrected measurement algorithm performs according to these original specifications, which serve as the implicit "ground truth" for the verification activities. There is no mention of external clinical ground truth (e.g., pathology, outcomes data) for validating a diagnostic claim in this submission.

    8. Sample Size for the Training Set:

    Not applicable. This submission is for an error correction to an existing software product, not the development of a new algorithm that would involve a training set.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set is mentioned or implied for this submission.

