
510(k) Data Aggregation

    K Number: K192402
    Date Cleared: 2019-09-20 (17 days)
    Product Code:
    Regulation Number: 892.1750
    Reference Devices: K173625, K150757

    Intended Use

    syngo.CT Extended Functionality is intended to provide advanced visualization tools to prepare and process medical images for diagnostic purposes. The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data acquired and reconstructed by Computed Tomography (CT) scanners and, possibly, other medical imaging modalities (e.g., MR scanners).

    An interface shall enable the connection between the syngo.CT Extended Functionality software package and the connected CT scanner system.
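    To illustrate the kind of handoff such an interface implies, here is a minimal, hypothetical sketch (Python with pydicom and NumPy, neither of which the submission mentions) of how a post-processing package could ingest a reconstructed CT series from a scanner; the function, paths, and pipeline are illustrative assumptions, not Siemens' implementation.

```python
# Hypothetical sketch: ingesting a reconstructed CT series for
# post-processing. pydicom, NumPy, and every name below are
# assumptions for illustration, not the vendor's actual interface.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str) -> np.ndarray:
    """Read a directory of CT slices and return a volume in Hounsfield units."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the patient z-axis before stacking.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    # Apply the DICOM rescale tags to convert stored values to HU.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
```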

    Result images created with the syngo.CT Extended Functionality software package can be used to assist trained technicians or physicians in diagnosis.

    Device Description

    syngo.CT Extended Functionality is a software bundle that offers tools to support special clinical evaluations. These tools are provided as so-called Extensions. Using the Extensions, syngo.CT Extended Functionality can create advanced visualizations and measurements on clinical data acquired and reconstructed by Computed Tomography (CT) scanners or other medical imaging modalities (e.g., MR scanners).
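    To make the "Extensions" idea concrete, the following is a purely illustrative plug-in registry sketch in Python; the names (register_extension, vessel_enhance) are invented for this example and say nothing about the product's actual design.

```python
# Purely illustrative plug-in registry; all names are invented for
# this sketch and say nothing about Siemens' actual architecture.
from typing import Callable, Dict

import numpy as np

_EXTENSIONS: Dict[str, Callable[[np.ndarray], np.ndarray]] = {}


def register_extension(name: str):
    """Register an image-processing Extension under a lookup name."""
    def wrap(func: Callable[[np.ndarray], np.ndarray]):
        _EXTENSIONS[name] = func
        return func
    return wrap


@register_extension("vascular")
def vessel_enhance(volume: np.ndarray) -> np.ndarray:
    # Placeholder step standing in for a real vessel-analysis Extension.
    return np.clip(volume, -100.0, 1500.0)


def run_extension(name: str, volume: np.ndarray) -> np.ndarray:
    """Dispatch a loaded volume to the named Extension."""
    return _EXTENSIONS[name](volume)
```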

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the syngo.CT Extended Functionality device, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance Study

    The provided document describes the syngo.CT Extended Functionality as a software bundle offering tools for advanced visualization and measurements on medical images. It serves as an extension of a previously cleared predicate device, syngo.CT Clinical Extensions (K173625). The focus of the 510(k) submission is to demonstrate substantial equivalence to the predicate device, primarily through verification and validation of software functionality, especially the new Interactive Spectral Imaging (ISI) feature and modifications to existing extensions.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly state "acceptance criteria" in a quantitative table with specific target values (e.g., accuracy > X%, sensitivity > Y%). Instead, the acceptance criteria are implicitly defined by the successful completion of various software verification and validation activities, performance standards, and the demonstration that the device "performs as intended" and is "comparable to the predicate devices in terms of technological characteristics and safety and effectiveness."

    Implicit Acceptance Criteria and Demonstrated Performance:

    | Acceptance Criterion (Implicit) | Reported Device Performance / Validation |
    | --- | --- |
    | Functional Performance (General) | All conducted testing was found acceptable to support the claim of substantial equivalence; the device "performs as intended." |
    | Functional Performance (New/Modified Features) | Interactive Spectral Imaging (ISI): a "phantom-based validation" and "Detailed Description and Bench Tests" showed the feature "operates as intended."<br>Vascular/Vessel Extension: "This modification is a usability improvement" (implies successful verification of the improved functionality).<br>Oncology Extension: "Modified to support MR data for Diameter WHO" (implies successful verification of MR data processing).<br>Multiphase Support for Merged 4D Series: the grouping logic was extended to include cardiac-gated datasets (implies successful verification of the extended grouping logic). |
    | Compliance with Safety and Performance Standards | Digital Imaging and Communications in Medicine (DICOM) Set, PS 3.1–3.20 (Recognition No. 12-300);<br>IEC 62304:2006 (1st Edition), Medical Device Software – Software Life Cycle Processes (Recognition No. 13-32);<br>ISO 14971:2007 (Second Edition, 2007-03-01), Application of Risk Management to Medical Devices (Recognition No. 5-40);<br>IEC 62366-1:2015, Application of Usability Engineering to Medical Devices (Recognition No. 5-114). |
    | Software Quality and Risk Management | Software documentation for a Moderate Level of Concern device per FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005) is included in the submission;<br>risk management is ensured via a hazard analysis identifying potential hazards, which are controlled during development, verification, and validation testing (a schematic hazard-record sketch follows this table);<br>the device labeling contains instructions for use and any necessary cautions and warnings for safe and effective use. |
    | Equivalence to Predicate Device | Deemed "as safe, as effective, and with performance substantially equivalent to the commercially available predicate devices"; test results show the subject device is "comparable to the predicate devices in terms of technological characteristics and safety and effectiveness." |
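    As a schematic of what the hazard analysis referenced above might record, here is an invented example in Python; the fields, rating scales, and example entry are assumptions, not content from the submission.

```python
# Schematic, invented example of a hazard-analysis record of the kind
# an ISO 14971-style risk process produces; the fields, scales, and
# example row are assumptions, not content from the submission.
from dataclasses import dataclass


@dataclass
class HazardRecord:
    hazard: str       # potential hazard identified during analysis
    cause: str        # foreseeable sequence of events leading to it
    severity: int     # assumed scale: 1 (negligible) .. 5 (catastrophic)
    probability: int  # assumed scale: 1 (improbable) .. 5 (frequent)
    mitigation: str   # control verified during development and V&V

    @property
    def risk_index(self) -> int:
        # A common (here assumed) ranking: severity times probability.
        return self.severity * self.probability


example = HazardRecord(
    hazard="Incorrect quantitative measurement displayed",
    cause="Wrong rescale parameters applied to the input series",
    severity=4,
    probability=2,
    mitigation="Input validation plus verification against phantom ground truth",
)
```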

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document explicitly mentions that a "phantom-based validation" was conducted for the Interactive Spectral Imaging (ISI) functionality, but no specific number of images, patient cases, or phantom instances is provided. For other features, it refers to "non-clinical tests" and "bench tests" without giving sample sizes.
    • Data Provenance:
      • The "phantom-based validation" suggests synthetic or controlled data.
      • No specific country of origin for any human patient data is mentioned, nor whether it was retrospective or prospective. Given that this is a software update for an existing imaging workstation, it's highly probable that internal test data, possibly from various sources (pre-existing clinical de-identified data or synthetic data), was used for verification.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number or qualifications of experts used to establish ground truth for any aspect of the testing. The testing described appears to focus on technical performance and functional verification rather than on diagnostic accuracy studies involving human expert reads. The statement that "Result images created with the syngo.CT Extended Functionality software package can be used to assist trained technicians or physicians in diagnosis" implies that the software provides tools to support diagnosis rather than rendering a diagnosis itself, which aligns with the focus on technical verification.

    4. Adjudication Method for the Test Set

    No information is provided regarding an adjudication method. This is consistent with the likely focus on technical functional testing (e.g., verifying image transformations, measurements within software, and compliance with standards) rather than clinical accuracy studies requiring human reader agreement.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not explicitly mentioned or performed. The submission focuses on demonstrating substantial equivalence through technical and functional verification and compliance with standards, rather than a clinical trial assessing human reader performance with and without AI assistance. The device is described as providing "advanced visualization tools" to "support technicians and physicians," not necessarily an AI-driven diagnostic aid that would directly influence human reader accuracy in the way a CAD (Computer-Aided Detection) system might.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The document describes "performance tests" and "phantom-based validation" for the Interactive Spectral Imaging (ISI) functionality and "non-clinical tests and a phantom-based bench test" for other features. These types of tests are inherently standalone performance evaluations of the software's functionality, without a human in the loop for the performance assessment itself (though a human would operate the software). The focus is on whether the software performs its intended function accurately (e.g., correct image generation, measurement calculations).
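    As an illustration of what such a standalone functional test can look like in practice, the hedged sketch below feeds a synthetic input with a known answer to a measurement routine and asserts the output against a tolerance; the measurement function and tolerance are assumptions, not the actual test protocol.

```python
# Hedged sketch of a standalone functional test: a synthetic input
# with a known answer is fed to a measurement routine and the output
# is asserted against a tolerance. The function and tolerance are
# assumptions, not the actual test protocol.
import numpy as np


def measure_diameter_mm(mask: np.ndarray, spacing_mm: float) -> float:
    """Toy stand-in for a lesion-diameter measurement tool."""
    cols = np.where(mask.any(axis=0))[0]
    return (cols.max() - cols.min() + 1) * spacing_mm


def test_diameter_on_synthetic_disk():
    mask = np.zeros((64, 64), dtype=bool)
    yy, xx = np.ogrid[:64, :64]
    mask[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = True  # radius 10 px
    # Expected: 21 pixels across (inclusive) * 0.5 mm spacing = 10.5 mm.
    assert abs(measure_diameter_mm(mask, 0.5) - 10.5) <= 0.5
```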

    7. The Type of Ground Truth Used for the Test Set

    The ground truth for the testing appears to be primarily technical specifications, phantom measurements, and expected output values based on the software's design and engineering requirements. For the "phantom-based validation" of ISI, the ground truth would likely be the known material properties or quantitative measurements of the phantom. For other functional tests, it would be the expected software output when processing specific input data according to defined algorithms. No mention of expert consensus, pathology, or outcomes data as ground truth is made for the described testing.
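    A phantom-based check of this kind typically reduces to comparing a measured quantity against the phantom's known value. The sketch below assumes nominal HU values and a tolerance that are placeholders, not figures from the submission.

```python
# Illustrative phantom-style check: a measured mean HU in a region of
# interest is compared against the phantom's known value. The nominal
# values and tolerance are placeholders, not figures from the submission.
import numpy as np

KNOWN_INSERTS_HU = {"water": 0.0, "bone": 1000.0}  # assumed nominal values


def check_phantom(volume: np.ndarray, masks: dict, tol_hu: float = 10.0) -> None:
    """Assert that each insert's measured mean HU matches ground truth."""
    for name, expected in KNOWN_INSERTS_HU.items():
        measured = float(volume[masks[name]].mean())
        assert abs(measured - expected) <= tol_hu, (name, measured, expected)
```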

    8. The Sample Size for the Training Set

    The document is a 510(k) summary for a software update and extension of an existing product; the testing described is verification and validation of a mature software product built on established technology. It does not mention or provide information about a "training set" for AI/ML algorithms. The device's description as offering "advanced visualization tools" and "measurements" suggests traditional image processing and analysis algorithms rather than deep learning or AI that would require large, labeled training datasets. The "Interactive Spectral Imaging" feature is described as allowing the user to display representations of Dual Energy data, which typically relies on pre-defined material decomposition algorithms rather than trained models.
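    For intuition on what a pre-defined (non-learned) two-material decomposition can look like, here is a minimal sketch that solves a 2x2 linear system per voxel; the basis coefficients are made-up placeholders, and nothing here is drawn from the actual ISI implementation.

```python
# Minimal sketch of a pre-defined (non-learned) two-material
# decomposition for Dual Energy CT: solve a 2x2 linear system per
# voxel. The basis coefficients are made-up placeholders, not
# calibrated values from ISI.
import numpy as np

# Rows: [low-kVp, high-kVp]; columns: basis materials (e.g., water, iodine).
A = np.array([[0.20, 0.45],
              [0.18, 0.25]])  # placeholder attenuation coefficients


def decompose(mu_low: np.ndarray, mu_high: np.ndarray):
    """Solve A @ c = mu voxel-wise; return the two basis-material maps."""
    mu = np.stack([mu_low.ravel(), mu_high.ravel()])  # shape (2, N)
    c = np.linalg.solve(A, mu)                        # shape (2, N)
    return c[0].reshape(mu_low.shape), c[1].reshape(mu_low.shape)
```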

    9. How the Ground Truth for the Training Set Was Established

    Since no "training set" for AI/ML is mentioned, the method for establishing its ground truth is not applicable here. The ground truth for the verification and validation of this device (as per point 7 above) would be based on technical specifications and known physical properties, not a labeled training dataset derived from human expert annotations.
