
510(k) Data Aggregation

    K Number
    K193294
    Date Cleared
    2020-07-10

    (226 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name:

    AI-Rad Companion Engine

    Intended Use

    AI-Rad Companion Engine is a software platform that provides basic visualization and enables external post-processing extensions for medical images used for diagnostic purposes.

    The software platform is designed to support technicians and trained physicians in qualitative and quantitative measurement and analysis of clinical data. The software platform provides the means for viewing, storing and transferring data into other systems such as PACS systems.

    The software platform also provides an interface to integrate additional Siemens Healthineers' clinical processing extensions.

    AI-Rad Companion Engine functionality includes:

    • Interface for multi-modality and multi-vendor Input/Output of DICOM data
    • Check of data validity using information from DICOM tags
    • Interface for extensions that provide post-processing functionality
    • Confirmation user interface for visualization of medical images processed by extensions
    • Configuration user interface for configuration of the medical device and extensions
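The DICOM tag validity check listed above can be illustrated with a small sketch. The required-tag set, the supported-modality list, and the `validate_tags` helper below are hypothetical illustrations, not Siemens' actual implementation, which is not described in the 510(k).

```python
# Hypothetical sketch of a DICOM-tag validity check. The required-tag set
# and supported modalities are illustrative assumptions; the actual
# AI-Rad Companion logic is not disclosed in the 510(k) summary.

REQUIRED_TAGS = {"PatientID", "StudyInstanceUID", "SeriesInstanceUID",
                 "SOPInstanceUID", "Modality"}
SUPPORTED_MODALITIES = {"CT", "MR"}  # assumed; the real list is not stated

def validate_tags(header: dict) -> list:
    """Return a list of problems found in a parsed DICOM header dict."""
    problems = [f"missing tag: {t}" for t in sorted(REQUIRED_TAGS - header.keys())]
    modality = header.get("Modality")
    if modality is not None and modality not in SUPPORTED_MODALITIES:
        problems.append(f"unsupported modality: {modality}")
    return problems

# Example: a header missing SeriesInstanceUID, with an unsupported modality
hdr = {"PatientID": "123", "StudyInstanceUID": "1.2.3",
       "SOPInstanceUID": "1.2.3.4", "Modality": "US"}
print(validate_tags(hdr))
```

A real implementation would parse actual DICOM objects (e.g., with a DICOM toolkit) rather than a plain dict, but the gating logic would have this shape: reject or flag inputs before handing them to post-processing extensions.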
    Device Description

    AI-Rad Companion Engine, as previously cleared under K183272, has been enhanced in version VA20. AI-Rad Companion Engine still provides the platform for all clinical extensions of the AI-Rad Companion system and still falls under the same classification regulation as the predicate device. The engine supports DICOM communication, enabling post-processing extensions for medical images to be used for diagnostic purposes.

    AI-Rad Companion Engine will receive the imaging data to be processed either from an imaging modality or via auto-routing from the PACS system or a DICOM gateway. The results of the AIRC Extensions will be sent back to a configurable target node also utilizing DICOM standards. The means of data transfer will be handled by the "teamplay" infrastructure. Teamplay Images is an MDDS product, intended for data transfer, display and online storage of medical images and related data.
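The data flow described above, with imaging data arriving from a modality, PACS auto-routing, or a DICOM gateway and extension results sent to a configurable target node, can be sketched minimally. The source kinds, the target-node configuration, and the `route` helper are hypothetical names for illustration only.

```python
# Hypothetical sketch of the receive-and-forward data flow described in the
# 510(k): accept studies from known DICOM sources, and direct extension
# results to a configurable target node. All names/values are illustrative.

ACCEPTED_SOURCES = {"MODALITY", "PACS", "GATEWAY"}
TARGET_NODE = {"ae_title": "PACS_ARCHIVE", "host": "10.0.0.5", "port": 104}  # assumed config

def route(source_kind: str, study_uid: str) -> dict:
    """Decide whether to process an incoming study and where results go."""
    if source_kind not in ACCEPTED_SOURCES:
        return {"action": "reject", "study": study_uid}
    return {"action": "process", "study": study_uid,
            "send_results_to": TARGET_NODE["ae_title"]}

print(route("PACS", "1.2.840.1"))
```

In the actual product this transfer is handled by the teamplay infrastructure over standard DICOM services; the sketch only captures the routing decision, not the network protocol.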

    As an update to the previously cleared device, the following modifications have been made:

      1. Support of software version VA20:
         a. Support for additional clinical extensions
         b. Modified workflow to increase the usability within AI-Rad Companion as well as with respect to informing the user regarding the status of clinical extensions
      2. Subject device claims list
    AI/ML Overview

    The provided document is a 510(k) Premarket Notification for the Siemens AI-Rad Companion Engine, specifically for an updated version (VA20). This document primarily focuses on demonstrating substantial equivalence to a previously cleared predicate device (AI-Rad Companion Engine, K183272).

    Crucially, the document states that no clinical tests were conducted for the modified AI-Rad Companion Engine VA20. This means that the studies typical of AI/ML medical device submissions, which often include a standalone performance study and a multi-reader multi-case (MRMC) comparative effectiveness study, were not performed for this submission. The reliance is on non-clinical bench testing and software validation to demonstrate equivalent safety and performance to the existing predicate.

    Therefore, many of the requested details about acceptance criteria and study design are not applicable to this specific 510(k) submission, as it is a predicate-based clearance.

    However, based on the information provided regarding the type of device and the predicate clearance strategy, we can infer some information and highlight what is explicitly stated:


    Acceptance Criteria and Study Details (Based on Provided Document)

    Since this 510(k) is for a software update (VA20) to an already cleared device (K183272) and relies on demonstrating substantial equivalence through non-clinical testing, there are no specific acceptance criteria for "device performance" in terms of clinical accuracy (e.g., sensitivity, specificity, accuracy) reported in this document. The acceptance criteria are instead focused on software functionality, validation, and adherence to established standards, demonstrating that the updated device is equally safe and effective as the predicate.

    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of this 510(k) (software update demonstrating equivalence), the "acceptance criteria" are related to software validation and regulatory compliance, rather than clinical performance metrics. The "reported device performance" is the successful completion of these non-clinical tests.

    | Acceptance Criteria Category | Specific Criteria (Implicitly Met) | Reported Device Performance/Outcome |
    |---|---|---|
    | Software Functionality | All testable requirements in the Requirement Specifications and Risk Analysis are successfully verified and traced. | Unit, System, and Integration tests were performed. "All testable requirements...have been successfully verified and traced in accordance with the Siemens Healthineers DH product development (lifecycle) process." |
    | Usability / Human Factors | Human factors usability validation is addressed. | Addressed in system testing and usability validation test records. The new version offers "usability enhancements." |
    | Software Validation | Software verification and regression testing meet previously determined acceptance criteria in test plans. Conformance to FDA guidance for software in medical devices (Moderate Level of Concern). | "The software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." Performance data demonstrates "continued conformance with special controls for medical devices containing software." Complies with "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005). |
    | Standards Conformance | Device meets requirements of relevant industry standards (e.g., ISO, IEC, DICOM) for functionality, risk management, usability, and software lifecycle. | Tested for conformity to multiple industry standards, including IEC 62366-1 (usability), ISO 14971 (risk management), IEC 62304 (software life cycle), NEMA PS 3.1-3.20 (DICOM), and ISO/IEC 10918-1 (digital compression). |
    | Cybersecurity | Adherence to cybersecurity requirements defined by FDA guidance. | Siemens Healthineers adheres to the cybersecurity requirements defined in the FDA guidance "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices" (issued October 2, 2014) by implementing a process for preventing unauthorized access. |
    | Safety and Effectiveness | Device is safe and effective for its intended use, comparable to the predicate. | "The subject device is as safe and effective when compared to the predicate device that is currently marketed for the same intended use and is therefore substantially equivalent to the predicate device." Risk management ensured via ISO 14971 compliance. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Clinical Data: "No clinical tests were conducted to test the performance and functionality of the modifications introduced within AI-Rad Companion Engine VA20." Therefore, no clinical test set sample size is reported.
      • Non-Clinical Data: The document mentions "Unit, System and Integration" bench testing and "software verification and regression testing." The exact number of test cases or "samples" for these software tests is not specified in the provided text.
    • Data Provenance: Not applicable as clinical data was not used for this submission. The origin of the data used for software verification (e.g., simulated data, internal test data) is not detailed.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    • Not applicable for this submission, as no clinical test set requiring expert ground truth was used. The submission relies on non-clinical software validation and equivalence to a predicate.

    4. Adjudication Method for the Test Set

    • Not applicable for this submission, as no clinical test set requiring adjudication was used.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No MRMC study was done for this specific 510(k) submission. The document explicitly states: "No clinical tests were conducted to test the performance and functionality of the modifications introduced within AI-Rad Companion Engine VA20."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study was done

    • No standalone clinical performance study was done for this specific 510(k) submission. As stated: "No clinical tests were conducted..."

    7. The Type of Ground Truth Used

    • Not applicable for clinical ground truth: Since no clinical studies were performed, there was no need for clinical ground truth (e.g., expert consensus, pathology, outcome data).
    • Operational Ground Truth: For the software validation, the "ground truth" would be established against the defined software requirements and specifications, which are verified through various levels of testing (unit, system, integration) designed to confirm the software performs as expected.
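The "operational ground truth" described above, verification of the software against its requirement specifications with traceability, can be sketched minimally. The requirement IDs, test-case names, and the `trace` helper are hypothetical, not taken from the actual submission.

```python
# Hypothetical sketch of requirement-to-test traceability: every testable
# requirement must map to at least one linked, passing verification test.
# Requirement IDs and results are illustrative only.

requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_results = {
    "REQ-001": [("TC-01", "pass")],
    "REQ-002": [("TC-02", "pass"), ("TC-03", "pass")],
    # REQ-003 has no linked test case yet
}

def trace(reqs, results):
    """Return requirements that are untraced or have a failing linked test."""
    gaps = []
    for r in reqs:
        linked = results.get(r, [])
        if not linked:
            gaps.append((r, "untraced"))
        elif any(outcome != "pass" for _, outcome in linked):
            gaps.append((r, "failing"))
    return gaps

print(trace(requirements, test_results))
```

A verification report with an empty gap list is, in effect, the "reported device performance" this kind of submission relies on: every testable requirement verified and traced.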

    8. The Sample Size for the Training Set

    • Not applicable: The AI-Rad Companion Engine is described as a "software platform that provides basic visualization and enables external post-processing extensions." It is the engine that interfaces with clinical processing extensions. The nature of this "Engine" (a platform) and the fact that it's an update means it's unlikely to have its own "training set" in the sense of a machine learning model. If ML models are part of the "clinical processing extensions," those would have their own training sets and validation processes, but they are not the subject of this 510(k) submission (which cleared the engine). The document doesn't mention any de novo training of AI/ML models for this submission.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable: As there's no mentioned training set for this device (the Engine), the establishment of ground truth for a training set is not covered.

    Summary of Device and Approval Strategy:

    The AI-Rad Companion Engine is a Class II Picture Archiving and Communication System (PACS) with specific functionalities providing a platform for connecting to "clinical processing extensions." This 510(k) (K193294) is for an update (software version VA20) to an already cleared predicate device (K183272).

    The core of this 510(k) submission is to demonstrate substantial equivalence to the predicate device. This is achieved by showing that despite enhancements (e.g., support for additional clinical extensions, modified workflow), the updated device maintains an equivalent safety and performance profile. This demonstration primarily relies on non-clinical bench testing and software validation, as explicitly stated: "No clinical tests were conducted to test the performance and functionality of the modifications introduced within AI-Rad Companion Engine VA20."


    K Number
    K183272
    Date Cleared
    2019-02-01

    (70 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name:

    AI Rad Companion (Engine)

    Intended Use

    AI-Rad Companion (Engine) is a software platform that provides basic visualization and enables external post-processing extension for medical images used for diagnostic purposes. The software platform is designed to support technicians and trained physicians in qualitative and quantitative measurement and analysis of clinical data. The software platform provides means for storing of data and for transferring data into other systems such as PACS systems. The software platform provides an interface to integrate processing extensions.

    Device Description

    AI-Rad Companion (Engine) is a software platform that provides basic visualization and enables external post-processing extension for medical images used for diagnostic purposes. The software platform provides means for storing of data and for transferring data into other systems such as PACS systems. The software platform provides an interface to integrate processing extensions by supporting:

    • Interface for multi-modality and multi-vendor Input/Output of DICOM data
    • Check of data validity using information from DICOM tags
    • Interface for extensions that provide post-processing functionality
    • Confirmation user interface for visualization of medical images processed by extensions
    • Configuration user interface for configuration of the medical device and extensions

    As an update to the previously cleared device, the following modifications have been made:

    1. Modified Indications for Use Statement
    2. Support of software version VA10A:
       a. Deployment of software on Siemens cloud infrastructure
       b. Improved method to access and configure optional post-processing extensions
       c. Modified workflow to visualize and confirm output of optional post-processing extension
    3. Subject device claims list

    AI-Rad Companion (Engine) is designed to support the operating user in qualitative and quantitative analysis of clinical data

    AI/ML Overview

    The provided text focuses on the 510(k) summary for the AI-Rad Companion (Engine) software platform and its substantial equivalence to a predicate device. It explicitly states that the device is a "software platform that provides basic visualization and enables external post-processing extension for medical images used for diagnostic purposes" and is "designed to support technicians and trained physicians in qualitative and quantitative measurement and analysis of clinical data."

    However, the document does not contain the detailed information necessary to answer all aspects of your request, particularly regarding specific acceptance criteria for performance metrics (like accuracy, sensitivity, specificity for a particular clinical task), the results of a study proving those criteria are met, sample sizes for test sets, data provenance, expert ground truth details, MRMC studies, or training set information.

    The document emphasizes:

    • Non-Clinical Testing Summary: Performance tests were conducted for functionality and conformity to industry standards (DICOM, Medical Device Software, Risk Management, Usability Engineering).
    • Verification and Validation: Mentions unit, subsystem, and system integration testing, and successful verification and regression testing against predetermined acceptance criteria.
    • Risk Analysis and Cybersecurity.

    Crucially, it states: "The performance data demonstrates continued conformance with special controls for medical devices containing software." and "The testing results support that all the software specifications have met the acceptance criteria." This implies that acceptance criteria were defined and met, but the details of those criteria and the specific performance results against them are not provided in this 510(k) summary.

    The summary concludes that "The result of all testing conducted was found acceptable to support the claim of substantial equivalence." and "Siemens believes that the data generated from the AI-Rad Companion (Engine) software testing supports a finding of substantial equivalence." This means the product was cleared based on its equivalence to a predicate device, and the testing focused on ensuring that the platform's functionality and safety (as a generic medical image processing and viewing engine) are equivalent and meet general software standards, rather than proving performance on a specific clinical task with AI algorithms.

    Therefore, many of the requested details about specific AI performance metrics are not available in this document because the AI-Rad Companion (Engine) is described as a platform for integrating extensions, not the AI extension itself.


    Here's a breakdown of what can be extracted and what information is missing:

    1. Table of acceptance criteria and the reported device performance:

    | Acceptance Criteria (Implied) | Reported Device Performance (Implied from summary) |
    |---|---|
    | Conformity to industry standards | Complies with DICOM (NEMA PS 3.1-3.20), IEC 62304:2006 (medical device software life cycle processes), ISO 14971:2007 (application of risk management to medical devices), and IEC 62366-1:2015 (application of usability engineering to medical devices). |
    | Software specifications met | "All testable requirements in the Engineering Requirements Specifications, Subsystem Requirements Specifications, and the Risk Management Hazard keys have been successfully verified and traced." "The software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." |
    | Functionality (platform features) | Bench testing (unit, subsystem, and system integration testing) performed to evaluate performance and functionality of new features and software updates. Functional claims include: interface for multi-modality and multi-vendor input/output of DICOM data, check of data validity using DICOM tags, interface for extensions that provide post-processing functionality, confirmation user interface for visualization of processed images, configuration user interface for the device and extensions, standard visualization tools, image distribution and archiving capabilities, DICOM compatibility, and cybersecurity measures. |
    | Safety and effectiveness for intended users and use environments | "AI-Rad Companion (Engine) was tested and found to be safe and effective for intended users, uses and use environments through the design control verification and validation process and clinical data based software validation." Human factors usability validation showed human factors were addressed. Risk analysis completed and controls implemented. |

    Specific quantitative performance metrics (e.g., sensitivity, specificity, accuracy for an AI task) are NOT provided in this document. The document defines the "Engine" as a platform that "enables external post-processing extension," implying that the engine itself does not perform specific diagnostic AI tasks that would require such metrics to be cleared.

    2. Sample size used for the test set and the data provenance:

    • Sample size: Not specified.
    • Data provenance: Not specified (e.g., country of origin, retrospective/prospective). The software is a platform; specific AI applications that would use such data are external extensions.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. The document does not detail expert involvement for ground truth establishment for clinical performance.

    4. Adjudication method for the test set:

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • An MRMC study is not mentioned in this document. The document describes the "AI-Rad Companion (Engine)" as a software platform for visualization and post-processing extensions, not as a specific AI-powered diagnostic tool that augments human readers for a specific clinical task.

    6. If a standalone (i.e., algorithm-only without human-in-the-loop) performance study was done:

    • Not explicitly stated in terms of a specific AI algorithm's standalone performance. The document focuses on the platform's functionality and safety, not the performance of an integrated AI algorithm.

    7. The type of ground truth used:

    • Not specified for any specific clinical performance evaluation.

    8. The sample size for the training set:

    • Not applicable/Not specified. This document is about the "Engine" platform, not a specific AI model that would have a training set.

    9. How the ground truth for the training set was established:

    • Not applicable/Not specified, as it's a platform, not an AI model requiring a training set.