AI-Rad Companion (Engine) is a software platform that provides basic visualization and enables external post-processing extensions for medical images used for diagnostic purposes. The platform is designed to support technicians and trained physicians in the qualitative and quantitative measurement and analysis of clinical data, and it provides means for storing data and transferring it to other systems, such as PACS. The platform provides an interface for integrating processing extensions by supporting:
- Interface for multi-modality and multi-vendor input/output of DICOM data
- Check of data validity using information from DICOM tags
- Interface for extensions that provide post-processing functionality
- Confirmation user interface for visualization of medical images processed by extensions
- Configuration user interface for configuration of the medical device and extensions
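The data-validity claim above (checking DICOM tags before routing a study to an extension) can be illustrated with a minimal sketch. The required-tag list and the dict-based header representation below are illustrative assumptions for demonstration purposes, not the actual product implementation.

```python
# Minimal sketch of a DICOM-tag validity check, as a platform might run
# before handing a study to a post-processing extension. The tag list and
# dict-based header are illustrative assumptions, not AI-Rad Companion's
# actual logic.

REQUIRED_TAGS = (
    "SOPClassUID",        # identifies the DICOM object type
    "StudyInstanceUID",   # groups series into a study
    "SeriesInstanceUID",  # groups images into a series
    "Modality",           # e.g. "CT", "MR"
    "PatientID",
)

def validate_dicom_header(header: dict) -> list:
    """Return a list of problems; an empty list means the header passes."""
    problems = []
    for tag in REQUIRED_TAGS:
        value = header.get(tag)
        if value is None or value == "":
            problems.append("missing or empty tag: " + tag)
    return problems

# Example: a header missing its Modality tag fails the check.
header = {
    "SOPClassUID": "1.2.840.10008.5.1.4.1.1.2",
    "StudyInstanceUID": "1.2.3.4",
    "SeriesInstanceUID": "1.2.3.4.5",
    "PatientID": "ANON-001",
}
print(validate_dicom_header(header))  # → ['missing or empty tag: Modality']
```

In a real deployment such a check would read the tags from the received DICOM objects themselves (e.g., via a DICOM toolkit) rather than from a plain dictionary.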
As an update to the previously cleared device, the following modifications have been made:
- Modified Indications for Use statement
- Support of software version VA10A:
  - Deployment of the software on Siemens cloud infrastructure
  - Improved method to access and configure optional post-processing extensions
  - Modified workflow to visualize and confirm the output of optional post-processing extensions
- Subject device claims list: AI-Rad Companion (Engine) is designed to support the operating user in the qualitative and quantitative analysis of clinical data
The provided text focuses on the 510(k) summary for the AI-Rad Companion (Engine) software platform and its substantial equivalence to a predicate device. It explicitly states that the device is a "software platform that provides basic visualization and enables external post-processing extension for medical images used for diagnostic purposes" and is "designed to support technicians and trained physicians in qualitative and quantitative measurement and analysis of clinical data."
However, the document does not contain the detailed information necessary to answer all aspects of your request, particularly regarding specific acceptance criteria for performance metrics (like accuracy, sensitivity, specificity for a particular clinical task), the results of a study proving those criteria are met, sample sizes for test sets, data provenance, expert ground truth details, MRMC studies, or training set information.
The document emphasizes:
- Non-Clinical Testing Summary: Performance tests were conducted for functionality and conformity to industry standards (DICOM, Medical Device Software, Risk Management, Usability Engineering).
- Verification and Validation: Mentions unit, subsystem, and system integration testing, and successful verification and regression testing against predetermined acceptance criteria.
- Risk Analysis and Cybersecurity.
Crucially, it states: "The performance data demonstrates continued conformance with special controls for medical devices containing software." and "The testing results support that all the software specifications have met the acceptance criteria." This implies that acceptance criteria were defined and met, but the details of those criteria and the specific performance results against them are not provided in this 510(k) summary.
The summary concludes that "The result of all testing conducted was found acceptable to support the claim of substantial equivalence." and "Siemens believes that the data generated from the AI-Rad Companion (Engine) software testing supports a finding of substantial equivalence." This means the product was cleared based on its equivalence to a predicate device, and the testing focused on ensuring that the platform's functionality and safety (as a generic medical image processing and viewing engine) are equivalent and meet general software standards, rather than proving performance on a specific clinical task with AI algorithms.
Therefore, many of the requested details about specific AI performance metrics are not available in this document because the AI-Rad Companion (Engine) is described as a platform for integrating extensions, not the AI extension itself.
Here's a breakdown of what can be extracted and what information is missing:
1. Table of acceptance criteria and the reported device performance:
| Acceptance Criteria (implied) | Reported Device Performance (implied from summary) |
|---|---|
| Conformity to industry standards | Complies with DICOM (PS 3.1–3.20), IEC 62304:2006 (Medical Device Software – Software Life Cycle Processes), ISO 14971 (Second Edition, 2007-03-01, Application of Risk Management to Medical Devices), and IEC 62366-1:2015 (Medical Devices – Part 1: Application of Usability Engineering to Medical Devices). |
| Software specifications met | "All testable requirements in the Engineering Requirements Specifications, Subsystem Requirements Specifications, and the Risk Management Hazard keys have been successfully verified and traced." "The software verification and regression testing have been performed successfully to meet their previously determined acceptance criteria as stated in the test plans." |
| Functionality (platform features) | Bench testing (unit, subsystem, and system integration testing) performed to evaluate the performance and functionality of new features and software updates. Functional claims include: interface for multi-modality and multi-vendor input/output of DICOM data; check of data validity using DICOM tags; interface for extensions that provide post-processing functionality; confirmation user interface for visualization of processed images; configuration user interface for the device and extensions; standard visualization tools; image distribution and archiving capabilities; DICOM compatibility; cybersecurity measures. |
| Safety and effectiveness for intended users and use environments | "AI-Rad Companion (Engine) was tested and found to be safe and effective for intended users, uses and use environments through the design control verification and validation process and clinical data based software validation." Human factors usability validation showed human factors were addressed. Risk analysis was completed and controls implemented. |
Specific quantitative performance metrics (e.g., sensitivity, specificity, accuracy for an AI task) are NOT provided in this document. The document defines the "Engine" as a platform that "enables external post-processing extension," implying that the engine itself does not perform specific diagnostic AI tasks that would require such metrics to be cleared.
2. Sample size used for the test set and the data provenance:
- Sample size: Not specified.
- Data provenance: Not specified (e.g., country of origin, retrospective/prospective). The software is a platform; specific AI applications that would use such data are external extensions.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified. The document does not detail expert involvement for ground truth establishment for clinical performance.
4. Adjudication method for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without:
- An MRMC study is not mentioned in this document. The document describes the "AI-Rad Companion (Engine)" as a software platform for visualization and post-processing extensions, not as a specific AI-powered diagnostic tool that augments human readers for a specific clinical task.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not explicitly stated in terms of a specific AI algorithm's standalone performance. The document focuses on the platform's functionality and safety, not the performance of an integrated AI algorithm.
7. The type of ground truth used:
- Not specified for any specific clinical performance evaluation.
8. The sample size for the training set:
- Not applicable/Not specified. This document is about the "Engine" platform, not a specific AI model that would have a training set.
9. How the ground truth for the training set was established:
- Not applicable/Not specified, as it's a platform, not an AI model requiring a training set.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).