syngo.plaza is a Picture Archiving and Communication System (PACS) intended to display, process, read, report, communicate, distribute, store, and archive digital medical images. It supports the physician in diagnosis and treatment planning. syngo.plaza also supports storage and archiving of DICOM Structured Reports. In a comprehensive imaging suite, syngo.plaza integrates with Hospital/Radiology Information Systems (HIS/RIS) to enable customer-specific workflows, and it can optionally use a variety of advanced postprocessing applications. syngo.plaza is a "software only" system, delivered on CD-ROM/DVD or preconfigured on hardware; it is installed by Siemens service engineers, and defined hardware requirements must be met. The device described here supports DICOM-formatted images and objects. Note: Web-based image distribution is not intended for reporting. In the U.S.A., syngo.plaza is not intended for the reporting of digital mammography images.
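The "DICOM-formatted images and objects" the system handles follow the DICOM Part 10 file format: a 128-byte preamble followed by the 4-byte magic value "DICM". This is not a description of syngo.plaza's internals, just a minimal sketch of how that signature can be recognized (the sample bytes are synthetic):

```python
def is_dicom_part10(data: bytes) -> bool:
    """Check for the DICOM Part 10 signature: a 128-byte preamble
    followed by the 4-byte magic value b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Synthetic example: 128 zero bytes, then the magic, then (fake) payload bytes.
fake_dicom = bytes(128) + b"DICM" + b"\x02\x00\x00\x00UL\x04\x00"
print(is_dicom_part10(fake_dicom))        # True
print(is_dicom_part10(b"not a dicom"))    # False
```

Real applications would go on to parse the File Meta Information group that follows the magic value; the check above only identifies the container format.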
This 510(k) summary for syngo.plaza is quite brief and primarily focuses on establishing substantial equivalence to a predicate device rather than detailing specific performance studies with acceptance criteria for a new or significantly modified feature. It is a PACS system, and the submission emphasizes its functionality as a medical diagnostic and viewing workstation, along with its integration capabilities.
Therefore, much of the requested information about acceptance criteria and studies demonstrating specific quantitative performance metrics is not explicitly stated in this document. The submission focuses on verification and validation (V&V) against established standards for software development and medical devices.
Here's an analysis of the provided text in relation to your request:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with reported device performance in the manner typically seen for devices that perform a specific diagnostic measurement or AI-driven analysis. The "acceptance criteria" here refer to the successful completion of software verification and validation (V&V) activities and the demonstration of substantial equivalence.
The closest statement to an acceptance criterion is:
- "After completion of the system test and comparison of the test results with the software release acceptance criteria, Siemens is of the opinion, that syngo.plaza is substantially equivalent to and performs as well as the predicate device."
This implies that the overall system performance as a PACS met internal "software release acceptance criteria" to demonstrate substantial equivalence to the predicate device. However, these criteria are not detailed in terms of specific quantitative metrics (e.g., sensitivity, specificity, processing time).
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The non-clinical testing discussion focuses on software V&V against standards, not on clinical performance testing with a specific test set of medical images from patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided. As there's no mention of a clinical test set with ground truth, there's no information about experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance vs. without it
There is no mention of an MRMC study or any study evaluating human reader improvement with or without AI assistance. This submission describes a PACS system, not an AI-assisted diagnostic tool in the sense of a standalone algorithm for interpretation.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
There is no mention of a standalone algorithm performance study. The device is a PACS system, which is inherently a human-in-the-loop system for image display, processing, reading, and reporting.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided. Given the nature of the device (PACS), the "ground truth" for V&V would likely involve ensuring accurate image display, processing functions, data integrity, and compliance with standards, rather than diagnostic accuracy against a clinical ground truth.
8. The sample size for the training set
This information is not provided. There is no mention of a "training set" as would be relevant for machine learning or AI models with learned parameters. The software V&V process for a PACS does not typically involve a training set.
9. How the ground truth for the training set was established
This information is not provided.
Summary of what is present:
- Acceptance Criteria (Implicit): Substantial equivalence to the predicate device and successful completion of software V&V against specified standards (DICOM, JPEG, SMPTE, ISO 14971, IEC 60601-1-4, IEC 60601-1-6, HL7, IEC 62304). The "reported device performance" is that it "performs as well as the predicate device" and successfully passed these V&V activities.
- Study: The "study" referenced is the non-clinical testing which comprises "software verification and validation (Unit Test Level, Integration Test Level and System Test Level)." This is a demonstration of adherence to quality systems and regulatory standards, not a comparative clinical performance study.
- Key focus: The submission's primary goal is to demonstrate substantial equivalence to an existing PACS device (Siemens VO 1190). syngo.plaza is a "software only" system delivered on CD-ROM/DVD or preconfigured on hardware, meeting defined hardware requirements, and supporting DICOM-formatted images and objects.
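Among the standards cited for V&V, HL7 is the one that underpins the HIS/RIS integration described in the intended use: HL7 v2 messages are pipe-delimited segments separated by carriage returns. As a hedged illustration only (the message content below is invented, and real HL7 parsing has subtleties such as MSH-1 being the field separator itself), a minimal segment/field split looks like:

```python
def parse_hl7_v2(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list of field lists]}.
    Segments are separated by carriage returns, fields by '|'.
    Note: this sketch ignores HL7's special MSH field numbering and
    component/escape handling."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

# Invented two-segment ADT example, for illustration only.
msg = ("MSH|^~\\&|RIS|HOSPITAL|PACS|RADIOLOGY|202401011200||ADT^A01|123|P|2.3"
       "\rPID|1||PAT-0001||DOE^JANE")
parsed = parse_hl7_v2(msg)
print(parsed["MSH"][0][7])   # prints "ADT^A01" (message type)
print(parsed["PID"][0][4])   # prints "DOE^JANE" (patient name)
```

Production HIS/RIS interfaces would use a full HL7 library rather than a hand-rolled splitter; the point here is only the wire-format shape the standard defines.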
In conclusion, the provided document describes a PACS system and its regulatory submission focused on substantial equivalence based on software V&V. It does not contain the detailed quantitative performance metrics, test set characteristics, or AI-specific study designs that are typically requested when evaluating advanced diagnostic algorithms or AI-powered devices.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
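The "image manipulation, enhancement, or quantification" functions the regulation names include, at their simplest, the linear window/level (VOI) mapping that the DICOM standard defines for grayscale display (PS3.3 C.11.2.1.2). A hedged sketch of that mapping, with window values chosen arbitrarily for illustration:

```python
def window_level(pixel: float, center: float, width: float,
                 y_min: float = 0.0, y_max: float = 255.0) -> float:
    """Linear VOI windowing per DICOM PS3.3 C.11.2.1.2: map a stored
    pixel value into the display range [y_min, y_max] using a window
    defined by its center and width."""
    if pixel <= center - 0.5 - (width - 1) / 2:
        return y_min
    if pixel > center - 0.5 + (width - 1) / 2:
        return y_max
    return ((pixel - (center - 0.5)) / (width - 1) + 0.5) * (y_max - y_min) + y_min

# A CT-style soft-tissue window (center=40, width=400), purely illustrative.
print(window_level(-1000, center=40, width=400))  # below window -> 0.0
print(window_level(1000, center=40, width=400))   # above window -> 255.0
```

A PACS viewer applies a mapping like this interactively as the user adjusts window center and width; the formula itself comes from the DICOM standard, while the function name and sample values here are this sketch's own.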