Ziostation2 is an image processing application software available for installation onto customer-owned hardware. This application software can be networked to allow sharing of resources.
This application software receives medical images from modalities (image scanning devices such as CT) or image archives such as PACS through a network or media and provides for the viewing, quantification, manipulation, communication, printing, and management of medical images.
This application software is intended for use by trained medical professionals to supplement generally accepted methods of interpreting radiological images.
Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using a monitor whose characteristics are approved by the regulatory agency governing the market within which Ziostation2 is being offered.
Note: The clinician retains the ultimate responsibility for making the proper diagnosis based on standard radiological practices and visual comparison of the separate, unprocessed images. Ziostation2 is a tool to be used in support of those standard practices and visual comparisons.
ZIOSTATION2 is a basic DICOM image management system to further aid clinicians in their analysis of anatomy, physiology and pathology. Universal functions such as data retrieval, storage, management, querying and listing, and output are handled by the basic Ziostation2 software. Various imaging tools and techniques can be invoked to process images from the following image types: CT, MRI, Ultrasound, Digital X-ray, X-ray Angiography, PET, SPECT, NM, SC, Mammography, X-ray Radiofluoroscopic image, RT Image.
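To illustrate the kind of DICOM handling such universal functions involve (this is not Ziostation2's actual implementation), the following minimal sketch uses the open-source pydicom library to index a folder of DICOM files by study and modality; the directory path, the `.dcm` file extension, and the availability of the standard tags are assumptions for the example.

```python
# Minimal sketch (not Ziostation2 code): index a folder of DICOM files by
# study and modality, the kind of "querying and listing" a DICOM image
# management system exposes. Assumes pydicom is installed and the files
# carry the standard StudyInstanceUID / Modality / PatientID tags.
from collections import defaultdict
from pathlib import Path

import pydicom


def index_dicom_directory(root: str) -> dict:
    """Group DICOM files under `root` by StudyInstanceUID."""
    studies = defaultdict(list)
    for path in Path(root).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only
        studies[ds.StudyInstanceUID].append(
            {
                "path": str(path),
                "patient_id": getattr(ds, "PatientID", ""),
                "modality": getattr(ds, "Modality", ""),  # CT, MR, PT, ...
                "series_uid": getattr(ds, "SeriesInstanceUID", ""),
            }
        )
    return studies


if __name__ == "__main__":
    for study_uid, instances in index_dicom_directory("./incoming").items():
        modalities = sorted({i["modality"] for i in instances})
        print(f"{study_uid}: {len(instances)} instances, modalities={modalities}")
```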
The provided text is a 510(k) premarket notification for the Ziostation2, an image processing application software. It focuses on establishing substantial equivalence to existing predicate devices rather than presenting explicit acceptance criteria and a clinical performance study demonstrating that the device meets them.
However, based on the information provided, some aspects relevant to your request can be inferred, particularly from the "Testing Summary" section.
Here is an analysis of each of your questions, extracting what is available and noting what is not explicitly stated in the document:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with corresponding performance metrics (e.g., sensitivity, specificity, accuracy) for the Ziostation2 for specific clinical tasks. The submission focuses on demonstrating substantial equivalence to predicate devices for various image processing functionalities.
The "Testing Summary" states: "The ZIOSTATION2 software package successfully completed integration testing/verification testing prior to Beta validation. Regression testing was also performed on all functionality present on Ziostation. Software Beta testing/validation was successfully completed prior to final testing and release. In addition, potential hazards have been addressed by the Qi Imaging Risk Management process."
This statement confirms that internal testing was performed, but it lacks specific quantitative acceptance criteria and their corresponding results. The acceptance criteria for these internal tests would likely be related to software functionality, accuracy of calculations (e.g., volume, perfusion parameters), visualization correctness, data integrity, and system stability, demonstrating that the new features perform as intended and comparably to the predicate devices. However, these specific criteria and results are not detailed in this public document.
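As an illustration of what such a quantitative acceptance criterion might look like in practice (hypothetical, not taken from the submission), the sketch below checks a volume measurement against a known phantom value within a ±5% tolerance; the function name `measure_volume_ml`, the phantom geometry, and the tolerance are all assumed for the example.

```python
# Hypothetical verification-test sketch: check a volume measurement against a
# known phantom value within a tolerance. The 5% tolerance, the 20.0 mL phantom
# volume, and measure_volume_ml() are illustrative assumptions only.
import numpy as np


def measure_volume_ml(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL


def test_phantom_volume_within_tolerance():
    # Synthetic "phantom": a 20 x 20 x 50 voxel block of 1 mm^3 voxels = 20 mL.
    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[10:30, 10:30, 5:55] = True
    expected_ml = 20.0
    measured_ml = measure_volume_ml(mask, voxel_size_mm=(1.0, 1.0, 1.0))
    assert abs(measured_ml - expected_ml) / expected_ml <= 0.05  # ±5% criterion


if __name__ == "__main__":
    test_phantom_volume_within_tolerance()
    print("phantom volume check passed")
```

A tolerance-based comparison of this kind is a common way to turn a qualitative "performs as intended" statement into a pass/fail verification result, though the actual criteria used for Ziostation2 are not disclosed.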
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
This information is not explicitly stated in the provided text. The document refers to "integration testing/verification testing" and "Software Beta testing/validation," which would have used some form of test data, but the sample size, provenance, or type of data (e.g., real patient data, synthetic data, specific types of scans) are not disclosed. Given the nature of a 510(k) for an image processing system, it's probable that DICOM datasets were used, but details are absent.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not explicitly stated in the provided text. For a device like Ziostation2, which is an image processing application, ground truth for verification testing would likely involve validation against known phantom measurements or expert measurements performed on clinical images, but the details of such expert involvement are not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not explicitly stated in the provided text.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study is not mentioned in the provided text. The document focuses on showing substantial equivalence of the software's processing and visualization capabilities to those of predicate devices, not on the impact of the device on human reader performance in a controlled study. The device is intended "to supplement generally accepted methods of interpreting radiological images," implying it's a tool, not an AI for diagnosis.
6. Whether a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done
While the "Testing Summary" mentions "integration testing/verification testing," "regression testing," and "Software Beta testing/validation," these are described as internal software tests. It's highly probable these included "standalone" evaluations of the algorithms for their intended functions (e.g., accuracy of measurements, correct rendering of images, proper application of filters). However, specific metrics and results of such standalone performance (e.g., a standalone AUC for a diagnostic task) are not provided, as the device is not presented as an AI diagnostic algorithm, but rather an image processing and visualization tool.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The type of ground truth used for the internal testing (integration, verification, beta testing) is not explicitly stated. Given the functionalities of Ziostation2 (e.g., CT Coronary Analysis, CT Colon Analysis, CT Perfusion Analysis, MR Tractography), ground truth could involve:
- Known phantom data: for quantitative measurements (e.g., volumes, distances).
- Expert measurements/annotations: on clinical images for comparison with the software's automated or semi-automated tools.
- Previous gold standard software outputs: especially for regression testing against the predecessor Ziostation (a minimal sketch of such a comparison follows this list).
- Pathology or follow-up outcomes data: less likely for general image processing tools, but could be relevant for specific modules if they had a diagnostic claim, which is not the primary focus here.
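As a purely illustrative example of the regression-testing approach mentioned above (not the actual test procedure), the sketch below compares measurement outputs of a new software version against baseline outputs saved from a predecessor version; the JSON layout, file names, and 1% tolerance are assumptions.

```python
# Hypothetical regression-test sketch: compare measurement outputs of a new
# software version against baseline outputs saved from the predecessor version.
# The JSON file layout, key names, and 1% tolerance are illustrative assumptions.
import json


def load_measurements(path: str) -> dict:
    """Load {case_id: {measurement_name: value}} from a JSON file."""
    with open(path) as f:
        return json.load(f)


def compare_to_baseline(baseline_path: str, current_path: str, rel_tol: float = 0.01):
    """Return (case, measurement, baseline, current) tuples that exceed the tolerance."""
    baseline = load_measurements(baseline_path)
    current = load_measurements(current_path)
    failures = []
    for case_id, measurements in baseline.items():
        for name, ref in measurements.items():
            new = current[case_id][name]
            if ref != 0 and abs(new - ref) / abs(ref) > rel_tol:
                failures.append((case_id, name, ref, new))
    return failures


if __name__ == "__main__":
    diffs = compare_to_baseline("ziostation_baseline.json", "ziostation2_output.json")
    for case_id, name, ref, new in diffs:
        print(f"{case_id}/{name}: baseline={ref}, current={new}")
    print("regression check", "FAILED" if diffs else "passed")
```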
8. The sample size for the training set
This information is not applicable or not explicitly stated. Ziostation2 is described as an "image processing application software," and its features are discussed in terms of "workflow enhancements" and equivalency to existing functionalities (e.g., data reconstruction, vessel labeling, measurement, display tools). There is no indication that this product is a machine learning or AI model trained on a specific dataset that would require a "training set" in the conventional sense of AI/ML development. Its functionality seems to be based on established algorithms in image processing.
9. How the ground truth for the training set was established
This information is not applicable or not explicitly stated, for the same reasons as in point 8.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).