The DIVA ZSP5812CMI with QUBYX PerfectLum is intended to be used for displaying and viewing medical images for review and analysis by trained medical practitioners. The DIVA ZSP5812CMI can be used only in conjunction with QUBYX PerfectLum. The device cannot be used for primary image diagnosis in mammography. The device cannot be used for a life-support system. The device does not come into contact with the patient. The device is intended for prescription use.
The DIVA ZSP5812CMI with QUBYX PerfectLum is a 58" color display for medical viewing. It is combined with QUBYX PerfectLum and PerfectLum remote management, a user-friendly DICOM calibration and AAPM TG18 verification software suite. The software allows setting the display function to DICOM, displaying test patterns, and performing acceptance and constancy tests.
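To make "setting the display function to DICOM" concrete, the sketch below shows the general idea behind GSDF calibration: target luminances are computed from the DICOM PS3.14 Grayscale Standard Display Function (GSDF) and the panel's measured native response is remapped to follow them. The GSDF coefficients are the published PS3.14 values; everything else (the function names, the 256-entry lookup table, nearest-luminance matching) is an illustrative assumption, not a description of QUBYX PerfectLum's actual calibration algorithm.

```python
import numpy as np

# DICOM PS3.14 GSDF coefficients: luminance (cd/m^2) as a function of the JND index j (1..1023).
A, B, C, D, E = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1, 1.3646699e-1
F, G, H, K, M = 2.8745620e-2, -2.5468404e-2, -3.1978977e-3, 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Target luminance for JND index j per the Grayscale Standard Display Function."""
    x = np.log(j)
    num = A + C * x + E * x**2 + G * x**3 + M * x**4
    den = 1 + B * x + D * x**2 + F * x**3 + H * x**4 + K * x**5
    return 10.0 ** (num / den)

def jnd_index(luminance):
    """Numerically invert the GSDF: JND index whose target luminance equals `luminance`."""
    j = np.arange(1, 1024)
    return np.interp(np.log10(luminance), np.log10(gsdf_luminance(j)), j)

def build_dicom_lut(native_luminance, n_ddl=256):
    """Map each output digital driving level (DDL) to the native level whose measured
    luminance is closest to the GSDF target -- an illustrative 1-D LUT, not PerfectLum's
    algorithm.

    `native_luminance`: luminance measured at each native DDL on the uncalibrated display,
    black level (with ambient contribution) included."""
    native = np.asarray(native_luminance, dtype=float)
    j_lo, j_hi = jnd_index(native.min()), jnd_index(native.max())
    # Spread the panel's available JND range evenly over the output DDLs
    # (perceptual linearization per the GSDF).
    targets = gsdf_luminance(np.linspace(j_lo, j_hi, n_ddl))
    return np.array([int(np.abs(native - t).argmin()) for t in targets])
```

A real calibration tool would additionally iterate with photometer feedback and handle per-channel corrections; the point here is only the GSDF-to-DDL mapping.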
The provided document describes a 510(k) premarket notification for the DIVA ZSP5812CMI with QUBYX PerfectLum bundle, a medical image display system. The information outlines the device's technical specifications and compares it to a predicate device, the EIZO RadiForce RX650. The core of the submission revolves around demonstrating substantial equivalence, primarily through compliance with DICOM Part 14 GSDF and AAPM TG18 standards.
Here's an analysis based on your requested information:
1. A table of acceptance criteria and the reported device performance
The document does not state acceptance criteria as a quantitative table of pass/fail values. Instead, it describes general compliance with established medical display standards; based on the testing performed, the implicit acceptance criterion is full compliance with those standards.
| Acceptance Criterion (Implicit) | Reported Device Performance (DIVA ZSP5812CMI with QUBYX PerfectLum bundle) |
|---|---|
| DICOM Part 14 GSDF Conformance | Successfully passed DICOM conformance test. Compliant. |
| AAPM TG18 Acceptance Test Conformance | Successfully passed AAPM TG18 acceptance test. Compliant. |
| Primary Category Display for Medical Image Interpretation | Can be used as a primary category display for interpretation of medical images. |
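The table above records pass/fail outcomes only. For context, a GSDF conformance check of this kind typically compares the display's measured contrast response against the response predicted by the GSDF over the same luminance range. The sketch below reuses gsdf_luminance() and jnd_index() from the earlier code block; the ±10% tolerance is the figure commonly associated with AAPM TG18 for primary displays and is stated here as an assumption, not a value reported in the submission.

```python
import numpy as np  # gsdf_luminance() and jnd_index() are defined in the sketch above

def gsdf_contrast_deviation(measured_luminance):
    """Relative deviation of the measured contrast response from the GSDF target.

    `measured_luminance`: luminances (cd/m^2) measured at evenly spaced DDLs on the
    calibrated display, e.g. the 18 levels of the TG18-LN test patterns."""
    lum = np.asarray(measured_luminance, dtype=float)
    # Ideal response: spread the display's JND range evenly over the measured levels.
    j_lo, j_hi = jnd_index(lum.min()), jnd_index(lum.max())
    target = gsdf_luminance(np.linspace(j_lo, j_hi, lum.size))
    # Per-interval contrast (dL / mean L), measured vs. GSDF-predicted.
    measured_c = 2 * np.diff(lum) / (lum[:-1] + lum[1:])
    target_c = 2 * np.diff(target) / (target[:-1] + target[1:])
    return (measured_c - target_c) / target_c

def passes_gsdf(measured_luminance, tolerance=0.10):
    """Pass if every contrast-response deviation is within the tolerance
    (the 10% default is assumed for illustration; the governing figure is TG18's)."""
    return bool(np.all(np.abs(gsdf_contrast_deviation(measured_luminance)) <= tolerance))
```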
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The testing described does not involve patient data or clinical images from a "test set" in the traditional sense of AI/CADe device evaluation. Instead, the tests are primarily technical evaluations of the display's performance using software-generated patterns and measurements.
- Sample size: Not applicable in the context of patient data. The "sample" consists of a single device (DIVA ZSP5812CMI) paired with the QUBYX PerfectLum software.
- Data provenance: Not applicable. The "data" used for testing are standardized patterns and measurement results generated by the QUBYX PerfectLum software and an X-Rite i1 Display Pro measurement device. There is no mention of geographical origin or retrospective/prospective nature as this is a device performance test, not a clinical study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This question is not directly applicable to the type of device and testing described. The "ground truth" for display performance is defined by the objective metrics and standards set forth in DICOM Part 14 GSDF and AAPM TG18.
- For the DICOM conformance test, validation is purely automated; the software compares measured values to predefined standard targets.
- For the AAPM TG18 acceptance test, the procedure involves "measurement and visual parts." While the measurement steps are automated (a sketch of one such check follows this list), the visual steps involve a user analyzing test patterns. However, the document does not specify that these users are "experts" (e.g., radiologists) in the sense of establishing a clinical ground truth; the visual steps are a subjective assessment of visual quality against known patterns, guided by the software. The document does not specify the number or qualifications of these users.
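To illustrate what the automated measurement steps look like in general (not PerfectLum's actual test logic), the sketch below evaluates two quantities TG18 addresses: the luminance ratio and maximum-luminance uniformity measured at the screen center and four corners. The function and class names are hypothetical, and the threshold defaults (luminance ratio ≥ 250, non-uniformity ≤ 30%) are commonly cited TG18 figures for primary displays, included only as assumptions.

```python
from dataclasses import dataclass

@dataclass
class Tg18MeasurementResult:
    """Outcome of two automated TG18-style measurement checks (illustrative only)."""
    luminance_ratio: float
    nonuniformity_pct: float
    passed: bool

def tg18_measurement_checks(l_min, l_max, bright_patch_luminances,
                            min_luminance_ratio=250.0, max_nonuniformity_pct=30.0):
    """Evaluate luminance ratio and maximum-luminance uniformity.

    l_min / l_max:             darkest / brightest full-screen luminance (cd/m^2)
    bright_patch_luminances:   bright-patch readings at the four corners and the center
                               (a TG18-UNL-style five-point measurement)
    The threshold defaults are commonly cited TG18 figures for primary displays and are
    assumptions here, not values taken from the 510(k) summary."""
    lr = l_max / l_min
    hi, lo = max(bright_patch_luminances), min(bright_patch_luminances)
    nonuniformity = 200.0 * (hi - lo) / (hi + lo)   # percent deviation across the faceplate
    passed = lr >= min_luminance_ratio and nonuniformity <= max_nonuniformity_pct
    return Tg18MeasurementResult(lr, nonuniformity, passed)

# Example with made-up readings: 420 cd/m^2 calibrated white, 1.2 cd/m^2 black,
# five bright-patch measurements across the screen.
print(tg18_measurement_checks(1.2, 420.0, [402.0, 410.0, 418.0, 405.0, 411.0]))
```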
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. There is no multi-reader review or adjudication process described for establishing clinical ground truth or resolving discrepancies, as the testing focuses on technical compliance with display standards rather than diagnostic interpretation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was conducted or described in this submission. This device is a display monitor, not an AI/CADe system. Therefore, the concept of "human readers improving with AI vs. without AI assistance" is not relevant to this 510(k) summary.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
A standalone performance test was done, but it pertains to the technical performance of the display device and its calibration software, not a diagnostic algorithm. The DICOM conformance tests and the measurement portions of the AAPM TG18 tests are essentially standalone algorithms (within the PerfectLum software) that verify the display's adherence to standards without human intervention in the measurement and comparison phases. The "visual parts" of the AAPM TG18 test involve a human observer and are therefore not standalone in that sense.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for this submission is adherence to established technical standards and specifications:
- DICOM Part 14 GSDF (Grayscale Standard Display Function): This standard defines the photometric response curve for medical image displays. The ground truth here is the mathematically defined curve (reproduced after this list).
- AAPM TG18 (American Association of Physicists in Medicine Task Group 18) standards: These provide guidelines and test patterns for evaluating medical display performance. The ground truth consists of the specified ideal characteristics (e.g., luminance, uniformity, spatial resolution) and visual criteria for the test patterns.
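For reference, the "mathematically defined curve" cited above is the GSDF from DICOM PS3.14, which specifies target luminance L (in cd/m²) as a function of the just-noticeable-difference index j:

```latex
\log_{10} L(j) =
  \frac{a + c\,\ln j + e\,(\ln j)^{2} + g\,(\ln j)^{3} + m\,(\ln j)^{4}}
       {1 + b\,\ln j + d\,(\ln j)^{2} + f\,(\ln j)^{3} + h\,(\ln j)^{4} + k\,(\ln j)^{5}},
  \qquad 1 \le j \le 1023
```

with the ten constants a through m as published in PS3.14 (they appear as A through M in the Python sketch earlier in this analysis). The curve spans roughly 0.05 to 4000 cd/m², and conformance testing asks how closely the calibrated display's measured response tracks it.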
8. The sample size for the training set
Not applicable. This device is a display monitor and its calibration software, not an AI/machine learning algorithm that requires a "training set" of data.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this device.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).