The DSA 2000ex device is used in vascular imaging applications. During X-ray exposures, the DSA 2000ex acquires video images from the video display chain provided by the X-ray manufacturer's system. The images are stored in the DSA 2000ex solid-state memory and written to the hard disk. Images are processed in real time to increase image usability. The processing is primarily subtraction, but also includes window and level adjustments, as well as optional noise reduction, landmarking, image rotation, and pixel shifting. The Eigen DSA 2000ex is used in X-ray cardiology and radiology labs to enhance the diagnostic capabilities of radiologists and cardiologists, with minimal user intervention required for basic capture, playback, and archiving functions. Additional functions include measurements for quantifying stenosis and guidance of catheters in Roadmapping mode.
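The processing chain described above — subtraction of a pre-contrast mask frame followed by window/level mapping — can be sketched in a few lines. This is a minimal illustration under stated assumptions (8-bit pixels, a mid-gray re-centering offset, hypothetical function names), not the DSA 2000ex's actual implementation; a real system operates on hardware frame buffers in real time rather than Python lists.

```python
def dsa_subtract(contrast, mask, offset=128):
    """Pixel-wise subtraction of a pre-contrast mask frame from a
    contrast frame, re-centered so unchanged background maps to mid-gray.
    The 128 offset assumes an 8-bit display chain."""
    return [[c - m + offset for c, m in zip(c_row, m_row)]
            for c_row, m_row in zip(contrast, mask)]

def window_level(image, window, level):
    """Linearly map intensities in [level - window/2, level + window/2]
    onto the 8-bit display range [0, 255], clipping values outside it."""
    lo = level - window / 2.0
    def scale(v):
        return int(min(max((v - lo) * 255.0 / window, 0.0), 255.0))
    return [[scale(v) for v in row] for row in image]
```

Pixel shifting, also listed among the device's functions, would amount to translating the mask by a small offset before subtraction to compensate for patient motion between the mask and contrast frames.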
The provided text describes the Eigen DSA 2000ex, a Digital Subtraction Angiography device, and its substantial equivalence to predicate devices. However, the document does not contain a detailed study with specific acceptance criteria, reported device performance metrics, or information about sample sizes, ground truth establishment, expert involvement, or MRMC studies that are typically associated with AI/ML device evaluations.
The relevant section, "Testing and Performance Data," states: "All product and engineering specifications were verified and validated. Test images as well as test phantoms incorporating simulated stenosis were developed and used to verify system performance through verification, validation and benchmarking." This is a very high-level statement and lacks the specificity required to answer the questions thoroughly.
Therefore, for aspects related to detailed performance studies and acceptance criteria as you've requested, the information is not available in the provided document.
Here's a breakdown of what can be extracted and what is not available:
1. Table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not Available | Not Available (beyond the general statement of "system performance through verification, validation and benchmarking") |
The document mentions "product and engineering specifications were verified and validated," and "Test images as well as test phantoms incorporating simulated stenosis were developed and used to verify system performance." However, specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds for stenosis detection) and the corresponding measured performance values are not provided.
2. Sample size used for the test set and the data provenance
- Sample Size (test set): Not Available. The document mentions "test images" and "test phantoms incorporating simulated stenosis" but does not specify the number of these.
- Data Provenance: Not Available. Given the nature of "test images" and "test phantoms," these are likely internally generated or simulated, not clinical patient data from a specific country or retrospective/prospective study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Not Available.
- Qualifications of experts: Not Available.
The document does not describe the establishment of ground truth by human experts for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not Available. No information on expert review or adjudication is provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No. This is not an AI/ML device in the modern sense; it's an image processing system. Therefore, an MRMC study comparing human readers with and without AI assistance is not described or relevant for this type of device according to the provided text. The device "enhances diagnostic capabilities of radiologists, and cardiologists" by improving image usability, but this is through image processing, not an AI-driven diagnostic aid.
- Effect Size: Not Applicable/Not Available.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Study: Not explicitly described. The device itself is an image processing system, which operates in a "standalone" fashion to process images. However, a formal "standalone performance study" with metrics like sensitivity/specificity for a diagnostic task, as would be expected for an AI algorithm, is not detailed. The system's "performance" is verified against engineering specifications and test phantoms, implying system-level functional performance rather than diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: "Test phantoms incorporating simulated stenosis." This suggests an engineered, known truth set for evaluating the device's ability to process and visualize specific features. It's not clinical ground truth derived from pathology or patient outcomes.
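To illustrate the kind of known truth such a phantom provides: a channel machined with a known reference diameter and a known minimal diameter at the simulated lesion encodes an exact percent-diameter stenosis, against which the device's measurement output can be benchmarked. The formula below is the standard percent-diameter-stenosis definition; the function name is an assumption for this sketch.

```python
def percent_diameter_stenosis(d_min, d_ref):
    """Percent diameter stenosis: the fractional narrowing of the minimal
    lumen diameter (d_min) relative to the reference diameter (d_ref),
    expressed as a percentage."""
    if d_ref <= 0:
        raise ValueError("reference diameter must be positive")
    return (1.0 - d_min / d_ref) * 100.0
```

A phantom with a 3.0 mm reference channel narrowed to 1.5 mm thus encodes a 50% stenosis by construction, with no expert adjudication needed.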
8. The sample size for the training set
- Sample Size (training set): Not Applicable/Not Available. This device is an image processing system, not an AI/ML device that undergoes "training" in the contemporary sense. It implements established algorithms for image subtraction, noise reduction, and similar processing rather than being trained on a dataset.
9. How the ground truth for the training set was established
- Ground Truth Establishment (training set): Not Applicable/Not Available. As it's not an AI/ML device that requires training, the concept of a training set ground truth does not apply. The algorithms are predefined based on image processing principles rather than learned from data.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).