K Number
K080290
Device Name
XVIEWNET
Date Cleared
2008-02-22

(18 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
iSite Radiology; iPACS Prism
Intended Use

The xViewNet software is intended as a display and analysis tool for the interpretation of diagnostic images by trained healthcare professionals, including radiologists, physicians, technologists, clinicians and nurses. It is also intended to provide access to covered entities for clinical review, throughout the healthcare facility and at the point of image acquisition.

This device is not intended for mammography use.

Device Description

xViewNet is a secured imaging system that is used to view, edit, manipulate, annotate, analyze, and store images and data that are stored and managed in the web-based RIS. This software-based product provides capabilities for the acceptance, transmission, printing, display, storage, editing and digital processing of medical images and associated data.

Images sent to xViewNet are converted into formats suitable for viewing in its framework, and temporarily stored in a local cache memory. The algorithms used by xViewNet to view JPEG and JPEG 2000 images follow known and acceptable protocols. Changes may be made to the presentation of the images. These changes are saved as display definitions only and do not alter the acquired image pixel data. Any and all display definitions applied to an image can always be reversed to the acquired state.
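The non-destructive behavior described above can be sketched as a simple separation of acquired pixel data from an ordered list of display definitions. This is a minimal illustration of the concept, not xViewNet's actual implementation; the class and method names (`DisplayState`, `apply`, `rendered`, `revert`) are hypothetical:

```python
from copy import deepcopy

class DisplayState:
    """Non-destructive view of an image: presentation changes are stored
    as display definitions and never written back to the acquired pixels."""

    def __init__(self, pixels):
        self._acquired = deepcopy(pixels)   # acquired pixel data, kept unmodified
        self.definitions = []               # ordered list of display transforms

    def apply(self, name, fn):
        """Record a display definition (e.g. invert, window/level)."""
        self.definitions.append((name, fn))

    def rendered(self):
        """Render for display by applying the definitions to a copy."""
        img = deepcopy(self._acquired)
        for _, fn in self.definitions:
            img = fn(img)
        return img

    def revert(self):
        """Discard all display definitions; the acquired data is untouched."""
        self.definitions.clear()

# Hypothetical usage: invert grayscale for display, then revert.
state = DisplayState([[0, 128], [255, 64]])
state.apply("invert", lambda img: [[255 - p for p in row] for row in img])
assert state.rendered() == [[255, 127], [0, 191]]
state.revert()
assert state.rendered() == [[0, 128], [255, 64]]  # acquired state restored
```

Because transforms are applied to a copy at render time, any and all definitions can be dropped to return to the acquired state, which matches the behavior the summary describes.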

The xViewNet software includes advanced visualization such as 3-D multiplanar reconstruction of the 2-D images.
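Multiplanar reconstruction amounts to re-slicing a stack of 2-D axial images along the other two anatomical axes. The sketch below, using plain nested lists and a hypothetical `mpr_views` helper, shows the indexing involved; it is an illustration of the general technique, not xViewNet's algorithm:

```python
def mpr_views(axial_stack):
    """Given a stack of 2-D axial slices indexed (z, y, x), reconstruct the
    two orthogonal planes by re-slicing: coronal (fixed y, shape (z, x))
    and sagittal (fixed x, shape (z, y))."""
    nz = len(axial_stack)
    ny = len(axial_stack[0])
    nx = len(axial_stack[0][0])
    coronal = [[[axial_stack[z][y][x] for x in range(nx)] for z in range(nz)]
               for y in range(ny)]
    sagittal = [[[axial_stack[z][y][x] for y in range(ny)] for z in range(nz)]
                for x in range(nx)]
    return coronal, sagittal

# A 2x2x2 toy volume: axial slice z=0 is [[1, 2], [3, 4]], z=1 is [[5, 6], [7, 8]].
coronal, sagittal = mpr_views([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
assert coronal[0] == [[1, 2], [5, 6]]    # y=0 plane across the z stack
assert sagittal[0] == [[1, 3], [5, 7]]   # x=0 plane across the z stack
```

Production viewers perform the same re-slicing on interpolated isotropic volumes for image quality, but the index permutation above is the core of the reconstruction.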

xViewNet uses standard "off-the-shelf" PC hardware and communicates using the standard TCP/IP stack; the specific network hardware underlying the TCP/IP stack is transparent to xViewNet.

AI/ML Overview

The provided text describes a Picture Archiving and Communications System (PACS) called xViewNet. However, it does not contain detailed information about specific acceptance criteria, a comprehensive study proving device performance against those criteria, or a comparative effectiveness study with human readers.

Here's an analysis based on the available information:

1. Table of Acceptance Criteria and Reported Device Performance:

The document broadly states: "The xViewNet program complies with the voluntary standards as detailed in Section 9 of this submission." and "There are no substantial differences between the xViewNet defined in this 510(k) submission and the stated predicate devices. They are similar to the technologies that are currently used in other similar medical devices."

Without access to "Section 9 of this submission" or specific performance metrics from the predicate devices, it's impossible to create a table of acceptance criteria and reported device performance. The document focuses on showing substantial equivalence to predicate devices rather than proving specific performance against quantitative acceptance criteria for things like diagnostic accuracy, speed, or image quality as one might find in a study specifically designed for AI/algorithm performance.

2. Sample Size Used for the Test Set and Data Provenance:

The document does not specify a test set sample size or data provenance (e.g., country of origin of data, retrospective/prospective). It mentions "Testing on unit level (Module Verification)", "Integration testing (System Verification)", and "Final Acceptance Testing (Validation)", but these are general quality assurance steps, not detailed descriptions of a clinical or performance study with a defined test set.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

This information is not provided. The document does not describe the establishment of a ground truth for a test set.

4. Adjudication Method for the Test Set:

This information is not provided as no specific test set or ground truth establishment is described.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

No MRMC comparative effectiveness study is mentioned. The document presents the xViewNet PACS as a display and analysis tool, not as an AI-powered diagnostic aid that would augment human readers and therefore require such a study in a 510(k) submission. The term "AI" was not in common use in this context at the time of the filing (2008).

6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study:

No standalone study is described. The device is intended as a "display and analysis tool for the interpretation of diagnostic images by trained healthcare professionals," clearly indicating a human-in-the-loop system.

7. Type of Ground Truth Used:

Not applicable, as no specific ground truth for a performance study is described. The document discusses system verification and validation regarding functionality and safety, not diagnostic accuracy against a ground truth.

8. Sample Size for the Training Set:

Not applicable. This device is a PACS, not an AI/machine learning algorithm requiring a "training set" in the modern sense. Its primary functions are image display, manipulation, storage, and communication.

9. How the Ground Truth for the Training Set Was Established:

Not applicable, for the same reasons as above.

Summary of Study Information Available in the Document:

The document outlines the following general testing and quality assurance measures applied during the development of xViewNet:

  • Risk Analysis
  • Requirements Reviews
  • Design Reviews
  • Testing on unit level (Module Verification)
  • Integration testing (System Verification)
  • Final Acceptance Testing (Validation)
  • Performance Testing
  • Safety Testing

The manufacturer certifies that the software was "designed, developed, tested and validated according to written procedures" and refers to "Section 16. Software, item 7. Verification & Validation Testing" for details, which are not included in the provided text.

Conclusion:

The provided 510(k) summary for xViewNet PACS focuses on establishing substantial equivalence to predicate devices (iSite Radiology and iPACS Prism) based on its intended use, technical characteristics, and quality assurance processes, rather than presenting a detailed clinical or performance study with specific acceptance criteria, ground truth, or statistical analysis of algorithmic performance. This approach is typical for PACS systems seeking 510(k) clearance, where the primary concern is safe and effective display, storage, and communication of medical images, and not necessarily the diagnostic accuracy of an embedded AI diagnostic algorithm.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).