K Number
K974474
Device Name
PEGASYS INTOUCH
Manufacturer
Date Cleared
1998-02-24 (90 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
N/A
Intended Use

Pegasys InTouch™ software is intended to transfer and provide interpretive displays of Pegasys and DICOM image data, either compressed or uncompressed, to physician workstations via a local area network (LAN) interconnected with Ethernet, or via a wide area network (WAN) comprised of two or more LANs in different locations, using a hospital intranet, not the Internet.
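
To make the workflow concrete, here is a minimal, hypothetical sketch (not part of the original submission) of the receiving end of such a transfer: a DICOM file that has already arrived over the hospital LAN/WAN is read and rendered for physician review. The file path and the use of pydicom and matplotlib are assumptions for illustration only.

```python
# Illustrative sketch only: load a DICOM image that has already been
# transferred over the hospital network and display it for review.
# The path and libraries (pydicom, matplotlib) are assumptions, not
# part of the original 510(k) submission.
import pydicom
import matplotlib.pyplot as plt

def display_transferred_image(path: str) -> None:
    """Read a DICOM file from local storage and render its pixel data."""
    ds = pydicom.dcmread(path)                      # parse the DICOM dataset
    plt.imshow(ds.pixel_array, cmap="gray")         # render the image frame
    plt.title(f"{ds.get('PatientID', 'unknown')} / {ds.get('Modality', '?')}")
    plt.axis("off")
    plt.show()

if __name__ == "__main__":
    display_transferred_image("study/image_0001.dcm")  # hypothetical path
```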

Device Description

Pegasys InTouch™ was developed to give a physician the ability to review Pegasys and DICOM images from locations outside the original study area. It is designed to operate on standard, off-the-shelf hardware and software components. Using standard WinTel PC systems and the hospital's own LAN or WAN, physicians can review and interact with patient data from their desks.

Pegasys and other digital imaging communications systems consist of a computer with a high-speed data modem connected to a standard telephone line or computer network that can transmit or receive digitized images. The system is programmed to send, receive, store, copy, and display these images in a manner that assures the images are not altered. Operator interface features are also computer controlled to enhance the system's capabilities and make the system more user-friendly. Images can be displayed at different resolutions and in different colors; the same image set can be viewed simultaneously at different locations on designated physician workstations.
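
The integrity claim above ("images are not altered") can be illustrated with a generic check that is not described in the submission: compare a cryptographic digest of the file before and after transfer. A minimal sketch, assuming Python and SHA-256:

```python
# Illustrative sketch: verify that an image file was not altered in transit
# by comparing SHA-256 digests computed at the sending and receiving ends.
# This is a generic integrity check, not the device's documented mechanism.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_transfer(sent: Path, received: Path) -> bool:
    """True if the received copy is bit-for-bit identical to what was sent."""
    return sha256_of(sent) == sha256_of(received)
```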

The physician has the capability to select a variety of display formats for viewing. For example, the physician may select up to 9 separate datasets to be displayed simultaneously, including whole body and snapshot displays; comparative (tomographic) displays of oblique or transverse files; comparative cardiac SPECT data; and SPECT and gated SPECT data in a 10 frame-per-second cine mode.
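
The multi-dataset layout described above can be sketched generically. The snippet below is an illustrative 3-by-3 grid (up to 9 datasets) with the 10 frame-per-second cine rate expressed as a frame interval; it uses placeholder arrays and is not the actual Pegasys InTouch viewer code.

```python
# Illustrative sketch: lay out up to 9 datasets in a 3x3 grid, echoing the
# simultaneous-display capability described above. Random arrays stand in
# for image data; this is not the actual Pegasys InTouch viewer.
import numpy as np
import matplotlib.pyplot as plt

datasets = [np.random.rand(128, 128) for _ in range(9)]  # placeholder images

fig, axes = plt.subplots(3, 3, figsize=(9, 9))
for ax, img, idx in zip(axes.flat, datasets, range(1, 10)):
    ax.imshow(img, cmap="gray")
    ax.set_title(f"Dataset {idx}", fontsize=8)
    ax.axis("off")

CINE_FPS = 10                         # gated SPECT cine rate from the description
frame_interval_ms = 1000 / CINE_FPS   # 100 ms between frames
print(f"Cine frame interval at {CINE_FPS} fps: {frame_interval_ms:.0f} ms")

plt.tight_layout()
plt.show()
```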

AI/ML Overview

Here's an analysis of the provided information, focusing on the acceptance criteria and the study details:

This 510(k) summary for Pegasys InTouch™ provides limited detail about specific performance acceptance criteria or a formal study demonstrating that those criteria were met. It focuses on describing the device's functionality and its substantial equivalence to predicate devices rather than on rigorous performance testing.

Acceptance Criteria and Reported Device Performance

Given the nature of the provided document, explicit performance metrics with numerical targets are not stated. The document indicates that the device's main function is to "transfer and provide interpretive displays of Pegasys and DICOM image data." The "Testing" section describes a basic functional test rather than a performance study against specific criteria.

| Acceptance Criteria Category | Specific Criteria (Inferred from Document) | Reported Device Performance |
|---|---|---|
| Image Transfer | Ability to transfer DICOM-format images. | DICOM-format image downloaded using standard web-based data transfer. |
| Image Display | Ability to display transferred images. | Image displayed in the image viewer, and across four display regions. |
| Functional Equivalence | Operates similarly to predicate PACS. | Similar to a wide variety of Picture Archive and Communication Systems (PACS) on the market. |
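
Below is a hedged reconstruction of the basic functional check implied by the table, under the assumption of a web-based transfer endpoint and off-the-shelf Python libraries (requests, pydicom, matplotlib); the 510(k) summary does not document the actual test harness, and the URL is hypothetical.

```python
# Illustrative reconstruction of the basic functional check: fetch a DICOM
# image via standard web-based transfer, then confirm it can be rendered
# across four display regions. URL and libraries are assumptions; the 510(k)
# summary does not document the actual test procedure.
from io import BytesIO

import requests
import pydicom
import matplotlib.pyplot as plt

def functional_check(url: str) -> None:
    resp = requests.get(url, timeout=30)          # web-based data transfer
    resp.raise_for_status()
    ds = pydicom.dcmread(BytesIO(resp.content))   # parse the downloaded DICOM
    assert ds.pixel_array is not None             # image data was received

    fig, axes = plt.subplots(2, 2)                # four display regions
    for ax in axes.flat:
        ax.imshow(ds.pixel_array, cmap="gray")
        ax.axis("off")
    plt.show()

if __name__ == "__main__":
    functional_check("http://intranet.example/studies/image_0001.dcm")  # hypothetical
```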

Study Information

The "II. Testing" section describes a very basic test. It does not constitute a formal study with statistical power, expert readers, or ground truth establishment in the way current clinical performance studies for AI/CAD devices are conducted.

  1. Sample size used for the test set and the data provenance:

    • Sample Size: A single image; the summary states only that an image was generated.
    • Data Provenance: Not specified, but the image was "generated using a prototype of the display applications," suggesting it was synthetic or specially prepared, rather than from real patient data. It is neither retrospective nor prospective in the typical sense of clinical data.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not applicable. No expert-established ground truth was used for this basic functional test, which simply verified that an image could be transferred and displayed.
    • Qualifications of Experts: Not applicable.
  3. Adjudication method for the test set:

    • Adjudication Method: Not applicable. There was no complex assessment requiring adjudication.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. The "testing" described is a single-image, single-user functional check, not a clinical effectiveness study.
    • Effect size of how much human readers improve with AI vs without AI assistance: Not applicable, as no MRMC study was conducted.
  5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Standalone Study: Yes, in a very rudimentary sense. The "testing" described is an algorithm-only functional check (displaying an image). However, it's not a performance study in terms of accuracy or clinical effectiveness, but rather a verification of basic functionality.
  6. The type of ground truth used:

    • Type of Ground Truth: Not applicable. The test involved generating a specific image and verifying its display, not evaluating diagnostic accuracy against a ground truth.
  7. The sample size for the training set:

    • Sample Size: Not applicable. This device is a PACS system for image transfer and display, not an AI or CAD algorithm that requires a training set. There is no indication of machine learning or AI models being trained here.
  8. How the ground truth for the training set was established:

    • Ground Truth Establishment: Not applicable, as there was no training set.

Summary of Limitations:

This document describes a device from 1998, when regulatory expectations for software as a medical device, particularly software involving AI or complex performance claims, were very different from today's standards. The "testing" section is a basic functional verification rather than a clinical performance study. For devices of this kind in that era, a 510(k) focused largely on establishing substantial equivalence to legally marketed predicate devices based on technological characteristics and intended use, rather than on extensive clinical performance data measured against specific acceptance criteria.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
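
For readers unfamiliar with the regulation's vocabulary, here is a generic, deliberately trivial example of "image segmentation" in the § 892.2050 sense; it is a hedged illustration assuming nothing about Pegasys InTouch itself.

```python
# Generic illustration of "image segmentation" as referenced in § 892.2050:
# a simple intensity-threshold mask. Not a Pegasys InTouch function.
import numpy as np

def threshold_segment(image: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean mask marking pixels at or above the threshold."""
    return image >= threshold

# Example: segment the brighter half of a synthetic 4x4 image.
img = np.arange(16, dtype=float).reshape(4, 4)
mask = threshold_segment(img, threshold=8.0)
print(mask.sum())  # -> 8 pixels in the segmented region
```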