Physician's Desktop Review is a medical image display workstation that provides software applications for the review and interpretation of medical images and data. The results obtained may be used as a tool in the interpretation of data derived from medical imaging procedures. The Physician's Desktop Review system should only be operated by qualified healthcare professionals (e.g., radiologists, cardiologists, oncologists, or general nuclear medicine physicians) trained in the use of medical imaging equipment.
Physician's Desktop Review (PDT) is a Windows®-based physician workstation. The product's design and features improve physician workflow by integrating images and information into the physician's desktop environment. The comprehensive tools and features provided with this product allow the physician to review, interpret, and report results without leaving the office environment. The connectivity package allows the physician to download image data to his/her location, reducing travel time and improving turnaround time for patient results.
The provided text does not contain a study that proves the device meets specific acceptance criteria with reported performance metrics. Instead, it describes a 510(k) premarket notification for a medical device called "Physician's Desktop Review."
The core of this submission is a claim of substantial equivalence to previously cleared predicate devices (Pegasys Ultra™ K993946 and Pegasys InTouch (WebView™) K974474). This means the manufacturer is asserting that their new device performs similarly and has the same intended use as these already approved devices, rather than submitting new performance data against a set of specific acceptance criteria.
Therefore, many of the requested details about acceptance criteria, study design, sample sizes, expert ground truth, adjudication methods, MRMC studies, and standalone performance are not present in this document. The FDA's letter (K021669) confirms that the device is deemed substantially equivalent, allowing it to be marketed, but does not detail a new performance study for this specific device.
Based on the provided text, here's what can be extracted:
1. Table of Acceptance Criteria and Reported Device Performance:
- Acceptance Criteria: Not explicitly stated as quantifiable performance metrics for this specific device. The implicit "acceptance criteria" is that the device should demonstrate similar performance and functionality to the predicate devices.
- Reported Device Performance: Not quantified. The general statement is that the device performs "in a similar manner with respect to display, review applications, data storage, and system utilities" as the predicate devices.
2. Sample size used for the test set and the data provenance:
- Sample Size: Not applicable/not provided. No new test set for performance evaluation is described.
- Data Provenance: Not applicable/not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/not provided. No new ground truth establishment is described for a performance study of this device.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable/not provided.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:
- No, an MRMC comparative effectiveness study is not mentioned. This device is a PACS workstation for review and interpretation, not an AI-powered diagnostic tool, so such a study would not typically be performed or described for this type of submission.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No, a standalone performance study is not described. This device is a workstation for human healthcare professionals to use for interpretation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable/not provided for a new performance study. The substantial equivalence relies on the established performance and safety of the predicate devices.
8. The sample size for the training set:
- Not applicable/not provided. This device is a software workstation, not an algorithm trained on a dataset.
9. How the ground truth for the training set was established:
- Not applicable/not provided.
Summary of the basis for clearance:
The basis for the FDA's clearance (K021669) of the "Physician's Desktop Review" device is its substantial equivalence to two predicate devices: Pegasys Ultra™ (K993946) and Pegasys InTouch (WebView™) (K974474). The manufacturer asserts that the new device has similar indications for use and overall function and performs "in a similar manner with respect to display, review applications, data storage, and system utilities." This type of submission (510(k)) does not typically require a new, independent performance study against novel acceptance criteria if substantial equivalence to existing cleared devices can be demonstrated.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
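One of the special controls cited above is the DICOM standard, which governs how medical images are stored and exchanged between devices like this workstation and imaging modalities. As a minimal illustration (not drawn from the 510(k) itself), a DICOM Part 10 file begins with a 128-byte preamble followed by the four magic bytes `DICM`; the sketch below, with a hypothetical helper name, checks for that signature:

```python
# Sketch, assuming the DICOM PS3.10 file format: a 128-byte preamble
# followed by the magic bytes b"DICM" marks a DICOM Part 10 file.

def looks_like_dicom(data: bytes) -> bool:
    """Return True if the byte stream begins with a DICOM Part 10 header."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Illustrative input: a zero-filled 128-byte preamble plus the DICM marker.
header = bytes(128) + b"DICM"
print(looks_like_dicom(header))        # True
print(looks_like_dicom(b"not dicom"))  # False
```

In practice a conformant viewer would go on to parse the data elements that follow the marker (typically with a full DICOM library), but the preamble check alone is the standard way to distinguish DICOM files from other image formats such as JPEG.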