Pegasys Ultra is a nuclear medicine image processing and display workstation that provides software applications used to process, analyze, and display medical images/data. The results obtained may be used as a tool, by a nuclear physician, in determining the diagnosis of patient disease conditions in various organs, tissues, and other anatomical structures. The data processed may be derived from any nuclear medicine procedure. The Pegasys Ultra system should only be operated by qualified healthcare professionals trained in the use of nuclear medicine equipment.
Pegasys Ultra comprises a UNIX-based SUN display workstation and central processor, a high-resolution processing console, and a TeleLOGIC remote service package. It supports operator interaction with the comprehensive ADAC PEGASYS clinical software from ADAC Laboratories and includes On-Line User Documentation.
The workstation allows a qualified operator to process and enhance the data, reconstruct data sets, produce quantitative data from regions of interest, and display images, curves, and text. Images can be displayed in color, at varying intensities, and at different zoom factors. Identification parameters include volume, size, and shape. Use of multiple viewports allows the operator to display images with different color maps, intensities, and zoom factors simultaneously. The PEGASYS desktop contains graphic icons for selecting an application, such as Renal Analysis, and a menu selection consisting of a main menu and submenus for image processing across the multiple applications.
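For illustration only, the following is a minimal sketch (in Python with NumPy and matplotlib, not ADAC's PEGASYS software) of the viewport concept described above: one synthetic count map shown in several viewports with different color maps, intensity windows, and zoom factors, plus a simple count summary over a hypothetical rectangular region of interest.

```python
# Minimal sketch of the multi-viewport idea: the same frame rendered with
# different color maps, intensity windows, and zoom factors, plus ROI counts.
# The frame is a synthetic stand-in for a reconstructed nuclear medicine image.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
frame = rng.poisson(lam=50, size=(128, 128)).astype(float)  # stand-in count map

views = [
    {"cmap": "gray",    "vmax": frame.max(),       "zoom": 1},  # full view
    {"cmap": "hot",     "vmax": 0.5 * frame.max(), "zoom": 1},  # brighter window
    {"cmap": "viridis", "vmax": frame.max(),       "zoom": 2},  # 2x zoom, center
]

fig, axes = plt.subplots(1, len(views), figsize=(12, 4))
for ax, v in zip(axes, views):
    img = frame
    if v["zoom"] > 1:  # crop the central region to emulate a zoom factor
        h, w = frame.shape
        ch, cw = h // (2 * v["zoom"]), w // (2 * v["zoom"])
        img = frame[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw]
    ax.imshow(img, cmap=v["cmap"], vmin=0, vmax=v["vmax"])
    ax.set_title(f'{v["cmap"]}, zoom {v["zoom"]}x')
    ax.axis("off")

# Quantitative data from a rectangular region of interest (hypothetical ROI).
roi = frame[40:80, 40:80]
print(f"ROI total counts: {roi.sum():.0f}, mean: {roi.mean():.1f}")
plt.show()
```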
The provided 510(k) summary for the PEGASYS Ultra nuclear medicine imaging workstation does not contain detailed acceptance criteria or a specific study proving the device meets said criteria in the format typically expected for medical device performance evaluation (e.g., sensitivity, specificity, accuracy against a gold standard).
Instead, the documentation focuses on demonstrating substantial equivalence to a predicate device per FDA 510(k) requirements. This means the device is considered safe and effective because it has similar technological characteristics and intended use to a device already legally marketed.
Here's a breakdown of the available information and what is missing based on your requested format:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly defined in terms of quantitative performance metrics (e.g., sensitivity, specificity, accuracy). | "PEGASYS Ultra has met all its specifications, demonstrating substantially equivalent performance to its predicate device, and is safe and effective for its intended use." |
| The document states "the basic algorithms and original calculations have not changed" from the predicate device. | |
Missing Information:
- Specific quantitative performance metrics (e.g., accuracy, precision, processing speed benchmarks, image quality metrics) that would typically constitute acceptance criteria for a new medical image processing device.
- Measured performance values against those criteria.
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "Non-clinical testing was performed for Verification and Validation testing" but does not specify a "test set" in the context of patient data or a clinical study. The testing seems to refer to internal engineering and software validation.
Missing Information:
- Sample size for any test set involving patient data.
- Data provenance (e.g., country of origin of data, retrospective or prospective).
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not describe any study that involved establishing ground truth with experts. The "results obtained may be used as a tool, by a nuclear physician, in determining the diagnosis of patient disease conditions," but this refers to the intended use of the device, not a performance study where experts establish ground truth.
Missing Information:
- Number of experts.
- Qualifications of experts.
4. Adjudication Method for the Test Set
No adjudication method is described because no expert-based ground truth establishment is detailed.
Missing Information:
- Adjudication method (e.g., 2+1, 3+1, none).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No MRMC study is mentioned. The submission focuses on substantial equivalence to a predicate device rather than a comparative effectiveness study with human readers.
Missing Information:
- Whether an MRMC study was done.
- Effect size of human reader improvement with AI vs. without AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The device is an "image processing and display workstation" that "provides software applications" to "process, analyze, and display medical images/data." The results are intended "as a tool, by a nuclear physician." This implies a human-in-the-loop system. While the "basic algorithms and original calculations have not changed" from the predicate, no standalone algorithm-only performance is independently reported as a primary safety and effectiveness metric. The "Summary of Testing" refers to verification and validation of the system against specifications.
Missing Information:
- Explicit reporting of standalone algorithm performance.
7. The Type of Ground Truth Used
No particular type of ground truth (e.g., expert consensus, pathology, outcomes data) is mentioned as being used for performance evaluation in the context of clinical accuracy. The "Summary of Testing" refers to meeting product specifications.
Missing Information:
- Type of ground truth used for performance studies.
8. The Sample Size for the Training Set
No training set is mentioned. The device's "basic algorithms and original calculations have not changed" from the predicate device, suggesting a more traditional software development approach rather than a machine learning model that would require a distinct training set.
Missing Information:
- Sample size for the training set.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned, this information is not applicable.
Missing Information:
- How ground truth for the training set was established.
Summary of the Study that Proves the Device Meets Acceptance Criteria (as described in the 510(k) summary):
The provided 510(k) summary for K993946, PEGASYS Ultra, describes a primary method of demonstrating safety and effectiveness through substantial equivalence to a predicate device (PEGASYS Nuclear Medicine Imaging Computer, K892358).
Key Points from the document:
- Acceptance Criteria: Not explicitly defined in terms of clinical performance metrics. The implicit acceptance criterion is that the device "met all its specifications, demonstrating substantially equivalent performance to its predicate device, and is safe and effective for its intended use."
- Study Design: The "Summary of Testing" states that "Non-clinical testing was performed for Verification and Validation testing." This typically refers to internal software and hardware testing to ensure the device functions as designed and meets internal engineering specifications.
- The document emphasizes that "the basic algorithms and original calculations have not changed" from the predicate device. The improvements are in "upgraded hardware" and an "enhanced user interface," increasing "speed" and general sophistication.
- Proof of Meeting Criteria: The claim of meeting criteria is based on:
  - Verification and Validation testing against ADAC Laboratories' internal procedures and specifications.
  - The assertion that the core image processing algorithms and calculations are the same as those in the legally marketed predicate device.
  - The FDA's decision of substantial equivalence through the 510(k) process, which relies on comparing the new device's intended use and technological characteristics to a predicate, and finding them similar enough that the new device raises no new questions of safety or effectiveness.
It is important to note that for a traditional 510(k) cleared device like this from 2000, particularly a workstation for image processing, the level of detailed clinical performance data (like sensitivity/specificity against a gold standard, or MRMC studies) was not typically required in the same way it might be for a novel diagnostic algorithm or AI-powered device today. The focus was on demonstrating that the new device performs its intended functions comparably to existing, proven technology.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
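As a hedged illustration of the "time-series measurements" named in this regulation (and of the kind of ROI-based quantification a renal analysis application performs), the sketch below computes a time-activity curve from a fixed, hypothetical ROI over a synthetic dynamic series; it is not the PEGASYS implementation, and all names and values here are made up.

```python
# Time-activity curve from a fixed ROI over a synthetic dynamic series:
# a simple example of a time-series measurement on nuclear medicine data.
import numpy as np

n_frames, size = 30, 64
t = np.arange(n_frames)                      # frame index (e.g., seconds)
rng = np.random.default_rng(1)

# Synthetic dynamic study: uptake then washout, plus Poisson counting noise.
uptake = np.exp(-((t - 8) ** 2) / 40.0)
series = rng.poisson(lam=20 + 200 * uptake[:, None, None] *
                     np.ones((n_frames, size, size)))

roi = (slice(20, 44), slice(20, 44))                  # hypothetical organ ROI
tac = series[(slice(None),) + roi].sum(axis=(1, 2))   # counts per frame

peak_frame = int(tac.argmax())
print(f"Peak ROI counts {tac[peak_frame]} at frame {peak_frame}")
```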