K Number
K083673
Date Cleared
2008-12-30 (19 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Mirada Solutions Ltd., Fusion 7D (predicate)
Intended Use

The CARESTREAM PACS is an image management system whose intended use is to provide completely scalable local and wide area PACS solutions for hospitals and related institutions/sites, which will archive, distribute, retrieve and display images and data from all hospital modalities and information systems.

The system contains interactive tools that ease the process of analyzing and comparing three-dimensional (3D) images. It is a single system that integrates review, dictation, and reporting tools, creating a productive work environment for radiologists and physicians.

Device Description

Carestream PACS, a multi-modality radiology reading and reporting station, provides support for 3D registration of studies taken at different times or by different modalities (CT and MRI), and for reading PET-CT images. The volumetric data sets are synchronized, allowing the user to view reformatted series side by side and as superimposed images. In all methods, the algorithm uses only a rigid spatial transformation.
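
For reference (this notation is mine, not the submission's), a rigid spatial transformation maps each point $x$ of one volume to a point $x'$ of the other using only a rotation $R$ and a translation $t$, preserving all distances and angles:

$$
x' = R\,x + t, \qquad R^{\top}R = I, \qquad \det R = 1
$$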

The registration in the "Volume Matching" application can be created by these methods:

  • Register Studies (fully automatic registration)
    The user is able to register two datasets by simply clicking on the windows that present these datasets, with no other additional input. The algorithm does not take the exact clicking points into account.

  • Manually Register Studies
    The user is able to "manually" register two datasets by providing additional inputs to the registration algorithm. The matching of the two volumes is based on the planes of the two studies and on the positions of the two clicking points; this input should be enough to match the two datasets. The user should try to click on the same anatomical location in both studies, while both planes are already swiveled to represent the same anatomical plane (see the sketch after this list).

  • Refine Registration
    The user is able to adjust the registration. The refine algorithm is a local registration algorithm that tries to match two volumes based on local, rather than global, similarity. In practice, it serves the user who wants to focus on a certain region and compare it across the two datasets; in such cases, local registration is preferable, even if it means the global registration will be of poorer quality.

  • Treat Studies as Registered
    The application supports the "treat studies as registered" feature: the user can select datasets (or all datasets) that the system should treat as fully registered to one another. This supports sites where the data does not share a frame of reference but was registered in a "black box" before being copied to the PACS. The algorithm ignores the exact clicking point.
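
The submission does not disclose the registration algorithm itself. As a minimal sketch of how paired landmark clicks can constrain a rigid transform, the Python function below implements the classical Kabsch (Procrustes) method; the function name and NumPy usage are my own assumptions, not from the document. Points alone need at least three non-collinear pairs to fix the rotation, whereas the application described above instead combines two clicked points with the current plane orientations.

```python
import numpy as np

def rigid_transform_from_landmarks(src: np.ndarray, dst: np.ndarray):
    """Estimate the rigid transform (rotation R, translation t) mapping
    src landmark points onto dst landmarks, so that dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 3.
    """
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection (det R = -1), which is not a rigid rotation.
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t
```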

Reading PET-CT images requires both specific image-manipulation capabilities and standardized uptake value presentation. PET scanning utilizes a radioactive molecule similar to glucose, called fluorodeoxyglucose (FDG). FDG accumulates within malignant cells because of their high rate of glucose metabolism, and is currently the radiotracer most commonly used for PET imaging.

The standardized uptake value (SUV) is a semi-quantitative value that allows expression of FDG uptake in a lesion relative to the injected dose. It is defined as the ratio of activity in tissue per milliliter to the injected dose per patient body weight.
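
The summary defines the SUV only in words. Below is a minimal sketch of the body-weight-normalized variant commonly used in PET; the function name and units are illustrative assumptions, and the inputs are assumed to be decay-corrected:

```python
def suv_bw(tissue_activity_kbq_per_ml: float,
           injected_dose_kbq: float,
           body_weight_g: float) -> float:
    """Body-weight SUV: tissue activity concentration divided by the
    injected dose per gram of body weight (decay-corrected inputs)."""
    return tissue_activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)

# Example: 5.0 kBq/mL uptake, 370,000 kBq (10 mCi) injected, 70 kg patient:
# suv_bw(5.0, 370_000, 70_000) ~= 0.95
```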

AI/ML Overview

Here's an analysis of the provided text regarding the Carestream PACS, focusing on acceptance criteria and study details.

The provided document, K083673 for the Carestream PACS, is a 510(k) summary that, for this particular device, focuses on substantial equivalence to a predicate device rather than on a detailed clinical effectiveness study with specific performance metrics and acceptance criteria for a novel AI algorithm. The device, described as a multi-modality radiology reading and reporting station supporting 3D registration and PET-CT image reading, is a software modification to an existing PACS system.

Therefore, many of the requested points regarding acceptance criteria, AI performance, ground truth, and extensive study details typically associated with AI-driven diagnostic devices are not explicitly present in this type of submission. The focus is on verifying that the modified software performs as safely and effectively as the predicate device (Mirada Solutions Ltd, Fusion 7D).

Here's an attempt to extract the information based on the provided text, noting where information is absent:


Acceptance Criteria and Study Overview for Carestream PACS (K083673)

Summary:
The Carestream PACS device (K083673) is a modified PACS software, not a novel AI diagnostic algorithm. The 510(k) submission focuses on demonstrating substantial equivalence to a predicate device (Mirada Solutions Ltd, Fusion 7D) through performance testing under simulated use conditions. Explicit acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, accuracy) are not defined for the software's functionalities as a primary endpoint in this document. Instead, the acceptance criterion revolved around meeting design input requirements and conforming to user needs, demonstrating comparable safety and effectiveness to the predicate.


1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| Meet design input requirements and conform to defined user needs and intended uses. | "Predefined acceptance criteria was met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." |
| Demonstrate comparable safety and effectiveness to the predicate device (Mirada Fusion 7D). | The modifications to the Carestream PACS, specifically software changes supporting 3D registration and PET-CT image reading, "do not alter the fundamental scientific technology" and "raise no new issues of safety or effectiveness." This implies the new functionalities performed comparably to the predicate device for similar functions, though direct quantitative metrics are not provided in this summary. The device's support for rigid spatial transformation, automatic/manual registration, refinement, and "treat studies as registered" was verified, as were PET-CT image manipulation and SUV presentation. |

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size: Not specified in the provided text. The document refers to "nonclinical testing was conducted under simulated use conditions," but provides no details on the number of cases or studies used for testing.
  • Data Provenance: Not specified. It's likely simulated or internal test data given the "simulated use conditions" and non-clinical nature of the testing mentioned.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

  • This information is not provided in the summary. For this type of 510(k) for a PACS workstation modification, it's less likely that a formal expert-adjudicated ground truth dataset was created, as the focus is on software functionality and comparability rather than diagnostic accuracy of a new clinical claim. Technical experts and software testers would have been involved in verifying functionality.

4. Adjudication Method for the Test Set

  • Not applicable / Not specified. Given the nature of the device and testing, a formal adjudication method (like 2+1 or 3+1) for clinical performance metrics is not mentioned or implied. Testing likely involved verifying the correct output of the registration algorithms and display of PET-CT images against expected results, rather than interpreting a "positive" or "negative" clinical finding.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

  • No. An MRMC study comparing human readers with AI assistance versus without AI assistance was not done and is not mentioned. This submission pertains to a PACS software update, not an AI-driven interpretive algorithm designed to directly improve human reader performance or make clinical decisions.

6. If a Standalone (Algorithm Only) Performance Study was done

  • Partially applicable, but for functionality rather than diagnostic accuracy. The "Discussion of Testing" mentions "Performance testing was conducted to verify the design output met the design input requirements." This implies standalone testing of the software functionalities (3D registration, PET-CT display and SUV calculation) to ensure they worked as intended. However, this is distinct from a standalone diagnostic performance study (e.g., measuring sensitivity/specificity of an AI model). The summary points to the algorithm's ability to perform rigid space transformations, automatically register datasets, allow manual registration and refinement, and calculate SUV values, which were verified functionalities.

7. The Type of Ground Truth Used

  • Internal specifications and engineering verification. For the functionalities described (3D registration, PET-CT display, SUV calculation), the "ground truth" would likely be defined by the technical specifications for these features. For example, for 3D registration, "ground truth" would involve known transformations or successful alignment verified by technical means. For SUV calculation, it would be the correct application of the defined formula and display. It is not expert consensus, pathology, or outcomes data related to disease diagnosis.
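
The summary does not describe how such technical verification was performed. A hypothetical sketch of the kind of check an engineering team might run is to apply a known rigid transform to a volume, run the registration, and compare the recovered transform to the known one; the helper below and its name are my own, not from the submission:

```python
import numpy as np

def registration_errors(R_true: np.ndarray, t_true: np.ndarray,
                        R_est: np.ndarray, t_est: np.ndarray):
    """Rotation error (degrees) and translation error (same units as t)
    between a known rigid transform and a recovered one."""
    # The angle of the relative rotation R_est @ R_true.T is the error.
    cos_angle = np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)
    rotation_error_deg = np.degrees(np.arccos(cos_angle))
    translation_error = float(np.linalg.norm(t_est - t_true))
    return rotation_error_deg, translation_error
```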

8. The Sample Size for the Training Set

  • Not applicable / Not specified. This device is a PACS system with image manipulation tools, not a machine learning model that requires a "training set" in the conventional sense of AI development for diagnostic tasks. The algorithms (e.g., for rigid registration) are likely rule-based or optimized through engineering, not trained on large clinical datasets in a machine learning paradigm.

9. How the Ground Truth for the Training Set Was Established

  • Not applicable. As a training set for an AI model is not indicated, the method for establishing ground truth for such a set is irrelevant in this context.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).