
510(k) Data Aggregation

    K Number: K082269
    Date Cleared: 2008-12-12 (123 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices: K011142 (matched on reference device)

    Intended Use

    Visage PACS/CS is a system for distributing, viewing, processing, and archiving medical images within and outside health care environments.

    The Visage PACS/CS server receives image data in DICOM format via the hospital network. This provides universal connections to archives, modalities, and workstations. The supported modalities are listed in the DICOM Conformance Statement.
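    To illustrate the DICOM handling described above, the following minimal sketch uses the open-source pydicom library to read a received DICOM object and compare its modality against an assumed supported-modality list of the kind a DICOM Conformance Statement enumerates. It is a generic illustration, not the vendor's implementation; the file path and modality set are assumptions.

    ```python
    # Sketch: inspect a received DICOM object and check its modality against an
    # assumed supported-modality list (stand-in for a DICOM Conformance Statement).
    import pydicom

    SUPPORTED_MODALITIES = {"CT", "MR", "PT", "NM", "CR", "DX", "MG", "US"}  # assumed

    def describe(path: str) -> None:
        ds = pydicom.dcmread(path)
        modality = getattr(ds, "Modality", "unknown")
        print(f"Patient ID:      {ds.get('PatientID', 'n/a')}")
        print(f"Study UID:       {ds.get('StudyInstanceUID', 'n/a')}")
        print(f"Modality:        {modality}")
        print(f"Transfer syntax: {ds.file_meta.TransferSyntaxUID.name}")
        if modality not in SUPPORTED_MODALITIES:
            print("Warning: modality not in the assumed supported list")

    if __name__ == "__main__":
        describe("example.dcm")  # hypothetical path to a received DICOM file
    ```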

    Besides general image interpretation and processing tools, Visage PACS/CS provides specific tool sets for several clinical applications, including:

    • CT/MR angiography, e.g. for vascular analysis and stent planning
    • Cardiac analysis, including calcium scoring and functional assessment of cardiac CT data
    • Neuroradiology, including CT and MR brain perfusion analysis
    • Oncology, including SUV analysis and lesion marking and analysis
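    Of the tool sets listed above, the SUV analysis used in oncology reduces, in its simplest body-weight-normalized form, to a short calculation: tissue activity concentration times patient weight divided by the injected dose decay-corrected to scan time. The sketch below shows that generic SUVbw formula under simplifying assumptions (F-18 tracer, activity already expressed in Bq/mL); it is not the Visage implementation.

    ```python
    # Generic SUVbw sketch (not the Visage implementation).
    # SUVbw = activity concentration [Bq/mL] * body weight [g]
    #         / injected dose decay-corrected to scan time [Bq]
    import math

    def decay_corrected_dose(injected_dose_bq: float,
                             minutes_since_injection: float,
                             half_life_minutes: float = 109.77) -> float:
        """Decay-correct the injected dose to scan time (default half-life: F-18)."""
        return injected_dose_bq * math.exp(
            -math.log(2) * minutes_since_injection / half_life_minutes)

    def suv_bw(activity_bq_per_ml: float, body_weight_kg: float,
               injected_dose_bq: float, minutes_since_injection: float) -> float:
        """Body-weight-normalized SUV; assumes 1 g of tissue is about 1 mL."""
        dose = decay_corrected_dose(injected_dose_bq, minutes_since_injection)
        return activity_bq_per_ml * (body_weight_kg * 1000.0) / dose

    # Example: 5 kBq/mL uptake, 75 kg patient, 370 MBq injected 60 min before the scan
    print(round(suv_bw(5_000, 75.0, 370e6, 60.0), 2))  # ~1.48
    ```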

    Visage PACS/CS is to be used only by trained and instructed health care professionals. It can support physicians and/or their medical staff in providing their own diagnosis for medical cases. The final decision regarding diagnoses, however, resides with the doctors and/or their medical staff in their own area of responsibility.

    Although the web and thin client technologies allow the software to be run on a variety of hardware platforms, for diagnostic purposes the user must make sure that the display hardware used for reading the images complies with state-of-the-art diagnostic requirements and currently valid laws.

    For primary image diagnosis in mammography, only DICOM "for presentation" images displayed on an FDA-approved mammography monitor may be used.

    Only uncompressed or losslessly compressed images may be used for primary image diagnosis in mammography.
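    The two mammography constraints above (a "for presentation" image object and an uncompressed or lossless transfer syntax) can be checked programmatically. The sketch below does so with pydicom; the list of lossless transfer syntax UIDs is illustrative rather than exhaustive, and the check is a generic example, not vendor code.

    ```python
    # Sketch: verify the two mammography conditions stated above for one DICOM file.
    import pydicom

    LOSSLESS_TRANSFER_SYNTAXES = {
        "1.2.840.10008.1.2",       # Implicit VR Little Endian (uncompressed)
        "1.2.840.10008.1.2.1",     # Explicit VR Little Endian (uncompressed)
        "1.2.840.10008.1.2.4.57",  # JPEG Lossless, Non-Hierarchical (Process 14)
        "1.2.840.10008.1.2.4.70",  # JPEG Lossless SV1
        "1.2.840.10008.1.2.4.80",  # JPEG-LS Lossless
        "1.2.840.10008.1.2.4.90",  # JPEG 2000 Lossless Only
        "1.2.840.10008.1.2.5",     # RLE Lossless
    }
    # SOP Class UID: Digital Mammography X-Ray Image Storage - For Presentation
    MG_FOR_PRESENTATION = "1.2.840.10008.5.1.4.1.1.1.2"

    def ok_for_mammo_primary_diagnosis(path: str) -> bool:
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        is_for_presentation = (
            ds.SOPClassUID == MG_FOR_PRESENTATION
            or ds.get("PresentationIntentType", "") == "FOR PRESENTATION"
        )
        is_lossless = str(ds.file_meta.TransferSyntaxUID) in LOSSLESS_TRANSFER_SYNTAXES
        return is_for_presentation and is_lossless
    ```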

    Device Description

    Visage PACS is a system to distribute, view, and process medical images and reports within and outside of health care environments. It consists of the following components:

    • Visage PACS Storage
    • Visage PACS Web
    • Visage CS

    Visage PACS Storage: A server receives image data in DICOM format via the hospital network. This provides universal connections to archives, modalities and workstations. The modalities that are supported by Visage PACS Storage are listed in the DICOM Conformance Statement. Visage PACS Storage offers an archiving option for long-term storage of image data.
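    Functionally, "a server receives image data in DICOM format via the hospital network" describes a DICOM Storage SCP. The sketch below shows such an SCP built with the open-source pynetdicom library, archiving each received SOP instance to disk; the AE title, port, and directory layout are assumptions, and this is not the Visage PACS Storage implementation.

    ```python
    # Sketch of a DICOM Storage SCP: accept C-STORE requests over the network and
    # archive each received instance to disk, grouped by study. Illustrative only.
    from pathlib import Path
    from pynetdicom import AE, evt, AllStoragePresentationContexts

    ARCHIVE_ROOT = Path("/var/dicom/archive")  # hypothetical archive location

    def handle_store(event):
        ds = event.dataset
        ds.file_meta = event.file_meta  # keep transfer syntax and other file meta
        out_dir = ARCHIVE_ROOT / ds.StudyInstanceUID
        out_dir.mkdir(parents=True, exist_ok=True)
        ds.save_as(out_dir / f"{ds.SOPInstanceUID}.dcm")
        return 0x0000  # C-STORE success status

    ae = AE(ae_title="PACS_STORE")                   # assumed AE title
    ae.supported_contexts = AllStoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112), block=True,  # assumed port
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    ```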

    Visage PACS Web: Data stored on the Visage PACS Storage server can be accessed simultaneously by multiple web-based viewing stations within a healthcare enterprise, or from outside it through web clients. Image data are transferred in DICOM format over the intranet or the Internet. Images can be viewed directly within a web browser (Internet Explorer). The system offers simple functions for image manipulation and measurement. Reports can be viewed together with the images on one page.
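    Browser-based viewing of stored images, as described above, can be illustrated with a very small sketch: an HTTP endpoint that renders a stored DICOM slice to PNG with a fixed window/level so any browser can display it. Flask, the archive layout, and the window values are assumptions; this is not the Visage PACS Web implementation.

    ```python
    # Sketch: serve a stored DICOM slice as a browser-viewable PNG with a fixed
    # window/level. Framework, archive layout, and window values are assumed.
    import io
    import numpy as np
    import pydicom
    from flask import Flask, send_file
    from PIL import Image

    app = Flask(__name__)
    ARCHIVE_ROOT = "/var/dicom/archive"  # hypothetical, matches the SCP sketch above

    def window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
        lo, hi = center - width / 2, center + width / 2
        return (np.clip((pixels - lo) / (hi - lo), 0.0, 1.0) * 255).astype(np.uint8)

    @app.route("/studies/<study_uid>/instances/<sop_uid>.png")
    def render(study_uid: str, sop_uid: str):
        ds = pydicom.dcmread(f"{ARCHIVE_ROOT}/{study_uid}/{sop_uid}.dcm")
        hu = (ds.pixel_array * float(getattr(ds, "RescaleSlope", 1))
              + float(getattr(ds, "RescaleIntercept", 0)))
        png = Image.fromarray(window(hu, center=40, width=400))  # soft-tissue window (assumed)
        buf = io.BytesIO()
        png.save(buf, format="PNG")
        buf.seek(0)
        return send_file(buf, mimetype="image/png")

    if __name__ == "__main__":
        app.run(port=8080)  # development server only
    ```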

    Visage CS: Visage CS is a client-server system that uses thin-client technology to distribute 3D image data generated from the image data of state-of-the-art scanning modalities. The thin-client viewer allows users to view and process 3D image data. No DICOM data is transferred to the client.
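    The thin-client model (3D data stays on the server and only rendered results reach the client) can be sketched as follows: the server stacks a CT series into a volume and extracts, for example, a coronal plane on demand, so the client receives a rendered 2D frame rather than DICOM objects. The sorting key and rendering choice here are generic, not the vendor's method.

    ```python
    # Sketch: server-side construction of a CT volume and extraction of a coronal
    # slice, so only rendered 2D results would be streamed to a thin client.
    from pathlib import Path
    import numpy as np
    import pydicom

    def load_volume(series_dir: str) -> np.ndarray:
        """Stack one CT series into a (slices, rows, cols) volume in Hounsfield units."""
        slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
        # Order along the patient z-axis (simplified: assumes a single axial series)
        slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
        volume = np.stack([ds.pixel_array for ds in slices]).astype(np.float32)
        slope = float(getattr(slices[0], "RescaleSlope", 1))
        intercept = float(getattr(slices[0], "RescaleIntercept", 0))
        return volume * slope + intercept

    def coronal_slice(volume: np.ndarray, row: int) -> np.ndarray:
        """One coronal plane: every slice, a fixed row, every column."""
        return volume[:, row, :]

    # volume = load_volume("/var/dicom/archive/<series-dir>")  # hypothetical path
    # frame = coronal_slice(volume, row=256)  # rendered and streamed, not the DICOM
    ```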

    AI/ML Overview

    The provided text does not contain specific acceptance criteria or a study that explicitly proves the device meets such criteria in terms of performance metrics (e.g., sensitivity, specificity, accuracy). The submission primarily focuses on establishing substantial equivalence to a predicate device, describing device functionality, and outlining the manufacturer's internal validation processes.

    Here's an analysis based on the information available:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not present a table of acceptance criteria or specific quantitative performance metrics. The submission focuses on the functional equivalence of new features to those of a predicate device (Aquarius Workstation) and general safety and effectiveness claims based on internal validation and risk analysis.

    2. Sample Sizes and Data Provenance:

    • Test Set (Validation Data):
      • Sample Size: Not specified. The document states that for general software validation, "imaging data deriving from CT scanners of Siemens, Philips, and GE have been used." For cardiac CTA imaging validation, "data from Siemens and Philips have been used." No specific number of cases or images is provided.
      • Data Provenance: The data came from CT scanners of Siemens, Philips, and GE. The country of origin is not explicitly stated, but the company is based in the US and the official correspondent is in Germany. The data appears to be retrospective, as it's described as "data deriving from" existing scanners.

    3. Number of Experts and Qualifications for Ground Truth - Test Set:

    • Not specified. The document states "A physician, providing ample opportunity for competent human intervention interprets images and information delivered by VISAGE PACS/CS." and "It can support physicians and/or their medical staff in providing their own diagnosis for medical cases. The final decision regarding diagnoses, however, resides with the doctors and/or their medical staff in their own area of responsibility." This implies that the device is a tool for healthcare professionals, rather than an autonomous diagnostic system requiring extensive expert-adjudicated ground truth for its own performance validation beyond typical software testing.

    4. Adjudication Method for the Test Set:

    • Not specified. Given the nature of a PACS/CS system focusing on image distribution, viewing, and processing, the "test set" validation would likely involve functional testing and comparison against expected outputs or the performance of the predicate device, rather than a clinical adjudication of diagnostic accuracy.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study is not mentioned. The submission is for a PACS/CS system that provides viewing, processing, and archiving tools for medical images, not a diagnostic AI algorithm intended to independently interpret images or directly assist human readers in a comparative effectiveness trial context. The device's improvements are described in terms of new features for cardiac analysis, oncology, and neuro options, which are then stated to be "substantially equivalent" to features found in the predicate Aquarius Workstation.

    6. Standalone Performance Study (Algorithm Only):

    • A standalone study in the sense of an algorithm making independent diagnoses and being evaluated against a ground truth is not described. The device is a "stand-alone software package" in terms of its execution environment, but its role is as a tool for "trained health care professionals" to interpret images. Its validation focuses on functionality, data handling (DICOM compliance), and safety through risk analysis, not on an AI algorithm's diagnostic performance.

    7. Type of Ground Truth Used:

    • For the validation described, the "ground truth" implicitly refers to the correct functioning of the software, accurate display of images, proper execution of measurements and processing functions, and consistency with DICOM standards. It is not about a clinical ground truth (e.g., pathology, outcomes data) for diagnostic accuracy. The validation ensured that "only the standard DICOM data will be read out and used for further processing and display" and that the software is "fully compatible with DICOM 3.0."

    8. Sample Size for the Training Set:

    • Not applicable. This device is a PACS/CS system, not an AI model that undergoes a "training" phase with a dataset. Therefore, there is no mention of a training set or its size.

    9. How Ground Truth for the Training Set Was Established:

    • Not applicable, as there is no training set for this type of software.

    K Number: K023003
    Date Cleared: 2002-11-20 (72 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices: K011142, K973010, K020483 (matched on reference devices)

    Intended Use

    The ImageChecker-CT is indicated for use as a general imaging workstation, and is intended to be used to acquire, store, transmit and display images from medical scanning devices.

    Specific indications for use for the ImageChecker-CT Workstation are the display of a composite view of 2D cross-sections, and 3D volumes of chest CT images, including findings or regions of interest ("ROI") identified by the radiologist or Computer Assisted Detection ("CAD") findings.

    The general indications for use of the ImageChecker-CT Workstation are as a general imaging workstation to assist radiologists in reviewing digital computed tomography (CT) images of the chest.

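    The specific indications above center on displaying CT cross-sections and volumes together with regions of interest identified by the radiologist or by CAD. As a minimal illustration of that display step, the sketch below draws ROI bounding boxes onto a windowed CT slice; the ROI representation (pixel-space boxes) and the window values are assumptions, not the ImageChecker-CT data format.

    ```python
    # Sketch: overlay radiologist- or CAD-identified ROI boxes on a windowed chest
    # CT slice for display. The ROI format (pixel-space boxes) is assumed.
    import numpy as np
    import pydicom
    from PIL import Image, ImageDraw

    def windowed_rgb(ds: pydicom.Dataset, center: float = -600, width: float = 1500) -> Image.Image:
        """Apply a lung window (assumed values) and convert the slice to RGB."""
        hu = (ds.pixel_array * float(getattr(ds, "RescaleSlope", 1))
              + float(getattr(ds, "RescaleIntercept", 0)))
        lo, hi = center - width / 2, center + width / 2
        gray = (np.clip((hu - lo) / (hi - lo), 0, 1) * 255).astype(np.uint8)
        return Image.fromarray(gray).convert("RGB")

    def draw_rois(image: Image.Image, rois: list[tuple[int, int, int, int]]) -> Image.Image:
        """Draw each ROI as a rectangle given as (left, top, right, bottom) pixels."""
        drawer = ImageDraw.Draw(image)
        for box in rois:
            drawer.rectangle(box, outline=(255, 0, 0), width=2)
        return image

    # Hypothetical usage with one slice and two CAD findings:
    # ds = pydicom.dcmread("chest_ct_slice.dcm")
    # out = draw_rois(windowed_rgb(ds), [(120, 200, 160, 240), (300, 310, 340, 350)])
    # out.save("slice_with_cad_rois.png")
    ```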

    Device Description

    The ImageChecker-CT System is a combination of dedicated computer software and hardware. The System uses an off-the-shelf personal computer with Windows- and Linux-based CPUs, a hard drive, and a single monitor.

    AI/ML Overview

    The provided document, K023003 for the ImageChecker-CT Workstation, does not contain information about acceptance criteria or a study proving the device meets specific performance criteria.

    Instead, the document primarily focuses on establishing substantial equivalence to predicate devices. This means that instead of presenting a stand-alone performance study with acceptance criteria, the manufacturer is arguing that their device is as safe and effective as other legally marketed devices with similar intended use and technological characteristics.

    Therefore, many of the requested details cannot be extracted from this specific 510(k) summary. I can only provide information directly mentioned or inferable from the document regarding the type of evaluation conducted.

    Here's a breakdown of the requested information based on the provided text:


    1. Table of acceptance criteria and the reported device performance

    The document does not specify performance acceptance criteria or report device performance metrics in the way a clinical performance study would for a new device. The "Studies" section states, "The ImageChecker-CT Workstation will undergo design verification tests for conformance with specifications." This implies internal testing against design specifications, not necessarily clinical performance metrics.

    2. Sample size used for the test set and the data provenance

    Not applicable. The document describes a comparison to predicate devices for substantial equivalence, not a performance study with a distinct test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. The document does not describe a performance study involving ground truth establishment by experts.

    4. Adjudication method for the test set

    Not applicable. No test set or expert adjudication is described for a performance study.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance

    No. The document makes no mention of an MRMC study or any assessment of human reader improvement with AI assistance. The device is described as a "general imaging workstation" that can display CAD findings, but its effectiveness with CAD is not studied here.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    No. The document describes a workstation for displaying images and CAD findings, but it does not report on the standalone performance of any algorithm. The 510(k) focuses on the workstation itself, not the performance of a specific CAD algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable. No performance study with ground truth is described.

    8. The sample size for the training set

    Not applicable. The document does not describe the development or training of an algorithm.

    9. How the ground truth for the training set was established

    Not applicable. The document does not describe the development or training of an algorithm.
