
510(k) Data Aggregation

    K Number: K162868
    Device Name: ExamVue PACS
    Date Cleared: 2017-03-17 (155 days)
    Product Code
    Regulation Number: 892.2050
    Intended Use

    ExamVue PACS is an image management system intended to be used by trained professionals, including physicians, radiologists, nurses and medical technicians.

    It is a software package used with general purpose computing hardware to receive, store, distribute, process and display images and associated data throughout a clinical environment. The software performs digital image processing, measurement, communication and storage. This device is not indicated for use in mammography.

    ExamVue PACS supports receiving, sending, printing, storing and displaying studies received from the following modality types via DICOM: CR and DX.
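
    As an illustration of the kind of DICOM receive path described above (this is not taken from the submission), a minimal storage SCP that accepts CR and DX images could be sketched with the open-source pynetdicom library as follows; the AE title, port, and file-naming scheme are invented assumptions.

    ```python
    # Minimal illustrative sketch (not from the submission): a DICOM storage SCP
    # that accepts CR and DX images over C-STORE using the pynetdicom library.
    # The AE title, port, and file-naming scheme are invented assumptions.
    from pynetdicom import AE, evt
    from pynetdicom.sop_class import (
        ComputedRadiographyImageStorage,         # CR
        DigitalXRayImageStorageForPresentation,  # DX
    )

    def handle_store(event):
        """Save each received image to disk, named by its SOP Instance UID."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(f"{ds.SOPInstanceUID}.dcm")
        return 0x0000  # DICOM "Success" status

    ae = AE(ae_title="EXAMPLE_SCP")
    ae.add_supported_context(ComputedRadiographyImageStorage)
    ae.add_supported_context(DigitalXRayImageStorageForPresentation)
    ae.start_server(("", 11112), block=True,
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    ```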

    Device Description

    The ExamVue PACS is a software solution for the storage, sharing, display, and diagnostic viewing of medical images. It consists of two software components: a central database and server software that holds, receives and distributes images, and a viewer program that can be installed on multiple computers for viewing, modifying, making measurements on, and displaying the images in the database.
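
    To make the viewer-side functions concrete, the following is a hypothetical sketch (not part of the submission) of how a viewer might load a CR/DX image and compute a distance measurement using the open-source pydicom library; the file path and pixel coordinates are invented for illustration.

    ```python
    # Hypothetical sketch (not from the submission): loading a CR/DX image and
    # computing a distance measurement with the pydicom library. The file path
    # and the two pixel coordinates are placeholders.
    import math
    import pydicom

    ds = pydicom.dcmread("example_cr.dcm")

    # CR/DX objects typically carry ImagerPixelSpacing (mm at the detector);
    # fall back to PixelSpacing if that is what the object provides.
    spacing = getattr(ds, "ImagerPixelSpacing", None) or ds.PixelSpacing
    row_mm, col_mm = (float(v) for v in spacing)

    def distance_mm(p1, p2):
        """Euclidean distance in mm between two (row, col) pixel coordinates."""
        return math.hypot((p1[0] - p2[0]) * row_mm, (p1[1] - p2[1]) * col_mm)

    print(ds.PatientName, ds.Modality)               # basic header attributes
    print(f"{distance_mm((100, 120), (250, 480)):.1f} mm")
    pixels = ds.pixel_array                          # NumPy array for display
    ```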

    AI/ML Overview

    The provided text is a 510(k) premarket notification for a medical device called "ExamVue PACS." This document primarily focuses on establishing substantial equivalence to predicate devices rather than proving the device meets specific performance acceptance criteria through a clinical study.

    Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert involvement, and ground truth establishment from a clinical study are not present in the provided document.

    However, I can extract the information that is available and indicate what is missing.

    Missing Information Disclaimer: The provided document is a 510(k) summary and FDA clearance letter. It is a regulatory submission focused on demonstrating substantial equivalence to existing devices, not a detailed report of a clinical performance study with specific acceptance criteria and outcome metrics for a novel AI device. As such, most of the information required to answer your prompt, particularly regarding the performance evaluation of an AI algorithm (which this PACS is not), specific acceptance criteria, sample sizes, expert ground-truthing, and MRMC studies, is not contained within this regulatory submission. The document states that "Software Verification and Validation documentation for ExamVue PACS, as well as bench and clinical testing, have been provided in this submission," but those details are not part of the publicly available text provided.


    Here's an attempt to answer your prompt based only on the provided text, highlighting the missing information.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present specific quantitative performance acceptance criteria or report performance metrics in the way a clinical study for a diagnostic AI would. Instead, it demonstrates "substantial equivalence" based on functional characteristics and intended use.

    Here's a table reflecting the functional equivalence points made in the document, which serve as the implicit "criteria" for substantial equivalence:

    Characteristic / "Acceptance Criteria" (Implicit) | ExamVue PACS Performance (Reported as "Yes" for functionality) | Outcome / Status (relative to predicates)
    --- | --- | ---
    Intended Use (as described) | Image management for trained professionals, including processing, measurement, communication, storage. Not for mammography. Supports CR and DX DICOM. | Substantially Equivalent (same as predicates)
    Performance Standard | 21 CFR 892.2050 | Meets standard (same as predicates)
    Operating System Requirements | Windows 7, 8, or 10 | Differs in specifics, but "does not represent a substantial difference"
    Image Archive | Yes | Equivalent (same as predicates)
    Image Transfer (DICOM 3.0 Standard) | Yes | Equivalent (same as predicates)
    Image Display | Yes | Equivalent (same as predicates)
    Patient Search | Yes | Equivalent (same as predicates)
    Distance and Angle Measurements | Yes | Equivalent (same as predicates)
    Window Level Adjustment | Yes | Equivalent (same as predicates)
    Zoom and Magnify Functions | Yes | Equivalent (same as predicates)
    Line Profile and Histogram | Yes | Equivalent (same as predicates)
    DICOM Directory Reading | Yes | Equivalent (where predicates also "Yes") / Improved (where a predicate is "No")
    DICOM Query/Retrieve (see the sketch after this table) | Yes | Equivalent (where predicates also "Yes") / Improved (where a predicate is "No")
    DICOM Import | Yes | Equivalent (same as predicates)
    DICOM CD Burn | Yes | Equivalent (same as predicates)
    DICOM Print | Yes | Equivalent (same as predicates)
    DICOM Tag | Yes | Equivalent (same as predicates)
    Display Patient Information Editing | Yes | Equivalent (same as predicates)
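
    As an illustration of the DICOM Query/Retrieve function listed in the table (not a description of ExamVue's actual implementation), a study-level C-FIND request could look like the following pynetdicom sketch; the host, port, and PatientID are placeholder assumptions.

    ```python
    # Illustrative sketch only (not ExamVue's implementation): a study-level
    # DICOM C-FIND query using the pynetdicom library. The host, port, and
    # PatientID below are placeholder assumptions.
    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

    ae = AE(ae_title="EXAMPLE_SCU")
    ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = "12345"
    query.StudyInstanceUID = ""    # empty value asks the SCP to return this attribute
    query.ModalitiesInStudy = "CR"

    assoc = ae.associate("127.0.0.1", 11112)
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, PatientRootQueryRetrieveInformationModelFind):
            # 0xFF00 / 0xFF01 are the DICOM "pending" statuses that carry matches
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                print(identifier.StudyInstanceUID)
        assoc.release()
    ```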

    Study Proving Acceptance Criteria:

    The "study" proving the device meets its (implicit) acceptance criteria for substantial equivalence is the 510(k) submission process itself, which includes:

    • Comparison with Predicate Devices: A detailed comparison chart (Exhibit 1) highlighting functional similarities.
    • Software Verification and Validation (V&V): Stated that "Software Verification and Validation documentation for ExamVue PACS, as well as bench and clinical testing, have been provided in this submission." Specific details of this V&V are not provided in the public summary.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not specified in the provided document. The mention of "bench and clinical testing" is made, but no numbers are given for patient cases or images in any test set.
    • Data Provenance: Not specified. It does not mention the country of origin of data or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The device is intended for "trained professionals, including physicians, radiologists, nurses and medical technicians," but this refers to the users of the PACS, not necessarily the experts defining ground truth for a performance study.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. Given this is a PACS submission focused on functional equivalence, a formal adjudication process for diagnostic performance ground truth (like 2+1 or 3+1 for AI) is not detailed.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI Assistance vs. Without

    • MRMC Study: Not mentioned or implied. This device is a PACS, essentially an image management system. It is not presented as an AI-powered diagnostic aid that would assist human readers in improving their diagnostic accuracy or efficiency. Therefore, an MRMC study comparing human readers with and without "AI assistance" is not relevant to this type of device and is not reported.

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    • Standalone Performance: Not applicable and not discussed. The ExamVue PACS is a fundamental image management system (software for receiving, storing, displaying images, etc.), not a diagnostic algorithm itself. Its "performance" is in its ability to correctly handle and display images and associated data according to DICOM standards and without defects, not in generating an AI-driven diagnosis.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Not explicitly stated as "expert consensus," "pathology," or "outcomes data." For a PACS system, the "ground truth" would likely relate to the integrity of image data, successful transmission and storage, accurate display of DICOM tags, and correct functioning of measurement tools, rather than medical diagnostic truth. It implies functional verification and validation.
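
    To illustrate what such functional verification could look like in practice (an assumption, not content from the submission), a simple integrity check might compare key DICOM header tags and a hash of the pixel data before and after archiving, as in the sketch below; the file paths are placeholders.

    ```python
    # Hypothetical sketch (not from the submission): verifying that an archived
    # copy of a DICOM image matches the original in pixel data and key tags.
    # The file paths are placeholders.
    import hashlib
    import pydicom

    KEY_TAGS = ("SOPInstanceUID", "StudyInstanceUID", "PatientID", "Modality")

    def pixel_digest(path):
        """SHA-256 of the raw PixelData bytes; equal digests imply lossless storage."""
        return hashlib.sha256(pydicom.dcmread(path).PixelData).hexdigest()

    def tags_match(path_a, path_b, tags=KEY_TAGS):
        """Check that the listed DICOM attributes are identical in both files."""
        a, b = pydicom.dcmread(path_a), pydicom.dcmread(path_b)
        return all(a.get(t) == b.get(t) for t in tags)

    sent, archived = "sent/img001.dcm", "archive/img001.dcm"
    assert pixel_digest(sent) == pixel_digest(archived), "pixel data altered"
    assert tags_match(sent, archived), "header tags altered"
    ```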

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable and not reported. This device is not an AI algorithm that undergoes a "training phase" on a dataset. It's a software system.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set Establishment: Not applicable. As it's not an AI/ML algorithm requiring a training set, the concept of establishing ground truth for training data does not apply. The development process would involve software engineering best practices, verification, and validation against functional requirements.