
510(k) Data Aggregation

    K Number: K133135
    Date Cleared: 2014-03-07 (154 days)
    Regulation Number: 892.2050
    Device Name: IMPAX VOLUME VIEWING 3.0

    Intended Use

    IMPAX Volume Viewing software is a visualization package for PACS workstations. It is intended to support the radiographer, medical imaging technician, radiologist, and referring physician in the reading, analysis, and diagnosis of DICOM-compliant volumetric medical datasets. The software is intended as a general-purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) onto functional (MR) volumetric image data to facilitate the comparison of various lesions. Volume and distance measurements are intended for the evaluation and quantification of tumors and for other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Web-browser access is available for review purposes. It should not be used to arrive at a diagnosis, treatment plan, or other decision that may affect patient care.
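
    As a rough illustration of the kind of volumetric data handling described above (not taken from the 510(k) summary), the following sketch loads a DICOM CT series into a 3D array and extracts axial, coronal, and sagittal planes, i.e. a basic multi-planar reconstruction. The pydicom/numpy usage, the directory path, and the assumption of a single uniformly sized series are all illustrative.

```python
# Minimal sketch: stack a DICOM CT series into a (z, y, x) volume and extract
# orthogonal multi-planar reconstruction (MPR) slices. Path and series layout
# are hypothetical; error handling and spacing correction are omitted.
from pathlib import Path

import numpy as np
import pydicom


def load_volume(series_dir: str) -> np.ndarray:
    """Stack a single-series set of DICOM slices into a (z, y, x) volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort by position along the patient z-axis so the stack order is anatomical.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array.astype(np.int16) for s in slices])


def mpr_planes(volume: np.ndarray, z: int, y: int, x: int):
    """Return the axial, coronal, and sagittal planes through voxel (z, y, x)."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]


if __name__ == "__main__":
    vol = load_volume("./ct_series")  # hypothetical directory of .dcm files
    center = tuple(d // 2 for d in vol.shape)
    axial, coronal, sagittal = mpr_planes(vol, *center)
    print(axial.shape, coronal.shape, sagittal.shape)
```

    A production viewer would additionally resample for voxel spacing and apply window/level presets, which this sketch omits.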

    Device Description

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version includes automated removal of bone-like structures, stenosis measurement and web-browser access.

    Principles of operation and technological characteristics of the new and predicate devices are the same.
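
    The summary does not describe how the automated removal of bone-like structures works. As a hedged sketch of one conventional approach to this kind of feature in CT angiography (not necessarily what IMPAX Volume Viewing implements), the following masks out voxels above a Hounsfield-unit threshold typical of bone and dilates the mask slightly; the threshold value and the scipy.ndimage usage are assumptions.

```python
# Illustrative only: suppress bone-like structures in a CT angiography volume
# by intensity thresholding in Hounsfield units (HU), with a small morphological
# dilation to cover thin cortical shells. Not the device's actual method.
import numpy as np
from scipy import ndimage


def remove_bone_like(volume_hu: np.ndarray, bone_threshold: float = 300.0) -> np.ndarray:
    """Return a copy of the volume with probable bone voxels set to air (-1000 HU)."""
    bone_mask = volume_hu > bone_threshold
    bone_mask = ndimage.binary_dilation(bone_mask, iterations=2)
    cleaned = volume_hu.copy()
    cleaned[bone_mask] = -1000.0
    return cleaned
```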

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes verification and validation testing, with acceptance criteria for various functionalities. The reported performance consistently "met acceptance criteria" for all described tests. Specific quantitative performance values are generally not provided; the document instead emphasizes that the established criteria for equivalence and usability were met. A hedged sketch of how the two numeric tolerance criteria in the table might be checked follows the table.

    | Functionality Tested | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Measurement Algorithms | Identical output to predicates (Volume Viewing 2.0 (K111638) and Registration and Fusion (K080013)); measurements within +/- scanner resolution. | Results met the established acceptance criteria. |
    | Crosshair Positioning | Within half a voxel (for rounding differences across graphic video cards). | Results met the established acceptance criteria. |
    | Centerline Computation / Vessel Visualization (contrast-filled vessels in CT angiography) | Adequate tracing and visualization (via side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate). | Results met acceptance criteria. |
    | Stenosis Measurement | User can determine the amount of stenosis (via side-by-side comparison with the Voxar 3D predicate). | Results met acceptance criteria. |
    | Bone-like Structure Removal (CT angiography of thorax, abdomen, pelvis, upper/lower extremities) | Adequate removal from view (via side-by-side comparison with the Voxar predicate). | Results met acceptance criteria. |
    | Volume Measurements (manual/semi-automated) | User can perform measurements in a user-friendly and intuitive way. | Results met acceptance criteria. |
    | Image Quality (2D and 3D rendering) | Adequate (via side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate). | Results met acceptance criteria. |
    | Web-browser component (XERO Clinical Applications 1.0) for non-diagnostic review | Usability of features and functionalities for non-diagnostic review of CT and MR data sets using 3D and multi-planar reconstructions. | Validation successfully completed; scored as acceptable. |
    | Stereoscopic 3D Viewing | Equivalent to "regular" 3D viewing; no distinct medical or clinical benefit over "regular" 3D viewing. | Concluded that both viewing methods are equivalent. |
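
    For the two numeric criteria above (measurements within +/- scanner resolution, crosshair position within half a voxel), a minimal sketch of how such tolerance checks could be expressed is shown below; the function names, units, and example values are illustrative assumptions, not the actual test procedure.

```python
# Hedged sketch of the two numeric acceptance checks named in the table above.
# Names, units, and example values are illustrative.
import numpy as np


def measurement_within_scanner_resolution(measured_mm: float,
                                          reference_mm: float,
                                          scanner_resolution_mm: float) -> bool:
    """Accept a measurement if it is within +/- one scanner resolution of the reference."""
    return abs(measured_mm - reference_mm) <= scanner_resolution_mm


def crosshair_within_half_voxel(pos_a_voxels, pos_b_voxels) -> bool:
    """Accept linked crosshair positions if they differ by at most half a voxel on every axis."""
    diff = np.abs(np.asarray(pos_a_voxels) - np.asarray(pos_b_voxels))
    return bool(np.all(diff <= 0.5))


# Example: a 10.3 mm diameter against a 10.0 mm reference on a 0.5 mm scanner passes.
assert measurement_within_scanner_resolution(10.3, 10.0, 0.5)
assert crosshair_within_half_voxel([120.2, 87.0, 45.4], [120.5, 87.3, 45.1])
```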

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Validation (Clinical Studies): 154 anonymized clinical studies.
    • Sample Size for Web-browser Component Validation: 42 anonymized clinical data sets.
    • Data Provenance: The anonymized clinical studies were used for validation in test labs and hospitals in the United Kingdom, the United States, Ireland, and Belgium. The document doesn't specify whether the data originated from these countries or elsewhere, only where the validation was performed. The data appears to be retrospective, since it consists of pre-existing "anonymized clinical studies."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Main Validation: 29 medical professionals participated. Their specific qualifications (e.g., years of experience, subspecialty) are not explicitly stated, beyond being "medical professionals."
    • Web-browser Component Validation: 11 medical professionals participated. Their specific qualifications are also not explicitly stated.
    • Stereoscopic 3D Viewing Concept Tests: 6 medical professionals from 4 different hospitals in Belgium and the Netherlands. Their specific qualifications are not explicitly stated.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe a formal adjudication method (such as 2+1 or 3+1 consensus) for the test sets. Instead, it mentions that a "scoring scale was implemented and acceptance criteria established" for the main validation, that the web-browser "examiners focused on the usability of features and functionalities," and that for stereoscopic 3D viewing the tests by medical professionals simply "showed" and "concluded" the reported results. This suggests individual assessments against established scoring scales, or a consensus-based approach, rather than a formal arbitration process.
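
    Purely as a hypothetical illustration of scoring against an acceptance threshold (the summary does not state how reader scores were aggregated), a minimal sketch follows; the 1-to-5 scale, the threshold, and the mean-score rule are all assumptions.

```python
# Hypothetical sketch: aggregate reader scores for one validation objective and
# compare the mean against an acceptance threshold. Scale and threshold are assumed.
from statistics import mean


def passes_acceptance(scores: list[int], min_mean: float = 4.0, scale_max: int = 5) -> bool:
    """Accept a validation objective if the mean reader score meets the threshold."""
    if not scores or any(s < 1 or s > scale_max for s in scores):
        raise ValueError("scores must lie on the 1..scale_max scale")
    return mean(scores) >= min_mean


# Example with hypothetical scores from five readers:
print(passes_acceptance([5, 4, 4, 5, 4]))  # True
```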

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed to measure the effect size of human reader improvement with versus without AI assistance.

    The validation involved side-by-side comparisons with predicate devices, where medical professionals evaluated whether the new device's functionality (e.g., centerline computation, stenosis measurement, bone removal, image quality) was "adequate" or allowed the user to "determine the amount of stenosis" comparable to the predicates. This is more of a non-inferiority or equivalence assessment against predicate functionality, rather than an explicit measure of human reader performance improvement with the new device (AI assistance, in this context) versus without it.

    The testing of stereoscopic 3D viewing concluded "no specific medical or clinical benefits to using the stereoscopic 3D view" over the "regular" 3D view, indicating no improvement for human readers in that specific aspect.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    Yes, standalone performance was implicitly tested, particularly for "measurement algorithms" and "crosshair positioning."

    • Measurement Accuracy: Regression testing assured "the different measurement algorithms still provide the same output as the predicates." Testers "made identical measurements of diameters, areas and volumes and compared those against reference values."
    • Crosshair Positioning: Tests verified "whether viewports link to the same location in every dataset."

    While human testers initiated these measurements and observations, the assessment was of the algorithm's output against a defined reference or predicate, rather than human diagnostic performance with the algorithm's output.
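
    As an illustration of the regression-style comparison described above, the following minimal sketch checks device measurements against reference or predicate values within a per-case tolerance; the record structure, tolerance policy, and example numbers are assumptions rather than the actual protocol.

```python
# Hedged sketch of a regression comparison of measurement outputs against
# reference values, in the spirit of the testing described above.
from dataclasses import dataclass


@dataclass
class MeasurementCase:
    case_id: str
    device_value: float      # e.g. diameter, area, or volume from the new device
    reference_value: float   # reference value or predicate-device output
    tolerance: float         # allowed absolute deviation (e.g. scanner resolution)


def regression_report(cases: list[MeasurementCase]) -> dict[str, bool]:
    """Return a per-case pass/fail map; a case passes if it stays within its tolerance."""
    return {c.case_id: abs(c.device_value - c.reference_value) <= c.tolerance for c in cases}


cases = [
    MeasurementCase("vessel_diameter_01", 4.1, 4.0, 0.5),
    MeasurementCase("lesion_volume_07", 1530.0, 1495.0, 50.0),
]
report = regression_report(cases)
print(report, "all passed:", all(report.values()))
```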

    7. Type of Ground Truth Used

    • Reference Values / Predicate Comparisons: For measurement accuracy tests (diameters, areas, volumes), the ground truth was established by "reference values" and comparison against "equivalent" measurements from predicate devices (Voxar 3D Enterprise for vessel measurements, Mirage 5.5 for semi-automatic region growing volumes).
    • Expert Consensus / Qualitative Assessment: For many validation objectives (e.g., adequacy of centerline tracing, vessel visualization, bone removal, image quality, usability of volume measurements), the ground truth was essentially a qualitative assessment by medical professionals against established scoring scales and side-by-side comparisons with predicate devices. For stereoscopic 3D viewing, the conclusion rests on concept tests in which 6 medical professionals were asked to score the viewing modes.
    • Technical Specifications: For crosshair positioning, the ground truth was defined by technical specifications (half a voxel).

    8. Sample Size for the Training Set

    The document does not provide any information about the sample size used for a training set. The testing described focuses on verification and validation of specific functionalities in comparison to predicates or against predefined criteria. This product appears to be a PACS accessory with advanced viewing and manipulation tools, and while it involves "automated" features (like bone removal), the process description suggests a rule-based or algorithmic approach rather than a machine learning model that would typically require a distinct training set. If machine learning was used, the training data information is absent.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied for machine learning, there is no information on how its ground truth would have been established. The ground truth described in the document pertains to the validation and verification sets.
