510(k) Data Aggregation

Search Results: 3 results found

    K Number: K161061
    Date Cleared: 2016-06-22 (68 days)
    Regulation Number: 892.2050
    Device Name: IMPAX Volume Viewing 4.0

    Intended Use

    The Volume Viewing software is a visualization package for PACS workstations. It is intended to support the medical professional in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on a second CT dataset or on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumour measurements, and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI), has a dedicated tool set for lung lesion segmentation, quantification and follow-up of lesions selected by the user and provides tools to define and edit paths such as centerlines through structures, which may be used to analyze cross-sections of structures, or to provide flythrough visualizations rendered along such centerline.

    Caution: Web-browser access is available for review purposes. Images accessed through a web-browser (via a mobile device or by other means) should not be used to create a diagnosis, treatment plan, or other decision that may affect patient care.

    Device Description

    IMPAX Volume Viewing is a general purpose medical image processing tool for the reading and analysis of 3D image datasets. It is also intended for the registration of anatomical (CT) image data onto functional (MR) data to facilitate the comparison of various lesions. Volume and distance measurements facilitate the quantification of lesions and the analysis of both soft and hard tissue.

    A variant of the software also provides web-browser access for review purposes. Images accessed through a web-browser (via a mobile device or by other means) should not be used to create a diagnosis, treatment plan, or other decision that may affect patient care.

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version adds a dedicated tool set for lesion management and flythrough visualizations rendered along a centerline for endoscopic view of vessels and airways.

    Principles of operation and technological characteristics of the new and predicate devices are the same.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device meets those criteria for the IMPAX Volume Viewing 4.0 system.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Feature/Aspect: Measurement accuracy (diameters, areas, volumes)
    Acceptance Criteria: Within +/- scanner resolution (for dataset uncertainty).
    Reported Performance: Results met the established acceptance criterion of +/- scanner resolution.

    Feature/Aspect: Crosshair position checks (viewport linking)
    Acceptance Criteria: Within half a voxel (allowing for rounding differences across graphics video cards).
    Reported Performance: Results met the established acceptance criterion of half a voxel.

    Feature/Aspect: New functionality evaluation (endoscopic viewing, lung nodule segmentation, lesion management module)
    Acceptance Criteria: Clinical utility and performance deemed adequate by experts; substantially equivalent to predicate devices.
    Reported Performance: Endoscopic viewing of tubular structures (vessels and airways) was found substantially equivalent to the predicate iNtuition 4.4.11. The accuracy of lung nodule segmentation and the capabilities of the lesion management module were found adequate to segment lesions, analyze them, and follow up on their growth over time.

    Feature/Aspect: General performance, safety, usability, security
    Acceptance Criteria: Meets requirements established by in-house SOPs conforming to various ISO and IEC standards.
    Reported Performance: Verification and validation testing confirmed the device meets performance, safety, usability, and security requirements. Specific metrics are not detailed beyond "met requirements" but are implicitly covered by the listed standards: ISO 13485, ISO 14971, ISO 27001, IEC 62366, IEC 62304.
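
    To make the first two criteria concrete, here is a minimal sketch, not Agfa's actual test harness, of a tolerance check that passes when every measured value lies within +/- scanner resolution of its reference; all names and numbers are hypothetical:

```python
import numpy as np

def within_tolerance(measured, reference, tolerance):
    """Hypothetical acceptance check: each measured diameter, area, or
    volume must lie within +/- tolerance (here, scanner resolution in
    the measurement's own units) of its reference value."""
    errors = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    return bool((errors <= tolerance).all()), errors

# Example with invented numbers: diameters (mm) on a 0.7 mm resolution scan.
ok, errors = within_tolerance([4.1, 12.6, 30.2], [4.0, 12.4, 30.0], tolerance=0.7)
print(ok)  # True; the largest error is 0.2 mm
```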

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document states that "representative clinical datasets" were selected and loaded by the radiologists. The exact number of cases/datasets in the test set is not specified.
    • Data Provenance: The radiologists were invited to Agfa's facilities, implying the clinical datasets were likely from retrospective cases, although this is not explicitly stated. The country of origin of the data is not specified but given the location of the testing (Belgium), it is plausible the data also originated from Belgium or nearby European countries.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: 3 radiologists
    • Qualifications of Experts: The document states "3 radiologists from several Belgian hospitals." Specific years of experience or subspecialty certification (e.g., neuroradiologist, interventional radiologist) are not provided.

    4. Adjudication method for the test set:

    • The document states that the radiologists "executed typical workflows and scored the features under investigation. A scoring scale was implemented and acceptance criteria established." This implies a consensus-based scoring or independent scoring followed by a determination of whether acceptance criteria were met. An explicit adjudication method like "2+1" or "3+1" is not mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No, an MRMC comparative effectiveness study was not done. The study involved radiologists evaluating the functionality of the device, rather than comparing human reader performance with and without AI assistance on specific diagnostic tasks. The focus was on demonstrating the functionality and subjective adequacy of the new features and equivalence to predicate devices, not on improving human reader performance. This device is primarily a visualization and processing tool, not an AI-powered diagnostic aid that would typically be evaluated in an MRMC study for reader improvement.

    6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:

    • Yes, in part. The "Verification" section describes tests on the algorithms themselves:
      • Regression testing on measurement algorithms to ensure they provide the same output as the previous version (a sketch of this idea follows this list).
      • Crosshair position checks to verify viewport linking.
      • The "accuracy of the lung nodule segmentation" was scored, suggesting a standalone performance aspect of this algorithm was evaluated against some reference.
    • The "Validation" section involved human readers evaluating the new functionality, but the underlying measurements and segmentations are performed by the algorithms. So, the algorithms' standalone performance was assessed for accuracy and functionality, and then confirmed by human interaction during validation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For Measurement Accuracy: "Reference values" were used, which implies a pre-established, highly accurate measurement for the specific datasets (e.g., from a precisely measured phantom or a highly accurate prior measurement). The exact nature of these reference values is not explicitly stated.
    • For Crosshair Position Checks: The ground truth was based on expected precise pixel/voxel alignment ("half a voxel").
    • For New Functionality (Endoscopic Viewing, Lung Nodule Segmentation, Lesion Management Module): The ground truth was expert evaluation/consensus by the 3 radiologists regarding the adequacy and equivalence of the functionality.

    8. The sample size for the training set:

    • The document does not provide any information about a training set. This device is presented more as an advanced image processing and visualization tool rather than a machine learning/AI diagnostic algorithm that typically requires a large training set. While some algorithms (like segmentation) may inherently involve learned parameters, no details on their training are given.

    9. How the ground truth for the training set was established:

    • As no information on a training set is provided, the method for establishing its ground truth is also not specified.

    K Number: K133135
    Date Cleared: 2014-03-07 (154 days)
    Regulation Number: 892.2050
    Device Name: IMPAX VOLUME VIEWING 3.0

    Intended Use

    IMPAX Volume Viewing software is a visualization package for PACS workstations. It is intended to support radiographer, medical imaging technician, radiologist and referring physician in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumor measurements, and other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Web-browser access is available for review purposes. It should not be used to arrive at a diagnosis, treatment plan, or other decision that may affect patient care.

    Device Description

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version includes automated removal of bone-like structures, stenosis measurement and web-browser access.

    Principles of operation and technological characteristics of the new and predicate devices are the same.
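
    The submission does not describe how the automated removal of bone-like structures works. Purely as an illustration of the general technique, a common baseline approach for CT is Hounsfield-unit thresholding followed by connected-component cleanup; the threshold and size cutoff below are arbitrary assumptions:

```python
import numpy as np
from scipy import ndimage

def bone_like_mask(ct_hu, threshold_hu=300, min_component_voxels=1000):
    """Illustrative mask of bone-like structures in a CT volume expressed
    in Hounsfield units: threshold, then keep only large connected
    components so small bright specks are not treated as bone."""
    mask = ct_hu > threshold_hu
    labels, n_components = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
    big_labels = np.flatnonzero(sizes >= min_component_voxels) + 1
    return np.isin(labels, big_labels)
```

    A viewer could then exclude the masked voxels from the rendering, for example by replacing them with an air-like value before ray casting.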

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes verification and validation testing, with acceptance criteria for various functionalities; the reported performance for every described test is that results "met acceptance criteria." Specific quantitative performance values are generally not provided; the emphasis is instead on meeting the established criteria for equivalence and usability.

    Functionality Tested: Measurement algorithms
    Acceptance Criteria: Identical output to the predicates (Volume Viewing 2.0, K111638, and Registration and Fusion, K080013); measurements within +/- scanner resolution.
    Reported Performance: Results met the established acceptance criteria.

    Functionality Tested: Crosshair positioning
    Acceptance Criteria: Within half a voxel (allowing for rounding differences across graphics video cards).
    Reported Performance: Results met the established acceptance criteria.

    Functionality Tested: Centerline computation and vessel visualization (contrast-filled vessels in CT angiography)
    Acceptance Criteria: Adequate tracing and visualization (side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate).
    Reported Performance: Results met acceptance criteria.

    Functionality Tested: Stenosis measurement
    Acceptance Criteria: User can determine the amount of stenosis (side-by-side comparison with the Voxar 3D predicate).
    Reported Performance: Results met acceptance criteria.

    Functionality Tested: Bone-like structure removal (CT angiography of thorax, abdomen, pelvis, upper/lower extremities)
    Acceptance Criteria: Adequate removal from view (side-by-side comparison with the Voxar predicate).
    Reported Performance: Results met acceptance criteria.

    Functionality Tested: Volume measurements (manual/semi-automated)
    Acceptance Criteria: User can perform measurements in a user-friendly and intuitive way.
    Reported Performance: Results met acceptance criteria.

    Functionality Tested: Image quality (2D and 3D rendering)
    Acceptance Criteria: Adequate (side-by-side comparison with the IMPAX Volume Viewing 2.2 predicate).
    Reported Performance: Results met acceptance criteria.

    Functionality Tested: Web-browser component (XERO Clinical Applications 1.0) for non-diagnostic review
    Acceptance Criteria: Usability of features and functionalities for non-diagnostic review of CT and MR data sets using 3D and multiplanar reconstructions.
    Reported Performance: Validation successfully completed; scored as acceptable.

    Functionality Tested: Stereoscopic 3D viewing
    Acceptance Criteria: Equivalent to "regular" 3D viewing, with no distinct medical or clinical benefit over "regular" 3D viewing.
    Reported Performance: Concluded that both viewing methods are equivalent.
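
    A hedged sketch of what the half-voxel crosshair check could look like: map one world-space crosshair position into voxel coordinates through each render path's voxel-to-world affine and require agreement within 0.5 voxel per axis. The function and the 4x4 affine convention are assumptions, not details from the submission:

```python
import numpy as np

def crosshair_within_half_voxel(world_point_mm, voxel_to_world_affines):
    """Hypothetical check: project one world-space crosshair position into
    voxel coordinates via the inverse of each render path's 4x4
    voxel-to-world affine, then require the linked positions to agree
    within half a voxel on every axis."""
    positions = []
    for affine in voxel_to_world_affines:
        homogeneous = np.append(np.asarray(world_point_mm, dtype=float), 1.0)
        positions.append((np.linalg.inv(affine) @ homogeneous)[:3])
    positions = np.array(positions)
    spread = positions.max(axis=0) - positions.min(axis=0)
    return bool((spread <= 0.5).all())
```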

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Validation (Clinical Studies): 154 anonymized clinical studies.
    • Sample Size for Web-browser Component Validation: 42 anonymized clinical data sets.
    • Data Provenance: The anonymized clinical studies were used for validation in test labs and hospitals in the United Kingdom, the United States, Ireland, and Belgium. The document doesn't specify if the data originated from these countries or elsewhere, only where the validation was performed. The data is retrospective, as it consists of "anonymized clinical studies."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Main Validation: 29 medical professionals participated. Their specific qualifications (e.g., years of experience, subspecialty) are not explicitly stated, beyond being "medical professionals."
    • Web-browser Component Validation: 11 medical professionals participated. Their specific qualifications are also not explicitly stated.
    • Stereoscopic 3D Viewing Concept Tests: 6 medical professionals from 4 different hospitals in Belgium and the Netherlands. Their specific qualifications are not explicitly stated.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe a formal adjudication method (like 2+1 or 3+1 consensus) for the test sets. Instead, it mentions that a "scoring scale was implemented and acceptance criteria established" for the main validation. For the web-browser component, "examiners focused on the usability of features and functionalities." For the stereoscopic 3D viewing, "Tests by medical professionals showed" and "The tests concluded." This suggests a consensus-based approach or individual assessments against established scoring scales rather than a formal arbitration process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study explicitly designed to measure the effect size of human readers improving with AI vs without AI assistance was done.

    The validation involved side-by-side comparisons with predicate devices, where medical professionals evaluated whether the new device's functionality (e.g., centerline computation, stenosis measurement, bone removal, image quality) was "adequate" or allowed the user to "determine the amount of stenosis" comparable to the predicates. This is more of a non-inferiority or equivalence assessment against predicate functionality, rather than an explicit measure of human reader performance improvement with the new device (AI assistance, in this context) versus without it.

    The testing of stereoscopic 3D viewing concluded "no specific medical or clinical benefits to using the stereoscopic 3D view" over the "regular" 3D view, indicating no improvement for human readers in that specific aspect.

    6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study

    Yes, standalone performance was implicitly tested, particularly for "measurement algorithms" and "crosshair positioning."

    • Measurement Accuracy: Regression testing assured "the different measurement algorithms still provide the same output as the predicates." Testers "made identical measurements of diameters, areas and volumes and compared those against reference values."
    • Crosshair Positioning: Tests verified "whether viewports link to the same location in every dataset."

    While human testers initiated these measurements and observations, the assessment was of the algorithm's output against a defined reference or predicate, rather than human diagnostic performance with the algorithm's output.
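
    Measurements "within +/- scanner resolution" ultimately reduce to converting voxel indices into physical distances using the dataset's per-axis spacing. As a minimal, purely illustrative sketch (the function name and spacing values are invented, not taken from the submission):

```python
import numpy as np

def voxel_distance_mm(index_a, index_b, spacing_mm):
    """Physical distance in mm between two voxel indices, given per-axis
    spacing (e.g. DICOM PixelSpacing plus the spacing between slices)."""
    delta = (np.asarray(index_a) - np.asarray(index_b)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(delta))

# Example: 0.7 x 0.7 mm in-plane resolution, 1.0 mm between slices.
print(round(voxel_distance_mm((10, 20, 5), (10, 60, 5), (0.7, 0.7, 1.0)), 3))  # 28.0
```

    Any systematic error in this conversion shows up directly in diameter, area, and volume measurements, which is why the acceptance tolerance is tied to scanner resolution.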

    7. Type of Ground Truth Used

    • Reference Values / Predicate Comparisons: For measurement accuracy tests (diameters, areas, volumes), the ground truth was established by "reference values" and comparison against "equivalent" measurements from predicate devices (Voxar 3D Enterprise for vessel measurements, Mirage 5.5 for semi-automatic region growing volumes).
    • Expert Consensus/Qualitative Assessment: For many validation objectives (e.g., adequacy of centerline tracing, vessel visualization, bone removal, image quality, usability of volume measurements), the ground truth was essentially a qualitative assessment by medical professionals against established scoring scales and side-by-side comparisons with predicate devices. For stereoscopic 3D viewing, "Concept tests involving 6 medical professionals... were asked to score" and "The tests concluded."
    • Technical Specifications: For crosshair positioning, the ground truth was defined by technical specifications (half a voxel).

    8. Sample Size for the Training Set

    The document does not provide any information about the sample size used for a training set. The testing described focuses on verification and validation of specific functionalities in comparison to predicates or against predefined criteria. This product appears to be a PACS accessory with advanced viewing and manipulation tools, and while it involves "automated" features (like bone removal), the process description suggests a rule-based or algorithmic approach rather than a machine learning model that would typically require a distinct training set. If machine learning was used, the training data information is absent.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or implied for machine learning, there is no information on how its ground truth would have been established. The ground truth described in the document pertains to the validation and verification sets.


    K Number: K111638
    Date Cleared: 2011-06-24 (11 days)
    Regulation Number: 892.2050
    Device Name: IMPAX VOLUME VIEWING

    Intended Use

    Impax Volume Viewing is a visualization package for PACS workstations. It is intended to support radiographer, medical imaging technician, radiologist and referring physician in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumor measurements, and other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Device Description

    Impax Volume Viewing software is a visualization package for PACS workstations. It is intended to support radiographer, medical imaging technician, radiologist and referring physician in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is a general purpose digital medical image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumor measurements, and other analysis and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI).

    Impax Volume Viewing is a software accessory to Agfa's Impax Picture Archiving and Communication System.

    It is a tool for conveniently viewing and manipulating cross-sectional image series for display in any orientation and slice thickness. A second series can be registered or fused to the first automatically, manually, or with user-defined landmarks. Segmentation of blood vessels and air-filled structures facilitates the visualization of vessel features. Color maps, subtraction views, multiple screen layouts, and tools for measurement, calculations, and annotations are provided.
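
    The summary does not disclose the registration algorithm. Purely as one hedged illustration, an automatic rigid registration of a second series onto the first could be set up with the open-source SimpleITK library roughly as follows; every parameter choice here is a generic default, not Agfa's:

```python
import SimpleITK as sitk

def rigid_register(fixed_path, moving_path):
    """Illustrative automatic rigid registration of a moving series (e.g. MR)
    onto a fixed series (e.g. CT) using Mattes mutual information."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))

    transform = reg.Execute(fixed, moving)
    # Resample the moving series onto the fixed grid for fused display.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```

    The manual and landmark-driven modes mentioned above would replace the automatic optimizer with a user-supplied transform or a landmark-based initializer such as sitk.LandmarkBasedTransformInitializer.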

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study information for the Impax Volume Viewing device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The provided 510(k) summary does not explicitly define quantitative acceptance criteria for the Impax Volume Viewing software. Instead, it relies on demonstrating substantial equivalence to predicate devices through functional and technological comparisons, along with verification and validation testing. The reported "performance" is that the device "meets performance, measurement and usability requirements" and conforms to certain standards.

    Therefore, the table will reflect the general performance claims rather than specific numerical acceptance criteria.

    Acceptance Criteria Category: Functional equivalence
    Implied Acceptance Criteria: Device functions similarly to the predicates.
    Reported Performance: "Similar to the predicate devices." "Principles of operation and technological characteristics... are the same." "Has an Indications For Use Statement largely similar to the statements for the two predicates."

    Acceptance Criteria Category: Intended use equivalence
    Implied Acceptance Criteria: Device has the same intended use as the predicates.
    Reported Performance: "Intended uses... are the same."

    Acceptance Criteria Category: Technological characteristics
    Implied Acceptance Criteria: Software processes 3D image data and offers viewing/manipulation, registration, segmentation, measurement, etc.
    Reported Performance: "Tool for conveniently viewing and manipulating cross-sectional image series"; "second series can be registered or fused"; "Segmentation of blood vessels and air-filled structures"; "Color maps, subtraction views, multiple screen layouts and tools for measurement, calculations and annotations are provided."

    Acceptance Criteria Category: Performance
    Implied Acceptance Criteria: Meets general performance, measurement, and usability requirements.
    Reported Performance: "Verification and validation testing confirm the device meets performance, measurement and usability requirements."

    Acceptance Criteria Category: Standards conformance
    Implied Acceptance Criteria: Conforms to specified medical device and quality management standards.
    Reported Performance: Conforms to EN 12435:2006, ISO 14971:2007, and ISO 13485:2003.

    Acceptance Criteria Category: Safety and effectiveness
    Implied Acceptance Criteria: Differences from the predicates do not alter therapeutic/diagnostic effects.
    Reported Performance: "Differences in the new device and the predicates do not alter their intended therapeutic/diagnostic effects."

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size for Test Set: Not specified. The document states "Verification and validation testing" was performed but does not detail the size of the dataset used for these tests.
    • Data Provenance: Not specified. It's not mentioned if the data was retrospective or prospective, or its country of origin.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Number of Experts: Not specified. Given that no clinical trials were performed, it's unlikely that external experts were used for establishing a "ground truth" in a clinical context for the test set as typically understood in a performance study. The testing seems more focused on engineering verification and validation.
    • Qualifications of Experts: Not specified.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not specified. Without explicit information on expert review or clinical studies, there's no mention of an adjudication method like 2+1 or 3+1.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • MRMC Study Done: No. The document explicitly states: "No clinical trials were performed in the development of the device." An MRMC study would fall under clinical trials.
    • Effect Size of Human Readers Improvement with AI vs. without AI assistance: Not applicable, as no MRMC study was conducted.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    • Standalone Study Done: Yes, implicitly. The "Verification and validation testing" would primarily assess the software's ability to correctly perform its functions (e.g., rendering, registration accuracy, measurement accuracy, segmentation capabilities) without necessarily involving human interpretation as a primary performance metric beyond usability. However, the document doesn't provide specific standalone performance metrics.

    7. Type of Ground Truth Used:

    • Type of Ground Truth: For the "verification and validation testing," the ground truth would likely be established through:
      • Engineering specifications and requirements: The software's output would be compared against predefined correct behaviors, mathematical principles, and known accurate measurements.
      • Known good datasets: Using datasets with established characteristics to confirm the software's processing and display accuracy.
      • Comparative analysis with predicate devices: Ensuring the new device's output is consistent with the established performance of the predicate devices.
      • Given the nature of the device (a visualization and manipulation tool), "pathology" or "outcomes data" are unlikely to serve as the primary ground truth for its technical performance, as opposed to its diagnostic utility.

    8. Sample Size for the Training Set:

    • Sample Size for Training Set: Not applicable. This device is described as a "visualization package" and a "general purpose digital medical image processing tool." It is not leveraging machine learning or AI models in the sense that requires a "training set" for model development. The software's functionality (rendering, registration, segmentation, measurement) is based on deterministic algorithms and user interaction rather than learned patterns from a training dataset.

    9. How the Ground Truth for the Training Set Was Established:

    • How Ground Truth for Training Set Was Established: Not applicable, as there is no mention of a training set for machine learning.
