
510(k) Data Aggregation

    K Number: K162955
    Date Cleared: 2016-12-19 (56 days)
    Regulation Number: 892.2050
    Reference Devices: K151353, K123920, K113620, K160315, K150665, K023785

    Intended Use

    Multi-Modality Tumor Tracking (MMTT) application is a post-processing software application used to display, process, analyze, quantify and manipulate anatomical and functional images, for CT, MR, PET/CT and SPECT/CT images and/or multiple time-points. The MMTT application is intended for use on tumors which are known/confirmed to be pathologically diagnosed cancer. The results obtained may be used as a tool by clinicians in determining the diagnosis of patient disease conditions in various organs, tissues, and other anatomical structures.

    Device Description

    Philips Medical Systems' Multi-Modality Tumor Tracking (MMTT) application is post-processing software. It is a non-organ-specific, multi-modality application which is intended to function as an advanced visualization application. The MMTT application is intended for displaying, processing, analyzing, quantifying and manipulating anatomical and functional images from multi-modality CT, MR, PET/CT and SPECT/CT scans.

    The Multi-Modality Tumor Tracking (MMTT) application allows the user to view imaging, perform segmentation and measurements, and provides quantitative and characterizing information on oncology lesions, such as solid tumors and lymph nodes, for a single study or over the time course of several studies (multiple time-points). Based on the measurements, the MMTT application provides an automatic tool which may be used by clinicians in diagnosis, management and surveillance of solid tumor and lymph node conditions in various organs, tissues, and other anatomical structures, based on different oncology response criteria.
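    The oncology response criteria the summary refers to are not named, but the most widely used for solid tumors is RECIST 1.1, which classifies response from changes in the sum of target-lesion diameters. As an illustration of the kind of logic such a tool automates (a generic RECIST 1.1 sketch, not Philips' implementation):

    ```python
    def recist_response(baseline_sum_mm: float, nadir_sum_mm: float,
                        current_sum_mm: float) -> str:
        """Classify tumor response from sums of target-lesion diameters (RECIST 1.1).

        baseline_sum_mm: sum at baseline; nadir_sum_mm: smallest sum observed so far;
        current_sum_mm: sum at the current time-point. Simplified: ignores
        non-target lesions and new-lesion rules.
        """
        if current_sum_mm == 0:
            return "CR"  # complete response: all target lesions resolved
        increase = current_sum_mm - nadir_sum_mm
        # Progressive disease: >=20% increase over nadir AND >=5 mm absolute increase
        if increase >= 0.2 * nadir_sum_mm and increase >= 5:
            return "PD"
        # Partial response: >=30% decrease from baseline
        if (baseline_sum_mm - current_sum_mm) >= 0.3 * baseline_sum_mm:
            return "PR"
        return "SD"  # stable disease otherwise
    ```

    In a longitudinal (multiple time-point) workflow like MMTT's, the nadir is tracked across all prior studies, which is why the classification needs both the baseline and the nadir sums.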

    AI/ML Overview

    The provided text does not contain detailed information about a study that proves the device meets specific acceptance criteria, nor does it include a table of acceptance criteria and reported device performance.

    The submission is a 510(k) premarket notification for the "Multi-Modality Tumor Tracking (MMTT) application." For 510(k) submissions, the primary goal is to demonstrate substantial equivalence to a legally marketed predicate device, rather than proving a device meets specific, pre-defined performance acceptance criteria through a rigorous clinical or non-clinical study that would be typical for a PMA (Premarket Approval) application.

    Here's what can be extracted and inferred from the document regarding the device's validation:

    Key Information from the Document:

    • Study Type: No clinical studies were required or performed to support equivalence. The validation was based on non-clinical performance testing, specifically "Verification and Validation (V&V) activities."
    • Demonstration of Compliance: The V&V tests were intended to demonstrate compliance with international and FDA-recognized consensus standards and FDA guidance documents, and that the device "Meets the acceptance criteria and is adequate for its intended use and specifications."
    • Acceptance Criteria (Implied): While no quantitative table is provided, the acceptance criteria are implicitly tied to:
      • Compliance with standards: ISO 14971, IEC 62304, IEC 62366-1, DICOM PS 3.1-3.18.
      • Compliance with FDA guidance documents for software in medical devices.
      • Addressing intended use, technological characteristics claims, requirement specifications, and risk management results.
      • Functionality requirements and performance claims as described in the device description (e.g., longitudinal follow-up, multi-modality support, automated/manual registration, segmentation, measurement calculations, support for oncology response criteria, SUV calculations).
    • Performance (Implied): "Testing performed demonstrated the Multi-Modality Tumor Tracking (MMTT) meets all defined functionality requirements and performance claims." Specific quantitative performance metrics are not given.
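    Of the performance claims listed, the SUV calculation has a standard, publicly documented form (body-weight-normalized SUV). A minimal sketch of that formula, as an illustration rather than the vendor's implementation:

    ```python
    def suv_bw(activity_bq_per_ml: float, injected_dose_bq: float,
               body_weight_g: float) -> float:
        """Body-weight-normalized standardized uptake value (SUVbw).

        SUVbw = tissue activity concentration / (injected dose / body weight),
        assuming the activity is already decay-corrected to scan time and
        that 1 g of tissue occupies ~1 mL.
        """
        return activity_bq_per_ml / (injected_dose_bq / body_weight_g)
    ```

    A voxel whose activity concentration equals the average concentration of the whole injected dose over the body therefore has SUVbw = 1.0.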

    Information NOT present in the document:

    The following information, which would typically be found in a detailed study report proving acceptance criteria, is not available in this 510(k) summary:

    1. A table of acceptance criteria and the reported device performance: This document states the device "Meets the acceptance criteria and is adequate for its intended use and specifications," but does not list these criteria or the test results.
    2. Sample sizes used for the test set and the data provenance: No details on the number of images, patients, or data characteristics used for non-clinical testing.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience): Since it was non-clinical testing, there's no mention of expert involvement in establishing ground truth for a test set.
    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable as no expert-adjudicated clinical test set is described.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was performed, as no clinical studies were undertaken.
    6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done: The V&V activities would have included testing the software's functionality, which could be considered standalone performance testing, but specific metrics are not provided. The device is a "post processing software application" used "by clinicians," implying a human-in-the-loop tool rather than a fully autonomous AI diagnostic device.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not detailed for the non-clinical V&V testing. For the intended use, the device is for "tumors which are known/confirmed to be pathologically diagnosed cancer," suggesting that the "ground truth" for the intended use context is pathological diagnosis. However, this is not the ground truth for the V&V testing itself.
    8. The sample size for the training set: Not applicable; this is a 510(k) for a software application, not specifically an AI/ML product where a training set size would be relevant for model development. The document does not describe any machine learning model training.
    9. How the ground truth for the training set was established: Not applicable for the same reason as above.

    In summary, this 510(k) submission relies on a demonstration of substantial equivalence to existing predicate devices and internal non-clinical verification and validation testing, rather than a clinical study with specific, quantifiable performance metrics against an established ground truth.


    K Number: K142316
    Device Name: IMPAX Agility
    Date Cleared: 2015-01-06 (140 days)
    Regulation Number: 892.2050
    Reference Devices: K111945, K133135, K123920

    Intended Use

    IMPAX Agility is a Picture Archiving and Communications System (PACS). It provides an interface for the acquisition, display, digital processing, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for diagnostic purposes within the system and across computer networks. IMPAX Agility is intended to be used by trained healthcare professionals including, but not limited to, physicians, radiologists, orthopaedic surgeons, cardiologists, mammographers, technologists, and clinicians for diagnosis and treatment planning using DICOM-compliant medical images and other healthcare data.

    MPR, MIP and 3D rendering functionality allows the user to view image data from perspectives different from that in which it was acquired. Other digital image processing functionality such as multi-scale window leveling and image registration can be used to enhance image viewing. Automatic spine labeling provides the capability to semi-automatically label vertebrae or discs.

    As a comprehensive imaging suite, IMPAX Agility integrates with servers, archives, Radiology Information Systems (RIS), Hospital Information Systems (HIS), reporting, and 3rd-party applications for customer-specific workflows.

    Lossy compressed mammography images and digitized film images should not be used for primary image interpretation. Uncompressed or non-lossy compressed "for presentation" images may be used for diagnosis or screening on monitors that are FDA-cleared for mammographic use.
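    The window leveling mentioned in the intended use maps an intensity window to display gray levels. A generic linear window/level sketch (a simplified version of the DICOM VOI LUT transformation, not Agfa's implementation):

    ```python
    import numpy as np

    def window_level(image: np.ndarray, center: float, width: float) -> np.ndarray:
        """Map pixel intensities to 8-bit display values using a window/level.

        Intensities below center - width/2 clip to 0, intensities above
        center + width/2 clip to 255, with a linear ramp in between.
        """
        lo = center - width / 2.0
        scaled = (image.astype(np.float64) - lo) / width * 255.0
        return np.clip(scaled, 0, 255).astype(np.uint8)
    ```

    For example, a CT soft-tissue window of center 40 HU and width 400 HU sends air (-1000 HU) to black and dense bone (3000 HU) to white, spending the full gray range on the tissue of interest.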

    Device Description

    Agfa's IMPAX Agility system is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the acquisition, digital processing, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for diagnostic purposes within the system and across computer networks.

    The new device is substantially equivalent to the predicate devices (K111945, K133135, & K123920). It is a comprehensive PACS system that allows the user to view and manipulate 3D image data sets. The new device includes some of the clinical tools of the predicate devices, specifically the functionality to perform image registration and automatic spine labeling.

    The image registration functionality allows comparison studies to be registered with active study data to align them for reading. Registration only works for volumetric CT and MR data.

    Segmentation of volumetric datasets allows the automatic removal of bones and the CT table. Bone and table removal is only available for CT datasets. Users can also manually define parts of the volume which should be removed, as well as highlight certain structures in volumes.
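    Automatic bone removal of the kind described is commonly implemented (in general, not necessarily in IMPAX Agility) by thresholding Hounsfield units, since cortical bone is typically above roughly 300 HU; this is also why the feature is CT-only, as MR intensities have no such calibrated scale. A simplified numpy sketch:

    ```python
    import numpy as np

    def remove_bones(volume_hu: np.ndarray, bone_threshold_hu: float = 300.0,
                     fill_value: float = -1000.0) -> np.ndarray:
        """Return a copy of a CT volume with bright bone voxels blanked out.

        volume_hu: 3D array of Hounsfield units. Voxels at or above the
        threshold (typical cortical bone) are replaced with air (-1000 HU).
        Real PACS implementations add morphological cleanup and
        connected-component analysis to avoid removing contrast-filled vessels.
        """
        out = volume_hu.copy()
        out[out >= bone_threshold_hu] = fill_value
        return out
    ```

    The same masking mechanism supports the manual tools described: a user-drawn region simply defines the boolean mask instead of the threshold.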

    Automatic spine labeling tools provide the ability to label the vertebrae or the intervertebral discs of the spine. Automatic spine labeling automatically calculates the position of the vertebrae or discs after the user selects and labels an initial starting point. The user is required to confirm the automatic placement of the labels.

    Principles of operation and technological characteristics of the new and predicate devices are the same. There is no change to the intended use of the device vs. the predicate devices. Laboratory data, stability and performance assessments, usability tests, and functionality evaluations conducted with qualified radiologists confirm that performance is equivalent to the predicates.

    AI/ML Overview

    The provided document is a 510(k) summary for the IMPAX Agility Picture Archiving and Communications System (PACS). This document details the product's features and its substantial equivalence to predicate devices, but it does not contain the specific detailed acceptance criteria or a comprehensive study report with quantitative performance metrics for the new features (automatic spine labeling, segmentation, image registration).

    The document focuses on demonstrating that the new features are substantially equivalent to those of the predicate devices and that the overall device performs as expected for a PACS system.

    Here's a breakdown of the information that is available related to your request:


    1. Table of acceptance criteria and the reported device performance

    The document mentions that "All results met acceptance criteria" for the tests conducted. However, the specific quantitative acceptance criteria are not explicitly detailed in the provided text. The performance is reported qualitatively as meeting these unspecified criteria.

    Feature Area | Acceptance Criteria (not explicitly stated in document) | Reported Device Performance
    Segmentation Accuracy | Implied: equivalent to predicate K133135 | All results met acceptance criteria.
    Automatic Spine Labeling | Implied: accurate placement, user confirmation ability | All results met acceptance criteria.
    3D Registration | Implied: linked viewports, aligned data, linked navigation | All results met acceptance criteria.

    2. Sample size used for the test set and the data provenance

    • Segmentation: Not explicitly stated. The algorithm was reused from a predicate device (K133135), and "a simple regression test to confirm the algorithm was integrated correctly" was performed. No sample size for images or cases is given for this regression test.
    • Automatic Spine Labeling and 3D Registration: Not explicitly stated. The testing involved "anonymized studies" but the number of studies or images is not provided.
    • Data Provenance: The studies used for testing were "anonymized studies" and "Laboratory data." The country of origin for the data is not specified beyond "Agfa's testing lab in Belgium" for the spine labeling and 3D registration tests. The data appears to be retrospective due to the use of "anonymized studies" for validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Segmentation: Regression testing for bone removal was performed by "an Agfa HealthCare employee who is a qualified medical professional." The number of professionals is not specified, but the wording implies one.
    • Automatic Spine Labeling and 3D Registration: "Validation was carried out by three medical professionals at Agfa's testing lab in Belgium." Their specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed beyond "medical professionals."

    4. Adjudication method for the test set

    The document does not describe a formal adjudication method (like 2+1 or 3+1 consensus). The testing for spine labeling and 3D registration involved three medical professionals, but it doesn't specify if their results were adjudicated in case of discrepancies. The segmentation validation mentions a single "qualified medical professional" performing the regression test, implying no multi-reader adjudication for that specific test.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The studies mentioned are primarily focused on validating the functionality of the new features (image registration, segmentation, automatic spine labeling) themselves rather than physician performance improvement.


    6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    Yes, standalone performance was assessed for the new features.

    • Segmentation: The "accuracy of segmentation bone removal" was compared to the predicate device. This implies an evaluation of the algorithm's output.
    • Automatic Spine Labeling: The "accuracy of the semi-automatically placed spine labels" was evaluated. This is a standalone assessment of the algorithmic output, noting the user's requirement to confirm.
    • 3D Registration: The evaluation focused on the technical performance of the registration (e.g., whether viewports link and data aligns), which is a standalone algorithm assessment.

    7. The type of ground truth used

    The ground truth implicitly used for the validation of the new features appears to be:

    • Segmentation Accuracy: The performance of the predicate device's (K133135) segmentation algorithm.
    • Automatic Spine Labeling: The accurate placement as determined by the "medical professionals" performing the validation. This is expert consensus/judgment acting as the ground truth.
    • 3D Registration: The expected technical behavior as defined by the product requirements and judged by the "medical professionals."

    The document states "No animal or clinical studies were performed," "No patient treatment was provided or withheld," and refers to "laboratory data" and "anonymized studies." This suggests that ground truth was established by expert review of existing anonymized medical images, rather than pathology, follow-up outcomes data, or prospective clinical trials.


    8. The sample size for the training set

    The document does not provide any information about the sample size used for training the algorithms. It mostly refers to "reused" algorithms or functionality validation.


    9. How the ground truth for the training set was established

    The document does not provide any information on how the ground truth for the training set was established. It primarily focuses on the validation of the new features post-development.

