510(k) Data Aggregation

    K Number: K180192
    Manufacturer: Covidien
    Date Cleared: 2018-03-21 (56 days)
    Regulation Number: 892.2050
    Intended Use

    The Emprint™ Ablation Visualization is a stand-alone software product that allows physicians to visualize and compare CT imaging data. The display, annotation, and volume rendering of medical images aids in ablation procedures conducted using Emprint™ ablation systems. The software is not intended for diagnosis.

    Device Description

    The Emprint™ Ablation Visualization Application is a software product that achieves its medical purpose without being part of the hardware of a medical device (SaMD). The visualization application is a DICOM image viewer that allows surgeons and interventionists to utilize a health care facility's PACS (Picture Archiving and Communications System) server (or other digital media transfer process) to view and interact with CT images. Using preprocedure, intra-procedure and post-procedure CT images, physicians can both plan and evaluate soft-tissue ablation procedures conducted with the Emprint™ Ablation System (K133821) and the Emprint™ SX Ablation Platform (K171358). The application is designed to streamline and enhance the procedure planning, execution and evaluation process; it is not required for the safe and effective conduct of an Emprint™ ablation procedure.

    Using the application's three ablation workflows (Liver, Lung and Kidney Ablation), physicians can prepare for or evaluate an Emprint™ system ablation procedure by viewing and annotating patient-specific anatomy. The application's Compare Workflow facilitates the comparison of images across patients, or the comparison of images from the same patient before and after an ablation procedure. Physicians can use the software to perform the following key workflow-based tasks:

    • Import standard DICOM images and render them in three dimensions
    • Select and view specific anatomical features (e.g., soft-tissue lesions, anatomical landmarks)
    • Measure and mark critical anatomical features / areas of interest
    • Overlay and position virtual images of the Emprint ablation antenna and the anticipated thermal ablation zone onto the medical image
    • Add textual annotations to images
    • Save annotated plans for future viewing
    • Export annotated plans for the patient's medical record or for use in a radiology or operating suite
    • View and compare any two of the imported DICOM datasets simultaneously

    The visualization application is designed for installation and use on Windows™-based computers (Windows™ 7 or 10) and is compatible with procedure plans that were generated with the predicate device (Emprint™ Procedure Planning Application, K142048).
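    The measure-and-mark task described above can be illustrated with a minimal sketch (this is not the product's actual implementation; the function name, spacing values, and landmark coordinates are all hypothetical). Given the per-axis voxel spacing of a CT series, the distance between two marked voxels is computed in millimetres:

```python
import numpy as np

def measure_mm(voxel_a, voxel_b, spacing_mm):
    """Euclidean distance in millimetres between two voxel indices.

    voxel_a, voxel_b: (z, y, x) integer indices into the CT volume.
    spacing_mm: per-axis voxel spacing in mm, e.g.
        (slice spacing, row spacing, column spacing).
    """
    # Scale each index by its axis spacing to get physical coordinates.
    a = np.asarray(voxel_a, dtype=float) * np.asarray(spacing_mm, dtype=float)
    b = np.asarray(voxel_b, dtype=float) * np.asarray(spacing_mm, dtype=float)
    return float(np.linalg.norm(a - b))

# Two landmarks 10 slices and 30 columns apart on a 2.5 x 0.7 x 0.7 mm grid.
d = measure_mm((0, 0, 0), (10, 0, 30), spacing_mm=(2.5, 0.7, 0.7))
```

    Because CT voxels are usually anisotropic (slice spacing differs from in-plane spacing), measuring in index space without this scaling would distort distances.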

    AI/ML Overview

    The provided text describes a software device, the "Emprint™ Ablation Visualization Application", and the performance testing used to demonstrate substantial equivalence to a predicate device. However, it does not contain specific acceptance criteria or detailed results of a study proving the device meets such criteria, particularly metrics like sensitivity, specificity, accuracy, or comparative performance with human readers, as would be expected for an AI-driven diagnostic aid.

    The information provided focuses on the device's function as a DICOM image viewer, its intended use for visualizing and comparing CT images for ablation procedures, and general software verification testing. It explicitly states, "The software is not intended for diagnosis." This is crucial: since it is not a diagnostic AI, the performance metrics typically associated with AI devices (such as sensitivity, specificity, or MRMC studies) are neither pertinent to nor captured in this submission.

    Therefore, many of the requested points regarding acceptance criteria and performance against those criteria cannot be directly extracted from the provided text. I will address the points that can be inferred or explicitly stated.

    Here's an analysis based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a table of performance-based acceptance criteria beyond general statements about software functioning and measurement accuracy.

    Acceptance criteria (inferred from text) and reported device performance:

    • Proper functioning of the user interface for the visualization workflows: "System-level verification was conducted to fully evaluate the user interface for the visualization's workflows." (implies successful verification)
    • Measurement accuracy: "System-level testing was conducted to verify the application's measurement accuracy (+/- 2 voxels)." (passes this accuracy)
    • Compliance with IEC 62304:2006 (software life cycle processes): "Performance testing demonstrated the Emprint™ Ablation Visualization Application's compliance with... IEC 62304:2006"
    • Compliance with NEMA PS 3.1-3.20:2014 (DICOM): "Performance testing demonstrated the Emprint™ Ablation Visualization Application's compliance with... NEMA PS 3.1-3.20:2014"; also, "The subject and predicate devices are both DICOM image viewers and comply with the associated NEMA DICOM Standard."
    • Meeting user needs and expectations (human factors): "During the product's development, Covidien followed a human factors engineering (HFE) process and conducted simulated-use, validation testing to confirm that the visualization application met user needs and expectations." (implies successful validation)

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set sample size" in the sense of a dataset of patient images for diagnostic performance evaluation, as the device is not for diagnosis. The testing described is functional and human-factors related.

    • Data Provenance: Not specified, but given it's a visualization tool and not a diagnostic AI, the provenance of "data" in the typical sense (e.g., patient cases) is not detailed. It mentions using "preprocedure, intra-procedure and post-procedure CT images".

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. Since the device is a visualization tool and not a diagnostic AI, there is no mention of establishing ground truth using experts for diagnostic purposes. The ground truth for its functional performance would be self-evident (e.g., does it display the image correctly, does it measure accurately within a defined tolerance).

    4. Adjudication method for the test set

    Not applicable. No diagnostic ground truth was established by experts requiring adjudication.

    5. Whether a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without

    No. The document explicitly states: "The software is not intended for diagnosis." Therefore, a comparative effectiveness study assessing human reader improvement with AI assistance would not be relevant or performed for this type of device.

    6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    The device itself is standalone software ("SaMD"). Its performance was evaluated through "software subsystem and system-level verification" and "simulated-use, validation testing." These are essentially "algorithm only" tests in the sense that they evaluate the software's inherent functions (e.g., display, annotation, measurement accuracy) rather than its interaction within a diagnostic human-in-the-loop workflow.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the measurement accuracy, the "ground truth" would likely be the known voxel dimensions or a calibrated phantom. For other functional aspects (display, annotation, saving), the "ground truth" is simply whether the software performs the intended action correctly according to its specifications. It is not a diagnostic ground truth based on clinical findings or pathology.
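    As an illustration only, a check of this kind against a calibrated phantom could compare a measured distance with the known value after converting the stated +/- 2 voxel tolerance into millimetres. How "2 voxels" maps to millimetres for anisotropic CT data is not defined in the submission; the sketch below conservatively uses the largest voxel dimension, and all names and values are hypothetical:

```python
import numpy as np

def within_tolerance(measured_mm, truth_mm, spacing_mm, n_voxels=2):
    """Check a measurement against a known ground-truth value.

    Interprets an "n voxel" tolerance conservatively as n times the
    largest voxel dimension (an assumption; the submission does not
    define this conversion for anisotropic voxels).
    """
    tol_mm = n_voxels * float(np.max(np.asarray(spacing_mm, dtype=float)))
    return abs(measured_mm - truth_mm) <= tol_mm

# A 50 mm phantom feature measured as 51 mm on a 2.5 x 0.7 x 0.7 mm grid:
# the tolerance is 2 * 2.5 = 5.0 mm, so the measurement passes.
ok = within_tolerance(51.0, 50.0, spacing_mm=(2.5, 0.7, 0.7))
```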

    8. The sample size for the training set

    Not applicable. This is not an AI/Machine Learning device that undergoes a "training set" in the conventional sense (i.e., learning from data to make predictions or classifications). It's a DICOM viewer with visualization and measurement tools.

    9. How the ground truth for the training set was established

    Not applicable, as there is no "training set."
