K Number
K223639
Device Name
VisAble.IO
Manufacturer
Date Cleared
2023-08-28

(266 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

VisAble.IO is a Computed Tomography (CT) image processing software package available for use with liver ablation procedures.

VisAble.IO is controlled by the user via a user interface.

VisAble.IO imports images from CT scanners and facility PACS systems for display and processing during liver ablation procedures.

VisAble.IO is used to assist physicians in planning liver ablation procedures, including identifying ablation targets and virtual ablation needle placement. VisAble.IO is used to assist physicians in confirming ablation zones.

The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.

Device Description

VisAble.IO is a stand-alone software application with tools and features designed to assist users in planning ablation procedures as well as tools for treatment confirmation. The use environment for VisAble.IO is the Operating Room and the hospital healthcare environment such as interventional radiology control room.

VisAble.IO has five distinct workflow steps:

  • Data Import
  • Anatomic Structures Segmentation (Liver, Hepatic Vein, Portal Vein, Ablation Target)
  • Instrument Placement (Needle Planning)
  • Ablation Zone Segmentation
  • Treatment Confirmation (Registration of Pre- and Post-Interventional Images; Quantitative Analysis)

Of these workflow steps, two (Anatomic Segmentation and Instrument Placement) make use of the planning image. These workflow steps contain features and tools designed to support the planning of ablation procedures. The other two (Ablation Zone Segmentation and Treatment Confirmation) make use of the confirmation image volume. These workflow steps contain features and tools designed to support the evaluation of the ablation procedure's technical performance in the confirmation image volume.

Key features of the VisAble.IO Software include:

  • Workflow step availability
  • Manual and automated tools for anatomic structures and ablation zone segmentation
  • Overlaying and positioning virtual instruments (ablation needles) and user-selected estimates of the ablation regions onto the medical images
  • Image fusion and registration
  • Computation of achieved margins and missed volumes to help the user assess the coverage of the ablation target by the ablation zone
  • Data saving and secondary capture generation
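
The achieved-margin and missed-volume feature listed above can be illustrated with a short sketch. The function below is hypothetical and is not the VisAble.IO implementation (which the submission does not describe); it assumes binary 3D masks for the ablation target and ablation zone and uses a signed distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def margin_and_missed_volume(target, zone, spacing=(1.0, 1.0, 1.0)):
    """Hypothetical sketch: minimal ablative margin (mm) and missed
    volume (mm^3) from binary 3D masks of the ablation target and the
    ablation zone. `spacing` is the voxel size in mm along each axis."""
    target = np.asarray(target, dtype=bool)
    zone = np.asarray(zone, dtype=bool)

    # Missed volume: target voxels not covered by the ablation zone.
    missed = target & ~zone
    voxel_vol = float(np.prod(spacing))
    missed_volume = missed.sum() * voxel_vol

    # Signed distance to the ablation-zone boundary: positive inside
    # the zone, negative outside.
    dist_in = distance_transform_edt(zone, sampling=spacing)
    dist_out = distance_transform_edt(~zone, sampling=spacing)
    signed = np.where(zone, dist_in, -dist_out)

    # Minimal ablative margin: smallest signed distance over the target;
    # negative when part of the target lies outside the zone.
    min_margin = float(signed[target].min()) if target.any() else float("nan")
    return min_margin, missed_volume
```

A negative minimal margin coincides with a nonzero missed volume, which is how a tool like this could flag incomplete coverage to the user.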

The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of the ablation target and ablation zones.

The software system runs on a dedicated computer and is intended for display and processing of Computed Tomography (CT) images, including contrast-enhanced images.

The system can be used on patient data for any patient demographic chosen to undergo the ablation treatment.

VisAble.IO uses several algorithms that help the user evaluate the planned and post-ablation zones. These include:

  • Segmentation
  • Image Registration
  • Measurement and Quantification
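
As an illustration of the registration step, a minimal point-based rigid registration (the Kabsch algorithm) is sketched below. This is a generic, well-known technique shown for orientation only; the submission does not describe VisAble.IO's registration method, and the landmark-based formulation here is an assumption:

```python
import numpy as np

def rigid_register(moving, fixed):
    """Illustrative rigid (Kabsch) registration of corresponding 3D
    point sets: returns rotation R and translation t such that
    R @ p + t best matches the fixed points in least squares."""
    P = np.asarray(moving, dtype=float)
    Q = np.asarray(fixed, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)

    # Cross-covariance of the centered point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Real pre-/post-ablation registration operates on image volumes rather than landmark lists, typically with intensity-based deformable methods, but the rigid point-set case captures the core idea of estimating a transform that aligns two coordinate frames.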

VisAble.IO is intended to be used for ablations with the following ablation instruments:
For needle planning, the software currently supports the following needle models:

  • Medtronic: Emprint Antenna 15CM, 20CM, 30CM
  • NeuWave Medical: PR Probe 15CM, 20CM; PR XT Probe 15CM, 20CM; LK Probe 15CM, 20CM; LK XT Probe 15CM, 20CM

For treatment confirmation (including segmentation and registration), the software is compatible with all ablation devices, as these functions are independent of probe models and power settings.

AI/ML Overview

The provided text describes the VisAble.IO device and its performance testing for FDA 510(k) clearance. Here's a breakdown of the requested information based on the document:

1. A table of acceptance criteria and the reported device performance

The document uses "Primary Performance Goal" as the acceptance criterion and "Primary Endpoint" as the reported device performance.

| Algorithm | Primary Performance Goal (Acceptance Criteria) | Primary Endpoint (Reported Performance) |
|---|---|---|
| Liver Segmentation | Mean DICE = 0.92 | Mean DICE = 0.98 |
| Ablation Target Segmentation | Mean DICE = 0.70 | Mean DICE = 0.80 |
| Ablation Zone Segmentation | Mean DICE = 0.70 | Mean DICE = 0.86 |
| Liver Vessels Segmentation | Mean DICE = 0.70 | Mean DICE = 0.72 |
| Pre/Post-Ablation Image Registration | MCD* = 6.06 mm | MCD* = 4.11 mm |

*MCD = Mean Corresponding Distance
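
The two metrics in the table, DICE and Mean Corresponding Distance (MCD), have standard definitions that can be computed from binary segmentation masks and corresponding landmark pairs. A minimal sketch (the mask shapes and landmark format are illustrative assumptions, not details from the submission):

```python
import numpy as np

def dice(a, b):
    """DICE similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def mean_corresponding_distance(pts_a, pts_b):
    """Mean Euclidean distance (e.g. in mm) between corresponding
    landmark pairs, one row per landmark."""
    d = np.linalg.norm(np.asarray(pts_a, float) - np.asarray(pts_b, float), axis=1)
    return float(d.mean())
```

DICE ranges from 0 (no overlap) to 1 (perfect overlap), so higher endpoints beat the goal; MCD is a distance, so lower endpoints beat the goal, consistent with the directions of the results above.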

Note: The document states that segmentation tools provide manual and semi-automated segmentation, and post-processing. The clinical accuracy of segmentation is referred to as "a user operation and the clinical accuracy of segmentation is the responsibility of the user and not a VisAble.IO function." Similarly, for registration, it states "Final accuracy of registration is dependent on user assessment and manual modification of the registration prior to acceptance, and not a VisAble.IO function." This suggests that the reported performance metrics (DICE scores and MCD) likely reflect the algorithm's capability to provide good initial segmentations and registrations for user refinement.

2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

The sample sizes for the test sets are provided below. The provenance for the training/validation datasets is described generally as:

  • Liver Segmentation Algorithm Test Set Size: N=50
    • Provenance for training/validation (not explicitly test set data): 1091 contrast-enhanced CT images from arterial or venous phases.
    • Location of clinical sites: Germany, France, Turkey, Japan, Israel, Netherlands, Canada, USA, UK (38 clinical sites)
  • Ablation Target Segmentation Test Set Size: N=59
  • Ablation Zone Segmentation Test Set Size: N=59
  • Liver Vessels Segmentation Test Set Size: N=100
    • Provenance for training/validation (not explicitly test set data): N=393 contrast-enhanced CT images from the portal-venous or late venous phases.
    • Location of clinical sites: Central Europe, North America, East Asia (36 clinical sites)
  • Pre/Post-Ablation Image Registration Test Set Size: N=46

The document doesn't explicitly state whether the test set data was retrospective or prospective. Given that it's performance data for a 510(k) submission, it is typically retrospective data collected for validation.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

The document does not specify the number of experts used to establish the ground truth for the test set or their qualifications.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

The document does not specify the adjudication method used for the test set, nor does it explicitly mention a process of expert adjudication for the ground truth.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs without AI assistance

The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size related to human reader improvement with AI assistance. The study focuses on the standalone algorithmic performance.

6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

Yes, the performance data presented (DICE scores and MCD) are for the standalone algorithmic performance. The text explicitly states that the "clinical accuracy of segmentation is the responsibility of the user and not a VisAble.IO function" and "final accuracy of registration is dependent on user assessment and manual modification... and not a VisAble.IO function," suggesting the provided metrics are for the initial algorithmic output prior to user intervention.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, etc.) for the segmentation and registration algorithms. It implies that the "Primary Performance Goal" was set for these algorithms, suggesting a pre-defined or expert-derived ground truth was used for comparison, but the methodology for establishing it is not detailed.

8. The sample size for the training set

The document provides the sample sizes for the training and model validation datasets as:

  • Liver Segmentation Algorithm: 1091 contrast-enhanced CT images.
  • Liver Vessel Segmentation Algorithm: N=393 contrast-enhanced CT images.
  • The sample sizes for training of the Ablation Target Segmentation, Ablation Zone Segmentation, and Pre/Post-Ablation Image Registration algorithms are not explicitly stated in the provided text.

9. How the ground truth for the training set was established

The document does not explicitly describe how the ground truth for the training set was established. It only mentions the characteristics of the images used for training (e.g., contrast-enhanced CT, arterial/venous phases, age/sex distribution, location of clinical sites, imaging procedure).

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).