K Number
K210807
Date Cleared
2021-10-22

(219 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray angiography images of the liver. Its output is intended as an adjunct means to help visualize vasculature and identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during the planning phase of embolization procedures.

Device Description

FlightPlan for Liver with the Parenchyma Analysis option is a post-processing, software-only application that uses 3D X-ray angiography (CBCT) images as input. It helps physicians visualize and analyze vasculature to aid in the planning of endovascular embolization procedures in the liver. It was developed by modifying the predicate device, GE's FlightPlan for Liver [K121200], including the addition of two new algorithms supporting the Parenchyma Analysis option; the addition of this option is what triggered this 510(k) submission. The subject device also includes a feature, Live Tracking, that was cleared in the reference device, GE's FlightPlan for Embolization. The software operates on GE's Advantage Workstation [K110834] and AW Server [K081985] platforms and is an extension of GE's Volume Viewer application [K041521].

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for FlightPlan for Liver, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The document states that "The test results of both of the algorithms met their predefined acceptance criteria" for the deep learning-based Liver Segmentation algorithm and the non-deep learning Virtual Parenchyma Visualization (VPV) algorithm. However, the specific quantitative acceptance criteria and their corresponding reported performance values are not explicitly detailed in the provided text.

The clinical assessment also "demonstrated that the proposed device FlightPlan for Liver with the Parenchyma Analysis option met its predefined acceptance criteria," but again, the specifics are not provided.

Therefore, a table cannot be fully constructed without this missing information.

Missing Information:

  • Specific quantitative acceptance criteria for the Liver Segmentation algorithm (e.g., Dice score, IoU, boundary distance; see the metric sketch after this list).
  • Specific quantitative reported performance for the Liver Segmentation algorithm.
  • Specific quantitative acceptance criteria for the VPV algorithm (e.g., accuracy of distal liver region estimation).
  • Specific quantitative reported performance for the VPV algorithm.
  • Specific quantitative or qualitative acceptance criteria for the clinical assessment using the 5-point Likert scale.
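
Because the overlap metrics named above (Dice, IoU) are only examples and the submission does not say which metrics were actually used, the following is a minimal, hypothetical sketch of how such segmentation metrics are commonly computed on binary masks. The function names, array shapes, and the pass/fail threshold are illustrative assumptions, not details from the 510(k).

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = liver, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0

if __name__ == "__main__":
    # Toy volumes standing in for a predicted and a reference liver mask.
    pred = np.zeros((64, 64, 64), dtype=np.uint8)
    truth = np.zeros((64, 64, 64), dtype=np.uint8)
    pred[10:40, 10:40, 10:40] = 1
    truth[12:42, 12:42, 12:42] = 1

    d, j = dice_score(pred, truth), iou_score(pred, truth)
    THRESHOLD = 0.90  # purely hypothetical acceptance criterion
    print(f"Dice: {d:.3f}, IoU: {j:.3f}, meets hypothetical threshold: {d >= THRESHOLD}")
```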

2. Sample Sizes and Data Provenance

For the Deep Learning Liver Segmentation Algorithm:

  • Test Set Sample Size: Not explicitly stated, but derived from a "database of contrast injected CBCT liver acquisitions."
  • Data Provenance: Clinical sites in the USA and France; the database was used for both training and testing. The text does not state whether the data was collected retrospectively or prospectively.

For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

  • Test Set Sample Size: "a test set of proximal CBCT acquisitions." The exact number is not provided.
  • Data Provenance: From the USA and France.

For the Clinical Testing:

  • Test Set Sample Size: "A sample of 3D X-ray angiography image pairs." The exact number is not provided.
  • Data Provenance: From France and the USA.

3. Number of Experts and Qualifications for Ground Truth

For the Deep Learning Liver Segmentation Algorithm:

  • Number of Experts: Not specified.
  • Qualifications: Not specified. The ground truth method itself (how segmentation was established for training and testing) is not fully detailed beyond using existing "contrast injected CBCT liver acquisitions."

For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

  • Number of Experts: Not specified.
  • Qualifications: Not specified.
  • Ground Truth Basis: "selective contrast injected CBCT exams from same patients." It's implied that these were expert-interpreted or based on a recognized clinical standard, but the specific expert involvement is not detailed.

For the Clinical Testing:

  • Number of Experts: Not specified.
  • Qualifications: "interventional radiologists." No experience level (e.g., years of experience) is provided.

4. Adjudication Method for the Test Set

The document does not explicitly mention an adjudication method (like 2+1 or 3+1 consensus) for any of the test sets.
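
For context only: a "2+1" adjudication scheme typically has two primary readers label each case independently, with a third reader resolving disagreements. The sketch below illustrates that rule under those assumptions; it is not drawn from the submission, which describes no adjudication process, and the labels used are hypothetical.

```python
from typing import Optional

def adjudicate_2_plus_1(reader1: str, reader2: str, adjudicator: Optional[str]) -> str:
    """2+1 rule: accept the label when the two primary readers agree,
    otherwise defer to the third (adjudicating) reader."""
    if reader1 == reader2:
        return reader1
    if adjudicator is None:
        raise ValueError("Readers disagree and no adjudicator label was provided")
    return adjudicator

# Example with hypothetical labels: the two readers disagree, so the
# adjudicator's call decides the final ground-truth label for the case.
print(adjudicate_2_plus_1("feeding artery", "non-feeding artery", "feeding artery"))
```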

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • The document describes a "Clinical Testing" section in which "interventional radiologists using a 5-point Likert scale" assessed image pairs. This suggests a reader study (a scoring sketch follows this list).
  • However, it does not explicitly state that this was an MRMC comparative effectiveness study comparing human readers with and without AI assistance. The study assessed whether the device "met its predefined acceptance criteria and helps physicians visualize and analyze... and aids in the planning...", which reads as an evaluation of the device's utility rather than a direct comparison of reader performance with and without AI.
  • Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.
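
The summary gives neither the number of readers, the number of image pairs, nor the passing threshold for the Likert-based endpoint. Purely as an illustration of how such an endpoint is often summarized, the sketch below averages per-case 5-point ratings and compares the result to a threshold; the ratings and the threshold are invented, not taken from the submission.

```python
import statistics

# Hypothetical ratings: each inner list holds the 5-point Likert scores
# (1 = strongly disagree ... 5 = strongly agree) given by several
# interventional radiologists for one image pair.
ratings_per_case = [
    [4, 5, 4],
    [3, 4, 4],
    [5, 5, 4],
]

ACCEPTANCE_THRESHOLD = 3.5  # hypothetical pre-defined criterion

case_means = [statistics.mean(case) for case in ratings_per_case]
overall_mean = statistics.mean(case_means)

print(f"Per-case means: {[round(m, 2) for m in case_means]}")
print(f"Overall mean rating: {overall_mean:.2f} "
      f"({'meets' if overall_mean >= ACCEPTANCE_THRESHOLD else 'fails'} the hypothetical threshold)")
```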

6. Standalone (Algorithm Only) Performance

  • Yes, standalone performance testing was performed for both new algorithms.
    • The "deep learning-based Liver Segmentation algorithm" was evaluated, although specific performance metrics are not given.
    • The "non-deep learning Virtual Parenchyma Visualization algorithm's performance was evaluated [...] compared to selective contrast injected CBCT exams from same patients used as the ground truth."
  • This indicates that the algorithms themselves were tested for their inherent performance.

7. Type of Ground Truth Used

For the Deep Learning Liver Segmentation Algorithm:

  • Based on "contrast injected CBCT liver acquisitions." The precise method for establishing the "correct" segmentation (e.g., manual expert tracing, pathology, outcome data) is not detailed. It's implicitly clinical data.

For the Virtual Parenchyma Visualization (VPV) Algorithm:

  • "selective contrast injected CBCT exams from same patients used as the ground truth." This implies a gold standard of directly observed, selective angiography, which is a clinical reference.

For the Clinical Testing:

  • The "ground truth" here is the perception and evaluation of interventional radiologists using a 5-point Likert scale regarding the device's utility ("helps physicians visualize," "aids in the planning"). This is a form of expert consensus/subjective assessment of utility.

8. Sample Size for the Training Set

For the Deep Learning Liver Segmentation Algorithm:

  • "a database of contrast injected CBCT liver acquisitions from clinical sites in the USA and France was used for the training and testing." The exact sample size for training is not specified, only that a database was used.

For the Virtual Parenchyma Visualization (VPV) Algorithm:

  • The VPV algorithm is described as "non-deep learning," so it would not have a traditional "training set" in the same way a deep learning model would. It likely relies on predefined anatomical/physiological models or rules.

9. How Ground Truth for the Training Set Was Established

For the Deep Learning Liver Segmentation Algorithm:

  • The document states "a database of contrast injected CBCT liver acquisitions [...] was used for the training." However, it does not explicitly detail how the ground truth labels (i.e., the correct liver segmentations) were established for these training images. This is a critical piece of information for a deep learning model; for segmentation tasks, ground truth is usually established through expert manual annotation.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).