
510(k) Data Aggregation

    K Number
    K210807
    Date Cleared
    2021-10-22

    (219 days)

    Regulation Number
    892.2050
    Device Name

    FlightPlan for Liver

    Intended Use

    FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray angiography images of the liver. Its output is intended as an adjunct means to help visualize vasculature and identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during the planning phase of embolization procedures.

    Device Description

    FlightPlan for Liver with the Parenchyma Analysis option is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps physicians visualize and analyze vasculature to aid in the planning of endovascular embolization procedures in the liver. It was developed from modifications to the predicate device, GE's FlightPlan for Liver [K121200], including the addition of two new algorithms supporting the Parenchyma Analysis option; the Parenchyma Analysis option is what triggered this 510(k). The subject device also includes a feature, Live Tracking, that was cleared in the reference device, GE's FlightPlan for Embolization. The software operates on GE's Advantage Workstation [K110834] and AW Server [K081985] platforms and is an extension to GE's Volume Viewer application [K041521].

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for FlightPlan for Liver, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "The test results of both of the algorithms met their predefined acceptance criteria" for the deep learning-based Liver Segmentation algorithm and the non-deep learning Virtual Parenchyma Visualization (VPV) algorithm. However, the specific quantitative acceptance criteria and their corresponding reported performance values are not explicitly detailed in the provided text.

    The clinical assessment also "demonstrated that the proposed device FlightPlan for Liver with the Parenchyma Analysis option met its predefined acceptance criteria," but again, the specifics are not provided.

    Therefore, a table cannot be fully constructed without this missing information.

    Missing Information:

    • Specific quantitative acceptance criteria for Liver Segmentation algorithm (e.g., Dice score, IoU, boundary distance).
    • Specific quantitative reported performance for Liver Segmentation algorithm.
    • Specific quantitative acceptance criteria for VPV algorithm (e.g., accuracy of distal liver region estimation).
    • Specific quantitative reported performance for VPV algorithm.
    • Specific quantitative or qualitative acceptance criteria for the clinical assessment using the 5-point Likert scale.
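
    For reference, the segmentation metrics named above (Dice score, IoU) are standard overlap measures computed from binary masks. A minimal sketch, assuming NumPy arrays for the predicted and ground-truth segmentations (this is illustrative; the document does not state which metrics GE actually used):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Example: two overlapping 2x2 masks
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_score(a, b))  # 2*1/(2+1), roughly 0.667
print(iou(a, b))         # 1/2 = 0.5
```

    An acceptance criterion for a segmentation algorithm is typically stated as a threshold on such a metric over the test set (e.g., a minimum mean Dice score), but no threshold is given in the document.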

    2. Sample Sizes and Data Provenance

    For the Deep Learning Liver Segmentation Algorithm:

    • Test Set Sample Size: Not explicitly stated, but derived from a "database of contrast injected CBCT liver acquisitions."
    • Data Provenance: Clinical sites in the USA and France. The document does not state whether the data was collected retrospectively or prospectively; the database was used for both training and testing.

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Test Set Sample Size: "a test set of proximal CBCT acquisitions." The exact number is not provided.
    • Data Provenance: From the USA and France.

    For the Clinical Testing:

    • Test Set Sample Size: "A sample of 3D X-ray angiography image pairs." The exact number is not provided.
    • Data Provenance: From France and the USA.

    3. Number of Experts and Qualifications for Ground Truth

    For the Deep Learning Liver Segmentation Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified. The ground truth method itself (how segmentation was established for training and testing) is not fully detailed beyond using existing "contrast injected CBCT liver acquisitions."

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified.
    • Ground Truth Basis: "selective contrast injected CBCT exams from same patients." It's implied that these were expert-interpreted or based on a recognized clinical standard, but the specific expert involvement is not detailed.

    For the Clinical Testing:

    • Number of Experts: Not specified.
    • Qualifications: "interventional radiologists." No experience level (e.g., years of experience) is provided.

    4. Adjudication Method for the Test Set

    The document does not explicitly mention an adjudication method (like 2+1 or 3+1 consensus) for any of the test sets.
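
    For context, a "2+1" adjudication scheme means two readers annotate each case independently and a third reader adjudicates only the disagreements. A schematic sketch of the idea (function and case labels are hypothetical, not from the document):

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """Per-case 2+1 adjudication: keep agreements, defer disagreements
    to the third (adjudicating) reader."""
    final = []
    for r1, r2, adj in zip(reader1, reader2, adjudicator):
        final.append(r1 if r1 == r2 else adj)
    return final

# Example: readers disagree on case 2, so the adjudicator's call is used.
result = adjudicate_2plus1(
    ["pos", "neg", "pos"],   # reader 1
    ["pos", "pos", "pos"],   # reader 2
    ["neg", "neg", "neg"],   # adjudicator (consulted only on disagreements)
)
print(result)  # ['pos', 'neg', 'pos']
```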

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document describes a "Clinical Testing" section where "interventional radiologists using a 5-point Likert scale" assessed image pairs. This suggests a reader study.
    • However, it does not explicitly state that this was an MRMC comparative effectiveness study comparing human readers with versus without AI assistance. The study assessed whether the device "met its predefined acceptance criteria and helps physicians visualize and analyze... and aids in the planning...", which reads as an evaluation of the device's utility rather than a direct comparison of reader performance with and without the AI.
    • Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance was done for both new algorithms.
      • The "deep learning-based Liver Segmentation algorithm" was evaluated, although specific performance metrics are not given.
      • The "non-deep learning Virtual Parenchyma Visualization algorithm's performance was evaluated [...] compared to selective contrast injected CBCT exams from same patients used as the ground truth."
    • This indicates that the algorithms themselves were tested for their inherent performance.

    7. Type of Ground Truth Used

    For the Deep Learning Liver Segmentation Algorithm:

    • Based on "contrast injected CBCT liver acquisitions." The precise method for establishing the "correct" segmentation (e.g., manual expert tracing, pathology, outcome data) is not detailed. It's implicitly clinical data.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • "selective contrast injected CBCT exams from same patients used as the ground truth." This implies a gold standard of directly observed, selective angiography, which is a clinical reference.

    For the Clinical Testing:

    • The "ground truth" here is the perception and evaluation of interventional radiologists using a 5-point Likert scale regarding the device's utility ("helps physicians visualize," "aids in the planning"). This is a form of expert consensus/subjective assessment of utility.
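
    A reader assessment of this kind is usually summarized by aggregating the per-case Likert ratings against a predefined threshold. A hypothetical sketch (the ratings and the "proportion rating >= 4" criterion below are invented for illustration; the document does not describe the actual analysis):

```python
from statistics import mean, median

# Hypothetical per-case ratings from interventional radiologists on a
# 5-point Likert scale (5 = strongly agree the device aids planning).
ratings = [5, 4, 4, 5, 3, 4, 5, 4]

summary = {
    "mean": mean(ratings),
    "median": median(ratings),
    # A common style of acceptance criterion: proportion of ratings >= 4
    # (agree / strongly agree). The actual criterion is not disclosed.
    "pct_agree": sum(r >= 4 for r in ratings) / len(ratings),
}
print(summary)
```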

    8. Sample Size for the Training Set

    For the Deep Learning Liver Segmentation Algorithm:

    • "a database of contrast injected CBCT liver acquisitions from clinical sites in the USA and France was used for the training and testing." The exact sample size for training is not specified, only that a database was used.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • The VPV algorithm is described as "non-deep learning," so it would not have a traditional "training set" in the same way a deep learning model would. It likely relies on predefined anatomical/physiological models or rules.

    9. How Ground Truth for the Training Set Was Established

    For the Deep Learning Liver Segmentation Algorithm:

    • The document states "a database of contrast injected CBCT liver acquisitions [...] was used for the training." However, it does not explicitly detail how the ground truth labels (i.e., the correct liver segmentations) were established for these training images. This is a critical piece of information for a deep learning model. It's usually through expert manual annotation for segmentation tasks.

    K Number
    K121200
    Date Cleared
    2012-11-02

    (197 days)

    Regulation Number
    892.2050
    Device Name

    FLIGHTPLAN FOR LIVER

    Intended Use

    FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray images of the liver arterial tree. Its output is intended as an adjunct means to help identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during embolization procedures.

    Device Description

    FlightPlan for Liver is a post-processing software application for use with interventional fluoroscopy procedures, using 3D rotational angiography images as input. It operates on the AW VolumeShare 4 [K052995] and AW VolumeShare 5 [K110834] platform. It is an extension to the Volume Viewer application [K041521] utilizing the rich set of the 3D processing features of Volume Viewer. FlightPlan for Liver delivers post-processing features that will aid physicians in their analysis of 3D X-ray images of the liver arterial tree. Additionally FlightPlan for Liver includes an algorithm to highlight the potential vessel(s) in the vicinity of a target.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the FlightPlan for Liver device:

    Acceptance Criteria and Device Performance

    There is no explicit table of acceptance criteria or reported device performance metrics (e.g., sensitivity, specificity, AUC) in the provided document. The submission focuses on demonstrating substantial equivalence to a predicate device and confirming that the software functions as required and fulfills user needs.

    The "Performance testing" mentioned is described as "computing time of algorithm on several data," implying it's a speed or efficiency metric rather than a diagnostic performance metric. The "Verification confirms that the Design Output meets the Design Input (Product Specifications) requirements" and "Validation confirms that the product fulfills the user needs and the intended use under simulated use conditions," but specific, quantifiable acceptance criteria are not detailed.
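
    Computing-time testing of this kind is typically a wall-clock benchmark over a set of representative inputs, checked against a predefined time budget. A minimal sketch, where `run_algorithm`, the stand-in datasets, and the 5-second threshold are all hypothetical:

```python
import time

def run_algorithm(volume):
    # Placeholder for the vessel-highlighting algorithm under test.
    return sum(volume)

datasets = [list(range(10_000)) for _ in range(5)]  # stand-in input volumes

timings = []
for volume in datasets:
    start = time.perf_counter()
    run_algorithm(volume)
    timings.append(time.perf_counter() - start)

print(f"max: {max(timings):.4f}s, mean: {sum(timings)/len(timings):.4f}s")
# A pass/fail check against a predefined budget (threshold hypothetical):
assert max(timings) < 5.0, "computing-time acceptance criterion not met"
```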

    The "Summary of Clinical Tests" states that the study "demonstrate[d] the safety and effectiveness of FlightPlan for Liver" and compared its output "to a reference reading established by two senior interventional oncologists." However, the exact metrics used for comparison and the "acceptance criteria" for those metrics are not provided. The key takeaway is that the clinical data was not intended to support a claim of improved clinical outcomes.

    Study Details

    Here's what can be extracted about the study that proves the device meets the (unspecified quantitative) acceptance criteria:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria Category | Specific Criteria (Implicit/General) | Reported Device Performance
    Functional Verification | Application works as required; risk mitigations correctly implemented. | "Verification tests... performed to check whether the application works as required and whether the risk mitigations have been correctly implemented."
    Performance Testing | Algorithm computing time (specific targets not provided). | "Performance testing consists of computing time of algorithm on several data."
    Design Validation | Product fulfills user needs and intended use under simulated use conditions. | "Validation tests consist of typical use case scenario described by the sequence of operator actions. The Design Validation confirms that the product fulfills the user needs and the intended use under simulated use conditions."
    Clinical Effectiveness | Output provides adjunct information to aid physicians in evaluating hepatic arterial anatomy; output compared to reference reading. | Output was compared to a reference reading established by two senior interventional oncologists. No specific quantitative performance metrics (e.g., accuracy, precision) are provided, nor are numerical results of this comparison.
    Substantial Equivalence | Functionality, safety, and effectiveness are comparable to the predicate device. | "GE Healthcare considers the FlightPlan for Liver application to be as safe and as effective as its predicate device, and its performance is substantially equivalent to the predicate device."

    2. Sample size used for the test set and the data provenance

    • Test Set Size: 44 subjects, representing a total of 66 tumors.
    • Data Provenance: Retrospective study. The country of origin is not explicitly stated, but given the submitter's address (Buc, FRANCE) and the GE Healthcare global nature, it could be either European or multinational, but this is speculative.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Two.
    • Qualifications: "Senior interventional oncologists." Specific experience (e.g., years) is not provided.

    4. Adjudication method for the test set

    • The ground truth was established by a "reference reading established by two senior interventional oncologists." While it states the two established the reference, it doesn't specify if this was by consensus, independent reads with adjudication, or another method. The phrasing "a reference reading established by two" suggests a single, agreed-upon ground truth, likely consensus or 2-reader agreement if initial reads differed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI vs. without AI assistance

    • No, a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not done. The study specifically states that the clinical data "was not designed nor intended to support a claim of an improvement in clinical outcomes of such procedures, and no such claim is being made." The study focused on comparing the device's output to an expert reference, not on human performance improvement.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, the clinical study appears to evaluate the algorithm's standalone performance compared to a "reference reading." The "output of FlightPlan for Liver was compared to a reference reading," indicating the algorithm's direct output was assessed. No mention is made of human interaction or interpretation of the algorithm's output as part of this comparison.

    7. The type of ground truth used

    • Type of Ground Truth: Expert consensus/reference reading. Specifically, "a reference reading established by two senior interventional oncologists."

    8. The sample size for the training set

    • The document does not mention the sample size for the training set. It only describes the clinical study as a "retrospective study" used for verification and validation, implying it was a test set. There's no information about the data used to train the "algorithm to highlight the potential vessel(s)."

    9. How the ground truth for the training set was established

    • This information is not provided since the document does not detail the training set or its ground truth establishment.
