K Number
K214066
Device Name
FEops HEARTguide
Manufacturer
FEops
Date Cleared
2022-02-25 (60 days)
Regulation Number
870.1405
Intended Use

FEops HEARTguide™ is indicated for patient-specific simulation of transcatheter left atrial appendage occlusion (LAAO) device implantation during procedural planning.

The software performs computer simulation to predict implant frame deformation to support the evaluation for LAAO device size and placement.

FEops HEARTguide™ is intended to be used by qualified clinicians in conjunction with the simulated device instructions for use, the patient's clinical history, symptoms, and other preprocedural evaluations, as well as the clinician's professional judgment.

FEops HEARTguide™ is not intended to replace the simulated device instructions for use for final LAAO device selection and placement.

FEops HEARTguide™ is for prescription use only.

Device Description

FEops HEARTguide™ predicts implant frame deformation after percutaneous LAAO device implantation through computer simulation. The predicted deformation provides additional information during LAAO procedural planning.

The simulation is based on a 3D model of the patient anatomy, generated from 2D medical images (multi-slice cardiac computed tomography). The simulation is executed by FEops Case Analysts and run on FEops infrastructure.
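The summary does not describe how this 3D model is constructed, and FEops' pipeline is proprietary and operator-driven. Purely as an illustration of the general concept (reconstructing a triangulated anatomical surface from a stack of 2D CT slices), the following minimal sketch uses standard isosurface extraction; the threshold, voxel spacing, and data are invented for the example.

```python
# Generic illustration only: reconstruct a 3D surface from a stack of
# 2D CT slices via isosurface extraction. This is NOT the FEops
# HEARTguide pipeline; threshold, spacing, and data are invented.
import numpy as np
from skimage import measure


def ct_volume_to_mesh(volume, hu_threshold=200.0, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangulated surface at a given intensity threshold.

    volume  -- 3D array of CT intensities (slices stacked along axis 0)
    spacing -- voxel spacing in mm (slice, row, column)
    """
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=hu_threshold, spacing=spacing)
    return verts, faces, normals


# Synthetic volume standing in for a cardiac CT series:
ct = np.random.normal(0.0, 50.0, size=(64, 128, 128))
ct[20:40, 40:80, 40:80] += 400.0  # a bright region above the threshold
verts, faces, _ = ct_volume_to_mesh(ct, hu_threshold=200.0,
                                    spacing=(0.5, 0.4, 0.4))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```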

The simulation report is created by combining a predefined device model with a patient-specific model of the anatomy. This is performed by trained operators at FEops using an internal software platform through an established workflow. Case analysts and quality control analysts, qualified for this purpose, process the received medical images to produce the simulation results.

The simulation results are provided as 2D images and numerical data in a PDF report, and as 3D, 2D, and numerical data in a web-based Viewer application accessible through a standard web browser.

AI/ML Overview

The information provided primarily describes the FEops HEARTguide™ device and its substantial equivalence to a predicate device, focusing on regulatory aspects rather than detailed study results. Given the available text, I can extract and infer some information, but many specific details regarding acceptance criteria and study findings are not explicitly provided.

Here's an attempt to answer your questions based on the provided text, with acknowledgments of what is not present:

1. Table of acceptance criteria and the reported device performance

The document states: "Acceptance criteria were defined using the same method as for the predicate device demonstrating the same clinical meaningfulness." However, the specific acceptance criteria (e.g., a numerical threshold for accuracy, precision, etc.) and the reported device performance values against these criteria are not provided in the given text.

The text generally states that the performance validation testing demonstrated "a similar performance level" and "the performance of the subject device is equivalent to the performance of the predicate device." It also mentions "an assessment of the agreement between the computational model results and clinical data across the full intended operating range."

Without the specific criteria and metrics, a table cannot be fully constructed.
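Although the specific metrics are not disclosed, agreement between simulated and clinically measured quantities is commonly summarized with Bland-Altman statistics (mean bias and 95% limits of agreement). The sketch below is purely illustrative, with invented values; it is an assumption that an analysis of this general kind underlies the summary's "agreement" language, not a description of FEops' actual study.

```python
# Illustrative Bland-Altman style agreement summary between simulated
# and clinically measured implant frame dimensions (values invented;
# the 510(k) summary does not disclose the actual metrics or criteria).
import numpy as np


def bland_altman(simulated, observed):
    """Return mean bias and 95% limits of agreement of (sim - obs)."""
    diffs = np.asarray(simulated) - np.asarray(observed)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)


sim = np.array([24.1, 27.3, 21.8, 30.2, 25.6])  # simulated diameters, mm
obs = np.array([23.8, 27.9, 22.1, 29.5, 25.2])  # measured diameters, mm
bias, (lo, hi) = bland_altman(sim, obs)
print(f"bias = {bias:+.2f} mm, "
      f"95% limits of agreement = [{lo:+.2f}, {hi:+.2f}] mm")
```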

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Sample Size for Test Set: The text states: "For both added LAAO devices, the performance study was performed on a cohort with a sample size equal to or larger than the predicate device." The exact number is not specified for either the predicate or the current device.
  • Data Provenance: The document mentions "clinical data" but does not specify the country of origin or whether the data was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

This information is not provided in the given text. The text mentions "qualified clinicians" in the context of device usage and interpretation but not explicitly for ground truth establishment during a performance study.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

This information is not provided in the given text.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

The document describes the device as "Interventional cardiovascular implant simulation software," which predicts "implant frame deformation." It is intended to support procedural planning and "not intended to replace the simulated device instructions for use for final LAAO device selection and placement." This implies that it is a tool to assist clinicians.

The text mentions "a Human factors evaluation report was provided demonstrating the ability of the user interface and labeling to allow for intended and qualified users to correctly use the device and interpret the provided information." It also says, "To ensure consistency of modeling outputs, the validation was performed with multiple qualified operators using the procedure that will be implemented under anticipated conditions of use..."

However, the text does not explicitly state or present results from a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance, nor does it provide an effect size for such a comparison. The focus is on the device's predictive capability and its agreement with clinical data.
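For context on the consistency validation quoted above (multiple qualified operators producing modeling outputs under anticipated conditions of use), here is a minimal sketch of how between-operator reproducibility might be summarized. The metric, data, and layout are invented; the summary does not disclose the statistics FEops actually used.

```python
# Hypothetical reproducibility check: the same cases processed by several
# qualified operators, with the between-operator spread of one simulated
# output summarized per case. All values are invented for illustration.
import numpy as np

# rows = cases, columns = operators (an invented frame-diameter metric, mm)
outputs = np.array([
    [24.1, 24.3, 24.0],
    [27.6, 27.4, 27.7],
    [21.9, 22.2, 22.0],
])

case_means = outputs.mean(axis=1)
case_sds = outputs.std(axis=1, ddof=1)
cv_percent = 100.0 * case_sds / case_means  # between-operator variability

for i, (m, cv) in enumerate(zip(case_means, cv_percent), start=1):
    print(f"case {i}: mean = {m:.1f} mm, between-operator CV = {cv:.2f}%")
```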

6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

The device is described as "simulation software" operated by "trained operators at FEops using an internal software platform through an established workflow." The "simulation results are provided as 2D and numerical data shown in a PDF report and 3D, 2D and numerical data shown in a web-based Viewer application accessible through a standard web browser." This indicates that the software performs the simulation independently, and its outputs are then presented.

The "performance study" assesses the "agreement between the computational model results and clinical data." This implies a standalone evaluation of the algorithm's predictions against real-world clinical outcomes or measurements. Therefore, it is highly likely that a standalone performance evaluation of the algorithm's predictive capabilities was performed, but the results in raw form are not in the provided text.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The document mentions "a comparison of the results to clinical data supporting the indications for use" and "an assessment of the agreement between the computational model results and clinical data." This strongly suggests that clinical data (which could include imaging, procedural findings, or direct measurements from patients after implantation) were used as the ground truth. It does not specify whether these clinical data were corroborated by expert consensus, pathology, or specific outcomes data, but "clinical data" is a broad term that would encompass such information.

8. The sample size for the training set

The document discusses "computational modeling verification and validation activities" and "performance validation testing data," but it does not mention a separate training set or its sample size. The focus is on validating the models against clinical data. The software is described primarily as "simulation software" built from a "3D model of the patient anatomy," a "predefined device model," and a "patient-specific model," rather than as a machine learning model that would typically have a distinct training set.

9. How the ground truth for the training set was established

Since a training set in the machine learning sense is not mentioned, this question is not applicable based on the provided text. The device performs simulations based on computational models rather than on a model learned from a training set in the typical AI sense.

§ 870.1405 Interventional cardiovascular implant simulation software device.

(a) Identification. An interventional cardiovascular implant simulation software device is a prescription device that provides a computer simulation of an interventional cardiovascular implant device inside a patient's cardiovascular anatomy. It performs computational modeling to predict the interaction of the interventional cardiovascular implant device with the patient-specific anatomical environment.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Software verification, validation, and hazard analysis, with identification of appropriate mitigations, must be performed, including a full verification and validation of the software according to the predefined software specifications.

(2) Computational modeling verification and validation activities must be performed to establish the predictive capability of the device for its indications for use.

(3) Performance validation testing must be provided to demonstrate the accuracy and clinical relevance of the modeling methods for the intended implantation simulations, including the following:

(i) Computational modeling results must be compared to clinical data supporting the indications for use to demonstrate accuracy and clinical meaningfulness of the simulations;

(ii) Agreement between computational modeling results and clinical data must be assessed and demonstrated across the full intended operating range (e.g., full range of patient population, implant device sizes and patient anatomic morphologies). Any selection criteria or limitations of the samples must be described and justified;

(iii) Endpoints (e.g., performance goals) and sample sizes established must be justified as to how they were determined and why they are clinically meaningful; and

(iv) Validation must be performed and controls implemented to characterize and ensure consistency (i.e., repeatability and reproducibility) of modeling outputs:

(A) Testing must be performed using multiple qualified operators and using the procedure that will be implemented under anticipated conditions of use; and

(B) The factors (e.g., medical imaging dataset, operator) must be identified regarding which were held constant and which were varied during the evaluation, and a description must be provided for the computations and statistical analyses used to evaluate the data.

(4) Human factors evaluation must be performed to evaluate the ability of the user interface and labeling to allow for intended users to correctly use the device and interpret the provided information.
(5) Device labeling must be provided that describes the following:
(i) Warnings that identify anatomy and image acquisition factors that may impact simulation results and provide cautionary guidance for interpretation of the provided simulation results;
(ii) Device simulation inputs and outputs, and key assumptions made in the simulation and determination of simulated outputs; and
(iii) The computational modeling performance of the device for presented simulation outputs, and the supporting evidence for this performance.