510(k) Data Aggregation
(165 days)
FEops HEARTguide™ ALPACA enables visualization and measurement of structures of the heart and vessels for preprocedural planning and sizing of structural heart interventions.
To facilitate the above, FEops HEARTguide™ ALPACA provides general functionality such as:
- Segmentation of cardiovascular structures
- Visualization and image reconstruction techniques: 2D review, MPR
- Measurement and annotation tools
- Reporting tools
FEops HEARTguide™ ALPACA also allows visualization of output generated by other medical device software (e.g., FEops HEARTguide™ Simulation Application cleared as K214066).
The results are intended to be used by qualified clinicians in conjunction with the patient's clinical history, symptoms, and other preprocedural evaluations, as well as the clinician's professional judgment.
FEops HEARTguide™ ALPACA is not intended to replace the implant device instructions for use for final LAAO and TAVI device selection and placement.
The software is used in a service-based business model: the customer (clinician) provides the necessary input data, FEops prepares the anatomical analysis, and delivers the results to the customer.
The results of the anatomical analysis are provided to the clinician via the FEops HEARTguide™ ALPACA web application. They are available in a PDF report and as interactive 3D and DICOM MPR visualizations. The web application is intended to be used by clinicians to review the results and, if needed, to create additional landmarks and related measurements.
The FEops HEARTguide™ ALPACA device underwent non-clinical performance testing to demonstrate its substantial equivalence to a predicate device (3mensio Workstation/3mensio Structural Heart/3mensio Vascular, K153736) for preprocedural planning and sizing of structural heart interventions. The study details are provided below.
1. Acceptance Criteria and Reported Device Performance
The device's performance was evaluated for two primary applications: Left Atrial Appendage Occlusion (LAAO) and Transcatheter Aortic Valve Implantation (TAVI) procedures, focusing on quantitative measurements and segmentation accuracy.
Table: Acceptance Criteria and Reported Device Performance
| Metric | Performance Goal (Acceptance Criteria) | Reported Device Performance (Semi-Automatic Output) | Reported Device Performance (Fully Automatic Output) |
|---|---|---|---|
| LAAO Procedures | | | |
| LAA Landing Zone Mean Diameter Difference (Bland-Altman) | Lower CI on inferior LoA within ±18%; upper CI on superior LoA within ±18% | Lower CI on inferior LoA: -10.5%; upper CI on superior LoA: 13.2% | Lower CI on inferior LoA: -14.4%; upper CI on superior LoA: 22.6% (failed for fully automatic) |
| LAA Region Segmentation (Dice Score) | (Implicitly high accuracy for clinical utility) | Mean: 0.98 ± 0.01; minimum: 0.95 | Mean: 0.93 ± 0.04; minimum: 0.83 |
| TAVI Procedures | | | |
| Aortic Annulus Perimeter-Based Diameter Difference (Bland-Altman) | Lower CI on inferior LoA within ±10%; upper CI on superior LoA within ±10% | Lower CI on inferior LoA: -4.3%; upper CI on superior LoA: 5.3% | Not applicable (manual measurement required) |
| Aortic Root Segmentation (Dice Score) | (Implicitly high accuracy for clinical utility) | Mean: 0.97 ± 0.01; minimum: 0.92 | Mean: 0.96 ± 0.01; minimum: 0.92 |
Note on LAAO Fully Automatic Output: While the semi-automatic output met the ±18% performance goal for LAAO landing zone mean diameter, the fully automatic output's upper CI on superior LoA (22.6%) exceeded the 18% threshold, indicating it did not meet the performance goal. However, the overall submission focuses on the semi-automatic workflow which incorporates human supervision.
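The Bland-Altman acceptance check described above can be sketched in code. This is an illustrative sketch, not the submission's actual analysis: the function names, the example usage, and the approximate standard-error formula for a limit of agreement (Bland & Altman's sd·√(1/n + z²/(2(n−1)))) are assumptions on my part.

```python
# Illustrative sketch of a Bland-Altman agreement check of the kind described
# above. Names and formulas are assumptions, not taken from the 510(k) submission.
import numpy as np
from scipy import stats

def bland_altman_percent(device, reference, alpha=0.05):
    """Percent differences, 95% limits of agreement (LoA), and CIs on each LoA."""
    device = np.asarray(device, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff_pct = 100.0 * (device - reference) / reference
    n = diff_pct.size
    mean, sd = diff_pct.mean(), diff_pct.std(ddof=1)
    z = stats.norm.ppf(1 - alpha / 2)                 # ~1.96 for 95% LoA
    loa_lo, loa_hi = mean - z * sd, mean + z * sd
    # Approximate standard error of a limit of agreement (Bland & Altman 1986)
    se_loa = sd * np.sqrt(1.0 / n + z**2 / (2 * (n - 1)))
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    return {
        "mean": mean,
        "loa": (loa_lo, loa_hi),
        "ci_lower_loa": (loa_lo - t * se_loa, loa_lo + t * se_loa),
        "ci_upper_loa": (loa_hi - t * se_loa, loa_hi + t * se_loa),
    }

def meets_goal(result, bound_pct=18.0):
    """Goal of the kind quoted above: the outer CI bound of the inferior LoA
    stays above -bound_pct and that of the superior LoA stays below +bound_pct."""
    return (result["ci_lower_loa"][0] >= -bound_pct
            and result["ci_upper_loa"][1] <= bound_pct)
```

Under this reading, a "failed" fully automatic result like the 22.6% upper CI above is simply `meets_goal(..., bound_pct=18.0)` returning `False`.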
2. Sample Sizes and Data Provenance
- Test Set Sample Size:
- LAAO: 35 representative retrospective cases.
- TAVI: 35 representative retrospective cases.
- Data Provenance: The data consisted of "Recent datasets representative for the intended population," covering different CT manufacturers, imaging parameters (e.g., slice thickness), and regions. The text does not specify the country of origin but implies clinical relevance for the intended user base. All data used for testing were retrospective and specifically excluded any datasets used for training the AI models.
3. Number of Experts and Qualifications for Ground Truth
The document states that the ground truth was "manually annotated data." It does not explicitly specify the number of experts or their qualifications (e.g., radiologist with 10 years of experience). However, the context of regulatory submission for medical devices strongly implies that such manual annotations would be performed by qualified medical professionals.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1 or 3+1) for establishing the ground truth. It states that the ground truth was "manually annotated data." Given that the process for "semi-automatic output" involves "human supervision and a quality check by a FEops Case analyst," it suggests an internal review process, but not a formal multi-reader adjudication for the initial ground truth establishment.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was reported in the provided text. The evaluation focused on the accuracy of the device's measurements and segmentations against manually established ground truth, rather than directly assessing human reader improvement. The "semi-automatic" workflow inherently describes a human-in-the-loop process where human supervision and quality checks are applied after AI segmentation.
6. Standalone (Algorithm Only) Performance
Standalone (algorithm only) performance was evaluated for:
- LAAO: "Fully automatic output" for mean diameter of the landing zone and dice score for segmentation.
- TAVI: "Fully automatic output" for dice score on aortic root segmentation.
- For TAVI perimeter-based diameter, the document explicitly states, "there is no automatically calculated perimeter-based diameter of the aortic annulus, as the algorithm only identifies the annular plane, and the measurement itself requires a manual action." This implies that a fully standalone measurement for this metric is not applicable.
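The Dice score used to report segmentation accuracy in both the LAAO and TAVI results is a standard overlap metric between an automatic segmentation mask and its manually annotated ground truth. A minimal sketch (illustrative only, not FEops' implementation; the empty-mask convention is an assumption):

```python
# Illustrative Dice score for binary segmentation masks; not FEops' code.
import numpy as np

def dice_score(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (truth)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a convention)
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, so the reported means of 0.93-0.98 indicate near-complete agreement with the manual annotations.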
7. Type of Ground Truth Used
The ground truth used for both LAAO and TAVI studies was expert manual annotation of imaging data, referred to as "manually annotated data."
8. Sample Size for the Training Set
The document explicitly states that, for the test datasets, "No datasets were included that were used for training the AI models." It does not provide the specific sample size of the training set used for the AI models.
9. How the Ground Truth for the Training Set Was Established
The document does not provide details on how the ground truth for the training set was established. It only mentions that the test set cases were not used for training.
(60 days)
FEops HEARTguide™ is indicated for patient-specific simulation of transcatheter left atrial appendage occlusion (LAAO) device implantation during procedural planning.
The software performs computer simulation to predict implant frame deformation to support the evaluation for LAAO device size and placement.
FEops HEARTguide™ is intended to be used by qualified clinicians in conjunction with the simulated device instructions for use, the patient's clinical history, symptoms, and other preprocedural evaluations, as well as the clinician's professional judgment.
FEops HEARTguide™ is not intended to replace the simulated device instructions for use for final LAAO device selection and placement.
FEops HEARTguide™ is prescription use only.
FEops HEARTguide™ predicts implant frame deformation after percutaneous LAAO device implantation through computer simulation. The predicted deformation provides additional information during LAAO procedural planning.
The simulation is based on a 3D model of the patient anatomy which is generated from 2D medical images of the patient anatomy (multi-slice Cardiac Computed Tomography). The simulation is executed by FEops Case Analysts and run on FEops infrastructure.
The simulation report is created by combining a predefined device model with a patient-specific model of the patient anatomy. This is performed by trained operators at FEops using an internal software platform through an established workflow. The purposely qualified case analysts and quality control analysts process the received medical images of the patient to produce the simulation results.
The simulation results are provided as 2D and numerical data shown in a PDF report and 3D, 2D and numerical data shown in a web-based Viewer application accessible through a standard web browser.
The information provided primarily describes the FEops HEARTguide™ device and its substantial equivalence to a predicate device, focusing on regulatory aspects rather than detailed study results. Given the available text, I can extract and infer some information, but many specific details regarding acceptance criteria and study findings are not explicitly provided.
Here's an attempt to answer your questions based on the provided text, with acknowledgments of what is not present:
1. Table of acceptance criteria and the reported device performance
The document states: "Acceptance criteria were defined using the same method as for the predicate device demonstrating the same clinical meaningfulness." However, the specific acceptance criteria (e.g., a numerical threshold for accuracy, precision, etc.) and the reported device performance values against these criteria are not provided in the given text.
The text generally states that the performance validation testing demonstrated "a similar performance level" and "the performance of the subject device is equivalent to the performance of the predicate device." It also mentions "an assessment of the agreement between the computational model results and clinical data across the full intended operating range."
Without the specific criteria and metrics, a table cannot be fully constructed.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: The text states: "For both added LAAO devices, the performance study was performed on a cohort with a sample size equal to or larger than the predicate device." The exact number is not specified for either the predicate or the current device.
- Data Provenance: The document mentions "clinical data" but does not specify the country of origin or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the given text. The text mentions "qualified clinicians" in the context of device usage and interpretation but not explicitly for ground truth establishment during a performance study.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the given text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document describes the device as "Interventional cardiovascular implant simulation software," which predicts "implant frame deformation." It is intended to support procedural planning and "not intended to replace the simulated device instructions for use for final LAAO device selection and placement." This implies that it is a tool to assist clinicians.
The text mentions "a Human factors evaluation report was provided demonstrating the ability of the user interface and labeling to allow for intended and qualified users to correctly use the device and interpret the provided information." It also says, "To ensure consistency of modeling outputs, the validation was performed with multiple qualified operators using the procedure that will be implemented under anticipated conditions of use..."
However, the text does not explicitly state or present results from a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance, nor does it provide an effect size for such a comparison. The focus is on the device's predictive capability and its agreement with clinical data.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
The device is described as "simulation software" operated by "trained operators at FEops using an internal software platform through an established workflow." The "simulation results are provided as 2D and numerical data shown in a PDF report and 3D, 2D and numerical data shown in a web-based Viewer application accessible through a standard web browser." This indicates that the software performs the simulation independently, and its outputs are then presented.
The "performance study" assesses the "agreement between the computational model results and clinical data." This implies a standalone evaluation of the algorithm's predictions against real-world clinical outcomes or measurements. Therefore, it is highly likely that a standalone performance evaluation of the algorithm's predictive capabilities was performed, but the results in raw form are not in the provided text.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document mentions "a comparison of the results to clinical data supporting the indications for use" and "an assessment of the agreement between the computational model results and clinical data." This strongly suggests that clinical data (which could include imaging, procedural findings, or direct measurements from patients after implantation) was used as the ground truth. It does not specify if this clinical data was corroborated by expert consensus, pathology, or specific outcomes data, but "clinical data" is a broad term that would encompass such information.
8. The sample size for the training set
The document discusses "computational modeling verification and validation activities" and "performance validation testing data," but it does not mention a separate training set or its sample size. The focus is on the validation of the models against clinical data. It describes the software primarily as a "simulation software" based on a "3D model of the patient anatomy," "predefined device model," and "patient-specific model," rather than a machine learning model that would typically have a distinct training set.
9. How the ground truth for the training set was established
Since a training set is not explicitly mentioned for a machine learning context, this question is not applicable based on the provided text. The device performs simulations based on computational models, not necessarily learned from a training set in the typical AI sense.
(489 days)
FEops HEARTguide is indicated for patient-specific simulation of transcatheter left atrial appendage occlusion (LAAO) device implantation during procedural planning.
The software performs computer simulation to predict implant frame deformation to support the evaluation for LAAO device size and placement.
FEops HEARTguide is intended to be used by qualified clinicians in conjunction with the simulated device instructions-for-use, the patient's clinical history, symptoms, and other preprocedural evaluations, as well as the clinician's professional judgment.
FEops HEARTguide is not intended to replace the simulated device's instructions for use for final LAAO device selection and placement.
FEops HEARTguide is prescription use only.
FEops HEARTguide is a computer simulation device which provides a prediction of implant frame deformation (device-tissue interaction) post transcatheter LAAO device implantation. The device performs simulation by combining a predefined device model with a patient-specific model of the patient anatomy (Figure 1). The simulation results are intended to be used by qualified clinicians as a pre-procedural planning adjunct for LAAO implantation.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Quantitative Evaluation: Maximum allowed difference in percentage ((predicted Dmax - observed Dmax) / observed Dmax) must be less than the predetermined performance goal of ±15%. | Met: The mean difference was -1.9%, and the limits of agreement (95% CI) were 7.4% and -11.2%. Since the 95% CI of the agreement limits fell within ±15%, the quantitative endpoint was met. |
| Qualitative Evaluation: More than 75% of the verdicts should be "similar" or "acceptable" when 3 cardiology experts rated the similarity between the visualization of the simulated deployed device in the anatomy and the geometry reconstructed from the postoperative CT images. | Met: Overall, 90.6% (169/180) of the grades from the experts were "acceptable" or better, thus meeting the qualitative performance goal. |
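Both endpoints above reduce to simple checks. The following sketch shows their form under illustrative assumptions: the function names and example data are mine, and the quantitative check uses the conventional mean ± 1.96·SD limits of agreement rather than whatever exact procedure the submission used.

```python
# Illustrative sketch of the two acceptance checks described above; the data,
# function names, and the 1.96*SD limits-of-agreement convention are assumptions.
import numpy as np

def dmax_percent_diff(predicted, observed):
    """(predicted Dmax - observed Dmax) / observed Dmax, expressed in percent."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * (predicted - observed) / observed

def quantitative_endpoint_met(predicted, observed, goal_pct=15.0):
    """95% limits of agreement of the percent differences must lie within ±goal_pct."""
    d = dmax_percent_diff(predicted, observed)
    mean, sd = d.mean(), d.std(ddof=1)
    loa_lo, loa_hi = mean - 1.96 * sd, mean + 1.96 * sd
    return loa_lo >= -goal_pct and loa_hi <= goal_pct

def qualitative_endpoint_met(verdicts, goal=0.75):
    """More than `goal` of expert verdicts must be 'similar' or 'acceptable'."""
    ok = sum(v in ("similar", "acceptable") for v in verdicts)
    return ok / len(verdicts) > goal
```

With 60 cases rated by 3 experts, the qualitative check runs over 180 verdicts, matching the denominator reported above.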
Study Details for Clinical Accuracy Validation
1. Sample Size and Data Provenance
- Test Set Sample Size: 60 retrospective Watchman left atrial appendage occlusion (LAAO) cases.
- Data Provenance: Retrospective, from 5 centers. The specific country of origin is not explicitly stated, but the sponsor is based in Belgium.
2. Number of Experts and Qualifications for Ground Truth
- Number of Experts: 3 cardiology experts.
- Qualifications: Referred to as "cardiology experts." No further specific details on years of experience or board certification are provided in the text.
3. Adjudication Method
- Adjudication Method: Not explicitly detailed as 2+1 or 3+1. The text states that "3 cardiology experts rated the similarity." Since 90.6% of the grades (plural) were "acceptable" or better, the individual grades appear to have been pooled rather than adjudicated; the text does not specify how discrepancies, if any, were resolved.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done? No, a traditional MRMC comparative effectiveness study where human readers' performance with and without AI assistance is compared was not reported. The study focused on the accuracy of the device simulation's output compared to observed data and expert qualitative assessment of that output.
- Effect Size of Human Improvement (if applicable): Not applicable, as this type of MRMC study was not performed.
5. Standalone (Algorithm Only) Performance
- Standalone Performance: Yes, the study evaluates the performance of the FEops HEARTguide software in predicting device deformation without direct human-in-the-loop assistance during the simulation process. The experts evaluate the output of the simulation.
6. Type of Ground Truth Used
- Quantitative Ground Truth: Actual Watchman deformation observed on post-operative cardiac CT, measured by the maximum device diameter (Dmax).
- Qualitative Ground Truth: Geometry reconstructed from post-operative CT images, against which the simulated deployed device visualization was compared by cardiology experts.
7. Training Set Sample Size
- The text does not explicitly state the sample size for the training set. It mentions training, but not specific numbers for a "training set" in the context of a machine learning model. The "Model Development" section describes the creation of the computational models and material validation, but does not detail a separate training set for an AI/ML algorithm in the typical sense. It implies the model was developed based on CAD files and expansion tests from the manufacturer, and pre/post-operative CT images for material validation, but these "datasets" were not re-used in the clinical validation, suggesting they might have been part of development/training.
8. How Ground Truth for Training Set was Established
- As the training set size is not explicitly stated in the context of an AI/ML algorithm, the method for establishing its ground truth is also not detailed.
- For material validation (which could be considered analogous to part of model development/training), the text states: "Material properties validation of the LAA soft tissue used in the patient-specific simulations. This was achieved using datasets consisting of pre- and post-operative CT images." This implies the "ground truth" for material properties came from the observed changes between pre- and post-operative CT images. The "Implant Device Model" validation used "expansion tests received from the manufacturer" to validate the material model based on CAD files.