510(k) Data Aggregation (197 days)
TruPlan enables visualization and measurement of structures of the heart and vessels for pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure.
To facilitate the above, TruPlan provides general functionality such as:
- Segmentation of cardiovascular structures
- Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
- Simulation of TEE views, ICE views, and fluoroscopic rendering (see the sketch after this list)
- Measurement and annotation tools
- Reporting tools
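As a rough illustration of two of the reconstruction techniques named above, here is a minimal sketch, not TruPlan's actual implementation (which the 510(k) summary does not describe), of extracting an oblique MPR plane and a crude fluoroscopy-style projection from a CT volume. The synthetic volume, plane origin, and angles are all hypothetical:

```python
# Minimal sketch of MPR slice extraction and a fluoroscopy-style projection.
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic CT volume (z, y, x); stands in for a real patient scan.
ct = np.random.default_rng(0).normal(0.0, 50.0, size=(64, 128, 128))

def mpr_slice(volume, origin, u_dir, v_dir, size=(96, 96)):
    """Extract an oblique MPR plane by trilinear interpolation.

    origin       -- voxel-space point the plane passes through
    u_dir, v_dir -- orthonormal in-plane direction vectors (voxel units)
    """
    u = np.arange(size[0]) - size[0] / 2
    v = np.arange(size[1]) - size[1] / 2
    uu, vv = np.meshgrid(u, v, indexing="ij")
    # Voxel coordinates of every output pixel: origin + u*u_dir + v*v_dir.
    coords = (np.asarray(origin)[:, None, None]
              + uu[None] * np.asarray(u_dir)[:, None, None]
              + vv[None] * np.asarray(v_dir)[:, None, None])
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Axial-oblique plane tilted 30 degrees about the x-axis.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
plane = mpr_slice(ct, origin=(32, 64, 64), u_dir=(s, c, 0), v_dir=(0, 0, 1))

# A crude fluoroscopy-style rendering: integrate attenuation along one axis
# (a parallel-ray approximation of an X-ray projection).
fluoro = ct.sum(axis=0)
print(plane.shape, fluoro.shape)  # (96, 96) (128, 128)
```

A real MPR viewer adds interactive plane controls and calibrated spacing; the sketch only shows the core resampling step.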
The TruPlan Computed Tomography (CT) Imaging Software application (referred to herein as "TruPlan") is software as a medical device (SaMD) that helps qualified users with image-based pre-operative planning of the Left Atrial Appendage Closure (LAAC) procedure using CT data. The TruPlan device is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to the LAAC procedure, including assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of the closure device needed for the LAAC procedure. The TruPlan application is visualization software with basic measurement tools. The device is intended to be used as an aid to the existing standard of care; it does not replace the existing software applications physicians use for planning the LAAC procedure.
Pre-existing CT images are uploaded into the TruPlan application manually by the end user (a brief loading sketch follows the list below). The images can be viewed in the original CT form as well as in simulated views. The software displays the views in a modular format as follows:
- LAA
- Fluoro (fluoroscopy, simulation)
- Transesophageal Echocardiography (TEE, simulation)
- Intracardiac Echocardiography (ICE, simulation)
- Thrombus
- Multiplanar Reconstruction (MPR)
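As a hedged sketch of the manual upload step, a CT DICOM series is commonly loaded along these lines. SimpleITK and the directory path are illustrative assumptions; the summary does not name the toolkit TruPlan uses:

```python
# Illustrative DICOM series loading; the path is a hypothetical placeholder.
import SimpleITK as sitk

reader = sitk.ImageSeriesReader()
dicom_files = reader.GetGDCMSeriesFileNames("/path/to/ct_series")
reader.SetFileNames(dicom_files)
image = reader.Execute()  # reconstructed 3D CT volume

print("size (voxels):", image.GetSize())
print("spacing (mm):", image.GetSpacing())
```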
Each of these views offers the user visualization and quantification capabilities for pre-procedural planning of the Left Atrial Appendage Closure procedure; none are intended for diagnosis. The quantification tools are based on user-identified regions of interest and are user-modifiable. The device allows users to perform the measurements (all done on MPR viewers) listed in Table 1.
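For a sense of what such a quantification tool computes, here is a minimal sketch of a point-to-point distance measurement converted to millimetres using the voxel spacing. The points, spacing, and the "LAA ostium" label are all hypothetical:

```python
# Distance between two user-picked voxel positions, reported in millimetres.
import numpy as np

spacing_mm = np.array([0.5, 0.5, 0.5])  # (z, y, x) voxel spacing, assumed
p1 = np.array([120, 210, 198])          # user-identified point (voxels)
p2 = np.array([120, 233, 241])          # user-identified point (voxels)

distance_mm = np.linalg.norm((p2 - p1) * spacing_mm)
print(f"LAA ostium diameter (example): {distance_mm:.1f} mm")
```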
Additionally, the device generates a 3D rendering of the heart (including the left ventricle, left atrium, and LAA) using machine learning methodology. The 3D rendering is for visualization purposes only; no measurements or annotations can be made in this view.
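The summary does not describe how the 3D rendering is produced. As one common, illustrative approach (an assumption, not TruPlan's documented method), a binary segmentation mask can be converted to a renderable surface with marching cubes:

```python
# Turning a binary segmentation (a fake sphere standing in for an
# ML-produced heart mask) into a renderable triangle mesh.
import numpy as np
from skimage.measure import marching_cubes

# Fake "left atrium" mask: a sphere in a 64^3 volume.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 20 ** 2

# Extract the boundary surface; spacing scales vertices to millimetres.
verts, faces, normals, values = marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(0.5, 0.5, 0.5)
)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```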
TruPlan also provides reporting functionality to capture screenshots and measurements and to store them as a PDF document.
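As an illustrative sketch only (the summary does not describe TruPlan's report generator), a captured view plus a measurement table can be written to a PDF with Matplotlib's PdfPages; all values shown are made up:

```python
# Screenshot-plus-measurements PDF report, sketched with Matplotlib.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

screenshot = np.random.rand(128, 128)                 # stand-in for a captured view
measurements = {"LAA ostium max diameter": "24.4 mm",
                "LAA depth": "28.1 mm"}               # illustrative values

with PdfPages("laac_plan_report.pdf") as pdf:
    fig, (ax_img, ax_txt) = plt.subplots(1, 2, figsize=(8, 4))
    ax_img.imshow(screenshot, cmap="gray")
    ax_img.set_title("Captured view")
    ax_img.axis("off")
    ax_txt.axis("off")
    ax_txt.table(cellText=[[k, v] for k, v in measurements.items()],
                 colLabels=["Measurement", "Value"], loc="center")
    pdf.savefig(fig)
    plt.close(fig)
```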
TruPlan is installed as standalone software on the user's Windows PC (desktop) or laptop; Windows is the only supported operating system. TruPlan does not run on a server or in the cloud.
The provided text does not contain the detailed information required to describe the acceptance criteria and the comprehensive study that proves the device meets those criteria.
While the document (a 510(k) summary) mentions "Verification and validation activities were conducted to verify compliance with specified design requirements" and "Performance testing was conducted to verify compliance with specified design requirements," it does not provide any specific quantitative acceptance criteria or the actual performance data. It also states "No clinical studies were necessary to support substantial equivalence," which means there was no multi-reader multi-case (MRMC) study or standalone performance study in a clinical setting with human readers.
Therefore, I cannot fulfill most of the requested points. However, based on the information provided, I can state what is missing or not applicable.
Here's a breakdown of the requested information and what can/cannot be extracted from the provided text:
1. A table of acceptance criteria and the reported device performance
Cannot be provided. The document states that "performance testing was conducted to verify compliance with specified design requirements," and "Validated phantoms were used for assessing the quantitative measurement output of the device." However, it does not specify what those "specified design requirements" (i.e., acceptance criteria) were, nor does it report the actual quantitative performance results (e.g., accuracy, precision) of the device against those criteria.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
Cannot be provided. The document refers to "Validated phantoms" for quantitative measurement assessment. This implies synthetic or controlled data rather than patient data. No details are given regarding the number of phantoms used or their characteristics. There is no mention of "test set" in the context of patient data, data provenance, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Cannot be provided. Since no clinical test set with patient data is described, there's no mention of experts establishing ground truth for such a set. The testing was done on "validated phantoms" for "quantitative measurement output," suggesting a comparison against known ground truth values inherent to the phantom design rather than expert consensus on medical images.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Cannot be provided. Given the lack of a clinical test set and expert review, no adjudication method is mentioned or applicable.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
No, an MRMC study was NOT done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." This means there was no MRMC study to show human reader improvement with AI assistance. The submission relies on "performance testing and predicate device comparisons" for substantial equivalence.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
Likely yes, for certain aspects, but specific performance data is not provided. The document mentions "Validated phantoms were used for assessing the quantitative measurement output of the device." This implies an algorithmic, standalone assessment of the device's measurement capabilities against the known values of the phantoms. However, the exact methodology, metrics, and results of this standalone performance are not detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the quantitative measurement assessment was based on "Validated phantoms." This means the ground truth for measurements (e.g., distances, areas) would be the known, precisely manufactured dimensions of the phantoms, not expert consensus, pathology, or outcomes data.
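To make the phantom-based approach concrete, here is a hedged sketch of how measured outputs might be compared against a phantom's known dimensions. The values and the 1 mm tolerance are hypothetical; the actual acceptance criteria are not disclosed in the 510(k) summary:

```python
# Comparing device measurements against known phantom dimensions.
import numpy as np

known_mm = np.array([10.0, 15.0, 20.0, 25.0, 30.0])     # manufactured dimensions
measured_mm = np.array([10.2, 14.9, 20.3, 24.8, 30.1])  # device output (made up)

error = measured_mm - known_mm
print(f"bias: {error.mean():+.2f} mm, max |error|: {np.abs(error).max():.2f} mm")
assert np.abs(error).max() <= 1.0, "exceeds hypothetical 1 mm tolerance"
```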
8. The sample size for the training set
Cannot be provided. The document mentions that the device "generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology." This indicates that a training set was used for this specific function. However, the size of this training set is not mentioned anywhere in the provided text.
9. How the ground truth for the training set was established
Cannot be provided. While it's implied that there was a training set for the "machine learning methodology" used for 3D rendering, the document does not explain how the ground truth for this training set was established.