
Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K222593
    Date Cleared: 2023-01-18 (145 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference & Predicate Devices: (not listed)
    Device Name: TruPlan Computed Tomography (CT) Imaging Software

    Intended Use

    TruPlan enables visualization and measurement of structures of the heart and vessels for:

    • Pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure
    • Post-procedural evaluation for the LAAC procedure

    To facilitate the above, TruPlan provides general functionality such as:

    • Segmentation of cardiovascular structures
    • Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
    • Simulation of TEE views, ICE views, and fluoroscopic rendering
    • Measurement and annotation tools
    • Reporting tools

    TruPlan's intended patient population consists of adult patients.

    Device Description

    The TruPlan Computed Tomography (CT) Imaging Software application ("TruPlan") is software as a medical device (SaMD) that helps qualified users with image-based pre-procedural planning and post-procedural follow-up of the Left Atrial Appendage Closure (LAAC) procedure using CT data. TruPlan is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to and following the LAAC procedure. This includes the assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of the closure device needed for the LAAC procedure and evaluate LAAC device placement in a follow-up CT study. The TruPlan application is visualization software with basic measurement tools. The device is intended to be used as an aid to the existing standard of care and does not replace the existing software applications physicians use for planning or follow-up of a LAAC procedure.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) submission for TruPlan Computed Tomography (CT) Imaging Software:


    Acceptance Criteria and Device Performance Study for TruPlan CT Imaging Software

    The TruPlan Computed Tomography (CT) Imaging Software by Circle Cardiovascular Imaging, Inc. underwent validation of its machine learning (ML) derived outputs to demonstrate its performance relative to pre-defined acceptance criteria. The device contains two primary ML algorithms: Left Heart Segmentation and Landing Zone Detection.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature / Metric | Acceptance Criteria (Pre-defined) | Reported Device Performance |
    |---|---|---|
    | Left Heart Segmentation algorithm | | |
    | Probability of bone removal (segmentation accuracy) | No explicit numerical threshold stated; implied high rate of correct segmentation | 532/533 cases (99.81%) with correct bone removal |
    | Probability of LAA visualization (segmentation accuracy) | No explicit numerical threshold stated; implied high rate of correct visualization | 519/533 cases (97.37%) with correct LAA visualization |
    | Landing Zone Detection algorithm | | |
    | Landing zone plane distance metric | Within 10 mm | 97/100 cases (97%) within 10 mm (mean distance: 3.87 mm) |
    | Landing zone contour center distance metric | Within 12 mm | 99/100 cases (99%) within 12 mm (mean distance: 2.92 mm) |

    Note: The document states that "All performance testing results met Circle's pre-defined acceptance criteria," indicating that the reported performance metrics met or exceeded the internal thresholds established by the manufacturer, even if the exact numerical acceptance percentages for segmentation accuracy were not explicitly listed as criteria in the provided text.
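
    To make the pass/fail arithmetic behind the landing zone criteria concrete, here is a minimal scoring sketch. The submission does not describe how the distances were computed; this assumes 3D center coordinates in millimeters and Euclidean distance, and the function name and synthetic cases are illustrative only.

```python
import numpy as np

def landing_zone_pass_rate(pred_centers, gt_centers, threshold_mm):
    """Fraction of cases whose predicted-to-ground-truth center distance
    falls within the acceptance threshold, plus the mean distance."""
    # Per-case Euclidean distance; inputs are [n_cases, 3] arrays in mm.
    dists = np.linalg.norm(np.asarray(pred_centers) - np.asarray(gt_centers), axis=1)
    return (dists <= threshold_mm).mean(), dists.mean()

# Toy example: 5 synthetic cases with small localization error.
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 100.0, size=(5, 3))
pred = gt + rng.normal(0.0, 2.0, size=(5, 3))
rate, mean_d = landing_zone_pass_rate(pred, gt, threshold_mm=12.0)
print(f"pass rate: {rate:.0%}, mean distance: {mean_d:.2f} mm")
```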

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Left Heart Segmentation: 533 anonymized patient images
      • Landing Zone Detection: 100 anonymized patient images
    • Data Provenance:
      • Country of Origin: The validation data were sourced from multiple sites:
        • Left Heart Segmentation: U.S., Canada, South America, Europe, and Asia.
        • Landing Zone Detection: various sites across the U.S.
      • Retrospective/Prospective: Not explicitly stated; the validation used pre-existing CT images, which is consistent with a retrospective design. The document states "All data used for validation were not used during the development of the training algorithms," ensuring independence from the training data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document mentions that for the Landing Zone Detection algorithm, "The landing zone was manually contoured by multiple expert readers for evaluation."

    For the training data of the Left Heart Segmentation algorithm, it states that "the left heart structures were manually annotated by multiple expert readers." While this refers to training, it suggests a similar process and similar expert qualifications for testing.

    The specific number of experts and their explicit qualifications (e.g., "radiologist with 10 years of experience") are not specified in the provided text for either training or validation ground truth establishment. It only states "expert readers."

    4. Adjudication Method for the Test Set

    The document indicates that for the Landing Zone Detection ground truth, the "landing zone was manually contoured by multiple expert readers." For the Left Heart Segmentation training data, "manually annotated by multiple expert readers." This implies a consensus or majority vote approach might have been used, but the specific adjudication method (e.g., 2+1, 3+1, none) is not explicitly detailed in the provided text.
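
    For illustration only, a simple majority vote over multiple readers' binary masks is one common way such a consensus could be formed; the submission does not describe the actual method, and the reader masks below are synthetic.

```python
import numpy as np

def majority_vote(masks):
    """Consensus binary mask: a voxel is foreground when more than
    half of the readers labeled it foreground."""
    masks = np.stack(masks)        # shape: [n_readers, ...volume dims]
    votes = masks.sum(axis=0)
    return votes > (masks.shape[0] / 2)

# Three hypothetical reader annotations of the same 4x4 slice.
r1 = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]], dtype=bool)
r2 = np.array([[0,1,1,0],[0,1,1,1],[0,0,1,0],[0,0,0,0]], dtype=bool)
r3 = np.array([[0,0,1,0],[0,1,1,0],[0,0,0,0],[0,0,0,0]], dtype=bool)
consensus = majority_vote([r1, r2, r3])
print(consensus.astype(int))
```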

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, an MRMC comparative effectiveness study was not done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." The performance data presented is that of the algorithm's standalone performance against expert-defined ground truth, rather than a comparison of human readers with and without AI assistance. Therefore, an effect size of human reader improvement with AI vs. without AI assistance is not provided and was not part of this submission.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, standalone performance was evaluated. The metrics reported (Probability of Bone Removal, Probability of LAA Visualization, Landing Zone Plane Distance, Landing Zone Contour Center Distance) are direct measurements of how accurately the ML algorithms perform their specific tasks when processing the CT images. The "Validation of Machine Learning Derived Outputs" section focuses purely on the algorithm's performance against ground truth.

    7. The Type of Ground Truth Used

    The ground truth used was manual contouring and annotation by multiple expert readers (a form of expert consensus).

    • For Left Heart Segmentation: "left heart structures were manually annotated by multiple expert readers."
    • For Landing Zone Detection: "the landing zone was manually contoured by multiple expert readers."

    This is observational data interpreted by human experts, not pathology or outcomes data.

    8. The Sample Size for the Training Set

    • Left Heart Segmentation: 113 cases
    • Landing Zone Detection: 273 cases

    9. How the Ground Truth for the Training Set Was Established

    • Left Heart Segmentation: "the left heart structures were manually annotated by multiple expert readers."
    • Landing Zone Detection: "the landing zone was manually contoured by expert readers."

    Similar to the test set, the ground truth for training was established through manual annotation and contouring by expert readers. The document emphasizes that "the separation into training versus validation datasets is made on the study level to ensure no overlap between the two sets."
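
    As an illustration of what a study-level separation can look like in code, the sketch below uses scikit-learn's GroupShuffleSplit to partition images by study ID so that no study contributes to both training and validation. The study IDs and split ratio are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical case list: each image belongs to a study; grouping by
# study_id guarantees no study appears in both sets.
study_ids = np.array([101, 101, 102, 103, 103, 104, 105, 105])
images = np.arange(len(study_ids))          # stand-ins for image data

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter.split(images, groups=study_ids))

# No study may straddle the two sets.
assert not set(study_ids[train_idx]) & set(study_ids[val_idx])
print("train studies:", sorted(set(study_ids[train_idx])))
print("val studies:  ", sorted(set(study_ids[val_idx])))
```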


    K Number: K202212
    Device Name: TruPlan
    Date Cleared: 2021-02-19 (197 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference & Predicate Devices: (not listed)

    Intended Use

    TruPlan enables visualization and measurement of structures of the heart and vessels for pre-procedural planning and sizing for the left atrial appendage closure (LAAC) procedure.

    To facilitate the above, TruPlan provides general functionality such as:

    • Segmentation of cardiovascular structures
    • Visualization and image reconstruction techniques: 2D review, Volume Rendering, MPR
    • Simulation of TEE views, ICE views, and fluoroscopic rendering
    • Measurement and annotation tools
    • Reporting tools
    Device Description

    The TruPlan Computed Tomography (CT) Imaging Software application (referred to herein as "TruPlan") is software as a medical device (SaMD) that helps qualified users with image-based pre-operative planning of the Left Atrial Appendage Closure (LAAC) procedure using CT data. The TruPlan device is designed to support the anatomical assessment of the Left Atrial Appendage (LAA) prior to the LAAC procedure. This includes the assessment of the LAA size, shape, and relationships with adjacent cardiac and extracardiac structures. This assessment helps the physician determine the size of the closure device needed for the LAAC procedure. The TruPlan application is visualization software with basic measurement tools. The device is intended to be used as an aid to the existing standard of care; it does not replace the existing software applications physicians use for planning the Left Atrial Appendage Closure procedure.

    Pre-existing CT images are uploaded into the TruPlan application manually by the end user. The user can view the original CT images as well as simulated views. The software displays the views in a modular format as follows:

    • LAA
    • Fluoro (fluoroscopy, simulation)
    • Transesophageal Echo (TEE, simulation)
    • Intracardiac Echocardiography (ICE, simulation)
    • Thrombus
    • Multiplanar Reconstruction (MPR)

    Each of these views offers the user visualization and quantification capabilities for pre-procedural planning of the Left Atrial Appendage Closure procedure; none are intended for diagnosis. The quantification tools are based on user-identified regions of interest and are user-modifiable. The device allows users to perform the measurements (all done on MPR viewers) listed in Table 1.

    Additionally, the device generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology. The 3D rendering is for visualization purposes only. No measurements or annotation can be done using this view.

    TruPlan also provides reporting functionality to capture screenshots and measurements and to store them as a PDF document.
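
    As a rough illustration of this kind of reporting pipeline (not Circle's implementation), the sketch below assembles a screenshot and a few measurements into a PDF with the reportlab library; the file names, measurement labels, and layout are hypothetical.

```python
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

def write_report(pdf_path, screenshot_path, measurements):
    """Assemble a one-page PDF with a captured view and measurement values."""
    c = canvas.Canvas(pdf_path, pagesize=letter)
    c.setFont("Helvetica-Bold", 14)
    c.drawString(72, 740, "LAAC Pre-Procedural Planning Report")
    # Place the captured screenshot (file assumed to exist on disk).
    c.drawImage(screenshot_path, 72, 420, width=300, height=300)
    c.setFont("Helvetica", 11)
    y = 390
    for name, value in measurements.items():
        c.drawString(72, y, f"{name}: {value}")
        y -= 16
    c.save()

# Hypothetical file names and measurement labels.
write_report("truplan_report.pdf", "laa_view.png",
             {"LAA ostium diameter": "22.4 mm", "Landing zone depth": "18.1 mm"})
```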

    TruPlan is installed as standalone software on the user's Windows PC (desktop) or laptop; Windows is the only supported operating system. TruPlan does not operate on a server or in the cloud.

    AI/ML Overview

    The provided text does not contain the detailed information required to describe the acceptance criteria and the comprehensive study that proves the device meets those criteria.

    While the document (a 510(k) summary) mentions "Verification and validation activities were conducted to verify compliance with specified design requirements" and "Performance testing was conducted to verify compliance with specified design requirements," it does not provide any specific quantitative acceptance criteria or the actual performance data. It also states "No clinical studies were necessary to support substantial equivalence," which means there was no multi-reader multi-case (MRMC) study or standalone performance study in a clinical setting with human readers.

    Therefore, I cannot fulfill most of the requested points. However, based on the information provided, I can state what is missing or not applicable.

    Here's a breakdown of the requested information and what can/cannot be extracted from the provided text:


    1. A Table of Acceptance Criteria and the Reported Device Performance

    Cannot be provided. The document states that "performance testing was conducted to verify compliance with specified design requirements," and "Validated phantoms were used for assessing the quantitative measurement output of the device." However, it does not specify what those "specified design requirements" (i.e., acceptance criteria) were, nor does it report the actual quantitative performance results (e.g., accuracy, precision) of the device against those criteria.


    2. Sample Size Used for the Test Set and Data Provenance

    Cannot be provided. The document refers to "Validated phantoms" for quantitative measurement assessment. This implies synthetic or controlled data rather than patient data. No details are given regarding the number of phantoms used or their characteristics. There is no mention of "test set" in the context of patient data, data provenance, or whether it was retrospective or prospective.


    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    Cannot be provided. Since no clinical test set with patient data is described, there's no mention of experts establishing ground truth for such a set. The testing was done on "validated phantoms" for "quantitative measurement output," suggesting a comparison against known ground truth values inherent to the phantom design rather than expert consensus on medical images.


    4. Adjudication Method for the Test Set

    Cannot be provided. Given the lack of a clinical test set and expert review, no adjudication method is mentioned or applicable.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance

    No, an MRMC study was not done. The document explicitly states: "No clinical studies were necessary to support substantial equivalence." This means there was no MRMC study to show human reader improvement with AI assistance. The submission relies on "performance testing and predicate device comparisons" for substantial equivalence.


    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done

    Likely yes, for certain aspects, but specific performance data is not provided. The document mentions "Validated phantoms were used for assessing the quantitative measurement output of the device." This implies an algorithmic, standalone assessment of the device's measurement capabilities against the known values of the phantoms. However, the exact methodology, metrics, and results of this standalone performance are not detailed.


    7. The Type of Ground Truth Used

    The ground truth for the quantitative measurement assessment was based on "Validated phantoms." This means the ground truth for measurements (e.g., distances, areas) would be the known, precisely manufactured dimensions of the phantoms, not expert consensus, pathology, or outcomes data.
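
    A minimal sketch of what such phantom-based validation can look like, assuming the acceptance test compares each device measurement against the phantom's nominal (manufactured) dimension within a tolerance; the tolerance and values here are hypothetical.

```python
import numpy as np

def phantom_measurement_check(measured_mm, nominal_mm, tolerance_mm):
    """Error of each device measurement against the phantom's known
    (manufactured) dimension, and whether it falls within tolerance."""
    errors = np.asarray(measured_mm, float) - np.asarray(nominal_mm, float)
    return errors, np.abs(errors) <= tolerance_mm

# Hypothetical phantom features with known diameters (mm).
nominal = [10.0, 20.0, 30.0]
measured = [10.2, 19.8, 30.4]
errors, ok = phantom_measurement_check(measured, nominal, tolerance_mm=0.5)
for n, e, passed in zip(nominal, errors, ok):
    print(f"nominal {n:.1f} mm: error {e:+.1f} mm -> {'PASS' if passed else 'FAIL'}")
```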


    8. The Sample Size for the Training Set

    Cannot be provided. The document mentions that the device "generates a 3D rendering of the heart (including left ventricle, left atrium, and LAA) using machine learning methodology." This indicates that a training set was used for this specific function. However, the size of this training set is not mentioned anywhere in the provided text.


    9. How the Ground Truth for the Training Set Was Established

    Cannot be provided. While it's implied that there was a training set for the "machine learning methodology" used for 3D rendering, the document does not explain how the ground truth for this training set was established.

