
510(k) Data Aggregation

    K Number: K223552
    Manufacturer:
    Date Cleared: 2023-04-19 (145 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    Brainlab Elements Trajectory Planning software is intended for pre-, intra- and postoperative image-based planning and review of either open or minimally invasive neurosurgical procedures. Its use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate for the placement of instruments/devices and where the position of the instrument/device can be identified relative to images of the anatomy.

    This includes, but is not limited to, the following cranial procedures (including frame-based stereotaxy and frame alternative-based stereotaxy):

    • Catheter placement
    • Depth electrode placement (SEEG procedures)
    • Lead placement and detection (DBS procedures)
    • Probe placement
    • Cranial biopsies
    Device Description

    Brainlab Elements Trajectory Planning is software used to plan possible minimally invasive pathways ("Trajectories") for surgical instruments on scanned images. It is used for the processing and viewing of anatomical images (for example: axial, coronal and sagittal reconstructions) and corresponding planning content (for example: co-registrations, segmentations, and fiber tracts created by compatible applications and stored as DICOM data), and for the planning of trajectories based on this data. The device is also used for the creation of coordinates and measurements that can serve as input data for surgical intervention (e.g., stereotactic arc settings or FHC STarFix platform settings). Depending on the workflow and available licenses, Brainlab Elements Trajectory Planning may be used in different roles where only specific application features are available.
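The core planning object described above, a trajectory defined on scanned images, can be sketched as a pair of points in image coordinates. This is a hypothetical illustration only; the class, field names, and units are assumptions, not Brainlab's actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    """Hypothetical sketch: a planned pathway from an entry point to a target.

    Points are (x, y, z) in millimetres in some image coordinate system;
    both the representation and the coordinate convention are assumptions.
    """
    entry_mm: tuple[float, float, float]   # entry point (e.g., on the skull surface)
    target_mm: tuple[float, float, float]  # intended target (e.g., a biopsy site)

    def length_mm(self) -> float:
        """Straight-line path length between entry and target."""
        return sum((e - t) ** 2 for e, t in zip(self.entry_mm, self.target_mm)) ** 0.5

t = Trajectory(entry_mm=(10.0, 0.0, 50.0), target_mm=(10.0, 0.0, 0.0))
print(t.length_mm())  # 50.0
```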

    The following roles are available for Trajectory Planning:

    • Trajectory (Element): allows the creation of trajectories
    • Stereotaxy (Element): allows the creation of trajectories and supports stereotactic procedures based on Stereotactic Arc Settings or FHC STarFix platform settings
    • Lead Localization (Element): allows the creation of trajectories and automatic detection of leads in post-operative images

    All roles can be used for cranial trajectory planning procedures once Trajectory Planning and the corresponding workflow files for Cranial Planning, Stereotactic Planning or Post-Op Review are installed.
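The role/license mechanism described above can be sketched as a simple mapping from installed roles to the features they expose. The role names and feature lists follow the text; the feature identifiers and the gating mechanism itself are invented for illustration.

```python
# Hypothetical sketch of license/role-based feature gating: which features a
# Trajectory Planning installation exposes depends on the installed role.
ROLE_FEATURES = {
    "Trajectory": {"create_trajectories"},
    "Stereotaxy": {"create_trajectories",
                   "stereotactic_arc_settings",
                   "starfix_platform_settings"},
    "Lead Localization": {"create_trajectories",
                          "automatic_lead_detection"},
}

def feature_available(role: str, feature: str) -> bool:
    """Return True if the given role (if installed) exposes the feature."""
    return feature in ROLE_FEATURES.get(role, set())

print(feature_available("Lead Localization", "automatic_lead_detection"))  # True
print(feature_available("Trajectory", "starfix_platform_settings"))        # False
```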

    AI/ML Overview

    The provided text describes the 510(k) premarket notification for the Brainlab Elements - Trajectory Planning (2.6) device. It acts as a K-number summary, primarily comparing the new device to existing predicate devices to establish substantial equivalence. While it discusses certain "performance data" and "verification," it does not present a detailed study with acceptance criteria and reported device performance in the format of clinical trial results or a validation study with specific metrics.

    The document states that the objective of "validative tests" was to verify that the accuracy and robustness of automatically and semi-automatically detected WayPoint anchors used in FHC's 4mm and 5mm bone anchors were "non-inferior" to a specified predicate software application. It also states that "the acceptance criteria specified in the test plan regarding the Brainlab automatic and semi-automatic anchor detection algorithm were fulfilled." However, the specific acceptance criteria and the reported quantitative performance metrics are not included in this document.

    Therefore, a table of acceptance criteria and reported device performance, sample sizes, expert details, adjudication methods, MRMC study details, or specific ground-truth methodologies for a standalone performance study cannot be compiled, as these details are not provided in the given text.

    Based on the information provided, here's what can be inferred and what remains unknown regarding the "study" mentioned:

    Known Information (based on the provided text):

    • Device Performance Focus: The performance data section focuses on "FHC anchor detection" and "summative usability evaluation."
    • FHC Anchor Detection Objective: To verify that the accuracy and robustness of Brainlab's automatic and semi-automatic WayPoint anchor detection on CT data for FHC's 4mm and 5mm bone anchors are non-inferior to the predicate "SW application WayPoint™ Planner Software."
    • Acceptance Criteria for Anchor Detection: The text states, "The acceptance criteria specified in the test plan regarding the Brainlab automatic and semi-automatic anchor detection algorithm were fulfilled." (However, the specific criteria are not provided.)
    • Summative Usability Evaluation: This evaluation covered the support of STarFix platforms for DBS procedures, including detection of WayPoint bone anchors and platform planning. New GUI functionalities (locking of plans, overlay/blending of fused images, and new interaction with coordinates) were also included.
    • Study Type Mentioned: "Validative tests" for anchor detection and "summative evaluation" for usability.
    • Conclusion: The manufacturer concluded that "the performed verification and validation activities established that the set requirements were met and that the device performs as intended."
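The summary states only that a non-inferiority claim against the predicate WayPoint™ Planner software was met, without quantitative criteria. Purely as a sketch of what such a criterion might look like, the check below compares mean anchor-localization error against a fixed margin; the margin, the error values, and the form of the criterion are all invented (a real test plan would typically use a confidence-interval-based criterion).

```python
def non_inferior(test_errors_mm, predicate_errors_mm, margin_mm=0.5):
    """Hypothetical non-inferiority check on anchor-localization error.

    The test algorithm is declared non-inferior if its mean error does not
    exceed the predicate's mean error by more than the margin. All numbers
    here are illustrative; the actual criteria are not in the document.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(test_errors_mm) <= mean(predicate_errors_mm) + margin_mm

# Invented error samples (mm) for the test and predicate algorithms:
print(non_inferior([0.8, 1.0, 0.9], [0.9, 1.1, 1.0]))  # True
```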

    Unknown Information (not provided in the text):

    1. Table of Acceptance Criteria and Reported Device Performance: This critical information is missing. The document only states that acceptance criteria were fulfilled, not what they were or the quantitative results achieved.
    2. Sample Size Used for the Test Set and Data Provenance: The document does not specify the number of cases or images used in the FHC anchor detection or usability tests, nor does it state the country of origin of the data or if it was retrospective/prospective.
    3. Number of Experts Used to Establish Ground Truth and Qualifications: The number and qualifications of experts involved in establishing ground truth (for anchor detection) or participating in the usability evaluation are not mentioned.
    4. Adjudication Method: No information is provided regarding how disagreements or the ground truth was established for the test set.
    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: The document does not describe an MRMC study comparing human readers with and without AI assistance. The non-inferiority claim for anchor detection is against a software application, not a human reader.
    6. Standalone (Algorithm Only) Performance: While the "FHC anchor detection" seems to refer to algorithm performance, specific metrics (e.g., sensitivity, specificity, accuracy for anchor detection) of the algorithm in a standalone capacity are not provided. The non-inferiority is stated against another software product, not against a defined ground truth with quantitative metrics.
    7. Type of Ground Truth Used: For the FHC anchor detection, it's implied that there's a ground truth for "accuracy and robustness" of anchor detection, likely based on known positions or expert annotations of anchors. However, the specific method (e.g., expert consensus, physical measurements) is not detailed. For usability, the ground truth would be user feedback and observed performance.
    8. Sample Size for the Training Set: No information is provided about the training set for any machine learning components within the device (if any). The context suggests this is more about software functionality and accuracy validation rather than a deep learning model requiring a large training set.
    9. How Ground Truth for the Training Set Was Established: This information is also absent.