Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K223288
    Manufacturer
    Date Cleared
    2023-07-21

    (269 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K192703, K162929, K213989

    Intended Use

    Cranial Navigation: The Cranial Navigation is intended as an image-guided planning and navigation system to enable navigated cranial surgery. It links instruments to virtual computer image data processed by the navigation platform. The system is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, including: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions, AVM Resection), Craniofacial Procedures (including cranial and midfacial bones) (Tumor Resection, Bone Tumor Defect Reconstruction, Bone Trauma Defect Reconstruction, Bone Congenital Defect Reconstruction, Orbital cavity reconstruction procedures), Removal of foreign objects.

    Cranial EM System: Cranial EM is intended as an image-guided planning and navigation system to enable neurosurgery procedures. The device is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, such as: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions), Intracranial catheter placement.

    Device Description

    The subject device consists of several devices: Cranial Navigation using optical tracking technology, its accessory Automatic Registration iMRI, and Cranial EM System using electromagnetic tracking technology.

    Cranial Navigation is an image-guided surgery system for navigated treatments in the field of cranial surgery, including the newly added Craniofacial indication. It offers different patient image registration methods and instrument calibration to allow surgical navigation by using optical tracking technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. The software is installed on a mobile or fixed Image Guided Surgery (IGS) platform to support the surgeon in clinical procedures by displaying tracked instruments in the patient's image data. The IGS platforms consist of a mobile Monitor Cart or a fixed ceiling-mounted display and an infrared camera for image-guided surgery purposes. There are three different product lines of the IGS platforms: "Curve", "Kick" and "Buzz Navigation (Ceiling-Mounted)". Cranial Navigation consists of: Several software modules for registration, instrument handling, navigation and infrastructure tasks (main software: Cranial Navigation 4.1 including several components), IGS platforms (Curve Navigation 17700, Kick 2 Navigation Station, Buzz Navigation (Ceiling-Mounted) and predecessor models), Surgical instruments for navigation, patient referencing and registration.

    Automatic Registration iMRI is an accessory to Cranial Navigation enabling automatic image registration for intraoperatively acquired MR imaging. The registration object can be used in subsequent applications (e.g. Cranial Navigation 4.1). It consists of the software Automatic Registration iMRI 1.0, a registration matrix and a reference adapter.

    Similarly, the Cranial EM System is an image-guided planning and navigation system to enable neurosurgical procedures. It offers instrument handling as well as patient registration to allow surgical navigation by using electromagnetic tracking technology. It links patient anatomy (using a patient reference) and instruments in the real world or "patient space" to patient scan data or "image space". This allows for the continuous localization of medical instruments and patient anatomy for medical interventions in cranial procedures. It uses the same software infrastructure components as the Cranial Navigation, and the software is also installed on IGS platforms consisting of a mobile monitor cart and an EM tracking unit. It consists of: Different software modules for instrument set-up, registration and navigation (Main software: Cranial EM 1.1 including several components), EM IGS platforms (Curve Navigation 17700 and Kick 2 Navigation Station EM), Surgical instruments for navigation, patient referencing and registration.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for Brainlab AG's Cranial Navigation, Navigation Software Cranial, Navigation Software Craniofacial, Cranial EM System, and Automatic Registration iMRI. The document focuses on demonstrating substantial equivalence to predicate devices, particularly highlighting the introduction of Artificial Intelligence/Machine Learning (AI/ML) algorithms for specific features.

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document mentions acceptance criteria relating to system accuracy.

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Mean Positional Error ≤ 2 mm | Fulfilled (details not explicitly quantified beyond "fulfilled") |
    | Mean Angular Error of instrument's axis ≤ 2° | Fulfilled (details not explicitly quantified beyond "fulfilled") |
    | AI/ML performance for abnormity detection | Precision and recall higher than atlas-based method |
    | AI/ML performance for landmark detection | No concerns regarding safety and effectiveness (implicitly met expectations) |
    | Software level of concern | "Major" level of concern addressed by V&V testing |
    | Usability | Summative usability carried out according to IEC 62366-1 |
    | Electrical safety & EMC | Compliance to IEC 60601-1, AIM 7351731, IEC 60601-1-2 |
    | Instruments | Biocompatibility, cleaning/disinfection, mechanical properties, aging, MRI testing (where applicable) – all evaluated |
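
    The two system-accuracy criteria are simple aggregate metrics. Below is a minimal sketch of how they could be computed, assuming arrays of tracked vs. reference tip positions and instrument-axis vectors; the 510(k) summary reports only pass/fail, not the underlying measurement data, so the function names and inputs here are illustrative:

```python
import numpy as np

def mean_positional_error(tracked, truth):
    """Mean Euclidean distance (mm) between tracked and reference tip positions."""
    return float(np.mean(np.linalg.norm(tracked - truth, axis=1)))

def mean_angular_error(tracked_axes, truth_axes):
    """Mean angle (degrees) between tracked and reference instrument axes."""
    t = tracked_axes / np.linalg.norm(tracked_axes, axis=1, keepdims=True)
    g = truth_axes / np.linalg.norm(truth_axes, axis=1, keepdims=True)
    cos = np.clip(np.sum(t * g, axis=1), -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return float(np.degrees(np.mean(np.arccos(cos))))

def meets_criteria(pos_err_mm, ang_err_deg):
    """Acceptance check against the stated <= 2 mm / <= 2 degree criteria."""
    return pos_err_mm <= 2.0 and ang_err_deg <= 2.0
```

    The check is a threshold on the mean, not the maximum, so individual measurements above 2 mm or 2° could still pass; the summary does not say whether a worst-case bound was also applied.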

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not explicitly state the numerical sample size for the test set used for evaluating the AI/ML algorithm or other performance metrics. It only mentions the "test pool data is set aside at the beginning of the project."
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It just refers to "training pool data" and "test pool data."
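
    The "training pool / test pool" protocol described above can be illustrated with a minimal sketch. The split fraction, fixed seed, and random partitioning are assumptions for illustration; the submission discloses none of these details beyond the test pool being set aside at the start of the project:

```python
import random

def split_pools(cases, test_fraction=0.2, seed=42):
    """Set aside a fixed test pool before training; the remainder is the training pool."""
    rng = random.Random(seed)  # fixed seed: the split is locked once made
    shuffled = list(cases)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (training pool, test pool)

train_pool, test_pool = split_pools(range(100))
assert not set(train_pool) & set(test_pool)  # pools are disjoint by construction
```

    Setting the test pool aside before any training is what makes the later performance evaluation a genuine held-out measurement rather than a check against data the model has seen.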

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not provide information on the number or qualifications of experts used to establish ground truth for the test set. It mentions the AI/ML algorithm was developed using a Supervised Learning approach and that its performance was evaluated against a test pool, but no details on human ground truth labeling are given.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication methods used for the test set ground truth.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The AI/ML functionality described is for "pre-registration" and "centering of views," which are aids to the navigation system, but there is no indication of a study measuring human reader performance with and without AI assistance. The performance testing for AI/ML focuses on its own accuracy (precision and recall) compared to a previous atlas-based method, not human reader improvement.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, a standalone (algorithm only) performance evaluation was done for the AI/ML features. The text states: "For the two features now implemented using AI/ML (landmark detection in the pre-registration step and centering of views if no instrument is tracked to the detected abnormity), performance testing comparing conventional to machine learning based landmark detection and abnormity detection were performed showing equivalent performance as in the predicate device." It also highlights that for abnormity detection, "both precision and recall of the ML-based method are higher in comparison to the atlas-based method." This indicates an isolated evaluation of the algorithm's performance.
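
    The precision and recall figures referenced above are standard detection metrics. A minimal sketch of how they are computed from detection outcomes; the counts below are hypothetical, since the 510(k) summary reports only the direction of the comparison, not the numbers:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN) for a detection task."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for illustration only -- not reported in the submission.
ml_p, ml_r = precision_recall(tp=90, fp=5, fn=10)
atlas_p, atlas_r = precision_recall(tp=80, fp=15, fn=20)
assert ml_p > atlas_p and ml_r > atlas_r  # the relationship the summary claims
```

    Note that both metrics must improve for the claim to hold; a detector can trivially raise recall at the cost of precision by flagging more candidates.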

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for the AI/ML algorithm. It mentions "Supervised Learning" and "training pool data," implying that the training data had pre-established labels (ground truth), but it doesn't specify if these labels came from expert consensus, pathology, or another source. Given the context of image-guided surgery, it's highly probable the ground truth for abnormalities and landmarks would be derived from expert annotations on medical images.

    8. The sample size for the training set

    The document does not explicitly state the numerical sample size for the training set. It only refers to a "training pool data."

    9. How the ground truth for the training set was established

    The document mentions that the AI/ML algorithm was developed using a "Supervised Learning approach." This means that the training data was pre-labeled. However, it does not specify how this ground truth was established (e.g., by manual annotation from a certain number of experts, based on surgical findings, etc.). It only states that the "training process begins with the model observing, learning, and optimizing its parameters based on the training pool data."

    K Number
    K223734
    Device Name
    ENT EM
    Manufacturer
    Date Cleared
    2023-04-27

    (135 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Reference Devices:

    K213989 Cranial EM System

    Intended Use

    ENT EM is intended as an image-guided planning and navigation system to enable ENT procedures. The device is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, such as:

    • Intranasal structures and Paranasal Sinus Surgery
      • Functional endoscopic sinus surgery (FESS)
      • Intranasal structures and paranasal sinus surgery, including revision and distorted anatomy
    • Anterior skull base procedures
    Device Description

    The Subject Device ENT EM is an image-guided planning and navigation system to enable navigated surgery during ENT procedures. It offers guidance for setting up the EM equipment, different patient image registration methods and instrument selection and calibration to allow surgical navigation by using electromagnetic tracking (EM) technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. To fulfill this purpose, it links patient anatomy (using a patient reference) and instruments in the real world or "patient space" to patient scan data or "image space". This allows for the continuous localization of medical instruments and patient anatomy for medical interventions in ENT procedures. The software is installed on a mobile Image Guided Surgery (IGS) platform (Kick 2 Navigation Station or Curve Navigation 17700) to support the surgeon in clinical procedures by displaying tracked instruments in the patient's image data. The IGS platforms consist of a mobile Monitor Cart and an EM tracking unit for image-guided surgery purposes. ENT EM consists of: Several software modules for registration, instrument handling, navigation and infrastructure tasks, IGS platforms and surgical instruments for navigation, patient referencing and registration.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device meets those criteria for the Brainlab ENT EM system, which incorporates an AI/ML-based function for pre-registration in surface matching.

    Here's the breakdown of the information requested:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Parameter/Characteristic | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | System Accuracy: Mean Positional Error | ≤ 2 mm | Achieves the same accuracy performance (mean location error ≤ 2 mm) as both predicate and reference device |
    | System Accuracy: Mean Angular Error | ≤ 2° | Achieves the same accuracy performance (mean trajectory angle error ≤ 2 degrees) as both predicate and reference device |
    | AI/ML Landmark Detection | Equivalent performance to conventional method | Performance testing comparing conventional to machine-learning-based landmark detection showed equivalent performance as in the reference device |
    | Usability | Safe and effective for intended user group | Summative usability evaluation in a simulated clinical environment showed ENT EM is safe and effective for use by the intended user group |
    | Electrical Safety & EMC | Compliance with standards | Compliance to IEC 60601-1, AIM 7351731, and IEC 60601-1-2; tests showed the subject device performs as intended |
    | Instrument Biocompatibility | Biologically safe | Biocompatibility assessment considering different endpoints provided |
    | Instrument Reprocessing | Appropriateness of cleaning/disinfection/sterilization | Cleaning and disinfection evaluation/reprocessing validation provided |
    | Instrument Mechanical Properties | Withstand typical torsional strengths/torques | Evaluated considering typical torsional strengths, torques, and conditions instruments can be subject to during use |

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the numerical sample size for the test set used for the AI/ML algorithm's performance evaluation. It mentions that "The model's prediction and performance are then evaluated against the test pool. The test pool data is set aside at the beginning of the project."

    The data provenance is not explicitly stated regarding country of origin or specific patient demographics. However, it indicates a "controlled internal process" for development and evaluation. It's a static algorithm (locked), suggesting it's developed and tested once rather than continuously learning. The context implies it's retrospective as data was "set aside."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    This information is not provided in the text. The document refers to "landmarks delivered by a ML based calculation" and compares its performance to a "conventional" method in the reference device, but it doesn't detail how the ground truth for these landmarks was established for testing.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing ground truth for the test set is not explicitly described.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    There is no mention of a Multi Reader Multi Case (MRMC) comparative effectiveness study being performed with human readers to assess improvement with AI vs. without AI assistance. The testing primarily focuses on the AI/ML algorithm's performance equivalence to the predicate/reference device's conventional method, and overall system accuracy.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a form of standalone performance evaluation was done for the AI/ML algorithm. The text states: "Performance testing comparing conventional to machine learning based landmark detection were performed showing equivalent performance as in the reference device." This implies an evaluation of the algorithm's output (landmark detection) without a human reader in the interpretation loop, by comparing its results directly to a "conventional" method.

    7. The Type of Ground Truth Used for Performance Testing

    The type of ground truth for the AI/ML landmark detection is implicitly based on the "conventional" landmark detection method used in the reference device. The document states "Performance testing comparing conventional to machine learning based landmark detection were performed showing equivalent performance as in the reference device." This suggests the conventional method's output serves as the reference ground truth, or there's an established "true" landmark position that both are compared against. For system accuracy, the ground truth is established through physical measurements of "Mean Positional Error" and "Mean Angular Error" against a known configuration.
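
    An equivalence comparison of this kind, where the conventional method's landmark positions serve as the reference, could be reduced to a per-landmark deviation check. The following is a hedged sketch under that assumption; the actual comparison protocol and any tolerance value are not disclosed in the submission:

```python
import numpy as np

def landmark_deviation(ml_landmarks, conventional_landmarks):
    """Per-landmark Euclidean deviation (mm) between ML and conventional detections."""
    return np.linalg.norm(ml_landmarks - conventional_landmarks, axis=1)

def equivalent(ml_landmarks, conventional_landmarks, tol_mm=2.0):
    """Equivalence claim holds if every landmark deviation stays within tol_mm.

    tol_mm=2.0 is a hypothetical tolerance chosen to mirror the system-accuracy
    criterion; the submission does not state the threshold actually used.
    """
    return bool(np.all(landmark_deviation(ml_landmarks, conventional_landmarks) <= tol_mm))
```

    Treating the conventional method as the reference only demonstrates agreement with that method, not absolute accuracy, which is why the system-level positional and angular error tests remain a separate line of evidence.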

    8. The Sample Size for the Training Set

    The document does not provide the numerical sample size for the training set. It mentions the algorithm was developed using a "Supervised Learning approach" and that "the training process begins with the model observing, learning, and optimizing its parameters based on the training pool data."

    9. How the Ground Truth for the Training Set Was Established

    The method for establishing ground truth for the training set is not explicitly detailed. It only states that the algorithm was developed using a "Supervised Learning approach" and a "controlled internal process" that defines activities from "inspection of input data to the training and verification." This implies that the training data included true labels or targets for the landmarks that the AI/ML algorithm was trained to detect, but the source or method of obtaining these true labels (e.g., expert annotation, manual registration results) is not specified.
