
510(k) Data Aggregation

    Reference Devices: K192703, K162929, K213989

    Intended Use

    Cranial Navigation: The Cranial Navigation is intended as an image-guided planning and navigation system to enable navigated cranial surgery. It links instruments to virtual computer image data being processed by the navigation platform. The system is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, including: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions, AVM Resection), Craniofacial Procedures (including cranial and midfacial bones) (Tumor Resection, Bone Tumor Defect Reconstruction, Bone Trauma Defect Reconstruction, Bone Congenital Defect Reconstruction, Orbital cavity reconstruction procedures), Removal of foreign objects.

    Cranial EM System: Cranial EM is intended as an image-guided planning and navigation system to enable neurosurgery procedures. The device is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, such as: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions), Intracranial catheter placement.

    Device Description

    The subject device consists of several devices: Cranial Navigation using optical tracking technology, its accessory Automatic Registration iMRI, and Cranial EM System using electromagnetic tracking technology.

    Cranial Navigation is an image-guided surgery system for navigated treatments in the field of cranial surgery, including the newly added Craniofacial indication. It offers different patient image registration methods and instrument calibration to allow surgical navigation using optical tracking technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. The software is installed on a mobile or fixed Image Guided Surgery (IGS) platform to support the surgeon in clinical procedures by displaying tracked instruments in the patient's image data. The IGS platforms consist of a mobile monitor cart or a fixed ceiling-mounted display and an infrared camera for image-guided surgery purposes. There are three product lines of IGS platforms: "Curve", "Kick" and "Buzz Navigation (Ceiling-Mounted)". Cranial Navigation consists of: several software modules for registration, instrument handling, navigation and infrastructure tasks (main software: Cranial Navigation 4.1, including several components); IGS platforms (Curve Navigation 17700, Kick 2 Navigation Station, Buzz Navigation (Ceiling-Mounted) and predecessor models); and surgical instruments for navigation, patient referencing and registration.

    Automatic Registration iMRI is an accessory to Cranial Navigation enabling automatic image registration for intraoperatively acquired MR imaging. The registration object can be used in subsequent applications (e.g. Cranial Navigation 4.1). It consists of the software Automatic Registration iMRI 1.0, a registration matrix and a reference adapter.

    Similarly, the Cranial EM System is an image-guided planning and navigation system to enable neurosurgical procedures. It offers instrument handling as well as patient registration to allow surgical navigation using electromagnetic tracking technology. It links patient anatomy (using a patient reference) and instruments in the real world, or "patient space", to patient scan data, or "image space". This allows continuous localization of medical instruments and patient anatomy for medical interventions in cranial procedures. It uses the same software infrastructure components as Cranial Navigation, and the software is likewise installed on IGS platforms consisting of a mobile monitor cart and an EM tracking unit. It consists of: different software modules for instrument setup, registration and navigation (main software: Cranial EM 1.1, including several components); EM IGS platforms (Curve Navigation 17700 and Kick 2 Navigation Station EM); and surgical instruments for navigation, patient referencing and registration.
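    In general image-guided-surgery terms, the "patient space" to "image space" link described above is a rigid registration between corresponding points. The following is a minimal sketch of paired-point rigid registration via the Kabsch method, offered only as an illustration of the general technique; it is not Brainlab's actual algorithm, and the landmark coordinates are invented.

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Paired-point rigid registration (Kabsch): find R, t such that
    R @ image_pt + t approximates the corresponding patient_pt."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Invented landmark coordinates: positions in image space (mm) and the same
# landmarks digitized on the patient with a tracked pointer (patient space).
image_pts = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], dtype=float)
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
patient_pts = image_pts @ true_R.T + np.array([10.0, 20.0, 5.0])

R, t = rigid_register(image_pts, patient_pts)
residuals = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1)
print("RMS registration residual (mm):", np.sqrt(np.mean(residuals ** 2)))
```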

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for Brainlab AG's Cranial Navigation, Navigation Software Cranial, Navigation Software Craniofacial, Cranial EM System, and Automatic Registration iMRI. The document focuses on demonstrating substantial equivalence to predicate devices, particularly highlighting the introduction of Artificial Intelligence/Machine Learning (AI/ML) algorithms for specific features.

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document mentions acceptance criteria relating to system accuracy.

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Mean Positional Error ≤ 2 mm | Fulfilled (details not explicitly quantified beyond "fulfilled") |
    | Mean Angular Error of instrument's axis ≤ 2° | Fulfilled (details not explicitly quantified beyond "fulfilled") |
    | AI/ML performance for abnormity detection | Precision and recall higher than the atlas-based method |
    | AI/ML performance for landmark detection | No concerns regarding safety and effectiveness (implicitly met expectations) |
    | Software level of concern | "Major" level of concern addressed by V&V testing |
    | Usability | Summative usability evaluation carried out according to IEC 62366-1 |
    | Electrical safety & EMC | Compliance with IEC 60601-1, AIM 7351731 and IEC 60601-1-2 |
    | Instruments | Biocompatibility, cleaning/disinfection, mechanical properties, aging and MRI testing (where applicable) all evaluated |
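    The accuracy rows above are pass/fail criteria on mean errors. Purely as a hedged illustration (the submission reports only "fulfilled", so the per-measurement values below are invented), a check of such criteria could look like this:

```python
import numpy as np

# Hypothetical per-measurement errors from an accuracy run; the 510(k) summary
# reports only that the criteria were "fulfilled", not the underlying values.
positional_errors_mm = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 0.8])
angular_errors_deg = np.array([1.1, 0.7, 1.4, 0.9, 1.2, 1.0])

MEAN_POSITIONAL_LIMIT_MM = 2.0  # acceptance criterion from the table above
MEAN_ANGULAR_LIMIT_DEG = 2.0

mean_pos = positional_errors_mm.mean()
mean_ang = angular_errors_deg.mean()

print(f"Mean positional error: {mean_pos:.2f} mm "
      f"({'PASS' if mean_pos <= MEAN_POSITIONAL_LIMIT_MM else 'FAIL'})")
print(f"Mean angular error: {mean_ang:.2f} deg "
      f"({'PASS' if mean_ang <= MEAN_ANGULAR_LIMIT_DEG else 'FAIL'})")
```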

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not explicitly state the numerical sample size of the test set used to evaluate the AI/ML algorithm or other performance metrics. It notes only that the "test pool data is set aside at the beginning of the project."
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective; it refers only to "training pool data" and "test pool data."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not provide information on the number or qualifications of experts used to establish ground truth for the test set. It mentions the AI/ML algorithm was developed using a Supervised Learning approach and that its performance was evaluated against a test pool, but no details on human ground truth labeling are given.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication methods used for the test set ground truth.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The AI/ML functionality described is for "pre-registration" and "centering of views," which are aids to the navigation system, but there is no indication of a study measuring human reader performance with and without AI assistance. The performance testing for AI/ML focuses on its own accuracy (precision and recall) compared to a previous atlas-based method, not human reader improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone (algorithm only) performance evaluation was done for the AI/ML features. The text states: "For the two features now implemented using AI/ML (landmark detection in the pre-registration step and centering of views if no instrument is tracked to the detected abnormity), performance testing comparing conventional to machine learning based landmark detection and abnormity detection were performed showing equivalent performance as in the predicate device." It also highlights that for abnormity detection, "both precision and recall of the ML-based method are higher in comparison to the atlas-based method." This indicates an isolated evaluation of the algorithm's performance.
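    For context on the precision and recall comparison cited above, here is a minimal sketch with invented detection counts; the submission reports only the qualitative relationship (ML-based higher than atlas-based), not the actual counts or metric values.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Invented detection counts on a shared test pool, chosen only to reproduce the
# claimed relationship (ML-based precision and recall exceed the atlas-based method's).
ml_precision, ml_recall = precision_recall(tp=92, fp=6, fn=8)
atlas_precision, atlas_recall = precision_recall(tp=85, fp=15, fn=15)

print(f"ML-based:    precision={ml_precision:.2f}, recall={ml_recall:.2f}")
print(f"Atlas-based: precision={atlas_precision:.2f}, recall={atlas_recall:.2f}")
assert ml_precision > atlas_precision and ml_recall > atlas_recall
```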

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for the AI/ML algorithm. It mentions "Supervised Learning" and "training pool data," implying that the training data had pre-established labels (ground truth), but it doesn't specify if these labels came from expert consensus, pathology, or another source. Given the context of image-guided surgery, it's highly probable the ground truth for abnormalities and landmarks would be derived from expert annotations on medical images.

    8. The sample size for the training set

    The document does not explicitly state the numerical sample size for the training set. It only refers to a "training pool data."

    9. How the ground truth for the training set was established

    The document mentions that the AI/ML algorithm was developed using a "Supervised Learning approach." This means that the training data was pre-labeled. However, it does not specify how this ground truth was established (e.g., by manual annotation from a certain number of experts, based on surgical findings, etc.). It only states that the "training process begins with the model observing, learning, and optimizing its parameters based on the training pool data."
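    As a generic illustration of the training-pool/test-pool discipline the document describes (not the actual Brainlab pipeline, data, or model), a minimal supervised-learning sketch with synthetic data and scikit-learn might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: X plays the role of image-derived features and y the
# role of pre-established labels (how such labels were produced is not documented).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# The test pool is split off once, at the start, and never used for training or tuning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # supervised learning on the training pool
print("Held-out test-pool accuracy:", model.score(X_test, y_test))
```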


    K Number: K201752
    Manufacturer: Brainlab AG
    Date Cleared: 2021-01-29 (217 days)
    Regulation Number: 882.4560
    Reference Devices: K162929

    Intended Use

    Disposable Pre-calibrated Suction is an accessory of the Cranial Image Guided Surgery System and intended to be used as a navigated suction device in any surgical procedure in which the use of the Cranial Image Guided Surgery System may be indicated.

    Surgical example procedures include:

    • Cranial resection of tumors and other lesions
    • Resection of skull base tumors or other lesions
    • AVM Resection
    Device Description

    Disposable Pre-calibrated Suction is an accessory of the Cranial IGS system and intended to be used as a navigated suction device in any surgical procedure in which the use of the Cranial IGS system may be indicated.

    The Brainlab Disposable Suction is an accessory for the currently released and developed Brainlab optical IGS systems for cranial procedures. The device will be pre-calibrated, i.e. it will be automatically recognized by the system and is immediately ready to use. By tracking the flat markers attached to the integrated tracking array, the instrument, and thereby the position of its tip, can be located by the Brainlab optical IGS systems for cranial procedures.
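    To illustrate what "pre-calibrated" means in practice, here is a hedged sketch (not Brainlab's implementation; all values invented): once the tracking camera reports the pose of the instrument's marker array, a factory-calibrated tip offset fixed in the array's own coordinate frame can be transformed into camera coordinates.

```python
import numpy as np

def tip_in_camera(R_array, t_array, tip_offset_local):
    """Map a factory-calibrated tip offset (expressed in the marker array's own
    frame) into camera coordinates, given the tracked pose (R, t) of the array."""
    return R_array @ tip_offset_local + t_array

# Invented values: the tracking camera reports the marker array's pose, and the
# pre-calibration supplies a fixed tip offset in the array frame (all in mm).
theta = np.deg2rad(30.0)
R_array = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
t_array = np.array([120.0, -35.0, 900.0])
tip_offset_local = np.array([0.0, 0.0, 185.0])  # tip lies along the instrument shaft

print("Tip position in camera coordinates (mm):",
      tip_in_camera(R_array, t_array, tip_offset_local))
```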

    The device is intended for single short term invasive use on an individual patient during a single procedure. This invasive device is used for a short-term limited contact (

    AI/ML Overview

    The provided text describes a 510(k) submission for a medical device called "Disposable Pre-calibrated Suction." It is an accessory to a Cranial Image Guided Surgery System. The document focuses on demonstrating substantial equivalence to predicate devices rather than proving a specific AI algorithm's performance against acceptance criteria for a diagnostic task.

    Therefore, the information required to populate the fields related to AI model acceptance criteria, test set details (sample size, provenance, expert consensus, adjudication, MRMC studies), training set details, and ground truth establishment is not present in the provided text. The document primarily details mechanical, accuracy, shelf-life, biocompatibility, and sterility testing for a physical medical instrument.

    However, I can provide the acceptance criteria and reported performance for the instrument tracking accuracy, which is a key technical performance aspect mentioned in the document.

    Here's a table based on the provided "Performance Data" section:

    Acceptance Criteria and Reported Device Performance (Instrument Tracking Accuracy)

    | Metric | Acceptance Criteria (not explicitly stated as criteria, but implied by reported performance) | Reported Device Performance (REF 52184-01) | Reported Device Performance (REF 52185-01) |
    | --- | --- | --- | --- |
    | Locational Error (Mean) | Not explicitly stated; implied to be low | 0.45 mm | 0.51 mm |
    | Locational Error (Std Dev) | Not explicitly stated; implied to be low | 0.11 mm | 0.09 mm |
    | Locational Error (95th percentile) | Not explicitly stated; implied to be below a certain threshold | 0.93 mm | 1.01 mm |
    | Locational Error (99% confidence interval) | Not explicitly stated; implied to be low | 0.53 mm | 0.59 mm |
    | Angular Error (Mean) | Not explicitly stated; implied to be low | 0.19° | 0.23° |
    | Angular Error (Std Dev) | Not explicitly stated; implied to be low | 0.05° | 0.06° |
    | Angular Error (95th percentile) | Not explicitly stated; implied to be below a certain threshold | 0.27° | 0.31° |
    | Angular Error (99% confidence interval) | Not explicitly stated; implied to be low | 0.22° | 0.28° |
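    As a hedged sketch of how summary statistics like these could be computed from raw repeated measurements (the raw samples and their count are not reported, and the single "99% confidence interval" value is read here, as an assumption, as a one-sided upper confidence bound on the mean error):

```python
import numpy as np
from scipy import stats

# Hypothetical raw locational-error measurements (mm); the submission reports
# only summary statistics, not the underlying samples or their count.
errors_mm = np.array([0.41, 0.38, 0.52, 0.47, 0.60, 0.33, 0.49, 0.45, 0.55, 0.40])

n = errors_mm.size
mean = errors_mm.mean()
std = errors_mm.std(ddof=1)             # sample standard deviation
p95 = np.percentile(errors_mm, 95)      # 95th percentile of the error distribution
# One-sided 99% upper confidence bound on the mean error (t distribution); this
# reading of the reported "99% confidence interval" value is an assumption.
upper_99 = mean + stats.t.ppf(0.99, df=n - 1) * std / np.sqrt(n)

print(f"mean={mean:.2f} mm, std={std:.2f} mm, "
      f"95th percentile={p95:.2f} mm, 99% upper bound={upper_99:.2f} mm")
```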

    Study Proving Device Meets Acceptance Criteria:

    The study described is primarily a technical performance verification aimed at demonstrating substantial equivalence for a physical medical instrument, not an AI algorithm.

    1. Sample size used for the test set and the data provenance:

      • The document does not specify the sample size (number of measurements or trials) used for the instrument tracking accuracy testing.
      • Data Provenance: Not explicitly stated, but implied to be from internal testing by Brainlab AG given the context of a 510(k) summary. No country of origin for the data is mentioned, nor whether it was retrospective or prospective.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This is not applicable as the "ground truth" here refers to the actual physical location and orientation of the instrument during accuracy testing, which would be established by high-precision measurement systems, not human experts.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not applicable for instrument performance testing.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • Not applicable. This device is a navigated suction instrument, not an AI-powered diagnostic tool requiring human reader studies.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The "standalone" performance here refers to the instrument's accuracy as measured by the tracking system, independent of a human's surgical skill. The table above presents this standalone performance.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For instrument tracking accuracy, the ground truth would be established by a highly accurate (e.g., optical or mechanical) reference measurement system, often referred to as a "gold standard" measurement setup. It is not expert consensus, pathology, or outcomes data.
    7. The sample size for the training set:

      • Not applicable. This is not an AI/machine learning device that requires a training set. The "pre-calibrated" aspect refers to factory calibration, not AI model training.
    8. How the ground truth for the training set was established:

      • Not applicable as there is no training set for an AI model.

    In summary, the provided document focuses on the engineering and biocompatibility validation of a physical medical device accessory rather than the clinical or diagnostic performance of an AI algorithm. Therefore, many of the requested details related to AI model evaluation are not present.

