Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K234047
    Manufacturer
    Date Cleared
    2024-03-20

    (90 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    Automatic Registration

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    Automatic Registration is a software device for image guided surgery intended to be used in combination with compatible Brainlab navigation systems such as the Brainlab Spine & Trauma Navigation System. Automatic Registration provides an image registration for intraoperatively acquired 3D CT/CBCT or fluoroscopic images.

    Device Description

    The Subject Device Automatic Registration is an accessory to the Brainlab Spine & Trauma Navigation System. It correlates intraoperatively acquired patient data (3D CT/CBCT or fluoroscopic images) to the surgical environment in order to provide a patient registration for subsequent use by the Brainlab Spine & Trauma Navigation. The device includes the following software modules:

    • Automatic Registration 2.6
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0
    It also uses several hardware devices, mainly registration matrices, adhesive flat markers and a calibration phantom, to perform the registration. The software is installed on an Image Guided Surgery (IGS) platform. The registration matrices are reusable devices, delivered non-sterile, and have patient contact.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Automatic Registration device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Performance Metric | Acceptance Criteria | Reported Device Performance
    Accuracy (New Registration Method) | Registration accuracy ≤ 2.5 mm (P95), with mean navigation accuracy (3D deviation) ≤ 1.5 mm | Registration accuracy ≤ 2.5 mm (P95), with mean navigation accuracy (3D deviation) ≤ 1.5 mm
    Software Verification | Successful implementation of product specifications, incremental testing, risk control, compatibility, cybersecurity | Successfully conducted as recommended by FDA guidance
    AI/ML Landmark Detection | Quantifying object detection, quality of vertebra level assignment, quality of landmark predictions, performance of observer view direction | Assessed by quantifying the above aspects for AI/ML-detected landmarks on X-rays
    Usability | No critical use-related problems identified | No critical use-related problems identified after summative usability testing
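The P95 acceptance criterion above can be made concrete with a short sketch. Everything below is illustrative: the deviation values and function names are hypothetical and not taken from the 510(k) summary; only the 2.5 mm (P95) and 1.5 mm (mean) limits come from the text.

```python
# Hypothetical check of the stated limits: the 95th percentile of the 3D
# registration deviations must be <= 2.5 mm and their mean <= 1.5 mm.

def p95(values):
    """95th percentile by linear interpolation between order statistics."""
    s = sorted(values)
    rank = 0.95 * (len(s) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (rank - lo) * (s[hi] - s[lo])

def meets_criteria(deviations_mm, p95_limit=2.5, mean_limit=1.5):
    mean_dev = sum(deviations_mm) / len(deviations_mm)
    return p95(deviations_mm) <= p95_limit and mean_dev <= mean_limit

# Illustrative measurements (mm), not Brainlab data:
measured = [0.8, 1.1, 0.9, 1.4, 1.2, 1.0, 1.6, 0.7, 1.3, 2.0]
print(meets_criteria(measured))  # True
```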

    2. Sample Size Used for the Test Set and Data Provenance

    • Accuracy Testing (New Registration Method): The testing was performed on human cadavers. The exact number of cadavers or cases within them is not specified.
    • AI/ML Assessment: The summary states the algorithm was developed using a "controlled internal process that defines activities from the inspection of input data to the training and verification of the algorithm." No specific sample size for the test set is provided, nor is the data provenance (e.g., country of origin, retrospective/prospective) explicitly mentioned for the AI/ML assessment tests.
    • Usability Testing: 15 representative users were used for summative usability testing.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • For the accuracy testing on human cadavers, the method for establishing ground truth and the number/qualifications of experts are not explicitly stated. It can be inferred that ground truth for navigation accuracy would involve precise measurements by trained personnel, likely using specialized equipment, but no details are provided.
    • For the AI/ML algorithm assessment, the criteria for "quantifying object detection, quality of vertebra level assignment, quality of landmark predictions, and the performance of the observer view direction" would imply expert review. However, the number of experts and their qualifications for establishing this ground truth are not specified.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication methods (e.g., 2+1, 3+1) for the test sets described.
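For reference, a "2+1" scheme of the kind named above resolves disagreement between two primary readers with a third adjudicator. This is a generic sketch of the convention, not a description of anything in the summary; the labels are hypothetical.

```python
# Generic "2+1" adjudication: two primary reads per case; a third reader
# breaks ties only when the first two disagree.

def adjudicate_2plus1(read1, read2, tiebreak):
    if read1 == read2:
        return read1      # primary readers agree; their label stands
    return tiebreak       # third reader adjudicates the disagreement

print(adjudicate_2plus1("lesion", "lesion", "normal"))  # lesion
print(adjudicate_2plus1("lesion", "normal", "normal"))  # normal
```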

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done or at least not reported in this summary. The summary focuses on the standalone performance of the device and its components, particularly the new AI/ML registration update method, against predefined accuracy criteria and usability.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, a standalone performance evaluation was done for the new registration method, specifically the AI/ML algorithm component. The "Machine Learning" section describes the assessment of the AI/ML detected landmarks on X-rays, evaluating aspects like object detection, vertebra level assignment, and landmark predictions. The accuracy bench testing also evaluates the "overall system registration accuracy" of the new method, implying its standalone performance in achieving the specified accuracy.

    7. The Type of Ground Truth Used

    • For accuracy testing (bench test on cadavers): The ground truth would likely be established through highly precise physical measurements on the cadavers, probably using a gold-standard measurement system (e.g., CMM, anatomical landmarks verified with high-precision tools). The document doesn't explicitly state the methodology, but this is typical for navigation accuracy.
    • For AI/ML assessment: The ground truth for "object detection, quality of vertebra level assignment, quality of landmark predictions, and the performance of the observer view direction for 2D X-rays" would have been established by human experts, likely radiologists or orthopedic surgeons with expertise in spinal anatomy and imaging. The document doesn't explicitly detail the methodology or the experts involved.

    8. The Sample Size for the Training Set

    The sample size for the training set of the Convolutional Neural Network (CNN) is not specified in the provided document. It only mentions that the algorithm was "developed using a controlled internal process that defines activities from the inspection of input data to the training and verification of the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    The document states that the AI/ML algorithm was developed using a "Supervised Learning approach." This implies that the training data was labeled by human experts. However, the specific method (e.g., single expert, consensus, specific qualifications of experts) for establishing this ground truth for the training set is not detailed in the provided text.


    K Number
    K223288
    Manufacturer
    Date Cleared
    2023-07-21

    (269 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    Cranial Navigation, Navigation Software Cranial, Navigation Software Craniofacial, Cranial EM System, Automatic
    Registration iMRI

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    Cranial Navigation: The Cranial Navigation is intended as an image-guided planning and navigation system to enable navigated cranial surgery. It links instruments to virtual computer image data being processed by the navigation platform. The system is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, including: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions, AVM Resection), Craniofacial Procedures (including cranial and midfacial bones) (Tumor Resection, Bone Tumor Defect Reconstruction, Bone Trauma Defect Reconstruction, Bone Congenital Defect Reconstructions, Orbital cavity reconstruction procedures), Removal of foreign objects.

    Cranial EM System: Cranial EM is intended as an image-guided planning and navigation system to enable neurosurgery procedures. The device is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, such as: Cranial Resection (Resection of tumors and other lesions, Resection of skull-base tumor or other lesions), Intracranial catheter placement.

    Device Description

    The subject device consists of several devices: Cranial Navigation using optical tracking technology, its accessory Automatic Registration iMRI, and the Cranial EM System using electromagnetic tracking technology.

    Cranial Navigation is an image guided surgery system for navigated treatments in the field of cranial surgery, including the newly added Craniofacial indication. It offers different patient image registration methods and instrument calibration to allow surgical navigation by using optical tracking technology. The device provides different workflows guiding the user through preoperative and intraoperative steps. The software is installed on a mobile or fixed Image Guided Surgery (IGS) platform to support the surgeon in clinical procedures by displaying tracked instruments in the patient's image data. The IGS platforms consist of a mobile Monitor Cart or a fixed ceiling mounted display and an infrared camera for image guided surgery purposes. There are three different product lines of the IGS platforms: "Curve", "Kick" and Buzz Navigation (Ceiling-Mounted). Cranial Navigation consists of: several software modules for registration, instrument handling, navigation and infrastructure tasks (main software: Cranial Navigation 4.1 including several components), IGS platforms (Curve Navigation 17700, Kick 2 Navigation Station, Buzz Navigation (Ceiling-Mounted) and predecessor models), and surgical instruments for navigation, patient referencing and registration.

    Automatic Registration iMRI is an accessory to Cranial Navigation enabling automatic image registration for intraoperatively acquired MR imaging. The registration object can be used in subsequent applications (e.g. Cranial Navigation 4.1). It consists of the software Automatic Registration iMRI 1.0, a registration matrix and a reference adapter.

    Similarly, the Cranial EM System is an image-guided planning and navigation system to enable neurosurgical procedures. It offers instrument handling as well as patient registration to allow surgical navigation by using electromagnetic tracking technology. It links patient anatomy (using a patient reference) and instruments in the real world or "patient space" to patient scan data or "image space". This allows for the continuous localization of medical instruments and patient anatomy for medical interventions in cranial procedures. It uses the same software infrastructure components as the Cranial Navigation, and the software is also installed on IGS platforms consisting of a mobile monitor cart and an EM tracking unit. It consists of: different software modules for instrument setup, registration and navigation (main software: Cranial EM 1.1 including several components), EM IGS platforms (Curve Navigation 17700 and Kick 2 Navigation Station EM), and surgical instruments for navigation, patient referencing and registration.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for Brainlab AG's Cranial Navigation, Navigation Software Cranial, Navigation Software Craniofacial, Cranial EM System, and Automatic Registration iMRI. The document focuses on demonstrating substantial equivalence to predicate devices, particularly highlighting the introduction of Artificial Intelligence/Machine Learning (AI/ML) algorithms for specific features.

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document mentions acceptance criteria relating to system accuracy.

    Acceptance Criteria | Reported Device Performance
    Mean Positional Error ≤ 2 mm | Fulfilled (details not explicitly quantified beyond "fulfilled")
    Mean Angular Error of instrument's axis ≤ 2° | Fulfilled (details not explicitly quantified beyond "fulfilled")
    AI/ML performance for abnormity detection | Precision and recall higher than the atlas-based method
    AI/ML performance for landmark detection | No concerns regarding safety and effectiveness (implicitly met expectations)
    Software level of concern | "Major" level of concern addressed by V&V testing
    Usability | Summative usability carried out according to IEC 62366-1
    Electrical safety & EMC | Compliance with IEC 60601-1, AIM 7351731, IEC 60601-1-2
    Instruments | Biocompatibility, cleaning/disinfection, mechanical properties, aging, MRI testing (where applicable) all evaluated

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: The document does not explicitly state the numerical sample size for the test set used for evaluating the AI/ML algorithm or other performance metrics. It only mentions the "test pool data is set aside at the beginning of the project."
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It just refers to "training pool data" and "test pool data."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not provide information on the number or qualifications of experts used to establish ground truth for the test set. It mentions the AI/ML algorithm was developed using a Supervised Learning approach and that its performance was evaluated against a test pool, but no details on human ground truth labeling are given.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication methods used for the test set ground truth.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The AI/ML functionality described is for "pre-registration" and "centering of views," which are aids to the navigation system, but there is no indication of a study measuring human reader performance with and without AI assistance. The performance testing for AI/ML focuses on its own accuracy (precision and recall) compared to a previous atlas-based method, not human reader improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone (algorithm only) performance evaluation was done for the AI/ML features. The text states: "For the two features now implemented using AI/ML (landmark detection in the pre-registration step and centering of views if no instrument is tracked to the detected abnormity), performance testing comparing conventional to machine learning based landmark detection and abnormity detection were performed showing equivalent performance as in the predicate device." It also highlights that for abnormity detection, "both precision and recall of the ML-based method are higher in comparison to the atlas-based method." This indicates an isolated evaluation of the algorithm's performance.
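The precision/recall comparison described above can be sketched in a few lines. The detection counts below are invented for illustration, since the summary reports only the qualitative outcome (ML-based precision and recall higher than atlas-based):

```python
# Precision = TP / (TP + FP); Recall = TP / (TP + FN).
def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts -- the 510(k) summary gives no numbers:
ml_p, ml_r = precision_recall(tp=90, fp=5, fn=8)     # ML-based detection
at_p, at_r = precision_recall(tp=80, fp=15, fn=18)   # atlas-based detection

print(ml_p > at_p and ml_r > at_r)  # True
```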

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for the AI/ML algorithm. It mentions "Supervised Learning" and "training pool data," implying that the training data had pre-established labels (ground truth), but it doesn't specify if these labels came from expert consensus, pathology, or another source. Given the context of image-guided surgery, it's highly probable the ground truth for abnormalities and landmarks would be derived from expert annotations on medical images.

    8. The sample size for the training set

    The document does not explicitly state the numerical sample size for the training set. It only refers to a "training pool data."

    9. How the ground truth for the training set was established

    The document mentions that the AI/ML algorithm was developed using a "Supervised Learning approach." This means that the training data was pre-labeled. However, it does not specify how this ground truth was established (e.g., by manual annotation from a certain number of experts, based on surgical findings, etc.). It only states that the "training process begins with the model observing, learning, and optimizing its parameters based on the training pool data."


    K Number
    K203679
    Manufacturer
    Date Cleared
    2021-03-18

    (91 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    Automatic Registration

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    Automatic Registration is a surgical device for image guided surgery intended to be used in combination with compatible Brainlab navigation systems. Automatic Registration provides an image registration for intraoperatively acquired 3D CT/ CBCT or fluoroscopic images. It consists of the software module Automatic Registration and hardware accessories.

    Device Description

    The Subject Device is intended to be used in combination with compatible Brainlab navigation systems. Automatic Registration provides an image registration for intraoperatively acquired 3D CT/CBCT or fluoroscopic images. It consists of the software module Automatic Registration and hardware accessories.

    In a spinal context, Automatic Registration serves as accessory to the Spine & Trauma navigation system.

    The matrices are reusable devices delivered in non-sterile condition. They make the correlation ("registration") of intraoperatively acquired patient data to the surgical environment possible by determining their position in relation to the patient and the navigated instruments.

    AI/ML Overview

    The document describes K203679, a premarket notification for the "Automatic Registration" device by Brainlab AG. The device is a surgical device for image-guided surgery intended to be used in combination with compatible Brainlab navigation systems, providing image registration for intraoperatively acquired 3D CT/CBCT or fluoroscopic images.

    Here's an analysis of the acceptance criteria and study data provided:

    Acceptance Criteria and Device Performance

    Acceptance Criteria (Predicate Device) | Reported Device Performance
    Mean navigation accuracy:

    Why did this record match?
    Device Name:

    Surgery System, Navigation Software Cranial, Navigation Software ENT, Registration Software Cranial, Automatic
    Registration 2.0, Ultrasound Navigation Software (BK), Intraoperative Structure Update

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The Cranial IGS System, when used with a compatible navigation platform and compatible instrument accessories, is intended as an image-guided planning and navigation system to enable navigated surgery. It links instruments to a virtual computer image space on patient image data that is being processed by the navigation platform.

    The system is indicated for any medical condition in which a reference to a rigid anatomical structure can be identified relative to images (CT, CTA, X-Ray, MR, MRA and ultrasound) of the anatomy, including:

    • Cranial Resection
      • Resection of tumors and other lesions
      • Resection of skull-base tumor or other lesions
      • AVM Resection
    • Cranial biopsies
    • Intracranial catheter placement
    • Intranasal structures and Paranasal Sinus Surgery
      • Functional endoscopic sinus surgery (FESS)
      • Revision & distorted anatomy surgery, all intranasal structures and paranasal sinuses
    Device Description

    The Cranial IGS System consists of software and hardware (instruments) components that, when used with a compatible navigation or "IGS platform", enable navigated surgery. It links instruments in the real world or "patient space" to patient scan data or "image space". This allows for the continuous localization of medical instruments and patient anatomy for medical interventions in cranial and ENT procedures.

    AI/ML Overview

    The provided text describes the Cranial Image Guided Surgery System, which is a medical device. The information details the device's intended use, technological characteristics compared to predicate devices, and a summary of verification and validation activities.

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the "Cranial Image Guided Surgery System" are not explicitly stated as distinct acceptance criteria values in the document. Instead, the document presents performance verification results (accuracy), which are implicitly the performance targets for the device. The "mean accuracy" values mentioned are the internal acceptance criteria the device was required to meet.

    Performance Metric | Acceptance Criteria (implicit from "mean accuracy") | Mean | Standard Deviation | 99th Percentile
    Location error | ≤ 2 mm | 1.3 mm | 0.5 mm | 2.2 mm
    Trajectory angle error | ≤ 2° | 0.73° | 0.34° | 1.3°
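The table's summary statistics (mean, standard deviation, 99th percentile) can be reproduced from raw error samples with a small helper. The sample values below are hypothetical; only the table above reflects the reported verification results.

```python
import math

def summarize(errors):
    """Mean, sample standard deviation, and interpolated 99th percentile."""
    n = len(errors)
    mean = sum(errors) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    s = sorted(errors)
    rank = 0.99 * (n - 1)
    lo = int(rank)
    hi = min(lo + 1, n - 1)
    p99 = s[lo] + (rank - lo) * (s[hi] - s[lo])
    return mean, sd, p99

# Hypothetical location errors (mm), not the actual verification data:
errors_mm = [1.3, 0.9, 1.8, 1.1, 1.5, 0.7, 2.0, 1.2, 1.4, 1.1]
mean, sd, p99 = summarize(errors_mm)
print(round(mean, 2), round(sd, 2), round(p99, 2))  # 1.3 0.39 1.98
```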

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "Nonclinical performance testing (Accuracy)" and "The following table summarizes the performance verification results of the system." However, it does not specify the sample size (e.g., number of test cases, number of images, or number of simulated scenarios) used for these accuracy tests.

    The data provenance is also not explicitly stated in terms of country of origin or whether the data was retrospective or prospective. Given that it's "nonclinical performance testing," it is likely that the testing involved phantom studies or simulated scenarios rather than real patient data.

    3. Number of Experts Used to Establish Ground Truth and Their Qualifications

    The document does not specify the number of experts used to establish ground truth for the test set or their qualifications. The accuracy testing seems to be based on physical measurements against established ground truth (e.g., from a phantom or known geometry), rather than expert consensus on image interpretation.

    4. Adjudication Method for the Test Set

    The document does not mention any adjudication method (e.g., 2+1, 3+1, none) for the test set. Given that the performance testing is focused on mechanical/measurement accuracy (location and trajectory angle errors), an adjudication method requiring human interpretation would not be applicable in the same way as it would be for a diagnostic AI system.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    The document does not indicate that an MRMC comparative effectiveness study was done. The assessment presented is focused on the device's accuracy in navigation, not on a comparison of human reader performance with and without AI assistance. This device is an image-guided surgery system, which assists surgeons during procedures, rather than an AI diagnostic tool primarily interpreted by human readers.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The "Nonclinical performance testing (Accuracy)" described can be considered a form of standalone performance evaluation for the algorithm's core functionality (localization and trajectory determination). The results presented (location error, trajectory angle error) are metrics of the system's inherent accuracy, without explicitly involving real-time human interaction for performance measurement in these specific tests. However, the device is ultimately intended for human-in-the-loop use in surgery.

    7. The Type of Ground Truth Used

    The ground truth used for the "Nonclinical performance testing (Accuracy)" appears to be physical measurement against a known standard or ideal. For instance, in a phantom study, the "true" location and trajectory would be precisely known or measurable, allowing for the calculation of errors from the device's output. The text does not specify if it was expert consensus, pathology, or outcomes data.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set. It primarily discusses the device's verification and validation, but not the development or training of any underlying algorithms (if applicable, beyond traditional image processing and navigation).

    9. How the Ground Truth for the Training Set Was Established

    Since no information on a "training set" is provided, there is no mention of how ground truth for a training set was established. The device's functionality appears to be based on established navigation principles and software engineering, rather than a machine learning model that requires a labeled training dataset with associated ground truth for learning.

