K Number
K102990
Device Name
BRAINLAB KNEE
Manufacturer
Date Cleared
2011-04-04

(178 days)

Product Code
Regulation Number
882.4560
Panel
OR
Reference & Predicate Devices
Intended Use

Brainlab Knee is intended to be an intraoperative image guided localization system to enable minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system to virtual computer image space on an individual 3D-model of the patient's bone, which is generated through acquiring multiple landmarks on the bone surface. The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, a long bone, or vertebra, can be identified relative to a CT, x-ray, MR-based model of the anatomy. The system aids the surgeon to accurately navigate a knee prosthesis to the intraoperatively planned position. Ligament balancing and measurements of bone alignment are provided by Brainlab Knee.

Example orthopedic surgical procedures include but are not limited to:

  • Total Knee Replacement
  • Ligament Balancing
  • Range of Motion Analysis
  • Patella Tracking
Device Description

Brainlab Knee is an image-guided surgery system for total knee replacement surgery, based on landmark-based visualization of the femur and tibia.
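
As an illustration of the landmark-based approach described above, the sketch below shows a generic paired-point rigid registration (Kabsch/SVD method) that maps probe-acquired landmark coordinates into the coordinate space of a bone model. This is a minimal, hypothetical example of the general technique, not Brainlab's implementation; the function name `rigid_register` and all coordinates are invented for illustration.

```python
import numpy as np

def rigid_register(probe_pts, model_pts):
    """Least-squares rigid transform (rotation R, translation t) that maps
    probe-acquired landmarks onto corresponding model landmarks via the
    Kabsch/SVD method. Both arrays are (N, 3) with rows paired."""
    p_mean = probe_pts.mean(axis=0)
    m_mean = model_pts.mean(axis=0)
    P = probe_pts - p_mean
    M = model_pts - m_mean
    U, _, Vt = np.linalg.svd(P.T @ M)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = m_mean - R @ p_mean
    return R, t

# Hypothetical example: four non-coplanar landmarks (coordinates in mm).
probe = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
model = probe @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_register(probe, model)
print(np.allclose(probe @ R.T + t, model))  # True: landmarks align in model space
```

In a navigation workflow of this kind, the recovered rotation and translation are what allow instrument positions tracked by the camera to be displayed relative to the patient-specific bone model.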

AI/ML Overview

The provided 510(k) summary for Brainlab Knee (K102990) describes the device, its intended use, and substantial equivalence to predicate devices, but it does not contain the detailed information necessary to complete the requested table and answer all questions regarding acceptance criteria and the specific study proving the device meets those criteria.

The submission describes verification and validation activities only at a high level: the system was verified and validated according to BrainLAB procedures, and functionality was verified on released platforms with workbench tests on milled model bones. It also mentions validation methods such as comparison to previous products, literature research, real-world testing, usability tests, design reviews, and software validation, and it states that validation activities were supported by design reviews with initial design surgeons and a cadaver test.

However, the provided text lacks specific, quantifiable acceptance criteria for performance metrics. It does not report precise device performance values against such criteria. The document describes general validation and verification but does not detail a specific study with defined acceptance criteria and reported outcomes to prove those criteria were met for this particular submission.

Here's a breakdown of what can and cannot be answered from the provided text:

1. A table of acceptance criteria and the reported device performance

  • Cannot be provided based on the input. The document does not specify any quantifiable acceptance criteria (e.g., accuracy thresholds, precision ranges) for the Brainlab Knee system. Consequently, it does not report specific device performance values against such criteria. It generally states that "Cut and implant positions have been compared to theoretical values" during workbench tests, but no actual values or criteria are provided.

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

  • Cannot be fully provided. The document mentions "workbench test have been performed on precisely milled model bones" and "a cadaver test."
    • Sample size for test set: Not specified for either the model bones or the cadaver test.
    • Data provenance: Not specified (e.g., country of origin, retrospective/prospective).

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • Cannot be provided. The document mentions "design reviews with many of the initial design surgeons" which "supported" the validation. However, it does not specify the number or qualifications of experts involved in establishing ground truth for any specific test set, nor does it explicitly state their role in ground truth determination.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Cannot be provided. The document does not describe any adjudication method for establishing ground truth or resolving discrepancies in test results.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

  • No, an MRMC study was not done, or at least not described in this submission. The Brainlab Knee system is an image-guided surgery system, not an AI-assisted diagnostic tool that would typically involve "human readers" in an MRMC study context. The document focuses on the system's ability to aid surgeons in navigating prostheses and performing measurements, not on its impact on diagnostic reader performance. Therefore, an effect size of human readers improving with/without AI assistance is not applicable to the information provided.

6. If a standalone study (i.e. algorithm-only performance without a human in the loop) was done

  • Yes, implicitly. The "workbench test" performed on precisely milled model bones, where "Cut and implant positions have been compared to theoretical values," represents a form of standalone testing. This indicates that the algorithm's output (planned positions) was evaluated against a known ground truth (theoretical values) without immediate human surgical intervention as part of the evaluation of the algorithm's core functionality.
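
The 510(k) summary does not state how the comparison to theoretical values was quantified. As a purely hypothetical illustration of the kind of metric such a workbench evaluation could report, the sketch below computes the translational and angular deviation of an achieved resection plane from a planned one; the function name `cut_plane_error`, the coordinates, and any implied thresholds are assumptions, not values from the submission.

```python
import numpy as np

def cut_plane_error(planned_point, planned_normal, achieved_point, achieved_normal):
    """Deviation of an achieved resection plane from the planned one:
    distance of the achieved reference point from the planned plane (mm)
    and the angle between the two plane normals (degrees)."""
    n_planned = planned_normal / np.linalg.norm(planned_normal)
    n_achieved = achieved_normal / np.linalg.norm(achieved_normal)
    translational = abs(np.dot(achieved_point - planned_point, n_planned))
    angular = np.degrees(np.arccos(np.clip(abs(np.dot(n_planned, n_achieved)), 0.0, 1.0)))
    return translational, angular

# Hypothetical distal femoral cut, coordinates in mm.
t_err, a_err = cut_plane_error(
    planned_point=np.array([0.0, 0.0, 0.0]),
    planned_normal=np.array([0.0, 0.0, 1.0]),
    achieved_point=np.array([0.3, -0.1, 0.8]),
    achieved_normal=np.array([0.02, -0.01, 1.0]),
)
print(f"translational error: {t_err:.2f} mm, angular error: {a_err:.2f} deg")
```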

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • "Theoretical values" and potentially "established history of successful use" and "design surgeons' input."
    • For workbench tests on model bones, the ground truth was "theoretical values" for cut and implant positions.
    • Validation also included "Comparison of the design to a previous product having an established history of successful use," which implies the ground truth for some aspects might be derived from the proven performance of predicate devices.
    • "Design reviews with many of the initial design surgeons" also contributed to validation, suggesting expert input played a role, though not explicitly as "ground truth" for a specific test set.

8. The sample size for the training set

  • Not applicable / Not provided. The Brainlab Knee is an image-guided surgery system that uses landmarks collected intraoperatively to create a 3D model. It's not an AI/machine learning system in the modern sense that typically has a "training set" of data to learn from in the same way. Its functionality is based on geometric algorithms and image processing, not a trained predictive model that would require a large training dataset to learn patterns.

9. How the ground truth for the training set was established

  • Not applicable / Not provided. As explained above, the concept of a "training set" with established ground truth is not directly applicable to this type of device based on the information given.

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).