
510(k) Data Aggregation

    K Number
    K220652
    Device Name
    Knee 3
    Manufacturer
    Brainlab
    Date Cleared
    2022-06-03

    (88 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Knee3 is intended to be an intraoperative localization system used during orthopedic knee surgery. It links a navigated instrument, tracked by a passive marker sensor system, to virtual computer image space on an individually acquired model of the patient's anatomical axes, which are generated through acquiring multiple anatomical bony landmarks. The system is indicated for medical conditions in which the use of stereotaxic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the femur and tibia, can be identified relative to the acquired anatomical landmarks. The system aids the surgeon to plan resections, measure cutting block alignment, measure bone resections, and/or leg alignment. The device is indicated for total and uni-compartmental knee replacement.

    Device Description

    The Knee3 is a software application module intended to be used on compatible navigation platforms that assist in the implantation of prosthetic knee implants. It uses instruments and accessories such as reference arrays, pointers and plane tools, which are tracked by an infrared camera of the respective platform, to determine femur and tibia anatomical landmarks as well as instrument positions in relation to the registered bones. The resulting navigation measurements support the user with the placement of the cutting guides.

    AI/ML Overview

    The provided document, a 510(k) summary for Brainlab's Knee3 device, outlines the device's intended use, technological characteristics, and performance data to support its substantial equivalence to predicate devices. However, it does not present a formal table of acceptance criteria and reported device performance as typically seen in a clinical study report. Instead, it describes general accuracy limits for the system's technical navigation and mentions clinical validation through literature review.

    Based on the provided text, here's what can be extracted and inferred regarding the device's acceptance criteria and the study proving it meets them:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a formal table with acceptance criteria side-by-side with reported performance for clinical validation. It offers a general statement about the system's technical accuracy:

    Acceptance Criteria (Inferred): Technical navigation accuracy within ± 2° for angles and ± 2 mm for distances in 95% of cases.
    Reported Device Performance: Shown to be within the limits of ± 2° for angles and ± 2 mm for distances in 95% of cases.

    It's important to note that this is described as "technical navigation accuracy (bench accuracy)" and not clinical accuracy measured directly on patients.
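    The stated acceptance criterion can be expressed as a simple check: at least 95% of measured angle errors within ± 2° and 95% of distance errors within ± 2 mm. The sketch below is illustrative only; the 510(k) summary reports the limits but not the underlying measurement data, so the function names and pass logic here are assumptions, not Brainlab's actual test procedure.

    ```python
    # Hypothetical bench-accuracy check. Only the tolerance limits
    # (± 2° angles, ± 2 mm distances, met in 95% of cases) come from the
    # 510(k) summary; everything else is an illustrative assumption.

    ANGLE_TOL_DEG = 2.0
    DIST_TOL_MM = 2.0
    REQUIRED_FRACTION = 0.95

    def fraction_within(errors, tol):
        """Fraction of absolute measurement errors at or inside the tolerance."""
        return sum(abs(e) <= tol for e in errors) / len(errors)

    def meets_bench_criterion(angle_errors_deg, dist_errors_mm):
        """True if >= 95% of both angle and distance errors fall within limits."""
        return (fraction_within(angle_errors_deg, ANGLE_TOL_DEG) >= REQUIRED_FRACTION
                and fraction_within(dist_errors_mm, DIST_TOL_MM) >= REQUIRED_FRACTION)
    ```

    For example, a run of 20 bench measurements with one angle error of 3° (19/20 = 95% within tolerance) would still pass, while two such outliers (18/20 = 90%) would not.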

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify a sample size for a test set in the context of a prospective clinical study involving the Knee3 device itself. It states: "Clinical data gathered from literature was used to validate the accuracy of the predecessor software device versions (predicate devices) and respective workflows." This indicates a retrospective review of existing literature rather than a new prospective study with a dedicated test set for the Knee3.

    Therefore, the data provenance is from literature regarding predicate devices, not specifically from a new study on Knee3. The country of origin of this literature data is not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    Since the clinical validation was based on "clinical data gathered from literature" related to predicate devices, the document does not provide information on the number or qualifications of experts used to establish ground truth for this validation. This likely refers to the aggregated data and conclusions from various studies in the literature, not a specific panel of experts reviewing a test set for Knee3.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Given that the clinical validation relied on "literature" and not a bespoke test set with expert ground truth establishment for Knee3, no adjudication method is described or implied.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not indicate that an MRMC comparative effectiveness study was done. The Knee3 is an "intraoperative localization system" and a "software application module" that "aids the surgeon to plan resections, measure cutting block alignment, measure bone resections, and/or leg alignment." It's a navigation system for surgery, not an AI for image interpretation that would typically involve human readers. Therefore, there is no mention of "human readers" or "AI assistance" in the context of improving their performance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The "technical navigation accuracy (bench accuracy)" measured on a "bench model" can be considered a form of standalone performance evaluation for the system's core accuracy without surgeon interaction on a patient. It showed the system to be "within limits of ± 2° for angles and ± 2mm for distances in 95% of cases."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "technical navigation accuracy," the ground truth was presumably physical measurements on a bench model, comparing the system's output to known, precisely measured values.

    For the "clinical validation," the ground truth is described as being derived from "clinical data gathered from literature" for predicate devices. This implies that the effectiveness and accuracy of the workflows were established through patient outcomes or established surgical accuracy metrics reported in prior clinical studies for those predicate devices. The exact nature of this "ground truth" (e.g., post-operative imaging, clinical assessments) is not detailed in this summary.

    8. The sample size for the training set

    The document does not mention a training set size. This device is described as a "software application module" and a "stereotaxic instrument" that links physical instruments to virtual models. While it undoubtedly has software components, the description does not explicitly state it utilizes machine learning or AI that would typically require a distinct training set for model development in the way that, for example, an image interpretation AI would. The "software verification and validation testing" refers to general software quality assurance, not a machine learning model's training.

    9. How the ground truth for the training set was established

    As no training set is described in the context of machine learning, there is no information on how ground truth for such a set was established.
