K Number
K220104
Device Name
Knee+
Manufacturer
Date Cleared
2022-09-01

(232 days)

Product Code
Regulation Number
882.4560
Panel
OR
Reference & Predicate Devices
Intended Use

Knee+ is a stereotaxic system comprising intraoperative software as a medical device and surgical instruments. Knee+ is intended for primary Total Knee Replacement, to assist the surgeon in determining reference alignment axes in relation to anatomical landmarks, in order to position the cutting guide relative to the computed mechanical axis. Knee+ includes smart glasses as a Head Mounted Device (HMD) for displaying information to the user intraoperatively. The smart glasses should not be relied upon solely and should always be used in conjunction with traditional methods.

Device Description

The main purpose of Knee+ is to assist the surgeon during the primary Total Knee Replacement (TKR) intervention. Knee+ includes software and surgical instruments.

Knee+ provides information to help locate and orient the main femoral and tibial cutting planes as required in knee replacement surgery. Knee+ allows the surgeon to adjust the cutting plane orientation and the resection level. This includes means for the surgeon to collect anatomical references during the TKR intervention using the surgical instruments. The software locates the marker-equipped instruments in a 3D reference frame. All collected coordinates are processed by software algorithms to provide the surgeon with the relevant orientation of the tracked cutting guide. The Knee+ software is installed on a wearable Head Mounted Device (HMD) which includes an embedded camera and displays intraoperative information to the user. This near-eye display allows the surgeon to look at the HMD screen or the field of view when needed.
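The geometric core of such a system — deriving a cutting-guide orientation from tracked landmark coordinates — can be illustrated with a minimal sketch. This is a generic example, not the actual Knee+ algorithm: the landmark choice (hip and knee centers defining the femoral mechanical axis) and the function name `cut_deviation_deg` are illustrative assumptions.

```python
import math

def _normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cut_deviation_deg(hip_center, knee_center, plane_normal):
    """Angle (degrees) between a cutting-plane normal and the mechanical
    axis running from the knee center to the hip center. For a distal
    femoral cut made perpendicular to the mechanical axis, the normal is
    parallel to the axis and the deviation is 0 degrees."""
    axis = _normalize(tuple(h - k for h, k in zip(hip_center, knee_center)))
    normal = _normalize(plane_normal)
    # abs() ignores which side of the plane the normal points to.
    cos_t = abs(sum(a * n for a, n in zip(axis, normal)))
    return math.degrees(math.acos(min(1.0, cos_t)))

# A cut exactly perpendicular to a vertical mechanical axis deviates by 0:
print(cut_deviation_deg((0.0, 0.0, 400.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # → 0.0
```

In a real navigation system the landmark coordinates would come from the camera-tracked markers, and the deviation would be decomposed into clinically meaningful components (varus/valgus, flexion/extension) rather than a single angle.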

AI/ML Overview

The provided text describes the regulatory clearance of a medical device named "Knee+" but does not contain the specific acceptance criteria or the study details proving the device meets those criteria in the format requested.

The document primarily focuses on demonstrating substantial equivalence to a predicate device (K202750) rather than presenting a detailed performance study against predefined acceptance criteria. It mentions that "All performance testing demonstrates that Knee+ performs according to its specifications and functions as intended," and lists types of non-clinical performance data, but does not disclose the specific criteria or the quantitative results from those tests.

Therefore, most of the requested information cannot be extracted from the provided text.

Here's an attempt to answer what can be inferred from the document, along with explanations for what cannot be found:

1. A table of acceptance criteria and the reported device performance

  • Cannot be provided. The document does not list specific numerical acceptance criteria (e.g., accuracy thresholds) or the quantitative performance results (e.g., reported accuracy values) for the Knee+ device in a table format. It generally states that "overall system repeatability and accuracy were tested" and that "all performance testing demonstrates that Knee+ performs according to its specifications and functions as intended," but without specifics.

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

  • Cannot be provided specifically for accuracy/performance testing. The document mentions "Bench testing was conducted" to evaluate repeatability and accuracy, but it does not specify the sample size used for these tests. Data provenance (country, retrospective/prospective) is also not mentioned for this bench testing.
  • For Human Factors testing: "Test participants representing the intended users of the device were included." The exact number of participants is not specified.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • Not applicable/Cannot be provided. The document describes "Bench testing" for accuracy and repeatability, implying these were engineering tests rather than clinical studies requiring expert-established ground truth on patient data. No mention of experts or their qualifications for establishing ground truth is present.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Not applicable/Cannot be provided. Given the nature of the described "bench testing," adjudication methods for expert consensus on a test set are not relevant and are not mentioned.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No. The document does not describe an MRMC comparative effectiveness study involving human readers or AI assistance in that context. The device is a "stereotaxic system" that assists surgeons, and the performance testing mentioned (bench testing, software V&V, instrument functional tests) is not of this type.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Implicitly yes, in part. The document states "overall system repeatability and accuracy were tested" as part of bench testing. While the "system" includes the software, the listed tests sound like standalone engineering performance tests, rather than human-in-the-loop performance. However, there are no specific results provided to detail this standalone performance.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • Inferred: Mechanical/Metrological standards or reference measurements. For "overall system repeatability and accuracy" during bench testing, the ground truth would typically be established by highly precise mechanical or optical measurement systems, or by comparison to known physical standards, rather than expert consensus, pathology, or outcomes data, which are relevant for diagnostic or prognostic devices. No specific details are given.
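For bench testing of this kind, accuracy against a reference measurement is conventionally summarized as bias (mean signed error) and repeatability (spread of repeated measurements). The sketch below is a generic illustration of those two metrics, not drawn from the document's (undisclosed) test data; the helper name `accuracy_and_repeatability` and the sample values are assumptions.

```python
import statistics

def accuracy_and_repeatability(measured, reference):
    """Summarize repeated bench measurements against a known reference value.
    Returns (bias, repeatability): bias is the mean signed error, and
    repeatability is the sample standard deviation of the errors."""
    errors = [m - reference for m in measured]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical example: five repeated angle measurements of a fixture
# whose true (reference) angle is 10.0 degrees.
bias, rep = accuracy_and_repeatability([10.1, 9.9, 10.0, 10.2, 9.8], 10.0)
```

Acceptance criteria for such tests are typically stated as bounds on both quantities (e.g., |bias| and repeatability each below some threshold), which is the kind of specification the document alludes to but does not disclose.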

8. The sample size for the training set

  • Not applicable/Cannot be provided. The document doesn't discuss machine learning or AI model training. The device is described as a "stereotaxic system including an intraoperative software" whose algorithms process collected coordinates. This implies rule-based or conventional algorithmic software rather than a system trained on a large dataset. Therefore, there is no mention of a "training set."

9. How the ground truth for the training set was established

  • Not applicable/Cannot be provided. As there is no mention of a training set, the method for establishing its ground truth is also not discussed.

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).