K Number
K230850
Manufacturer
Date Cleared
2023-12-20 (267 days)

Product Code
Regulation Number
888.3560
Panel
OR
Reference & Predicate Devices
Intended Use

The United Orthopedic Knee Patient Specific Instrumentation is indicated as an orthopedic instrument system to assist in the positioning of compatible total knee arthroplasty systems. It is comprised of surgical planning software (Intelligent Surgery Knee CT Segmentation Engine/Knee X-ray Segmentation Engine, and Implant Recognition Engine) intended to preoperatively plan the surgical placement of the United Orthopedic Knee implants on the basis of provided patient radiological images and 3D reconstructions of bones with identifiable anatomical landmarks, and surgical instrument components that include patient-specific or customized guides fabricated based on the surgical plan to precisely reference the placement of the implant components intra-operatively per the surgical plan. The United Orthopedic Knee Patient Specific Instrumentation is indicated for patients without severe bone deformities, such as an HKA (hip-knee-ankle) angle greater than 15° or deformities due to prior fracture of the distal femur or proximal tibia.

The instruments are intended for use with the U2 Total Knee System when the clinical evaluation complies with its cleared indications for use. The instruments are intended for single use only.

Device Description

The United Orthopedic Knee Patient Specific Instrumentation is comprised of: United Orthopedic (UO) surgical guides (hardware), anatomical models (physical replicas), the Intelligent Surgery Knee CT Segmentation Engine / Intelligent Surgery Knee X-ray Segmentation Engine (software), and the Intelligent Surgery Knee Implant Recognition Engine (software). Enhatch is responsible for the design and development of all three components of the system.

The subject device is intended to facilitate the implantation of the U2 Knee prostheses of the U2 Total Knee System, developed and distributed by United Orthopedic Corporation.

[THE SOFTWARE]

The Intelligent Surgery Knee Segmentation Engine supports two imaging modalities: CT (Knee CT Segmentation) and X-ray (Knee X-ray Segmentation). The Intelligent Surgery Knee CT Segmentation Engine and X-ray Segmentation Engine are web applications that use deep learning algorithms to detect and extract region of interest (ROI) information (femur and tibia) from medical imaging data (DICOM). The segmentation engines generate 3D models which can be used for treatment planning of Total Knee Arthroplasty (TKA), design of surgical guides, or generation of 3D printed anatomical models.
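
The submission does not disclose how the segmentation engines are implemented. Purely as a hedged illustration of the final step such an engine must perform (turning a predicted voxel mask into a 3D bone model), here is a minimal sketch using marching cubes from scikit-image; the function name, the synthetic mask standing in for a network output, and the choice of marching cubes are all assumptions, not details from the document.

```python
import numpy as np
from skimage import measure

def mask_to_mesh(mask: np.ndarray, spacing: tuple):
    """Convert a binary segmentation mask (z, y, x) into a triangle mesh.

    `spacing` is the voxel size in mm (taken from the DICOM headers), so
    the resulting mesh is expressed in patient-space millimetres.
    """
    # Extract the iso-surface at the 0/1 mask boundary (level=0.5),
    # scaling vertex coordinates by the voxel spacing.
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing
    )
    return verts, faces, normals

# Synthetic blob standing in for a deep-learning femur prediction.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[16:48, 24:40, 24:40] = 1
verts, faces, _ = mask_to_mesh(mask, spacing=(0.625, 0.5, 0.5))
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

A mesh of this kind is what the downstream planning and guide-design steps would consume, whether exported (e.g., as STL) for 3D printed anatomical models or used directly in the planning software.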

The Intelligent Surgery Knee Implant Recognition Engine is a web application that uses an optimization algorithm as a treatment planning tool for total knee arthroplasty. It assists in selecting implant size and position from the range of implants in a total knee implant system, using run parameters based on the TKA surgical technique of that system. The software identifies anatomical landmarks of the patient's bony anatomy and articular surface topographies to reference the position and alignment of the femoral and tibial implant components. This positioning and alignment in turn allows for the design of the surgical guides of the United Orthopedic Knee Patient Specific Instrumentation.
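
The optimization itself is not described in the submission. As a loose sketch of the simplest form such implant-size selection could take, the following picks, from a hypothetical catalog of implant sizes, the template surface that sits closest to the segmented bone surface; the names, data layout, and brute-force nearest-neighbour search are all assumptions, and the actual engine presumably also optimizes implant pose under the technique-specific run parameters.

```python
import numpy as np

def select_implant_size(bone_pts: np.ndarray,
                        implant_catalog: dict) -> str:
    """Return the catalog size whose template best matches the bone surface.

    bone_pts: (N, 3) points sampled from the segmented articular surface.
    implant_catalog: size label -> (M, 3) template surface points, assumed
    already registered into the patient coordinate frame.
    """
    def mean_nearest_distance(src: np.ndarray, dst: np.ndarray) -> float:
        # Brute-force nearest-neighbour distances; fine at sketch scale.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        return float(d.min(axis=1).mean())

    scores = {size: mean_nearest_distance(pts, bone_pts)
              for size, pts in implant_catalog.items()}
    return min(scores, key=scores.get)
```

A real planner would likely wrap a score like this in a pose optimization (translating and rotating each candidate before scoring), which is consistent with the document describing the engine as an optimization algorithm rather than a lookup.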

Note that these algorithms are static and non-adaptive; they do not alter their behavior over time based on user input.

[THE HARDWARE]

The UO surgical guides and anatomical models are patient-specific instruments designed to facilitate the implantation of the United Orthopedic Knee prostheses. The UO surgical guides are designed based on the preoperative plan generated by the Intelligent Surgery Knee Implant Recognition Engine software.

AI/ML Overview

The provided text describes a 510(k) submission for the "United Orthopedic Knee Patient Specific Instrumentation" and summarizes the performance tests conducted. However, the document primarily focuses on demonstrating substantial equivalence to predicate devices and on general software verification and validation, rather than presenting a detailed acceptance criteria table with reported device performance for the AI algorithms' diagnostic or predictive capabilities.

The AI components mentioned are:

  • Intelligent Surgery Knee CT Segmentation Engine / Intelligent Surgery Knee X-ray Segmentation Engine: Uses deep learning algorithms to detect and extract region of interest (ROI) information (femur and tibia) and generate 3D models.
  • Intelligent Surgery Knee Implant Recognition Engine: Uses an optimization algorithm as a treatment planning tool for total knee arthroplasty, assisting in implant selection and position.

The information provided about the study mainly focuses on general software testing and system verification, not a specific study proving the AI's diagnostic/predictive accuracy against a gold standard in a clinical context, which is typically what is asked for in such acceptance criteria discussions for AI/ML medical devices.

Therefore, I cannot fully complete all sections of your request based on the provided text, especially regarding specific performance metrics for the AI components and clinical study details (e.g., MRMC study, human reader improvement).

Here's what can be extracted and inferred, along with what's missing:


1. A table of acceptance criteria and the reported device performance

The document lists general testing categories and states that the device "passed the acceptance criteria and demonstrated satisfactory performance per the intended use" for each. It does not provide specific quantitative acceptance criteria or reported numerical performance metrics for the AI algorithms (e.g., sensitivity, specificity, Dice score for segmentation, or accuracy of landmark detection against a ground truth). It describes the type of testing but not the results in detail.

| Acceptance Criteria Category | Reported Device Performance (as stated in document) |
| --- | --- |
| Segmentation System Testing | "The device passed the acceptance criteria and demonstrated satisfactory performance per the intended use." |
| Model Verification Testing | "The device passed the acceptance criteria and demonstrated satisfactory performance per the intended use." |
| Software System Testing | "All specimens were within the bounds of the acceptance criteria. The resulting output measurements from the system were within the bounds of the input parameters (input values produced expected output values). The device passed the acceptance criteria and demonstrated satisfactory performance per the intended use." |
| Guide Wear Testing | "The device passed the acceptance criteria of average weight loss and demonstrated satisfactory performance per the intended use." |
| System Verification and Validation Test | "The results demonstrated satisfactory performance per the intended use as in the predicate." |

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Sample Size: Not specified for any of the tests. The document generically mentions "specimens" and "cadaver specimens."
  • Data Provenance: Not specified (e.g., country of origin).
  • Retrospective or Prospective: Not specified. "Cadaver specimens" suggest an ex-vivo or lab-based study rather than a direct clinical study.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • The document states "Full use simulations tests using cadaver specimens were performed by multiple surgeons to verify and validate the overall system performance."
  • Number of experts: "Multiple surgeons," but no specific number is given.
  • Qualifications: "Surgeons," but no specific experience or specialty (e.g., orthopedic surgeon, years of experience) is provided.
  • It's unclear if these surgeons established the ground truth or simply evaluated the system's performance in a simulated setting. For segmentation or landmark detection, ground truth typically involves more rigorous, independently verified annotations.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Not specified.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No MRMC study described. The focus appears to be on the system's accuracy in generating models and plans, rather than its impact on human reader performance in a diagnostic context. The "multiple surgeons" evaluating the system in the System V&V test is a system performance evaluation, not an MRMC study comparing human performance with and without AI assistance for a specific task.

6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

  • The "Segmentation System Testing" and "Software System Testing" sections likely represent standalone algorithm testing, where the algorithm's output (segmentation masks, 3D models, landmark detection accuracy) was compared against a reference.
  • However, no specific performance metrics (e.g., accuracy, precision, recall, or a Dice score such as the one sketched below) are provided to quantify this standalone performance.
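
For context, since the Dice score is the metric most commonly reported for standalone segmentation performance, here is its standard definition as a minimal implementation; this is generic reference code, not taken from or tied to the submission.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two equal-shape binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * intersection / total

a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(round(dice_score(a, b), 3))  # 2*4 / (4+6) = 0.8
```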

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • The document implies that the ground truth for "Segmentation System Testing," "Model Verification Testing," and "Software System Testing" was based on established benchmarks, comparisons to expected outputs, or potentially expert-derived ground truth for the "accuracy" claims. However, the specific method (e.g., expert consensus, manual measurements, etc.) for establishing this ground truth is not explicitly stated.
  • For the "System Verification and Validation Test" using cadaver specimens and surgeons, the "satisfactory performance" seems to imply that the surgeons found the system's output acceptable and accurate for surgical planning, but the method of establishing the "true" anatomical values or optimal plans as ground truth is not detailed.

8. The sample size for the training set

  • Not specified. The document mentions the use of "deep learning algorithms" but provides no details on the training data.

9. How the ground truth for the training set was established

  • Not specified. The document mentions deep learning but no details about the ground truth creation for training.

In summary, the provided FDA 510(k) summary focuses on general validation of a device that includes AI components, but it does not provide the quantifiable details often required for AI/ML device submissions: specific performance metrics, test set characteristics, or the rigorous establishment of ground truth one would expect for a diagnostic AI algorithm. The submission appears to be focused on the overall system's functionality and its role in surgical planning rather than on a detailed clinical validation of the AI's diagnostic capabilities.

§ 888.3560 Knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis.

(a) Identification. A knee joint patellofemorotibial polymer/metal/polymer semi-constrained cemented prosthesis is a device intended to be implanted to replace a knee joint. The device limits translation and rotation in one or more planes via the geometry of its articulating surfaces. It has no linkage across-the-joint. This generic type of device includes prostheses that have a femoral component made of alloys, such as cobalt-chromium-molybdenum, and a tibial component or components and a retropatellar resurfacing component made of ultra-high molecular weight polyethylene. This generic type of device is limited to those prostheses intended for use with bone cement (§ 888.3027).

(b) Classification. Class II.