K Number
K212040
Device Name
RI.HIP MODELER
Date Cleared
2022-03-11 (254 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

RI.HIP MODELER is intended for preoperative planning for primary total hip arthroplasty. RI.HIP MODELER is intended to be used as a tool to assist the surgeon in the selection and positioning of components in primary total hip arthroplasty.

RI.HIP MODELER is indicated for individuals undergoing primary hip surgery.

Device Description

RI.HIP MODELER is a non-invasive, standalone Total Hip Arthroplasty (THA) planning software application intended to provide preoperative planning for hip implant acetabular cup rotational selection and placement. RI.HIP MODELER allows surgeons to visualize and analyze digital images to assess a patient's spinopelvic mobility. The application is used to characterize patient conditions, manage patient performance expectations, and help surgeons determine acetabular cup placement based on spinopelvic mobility.

The software provides a baseline cup orientation recommendation intended to reduce the incidence of implant impingement based on patient condition and implant specifications. The surgeon is reminded to verify and adjust these parameters based on their clinical judgment.
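The 510(k) summary does not disclose the recommendation logic itself, so the following is only a minimal illustrative sketch of how a spinopelvic-mobility-driven starting recommendation could look. The stiffness thresholds, anteversion offsets, and nominal angles are all assumptions for demonstration, not values from the submission.

```python
# Illustrative sketch only: the actual RI.HIP MODELER algorithm is not
# described in the 510(k) summary. This models one widely published idea:
# adjusting target cup anteversion from the change in pelvic tilt between
# standing and sitting postures (spinopelvic mobility). All constants here
# are assumed placeholder values.

from dataclasses import dataclass


@dataclass
class SpinopelvicExam:
    standing_pelvic_tilt_deg: float  # from a standing lateral radiograph
    sitting_pelvic_tilt_deg: float   # from a sitting lateral radiograph


def baseline_cup_orientation(exam: SpinopelvicExam,
                             nominal_inclination_deg: float = 40.0,
                             nominal_anteversion_deg: float = 20.0) -> tuple[float, float]:
    """Return a (inclination, anteversion) starting recommendation in degrees.

    Hypothetical rule: a stiff spinopelvic complex (small tilt change
    between postures) gets extra anteversion to compensate for the pelvis
    not rotating when the patient sits.
    """
    mobility = abs(exam.standing_pelvic_tilt_deg - exam.sitting_pelvic_tilt_deg)
    if mobility < 10.0:          # "stiff" threshold -- assumed value
        anteversion = nominal_anteversion_deg + 5.0
    elif mobility > 30.0:        # "hypermobile" threshold -- assumed value
        anteversion = nominal_anteversion_deg - 2.0
    else:
        anteversion = nominal_anteversion_deg
    return nominal_inclination_deg, anteversion


if __name__ == "__main__":
    stiff = SpinopelvicExam(standing_pelvic_tilt_deg=5.0, sitting_pelvic_tilt_deg=12.0)
    print(baseline_cup_orientation(stiff))  # -> (40.0, 25.0)
```

Whatever the real rule looks like, the summary is explicit that the output is only a baseline: the surgeon verifies and adjusts it.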

AI/ML Overview

The provided text does not contain a detailed study with specific acceptance criteria and reported device performance values in a direct table format. However, it does discuss performance testing and verification/validation activities.

Below is an attempt to extract and synthesize that information, addressing each point where possible:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state quantitative acceptance criteria in a table. It refers to general design requirements, safety and effectiveness, and meeting intended use. The closest to a quantifiable performance metric is:

Acceptance Criteria (Implied) and Reported Device Performance:

  • Accuracy of landmarking measurements in worst-case conditions: within 2 degrees (image capture verification testing).
  • Ability to provide a baseline cup orientation that increases distance to implant impingement: verification and validation indicate no safety or performance concerns; tested with computational models compliant with ASME V&V 40: 2018.
  • Workflow execution and safe, effective use by representative users: performance testing and Human Factors validation confirmed successful, safe, and effective use.
  • Generation of a clinically relevant image with minimal distortion: confirmed by verification testing.

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: The document does not specify a numerical sample size for the test set used in "Image Capture Verification Testing" or "Performance Testing." It mentions "representative users" for Usability Engineering Validation Testing.
  • Data Provenance: Not explicitly stated. The text notes "No human clinical testing was required," implying that the testing was primarily bench and simulated. The mention of "typical patient activities of daily living (accessed through a simulation database)" suggests some simulated patient data was used, but the origin of this database is not provided.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

This information is not provided. The study relies on simulated data and computational models rather than expert-established ground truth on patient scans for the primary performance claims. For "Usability Engineering Validation Testing," it states "representative users" were involved, implying surgeons, but their number and specific qualifications are not detailed.

4. Adjudication Method for the Test Set

Not applicable. The reported testing focuses on technical performance and usability, not on diagnostic agreement among experts.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states, "No human clinical testing was required to determine the safety and effectiveness of RI.HIP MODELER." The device's primary function is described as assisting the surgeon with a baseline cup orientation recommendation, rather than directly improving human reader performance in diagnosis or measurement.

6. If Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Testing Was Done

Yes, testing was done on the standalone software's performance. The document describes "Software Verification Testing" and "Image Capture Verification Testing" that assessed the algorithm's performance and accuracy (e.g., landmarking accuracy within 2 degrees). The computational models used in the design are also verified and validated, supporting the algorithm's ability to provide a baseline cup orientation.
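The summary states only the acceptance threshold ("within 2 degrees"), not how it was scored. The sketch below shows one plausible shape for a worst-case angular-accuracy check of that kind; the function names and demo values are hypothetical, not from the submission.

```python
# Hedged sketch of a standalone landmarking-accuracy verification harness.
# Reference angles would come from known fixtures or annotated test images;
# the demo values below are placeholders, not real test data.

TOLERANCE_DEG = 2.0


def angular_error(measured_deg: float, reference_deg: float) -> float:
    """Absolute angular difference, wrapped so 359 vs. 1 degree scores as 2."""
    diff = abs(measured_deg - reference_deg) % 360.0
    return min(diff, 360.0 - diff)


def verify_landmarking(cases: list[tuple[float, float]]) -> bool:
    """Pass only if every case is within tolerance (a worst-case criterion)."""
    errors = [angular_error(measured, reference) for measured, reference in cases]
    worst = max(errors)
    print(f"worst-case error: {worst:.2f} deg (limit {TOLERANCE_DEG} deg)")
    return worst <= TOLERANCE_DEG


if __name__ == "__main__":
    # (algorithm measurement, reference angle) pairs in degrees -- placeholders
    demo = [(39.1, 40.0), (18.4, 20.0), (355.5, 357.0)]
    print("PASS" if verify_landmarking(demo) else "FAIL")
```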

7. The Type of Ground Truth Used

The ground truth for the core functionality appears to be derived from:

  • Computational Models: The baseline cup orientation algorithm relies on computational models that, per the summary, "comply with ASME V&V 40: 2018, Assessing Credibility of Computational Modeling through Verification and Validation." These models simulate implant kinematics and distance to impingement.
  • Design Requirements/Input Specifications: Performance testing demonstrated that the device "meets the required design inputs."
  • Simulated Database: "Implant kinematics and distance to impingement are calculated for each activity, given a set of implant specifications," using information "accessed through a simulation database" (see the sketch after this list).
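As a rough illustration of the per-activity "distance to impingement" idea referenced above, the sketch below loops over a hypothetical activity database and reports the angular clearance left by a head/neck pair, using the simplified hemispherical-liner relation theta = 2 * arccos(neck diameter / head diameter). This is a textbook 2-D simplification, not the vendor's ASME V&V 40-validated model; the activity names, required arcs, and implant sizes are all assumptions.

```python
# Illustrative sketch, not the vendor's model: iterate over activities of
# daily living (as if read from a simulation database) and report the angular
# clearance before prosthetic neck-on-liner impingement.

import math

# Hypothetical activity envelope: required flexion arc (degrees) per activity.
ACTIVITY_DB = {
    "walking": 60.0,
    "sit_to_stand": 95.0,
    "tie_shoes_seated": 110.0,
}


def oscillation_angle_deg(head_diam_mm: float, neck_diam_mm: float) -> float:
    """Impingement-free arc of a head/neck pair in an idealized hemispherical
    liner: theta = 2 * arccos(neck/head). Real device models are 3-D and
    account for cup orientation, liner geometry, and bony anatomy."""
    return 2.0 * math.degrees(math.acos(neck_diam_mm / head_diam_mm))


def distance_to_impingement(head_diam_mm: float, neck_diam_mm: float) -> dict[str, float]:
    """Angular clearance (degrees) left over for each activity in the database."""
    arc = oscillation_angle_deg(head_diam_mm, neck_diam_mm)
    return {name: arc - required for name, required in ACTIVITY_DB.items()}


if __name__ == "__main__":
    clearances = distance_to_impingement(head_diam_mm=36.0, neck_diam_mm=12.0)
    for activity, margin in sorted(clearances.items(), key=lambda kv: kv[1]):
        print(f"{activity:18s} clearance: {margin:6.1f} deg")
```

A larger head-to-neck ratio widens the impingement-free arc, which is why clearance here grows as head diameter increases relative to neck diameter.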

For the image capture verification, the ground truth for "accuracy of landmarking measurements" would likely be based on known reference points or measurements on the test images, though this isn't explicitly detailed.

8. The Sample Size for the Training Set

This information is not provided. The document focuses on verification and validation of the device, not the details of its training phase. The device "employs an algorithm" and uses a "simulation database," but there is no mention of a traditional machine learning "training set" with a specified size.

9. How the Ground Truth for the Training Set Was Established

This information is not provided because details about a "training set" in the context of machine learning are not explicitly discussed. The algorithm's behavior is based on "computational models" and a "simulation database," whose internal mechanisms and ground truth establishment are not described in this summary.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).