OTS Hip is indicated to enable planning of orthopedic surgical procedures based on CT medical imaging data of the patient anatomy. It is an intraoperative image-guided localization system that enables navigated surgery. It links a freehand probe, tracked by a passive marker sensor system, to virtual computer image space on a patient's preoperative image data being processed by the OTS platform.
The system is indicated for orthopedic hip surgical procedures where a reference to a rigid anatomical structure, such as the pelvis, can be identified relative to a CT-based model of the anatomy. The system aids the surgeon in accurately navigating a compatible prosthesis to the preoperatively planned position.
The system is designed for orthopedic surgical procedures including:
- Pre-operative planning of Total Hip Arthroplasty (THA)
- Intraoperative navigated surgery for THA using a posterior approach
OTS Hip is a system that supports the surgeon with preoperative planning and intraoperative guidance during orthopedic hip joint replacement surgery.
OTS Hip comprises software and hardware components that work together to form a stereotaxic system. Medical imaging data in DICOM format is loaded into the system for access by the software applications that are part of it.
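To make the data flow concrete, the sketch below shows one common way a CT series in DICOM format can be read into a 3D volume. This is purely illustrative: the pydicom library, the directory layout, and the path are assumptions of this sketch, not details of the OTS Hip implementation.

```python
# Illustrative only: loading a CT series from DICOM files into a 3D volume.
# The library (pydicom) and directory layout are assumptions, not part of the
# OTS Hip submission.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(series_dir: str) -> np.ndarray:
    """Read all DICOM slices in a directory and stack them into a volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the scan axis using the z component of ImagePositionPatient.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert raw pixel values to Hounsfield units via rescale slope/intercept.
    return volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)


# Hypothetical usage:
# ct_volume = load_ct_series("/data/patient_001/ct")
```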
OTS Hip software consists of OTS Hip Plan (OHP), a 3D preoperative planning application, and OTS Hip Guide (OHG), which provides intraoperative real-time navigation for the guidance of surgical tools and prosthetic components in relation to the preoperatively determined goal positions.
OHP is software for preoperative planning prior to THA (Total Hip Arthroplasty) surgery. OHP enables the orthopedic surgeon to prepare for surgery by analyzing the patient anatomy in a 3D environment based on medical imaging data.
OHG imports the result of the preceding planning stage, a released plan with the 3D model and planned data, from the database of the OTS system. In addition, OHG monitors the real-time position of instruments and prosthetic components in a 3D environment by means of medical imaging data.
The components of the OHG device include:
- a camera and computer stand with an electrical system, to which a camera and a medical panel PC are attached
- a footswitch
- a keyboard
- Tracers (passive markers)
- adapters that hold the Tracers, can be mounted to compatible surgical instruments, and are used for calibration
- tools and instruments that are used during surgery
The document describes the performance testing and validation of the OTS Hip device, particularly focusing on its Machine Learning (ML) algorithms for segmentation and landmark identification.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The document states that the ML models overall met the acceptance criteria for segmentation and landmark identification, though it also notes two clinically complex cases that did not pass for segmentation. While specific numerical acceptance criteria (e.g., a minimum Dice score for segmentation, a specific distance threshold for landmark identification) are not explicitly provided in a table, the qualitative statement indicates successful validation.
Given the information provided, a table attempting to present this would look like:
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Segmentation Accuracy | ML models should achieve acceptable accuracy when compared to manually annotated ground truth. | Overall met acceptance criteria; two clinically complex cases did not pass. |
| Landmark Identification Accuracy | ML models should achieve acceptable accuracy when compared to manually annotated ground truth. | Overall met acceptance criteria (no specific failures mentioned for landmarks). |
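The table above reflects only the qualitative statements in the document. As an illustration of the kind of quantitative metrics that typically sit behind such criteria, the sketch below computes a Dice overlap for segmentation and a Euclidean landmark error; the metric choices and the 0.9 / 3.0 mm thresholds are placeholders for illustration, not values from the submission.

```python
# Illustrative metrics of the kind typically used for segmentation and landmark
# validation. The actual metrics and thresholds for OTS Hip are not stated in
# the document; the 0.9 Dice and 3.0 mm values below are hypothetical.
import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between predicted and ground-truth binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())


def landmark_errors_mm(pred_pts: np.ndarray, truth_pts: np.ndarray) -> np.ndarray:
    """Euclidean distances (mm) between predicted and annotated landmarks (N x 3 arrays)."""
    return np.linalg.norm(pred_pts - truth_pts, axis=1)


# Hypothetical per-case pass/fail check:
# case_passed = (dice_coefficient(seg_pred, seg_truth) >= 0.9
#                and landmark_errors_mm(lm_pred, lm_truth).max() <= 3.0)
```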
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 90 datasets
- Data Provenance:
- Countries of Origin: US (45.6%), Japan (33.3%), and the European Union (21.1%).
- Retrospective/Prospective: Not explicitly stated, but the description of "real-life data from surgeries under clinical conditions" and "collected from the same site" for OUS datasets suggests retrospective collection of existing CT medical imaging data.
- Representativeness: The datasets were described as "representative of the US population in terms of gender, age, and ethnicity" and included "images from multiple CT equipment manufacturers." The data from Japan included a "high percentage of dysplastic hips with accompanying marked degenerative change."
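As a quick cross-check of what these percentages imply for the 90-case test set, the snippet below derives approximate per-region counts; the counts are arithmetic inferred here, not figures stated in the document.

```python
# Back-of-the-envelope counts implied by the stated provenance percentages of
# the 90-case test set (derived here, not quoted from the document).
total_cases = 90
shares = {"US": 0.456, "Japan": 0.333, "EU": 0.211}
counts = {region: round(total_cases * share) for region, share in shares.items()}
print(counts)                 # {'US': 41, 'Japan': 30, 'EU': 19}
print(sum(counts.values()))   # 90
```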
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated as a single number. The text mentions "Appropriately qualified experts established the ground truth" and "Cases were then validated by a third reviewer who evaluated the initial annotation." This implies at least two annotators (initial) and a third for validation/adjudication.
- Qualifications of Experts: "Appropriately qualified experts." Specific qualifications (e.g., years of experience, sub-specialty) are not detailed.
4. Adjudication Method for the Test Set
- Method: The document states, "Cases were then validated by a third reviewer who evaluated the initial annotation." This suggests a form of conflict resolution or quality control, where a third expert steps in after initial annotation to confirm or correct. The exact rules (e.g., majority vote, senior expert decision) are not specified, but it implies a process where disagreements or initial annotations are reviewed. There is no mention of "2+1" or "3+1" specifically, but the "third reviewer" role aligns with adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The validation focuses on the standalone performance of the ML algorithms compared to expert ground truth.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Standalone Performance: Yes, standalone performance of the ML algorithms was done. The "Machine Learning Algorithm Validation" section explicitly states that "The results of segmentation and landmark ML algorithms were compared with the manually annotated 'ground truth' segmentations and landmarks of the test dataset." This describes an algorithm-only evaluation.
7. The Type of Ground Truth Used
- Ground Truth Type: Expert consensus/manual annotation. The ground truth was established by "Appropriately qualified experts" through "manually annotated 'ground truth' segmentations and landmarks."
8. The Sample Size for the Training Set
- Training Set Sample Size: The exact sample size for the training set is not specified. The document only mentions that the "test datasets were independent from the training dataset, where none of the datasets used for training was used for testing."
9. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth: The document states that "Cases were then separated into training and testing datasets in an unbiased fashion." It implies that the same method used for establishing ground truth for the test set (i.e., "manually annotated 'ground truth' segmentations and landmarks" by "appropriately qualified experts" with potential "third reviewer" validation) would have been applied to the data that eventually formed the training set. However, the details for the training set ground truth establishment are not explicitly elaborated further than this general statement for all cases.
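The document says only that cases were "separated into training and testing datasets in an unbiased fashion." One common reading of this is a random, case-level split in which no case contributes to both sets; the sketch below illustrates that idea with hypothetical case IDs and an assumed split ratio, not the actual OTS Hip procedure.

```python
# Minimal sketch of a case-level random split, one plausible interpretation of
# "separated ... in an unbiased fashion". The split ratio, case IDs, and seed
# are assumptions; the actual OTS Hip procedure is not described in the document.
import random


def split_cases(case_ids: list[str], test_fraction: float = 0.2, seed: int = 0):
    """Shuffle case IDs and split them so no case appears in both sets."""
    rng = random.Random(seed)
    shuffled = list(case_ids)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train_ids, test_ids)


# Hypothetical usage:
# train_ids, test_ids = split_cases([f"case_{i:03d}" for i in range(450)])
# assert not set(train_ids) & set(test_ids)
```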
§ 882.4560 Stereotaxic instrument.
(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.
(b) Classification. Class II (performance standards).