The TMINI® Miniature Robotic System is indicated as a stereotaxic instrumentation system for total knee replacement (TKA) surgery. It is intended to assist the surgeon by providing software-defined spatial boundaries for orientation and reference information relative to identifiable anatomical structures for the accurate positioning of knee implant components.
The robotic device placement is performed relative to anatomical landmarks as recorded using the system intraoperatively and based on a surgical plan determined preoperatively using CT-based surgical planning tools.
The targeted population has the same characteristics as the population that is suitable for the implant(s) compatible with the TMINI® Miniature Robotic System.
The TMINI® Miniature Robotic System is to be used with the following knee replacement systems in accordance with their indications and contraindications:
- Enovis™ EMPOWR Knee System®
- Ortho Development® BKS® and BKS TriMax® Knee System
- Total Joint Orthopedics Klassic® Knee System
- United® U2™ Total Knee System
- Medacta® GMK® Sphere / SpheriKA Knee Systems
- Zimmer Biomet Anterior & Posterior Referencing Persona® Knee
- b-ONE MOBIO® Total Knee System
- Maxx Orthopedics Freedom® Total & Titan Knee
- LINK® LinkSymphoKnee System
The TMINI® Miniature Robotic System, like its predicate, consists of three primary components: a three-dimensional, graphical Preoperative Planning Workstation (TPLAN Planning Station), an Optical Tracking Navigation Console (TNav), and a robotically controlled hand-held tool (TMINI Robot) that assists the surgeon in preparing the bone for implantation of TKA components. In addition, this submission adds a web-based method for surgeons to review, approve, and download approved surgical plans generated on the TPLAN Planning Station.
The TPLAN Planning Station uses preoperative CT scans of the operative leg to create 3D surface models for case templating and intraoperative registration. The Planning Workstation contains a library of 510(k)-cleared knee replacement implants available for use with the system, from which the surgeon can select an implant model. The planner/surgeon can manipulate the 3D representation of the implant relative to the bone model to optimally place the implant. Once satisfied with the implant selection, location, and orientation, the surgeon reviews and approves the case plan using either TPLAN or the TCM web-based application. The data from the approved plan is written to a file that is used to guide the robotically controlled hand-held tool.
The hand-held robotic tool is optically tracked relative to optical markers placed in both the femur and tibia and articulates in two degrees of freedom, allowing the user to place bone pins in a planar manner in both bones. Mechanical guides are clamped to the bone pins, resulting in subsequent placement of cut slots and drill guide holes such that the distal femoral and proximal tibial cuts can be made in the pre-planned positions and orientations, and such that the implant manufacturer's multi-planar cutting block can be placed relative to drilled distal femoral pilot holes. The surgical plan can also be modified intraoperatively if needed.
The provided document, an FDA 510(k) summary for the TMINI® Miniature Robotic System, focuses on demonstrating substantial equivalence to a predicate device (K243285) rather than on presenting de novo acceptance criteria and a study proving the device meets those criteria.
The purpose of this 510(k) submission (K243481) is to introduce modifications to the TPLAN Planning station (specifically, an improved segmentation algorithm using a pre-trained and closed machine learning model, enhanced DICOM data importing, updated implant display and selection tools, and improved cybersecurity) and to add the THINK Case Manager (TCM) as a remote method for surgeon review, approval and downloading of approved surgical plans.
Therefore, the document does not contain the detailed information requested regarding specific acceptance criteria and a study that proves the device meets them in the context of a new device's performance validation against novel criteria. Instead, it asserts that the modified device's performance is substantially equivalent to that of the predicate device, which presumably underwent its own performance validation to establish its safety and effectiveness.
However, based on the provided text, I can extract information related to performance testing that was conducted to support the substantial equivalence claim.
Here's an analysis based on the available information:
Key Takeaway from the Document:
The document argues for substantial equivalence to a predicate device (K243285), meaning it explicitly states that no new questions of safety or effectiveness were identified by the modifications, and thus, extensive de novo performance validation against new acceptance criteria was not required or presented. The performance testing mentioned serves to confirm that the modifications did not negatively impact the existing performance characteristics.
Attempt to answer your questions based on the provided 510(k) summary:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of specific quantitative acceptance criteria or detailed reported device performance values. Instead, it generally states that performance testing was conducted using methods and acceptance criteria similar to those used for the predicate device, and that the device "met all test criteria and specifications."
Here's a summary of the performance testing mentions:
| Acceptance Criterion (inferred from testing type) | Reported Device Performance |
|---|---|
| Full System Run-Through Testing | Passed |
| Cutting Accuracy | Passed |
| Pin & Block Placement Accuracy | Passed |
| Cadaver Lab Validation Testing | Passed |
| System Gap Balance Accuracy | Passed |
| User Needs Validation Testing | Passed |
| Usability Testing | Passed |
| Software Testing | Passed |
| Biocompatibility Testing (for patient-contacting materials, if changed) | Passed (no new testing required, as no materials changed) |
Note: The phrase "Passed" indicates that the device met the (unspecified) acceptance criteria for these tests. The document emphasizes that these tests used "similar test methods and acceptance criteria to those used for the predicate device."
2. Sample size used for the test set and the data provenance
The document does not specify the sample sizes for any of the performance tests (e.g., number of test runs, number of cases in cadaver lab validation).
Data provenance is not explicitly mentioned (e.g., country of origin). The studies appear to be pre-clinical (e.g., cadaver lab) or in-house testing, not clinical studies involving patient data for the purpose of this 510(k). The mention of "preoperative CT scans of the operative leg to create 3D surface models" refers to the input data for the system's function, not necessarily the data used for testing its performance in this submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. Given that it's a 510(k) for device modifications and not a de novo clinical validation, the focus is on engineering and performance testing against internal specifications or predicate performance, rather than establishing clinical ground truth with human experts for an AI component. The software modification mentioned is an "improved segmentation algorithm using a pre-trained and closed machine learning model," which suggests it's an internal algorithm improvement, not a new diagnosis/decision-support AI requiring human expert consensus for ground truth comparison.
4. Adjudication method for the test set
This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
An MRMC study was not done or mentioned. This type of study is typically performed for AI-driven diagnostic or decision-support tools where human interpretation is involved. The device here is a robotic system for total knee replacement, where the AI component mentioned is an "improved segmentation algorithm" for planning, not a diagnostic imaging AI that assists radiologists.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
The document mentions an "improved segmentation algorithm using a pre-trained and closed machine learning model." While this implies standalone performance would be measured for such an algorithm (e.g., against ground truth segmentations), the document does not detail the specific standalone performance metrics, acceptance criteria, or results for this algorithm. The overall "Software Testing" passed, but these details are not disclosed.
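For context, standalone evaluation of a segmentation model is typically reported as a volumetric overlap between the model's output and a reference (e.g., manually segmented) mask. The submission does not say which metric was used; the following is a minimal, hypothetical sketch of the commonly used Dice coefficient, not a description of the manufacturer's actual test method:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks (1.0 = perfect)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D masks standing in for a CT slice's bone segmentation
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # → 0.8
```

In practice such a score would be computed per case over 3D CT volumes and compared against a pre-specified acceptance threshold; no such threshold is disclosed in this 510(k) summary.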
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for the tests it lists. For "Cutting Accuracy," "Pin & Block Placement Accuracy," and "System Gap Balance Accuracy," it's highly likely that engineering measurements against a known or measured truth (e.g., precise physical measurements in a cadaver lab or benchtop setting) served as the ground truth. For "Software Testing" related to the segmentation algorithm, the ground truth would typically be manually segmented CT scans by experts, but this is not confirmed in the text.
8. The sample size for the training set
This information is not provided. The document states the segmentation algorithm uses a "pre-trained and closed machine learning model," meaning the training was completed before this submission and the model is static. The size and nature of the training data are not disclosed.
9. How the ground truth for the training set was established
This information is not provided. For a segmentation algorithm, the ground truth for training would typically involve manual segmentation annotations performed by qualified experts (e.g., orthopedic surgeons or trained annotators with anatomical expertise) on a dataset of CT scans.
§ 882.4560 Stereotaxic instrument.
(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.
(b) Classification. Class II (performance standards).