K Number
K192979
Manufacturer
Date Cleared
2020-03-11

(139 days)

Product Code
Regulation Number
888.3030
Panel
OR
Reference & Predicate Devices
Intended Use

The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used in an additive manufacturing portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in thoracic (excluding spine) and reconstructive surgeries. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.

Device Description

The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support thoracic (excluding spine) and reconstructive surgeries. The system uses electronic medical images of the patient's anatomy (CT data), with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient-specific physical and/or digital output devices, including anatomical models, guides, and case reports.

AI/ML Overview

The KLS Martin Individual Patient Solutions (IPS) Planning System is a medical device for surgical planning. The provided text contains information about its acceptance criteria and the studies performed to demonstrate its performance.

1. Table of Acceptance Criteria and Reported Device Performance

The provided document describes specific performance tests related to the materials and software used in the KLS Martin Individual Patient Solutions (IPS) Planning System. However, it does not explicitly provide a table of acceptance criteria with numerical targets and corresponding reported device performance values for the device's primary function of surgical planning accuracy or effectiveness. Instead, it relies on demonstrating that materials withstand sterilization, meet biocompatibility standards, and that software verification and validation were completed.

Here's a summary of the performance claims based on the provided text:

Acceptance Criteria Category: Reported Device Performance (Summary from text)

Material Performance
- Polyamide (PA) devices: demonstrated the ability to withstand multiple sterilization cycles while maintaining ≥85% of initial tensile strength, leveraging data from K181241. Testing provides evidence of shelf life.
- Titanium devices (additively manufactured): demonstrated substantial equivalence to titanium devices manufactured using traditional (subtractive) methods, leveraging testing from K163579. These devices are identical in formulation, manufacturing processes, and post-processing.

Biocompatibility
- All testing (cytotoxicity, sensitization, irritation, and chemical/material characterization) was within pre-defined acceptance criteria, in accordance with ISO 10993-1, and adequately addresses biocompatibility for the output devices and intended use.

Sterilization
- Steam sterilization validations were performed for each output device for the dynamic-air-removal cycle in accordance with ISO 17665-1:2006 to a sterility assurance level (SAL) of 10⁻⁶ using the biological indicator (BI) overkill method. All test method acceptance criteria were met.

Pyrogenicity
- LAL endotoxin testing was conducted according to AAMI ANSI ST72. Results demonstrate endotoxin levels below the USP allowed limit for medical devices and meet pyrogen limit specifications.

Software Verification & Validation
- Performed on each individual software application (Materialise Mimics, Geomagic® Freeform Plus™) used in planning and design. Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were correctly and completely implemented and are traceable to system requirements. Testing showed conformity with pre-defined specifications and acceptance criteria. Software documentation ensures mitigation of potential risks, and the software performs as intended based on user requirements and specifications.

Guide Specifications (Cutting/Marking Guide)
- Thickness: Min 1.0 mm, Max 20 mm
- Width: Min 7 mm, Max 300 mm
- Length: Min 7 mm, Max 300 mm
- Degree of curvature (in-plane): N/A
- Degree of curvature (out-of-plane): N/A
- Screw hole spacing: Min 4.5 mm, no stated maximum
- No. of holes: N/A

Screw Specifications (Temporary)
- Diameter: 2.3 mm to 3.2 mm
- Length: 7 mm to 17 mm
- Style: maxDrive (Drill-Free, non-locking, locking)
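The dimensional limits in the table above lend themselves to a simple automated acceptance check. The following is a minimal sketch of such a check; the `Spec` helper class, field names, and example designs are illustrative assumptions, not part of the IPS Planning System itself (only the numeric limits come from the table):

```python
# Hypothetical sketch: checking a candidate guide/screw design against the
# dimensional acceptance limits listed in the table. Only the numeric limits
# are taken from the submission; everything else is illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Spec:
    lo: float              # minimum allowed value (mm)
    hi: Optional[float]    # maximum allowed value (mm); None = no stated maximum

    def ok(self, value: float) -> bool:
        return value >= self.lo and (self.hi is None or value <= self.hi)

# Cutting/marking guide limits from the table (all in mm).
GUIDE_SPECS = {
    "thickness": Spec(1.0, 20.0),
    "width": Spec(7.0, 300.0),
    "length": Spec(7.0, 300.0),
    "screw_hole_spacing": Spec(4.5, None),  # no stated maximum
}

# Temporary screw limits from the table (all in mm).
SCREW_SPECS = {
    "diameter": Spec(2.3, 3.2),
    "length": Spec(7.0, 17.0),
}

def check(design: dict, specs: dict) -> list:
    """Return the names of any dimensions that fall outside their limits."""
    return [name for name, spec in specs.items() if not spec.ok(design[name])]

guide = {"thickness": 2.5, "width": 45.0, "length": 60.0, "screw_hole_spacing": 5.0}
screw = {"diameter": 2.0, "length": 12.0}  # diameter is below the 2.3 mm minimum

print(check(guide, GUIDE_SPECS))  # []
print(check(screw, SCREW_SPECS))  # ['diameter']
```

A check of this kind only enforces the per-dimension limits; it says nothing about anatomical fit, which the submission addresses through the virtual planning sessions described below.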

2. Sample size used for the test set and the data provenance

The document specifies "simulated use of guides intended for use in the thoracic region was validated by means of virtual planning sessions with the end-user." However, it does not provide any specific sample size for a test set (e.g., number of cases or patients) or details about the provenance of data (e.g., retrospective or prospective, country of origin). The studies appear to be primarily focused on material and software validation, not a clinical test set on patient data.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

The document does not specify the number of experts used to establish ground truth for a test set, nor their qualifications. The "virtual planning sessions with the end-user" implies input from clinical professionals, but no details are provided.

4. Adjudication method for the test set

The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for a test set.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

The document states that "Clinical testing was not necessary for the determination of substantial equivalence." Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers was not performed or reported for this submission. The device is a planning system for producing physical outputs, not an AI-assisted diagnostic tool for human readers.

6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

The device is described as "a software system and image segmentation system for the transfer of imaging information... The system processes the medical images and produces a variety of patient specific physical and/or digital output devices." It also involves "input from the physician" and "trained employees/engineers who utilize the software applications to manipulate data and work with the physician to create the virtual planning session." This description indicates a human-in-the-loop system, not a standalone algorithm. Performance testing primarily focuses on the software's ability to implement requirements and specifications and material properties, rather than an independent algorithmic assessment.

Software verification and validation were performed on "each individual software application used in the planning and design," demonstrating conformity with specifications. This is the closest analog to a standalone performance evaluation for the software components: it confirms that each application functions as specified, but within the context of supporting a human-driven planning process.
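The "objective evidence" of complete implementation described above is typically summarized as a requirements-to-test traceability check. A minimal illustrative sketch follows; the requirement IDs, test names, and results are invented examples, not from the submission:

```python
# Illustrative requirements-traceability check of the kind used to show that
# all software requirements were implemented and are traceable to system
# requirements. All IDs and results below are invented.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Each verification test records which requirement(s) it covers and its result.
test_results = [
    {"test": "TC-01", "covers": {"REQ-001"}, "passed": True},
    {"test": "TC-02", "covers": {"REQ-002", "REQ-003"}, "passed": True},
]

covered = set().union(*(t["covers"] for t in test_results))
uncovered = requirements - covered
failed = [t["test"] for t in test_results if not t["passed"]]

print(sorted(uncovered))  # [] -> every requirement is traced to at least one test
print(failed)             # [] -> all verification tests passed
```

An empty `uncovered` set and an empty `failed` list together correspond to the claim that all requirements were "correctly and completely implemented."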

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

For the materials and sterilization parts of the study, the "ground truth" or reference is established by international standards (ISO 10993-1, ISO 17665-1:2006) and national standards (AAMI ANSI ST72, USP allowed limit for medical devices).
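For context, the USP bacterial endotoxin limit referenced here is commonly applied as 20 EU per device (0.5 EU/mL of extract) for devices that do not contact cerebrospinal fluid. A minimal arithmetic sketch, with an invented LAL measurement and extraction volume (neither is from the submission):

```python
# Illustrative endotoxin-limit arithmetic. The 20 EU/device figure is the
# commonly applied USP limit for non-CSF-contacting devices; the measured
# concentration and extraction volume below are invented examples.
LIMIT_EU_PER_DEVICE = 20.0   # USP limit, non-CSF-contacting devices

measured_eu_per_ml = 0.12    # hypothetical LAL test result (EU/mL)
extract_volume_ml = 40.0     # hypothetical extraction volume per device (mL)

eu_per_device = measured_eu_per_ml * extract_volume_ml
print(f"{eu_per_device:.1f} EU/device, within limit: {eu_per_device < LIMIT_EU_PER_DEVICE}")
```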

For the "virtual planning sessions with the end-user," the ground truth is implicitly the expert judgment/agreement of the end-user (physician) on the simulated surgical treatment options and guide designs. No further specifics are given.

8. The sample size for the training set

The document does not mention any training set or its sample size. This is expected, as the device is not described as a machine learning or AI device that requires a training set for model development in the typical sense. The software components are commercial off-the-shelf (COTS) applications (Materialise Mimics, Geomagic® Freeform Plus™), which would have their own internal verification and validation from their developers.

9. How the ground truth for the training set was established

Since no training set is mentioned for the device itself, the establishment of ground truth for a training set is not applicable in this document.

§ 888.3030 Single/multiple component metallic bone fixation appliances and accessories.

(a)
Identification. Single/multiple component metallic bone fixation appliances and accessories are devices intended to be implanted consisting of one or more metallic components and their metallic fasteners. The devices contain a plate, a nail/plate combination, or a blade/plate combination that are made of alloys, such as cobalt-chromium-molybdenum, stainless steel, and titanium, that are intended to be held in position with fasteners, such as screws and nails, or bolts, nuts, and washers. These devices are used for fixation of fractures of the proximal or distal end of long bones, such as intracapsular, intertrochanteric, intercervical, supracondylar, or condylar fractures of the femur; for fusion of a joint; or for surgical procedures that involve cutting a bone. The devices may be implanted or attached through the skin so that a pulling force (traction) may be applied to the skeletal system.
(b)
Classification. Class II.