Search Results
Found 5 results
510(k) Data Aggregation
(69 days)
The X-Guide® Surgical Navigation System is a computerized navigational system intended to provide assistance in both the preoperative planning phase and intra-operative surgical phase of dental implantation procedures. The system provides software to preoperatively plan dental implantation procedures and provides navigational guidance of the surgical instruments.
The device is intended for use for partially edentulous and edentulous adult and geriatric patients who need dental implants as part of their treatment plan.
The X-Guide® Surgical Navigation System is an electro-optical system that assists dental implantation procedures by providing the surgeon with accurate surgical tool placement and guidance based on a surgical plan built upon Computed Tomographic (CT scan) data.
The X-Guide® Surgical Navigation System is currently cleared (K150222) with Bone Screws which are intended to affix an Edentulous Clip to an edentulous patient's anatomy. The Edentulous Clip is necessary to attach tracking patterns to facilitate the navigation and tracking process, and is calibrated to the patient anatomy and CT.
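The calibration of the tracking hardware to the patient anatomy and CT described above is, in general terms, a rigid registration problem between marker positions measured by the optical tracker and the same landmarks in CT coordinates. The sketch below is a minimal, hypothetical illustration of paired-point rigid registration (the Kabsch/SVD method); it is not the X-Guide® system's actual algorithm, and all fiducial coordinates and offsets are invented for the example.

```python
import numpy as np

def rigid_register(tracker_pts, ct_pts):
    """Paired-point rigid registration (Kabsch/SVD).

    Returns rotation R and translation t such that R @ p + t maps each
    tracker-space fiducial p onto its CT-space counterpart. Both inputs are
    N x 3 arrays whose rows correspond to the same physical markers.
    """
    c_trk = tracker_pts.mean(axis=0)
    c_ct = ct_pts.mean(axis=0)
    H = (tracker_pts - c_trk).T @ (ct_pts - c_ct)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_ct - R @ c_trk
    return R, t

# Hypothetical fiducial positions (mm) of a tracking pattern located in CT space.
ct_fiducials = np.array([[10.0, 0.0, 0.0],
                         [0.0, 15.0, 0.0],
                         [0.0, 0.0, 20.0],
                         [10.0, 15.0, 5.0]])

# Simulate the optical tracker observing the same markers in its own frame:
# a 30-degree rotation about z plus an arbitrary translation.
theta = np.deg2rad(30.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
tracker_fiducials = ct_fiducials @ true_R.T + np.array([5.0, -2.0, 8.0])

R, t = rigid_register(tracker_fiducials, ct_fiducials)
fre = np.linalg.norm(tracker_fiducials @ R.T + t - ct_fiducials, axis=1).mean()
print(f"mean fiducial registration error: {fre:.2e} mm")  # ~0 for noise-free points
```

In a navigated procedure the quality of this registration (often summarized as fiducial registration error) is what links the tracked instruments to the CT-based surgical plan.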
As an alternative, the proposed EDX Bone Screws and EDX Nut may be used to secure tracking patterns to edentulous patient anatomy.
The proposed EDX Bone Screws are tapered titanium screws available in diameters of 2.7 mm and 3.2 mm, with thread lengths of 7 mm and stud lengths of 4 mm and 16 mm. In addition to the dimensional design changes, the screws have been modified with M5 threading to accept the EDX Nut. The addition of M5 threading to the Bone Screw is necessary to secure the tracking patterns to the patient.
The 2.7 mm diameter EDX Bone Screws are typically placed to secure the EDX Tracker Arm. Should the bone be too soft, resulting in a loose Bone Screw, a larger-diameter "rescue" EDX Bone Screw (3.2 mm diameter) can be placed.
All of the EDX Bone Screws and EDX Bone Nuts are manufactured from Ti6Al4V (Grade 5) alloy per ASTM F136 and adhere to the requirements of ASTM F543.
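For context, ASTM F543 covers mechanical characterization of metallic bone screws such as torsional strength, driving torque, and axial pullout. A commonly used first-order estimate of pullout strength treats the engaged thread as a cylindrical shear surface; the formula and every number below are illustrative assumptions, not data from this submission:

$$ F_{\text{pullout}} \approx S \,\pi\, d_{\text{major}}\, L_{\text{thread}} \approx (10\ \text{MPa}) \times \pi \times (2.7\ \text{mm}) \times (7\ \text{mm}) \approx 5.9 \times 10^{2}\ \text{N}, $$

where $S$ is an assumed shear strength of the surrounding bone, $d_{\text{major}}$ is the screw's major diameter, and $L_{\text{thread}}$ is the engaged thread length.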
The EDX Bone Screws are intended to be removed at the conclusion of the surgical implant procedure. The EDX Bone Screws are single use devices, sold in a non-sterile state and intended to be sterilized by the end user prior to use.
The body of the proposed EDX Tracker Arms is stainless steel, a harder material that provides more stability / rigidity in the Tracker Arm. A passivation coating is added to prevent oxidation of the steel during steam sterilization.
The EDX Tracker Arms include a spike intended to add stability during EDX Tracker Arm registration.
The bodies of the proposed EDX Tracker Arms are contoured in a variety of geometric shapes intended to minimize interference with patient soft tissue by positioning on the Mandible and Maxilla in the posterior regions of the oral cavity.
The EDX Tracker Arms are distributed in a non-sterile state, and intended to be sterilized by the end user prior to use.
This is an FDA 510(k) Premarket Notification for the X-Guide® Surgical Navigation System, specifically addressing new components (EDX Bone Screws, EDX Nut, and EDX Tracker Arms). The document details the device description, comparison to predicate and reference devices, and non-clinical performance testing.
Here's the breakdown of the acceptance criteria and study information, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance numbers for the new components in terms of clinical accuracy or effectiveness. Instead, the "Performance Testing - Non-Clinical" section describes various tests performed and their conclusions, primarily focusing on demonstrating equivalence to the predicate device or adherence to existing standards.
Acceptance Criteria Category | Description | Reported Device Performance |
---|---|---|
Mechanical Properties | The predicate device's bone screws were tested per ASTM F543-17. The new EDX Bone Screws should be substantially equivalent in mechanical properties. | A review of test results and comparison demonstrated that the EDX Bone Screws are substantially equivalent to the predicate device Bone Screws. Additional mechanical testing was conducted to assess comparative deflection and stability of the EDX assembly under load conditions (specific results not detailed). |
Biocompatibility | Per FDA Guidance document 1811, "Use of International Standard ISO 10993-1," biocompatibility testing is not necessary if the material, process, and tissue contact are comparable to a legally marketed predicate device, and manufacturing does not adversely impact biocompatibility. Otherwise, cytotoxicity, sensitization, and irritation testing are required. | The EDX Bone Screws, EDX Bone Screw Nut, and EDX Tracker Arms were deemed equivalent in material, process, and tissue contact to the predicate device. Therefore, biocompatibility results from the predicate device (K150222) were used to demonstrate compliance. |
Sterilization | The new components (EDX Bone Screws) should meet a sterility assurance level (SAL) of ≤10^-6 using the biological indicator (BI) overkill method (a worked illustration follows this table). Cleaning and sterilization validation per ISO 17665-1 and the FDA Reprocessing Guidance Document. | The EDX Bone Screws are considered geometrically similar to the predicate system, which successfully met a SAL of ≤10^-6. Cleaning and sterilization validation conducted per ISO 17665-1 and the FDA Reprocessing Guidance Document were provided. |
Effect on Navigation | The changes in the EDX components (bone screws, nut, tracker arms) should not raise new issues of substantial equivalence, implying that the navigational accuracy and performance of the X-Guide® system should remain consistent with the cleared predicate. | "(S)ubstantial equivalence between the predicate and the proposed EDX Bone Screw, EDX Tracker Arms, and EDX Bone Screw Nut has been demonstrated throughout this submission by performance and bench testing, comparison of intended clinical use, and similarity in materials." |
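As a brief illustration of how the BI overkill method supports the SAL cited in the Sterilization row above (the $10^{6}$ starting spore population and the 12-log-reduction cycle sizing are the conventional assumptions of the overkill approach, not values reported in this submission):

$$ N(t) = N_{0}\cdot 10^{-t/D}, \qquad N_{0} = 10^{6},\; t \ge 12D \;\Rightarrow\; N \le 10^{6-12} = 10^{-6}, $$

where $N_{0}$ is the biological indicator's spore population, $D$ is the D-value (exposure time for a one-log reduction at the cycle temperature), and $t$ is the delivered exposure time, giving a probability of a surviving microorganism of at most $10^{-6}$ per unit.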
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated for each specific test mentioned (e.g., mechanical testing, sterilization validation). The document uses a comparative approach, relying on the substantial equivalence to previous tests or established standards.
- Data Provenance: Not explicitly stated (e.g., country of origin). The testing seems to be internal or conducted by the manufacturer based on regulatory standards. The studies referred to are "performance and bench testing." The phrase "Additional mechanical testing was conducted on the subject device" suggests this was done for the current submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Not applicable. The provided document details non-clinical performance testing and a comparison to a predicate device for regulatory clearance. It does not describe a study involving human experts to establish ground truth for a test set in the context of diagnostic accuracy or clinical decision-making.
4. Adjudication Method for the Test Set
Not applicable, as there is no mention of a test set requiring adjudication in the context of expert review or consensus.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done for this submission. The document explicitly states: "Clinical testing was not necessary for the substantial equivalence determination."
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The device itself is a surgical navigation system, which inherently involves a human (the surgeon) in the loop. The "performance and bench testing" described are non-clinical and focus on the physical components and their properties, not a standalone algorithmic performance in the absence of human interaction.
7. The Type of Ground Truth Used
The "ground truth" for the non-clinical tests is established by:
- Established Standards: e.g., ASTM F543-17 for mechanical properties, ISO 10993-1 for biocompatibility, ISO 17665-1 for sterilization.
- Predicate Device Data: Performance characteristics of the previously cleared X-Guide® Surgical Navigation System (K150222) serve as a baseline for demonstrating substantial equivalence of the new components.
8. The Sample Size for the Training Set
Not applicable. This document is not describing an AI/algorithm where a "training set" would be used in the traditional sense. The X-Guide® system is a navigation system that uses CT scan data for preoperative planning and intra-operative guidance, not a machine learning model that requires a training set for its development.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no training set for an AI/algorithm described in this document.
(139 days)
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used in an additive manufacturing portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in thoracic (excluding spine) and reconstructive surgeries. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing equipment intended to provide a variety of outputs to support thoracic (excluding spine) and reconstructive surgeries. The system uses electronic medical images of the patients' anatomy (CT data), with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports.
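The processing step described above (CT images in, patient-specific digital models out) can be illustrated in generic terms as threshold segmentation followed by surface extraction. The sketch below is a minimal, hypothetical example on a synthetic volume; it does not represent the IPS Planning System's actual pipeline (which relies on validated commercial applications such as Materialise Mimics and Geomagic® Freeform Plus™), and the 300 HU bone threshold and 0.5 mm voxel spacing are assumed values.

```python
import numpy as np
from skimage import measure  # scikit-image marching cubes

# Synthetic "CT" volume in Hounsfield units: soft-tissue background (~40 HU)
# with a spherical bone-like region (~700 HU) standing in for real scan data.
shape = (64, 64, 64)
zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
radius = np.sqrt((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2)
volume = np.where(radius < 20, 700.0, 40.0)

# Threshold segmentation at an assumed bone threshold of 300 HU, then extract
# a triangulated surface with marching cubes at the mask boundary.
bone_mask = (volume > 300.0).astype(np.float32)
verts, faces, normals, _ = measure.marching_cubes(
    bone_mask, level=0.5, spacing=(0.5, 0.5, 0.5)  # assumed isotropic voxel size in mm
)

print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
# In a real planning workflow the mesh would be reviewed with the physician and
# exported (e.g., as STL) for the additive manufacturing portion of the system.
```

In practice the threshold, cleanup, and any manual refinement are driven by the trained engineer and physician during the planning session, which is why software verification and user acceptance testing, rather than standalone algorithm metrics, dominate the performance discussion that follows.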
The KLS Martin Individual Patient Solutions (IPS) Planning System is a medical device for surgical planning. The provided text contains information about its acceptance criteria and the studies performed to demonstrate its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes specific performance tests related to the materials and software used in the KLS Martin Individual Patient Solutions (IPS) Planning System. However, it does not explicitly provide a table of acceptance criteria with numerical targets and corresponding reported device performance values for the device's primary function of surgical planning accuracy or effectiveness. Instead, it relies on demonstrating that materials withstand sterilization, meet biocompatibility standards, and that software verification and validation were completed.
Here's a summary of the performance claims based on the provided text:
Acceptance Criteria Category | Reported Device Performance (Summary from text) |
---|---|
Material Performance | Polyamide (PA) Devices: Demonstrated ability to withstand multiple sterilization cycles and maintain ≥85% of initial tensile strength, leveraging data from K181241. Testing provides evidence of shelf life. Titanium Devices (additively manufactured): Demonstrated substantial equivalence to titanium devices manufactured using traditional (subtractive) methods, leveraging testing from K163579. These devices are identical in formulation, manufacturing processes, and post-processing. |
Biocompatibility | All testing (cytotoxicity, sensitization, irritation, and chemical/material characterization) was within pre-defined acceptance criteria, in accordance with ISO 10993-1. Adequately addresses biocompatibility for output devices and intended use. |
Sterilization | Steam sterilization validations performed for each output device for the dynamic-air-removal cycle in accordance with ISO 17665-1:2006 to a sterility assurance level (SAL) of 10-6 using the biological indicator (BI) overkill method. All test method acceptance criteria were met. |
Pyrogenicity | LAL endotoxin testing conducted according to AAMI ANSI ST72. Results demonstrate endotoxin levels below USP allowed limit for medical devices and meet pyrogen limit specifications. |
Software Verification & Validation | Performed on each individual software application (Materialise Mimics, Geomagic® Freeform Plus™) used in planning and design. Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were correctly and completely implemented and traceable to system requirements. Testing showed conformity with pre-defined specifications and acceptance criteria. Software documentation ensures mitigation of potential risks and performs as intended based on user requirements and specifications. |
Guide Specifications | Thickness (Cutting/Marking Guide): Min: 1.0 mm, Max: 20 mm. Width (Cutting/Marking Guide): Min: 7 mm, Max: 300 mm. Length (Cutting/Marking Guide): Min: 7 mm, Max: 300 mm. Degree of curvature (in-plane): N/A. Degree of curvature (out-of-plane): N/A. Screw hole spacing (Cutting/Marking Guide): Min: ≥4.5 mm, Max: No Max. No. of holes (Cutting/Marking Guide): N/A. |
Screw Specifications | Diameter (Temporary): 2.3 mm - 3.2 mm. Length (Temporary): 7 mm - 17 mm. Style: maxDrive (Drill-Free, non-locking, locking). |
2. Sample size used for the test set and the data provenance
The document specifies "simulated use of guides intended for use in the thoracic region was validated by means of virtual planning sessions with the end-user." However, it does not provide any specific sample size for a test set (e.g., number of cases or patients) or details about the provenance of data (e.g., retrospective or prospective, country of origin). The studies appear to be primarily focused on material and software validation, not a clinical test set on patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts used to establish ground truth for a test set, nor their qualifications. The "virtual planning sessions with the end-user" implies input from clinical professionals, but no details are provided.
4. Adjudication method for the test set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document states that "Clinical testing was not necessary for the determination of substantial equivalence." Therefore, an MRMC comparative effectiveness study involving AI assistance for human readers was not performed or reported for this submission. The device is a planning system for producing physical outputs, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The device is described as "a software system and image segmentation system for the transfer of imaging information... The system processes the medical images and produces a variety of patient specific physical and/or digital output devices." It also involves "input from the physician" and "trained employees/engineers who utilize the software applications to manipulate data and work with the physician to create the virtual planning session." This description indicates a human-in-the-loop system, not a standalone algorithm. Performance testing primarily focuses on the software's ability to implement requirements and specifications and material properties, rather than an independent algorithmic assessment.
Software verification and validation were performed on "each individual software application used in the planning and design," demonstrating conformity with specifications. This can be considered the standalone performance evaluation for the software components, ensuring they function as intended without human error, but it's within the context of supporting a human-driven planning process.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the materials and sterilization parts of the study, the "ground truth" or reference is established by international standards (ISO 10993-1, ISO 17665-1:2006) and national standards (AAMI ANSI ST72, USP allowed limit for medical devices).
For the "virtual planning sessions with the end-user," the ground truth is implicitly the expert judgment/agreement of the end-user (physician) on the simulated surgical treatment options and guide designs. No further specifics are given.
8. The sample size for the training set
The document does not mention any training set or its sample size. This is expected as the device is not described as a machine learning or AI device that requires a training set for model development in the typical sense. The software components are commercially off-the-shelf (COTS) applications (Materialise Mimics, Geomagic® Freeform Plus™) which would have their own internal validation and verification from their developers.
9. How the ground truth for the training set was established
Since no training set is mentioned for the device itself, the establishment of ground truth for a training set is not applicable in this document.
(284 days)
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a computerized tomography (CT) medical scan. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides and case reports for use in the marking of cranial bone in cranial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive cranial surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports for use in the marking of cranial bone in cranial surgery.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a software system and image segmentation system used for transferring imaging information from a CT scan. The system processes input data to produce output data files, which can be digital models or physical outputs like anatomical models, guides, and case reports for cranial surgery. It is also a pre-operative software tool for simulating surgical treatment options.
Here's an analysis of the acceptance criteria and supporting studies based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria Category | Reported Device Performance |
---|---|
Tensile & Bending Testing | Polyamide guides withstand multiple sterilization cycles without degradation and maintain 85% of initial tensile strength after 6 months. Additively manufactured titanium devices are equivalent to or better than traditionally manufactured titanium devices. |
Biocompatibility Testing | Polyamide devices meet pre-defined acceptance criteria (cytotoxicity, sensitization, irritation, chemical/material characterization, acute systemic toxicity, material-mediated pyrogenicity, indirect hemolysis). Titanium devices (including acute systemic toxicity, material-mediated pyrogenicity, indirect hemolysis) meet pre-defined acceptance criteria. |
Sterilization Testing | All output devices (polyamide, epoxy/resin/acrylic, titanium) achieve a sterility assurance level (SAL) of $10^{-6}$ using the biological indicator (BI) overkill method for steam sterilization. |
Pyrogenicity Testing | Devices contain endotoxin levels below the USP allowed limit for medical devices in contact with cerebrospinal fluid ( |
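For reference on the pyrogenicity row above, LAL testing under AAMI ANSI ST72 is typically evaluated against a per-device endotoxin limit obtained by multiplying the assay result by the extraction volume; the commonly cited USP limits (20 EU/device in general, 2.15 EU/device for devices contacting cerebrospinal fluid) and the example values below are given for illustration only and are not taken from this submission:

$$ E_{\text{device}} = c_{\text{LAL}} \times V_{\text{extract}}, \qquad \text{e.g. } c_{\text{LAL}} = 0.03\ \text{EU/mL},\; V_{\text{extract}} = 40\ \text{mL} \;\Rightarrow\; E_{\text{device}} = 1.2\ \text{EU/device} \le 2.15\ \text{EU/device}. $$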
(161 days)
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, splints, and case reports for use in maxillofacial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive and orthognathic surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, splints and case reports.
The provided text describes the KLS Martin Individual Patient Solutions (IPS) Planning System and its regulatory clearance (K182789) by the FDA. However, it does not contain information about a study proving the device meets specific acceptance criteria in the context of an AI/algorithm's performance.
Instead, the document is a 510(k) summary demonstrating substantial equivalence to a predicate device (K181241), primarily to expand the patient population to include pediatric subgroups. The core of the device is a planning system involving commercial off-the-shelf (COTS) software and additive manufacturing for physical outputs, with human-in-the-loop interaction from trained employees/engineers and physicians.
Therefore, most of the requested information regarding AI acceptance criteria, performance metrics, sample sizes, ground truth establishment, expert adjudication, and MRMC studies is not present in the provided document. The device, as described, is not an AI algorithm in the sense that it performs automated diagnostic or treatment recommendations independently based on image analysis with defined metrics. It's a system to assist human planning processes.
Here's an analysis based on the information available in the document, and a clear indication of what is not available.
Acceptance Criteria and Device Performance (as far as applicable from the document)
The document focuses on demonstrating substantial equivalence, not on acceptance criteria for a freestanding AI algorithm's performance. The "performance" described is primarily related to material properties, biocompatibility, and sterilization, and the functioning of the software as a tool for human planning.
Acceptance Criteria (Inferred from Substantial Equivalence and Safety/Performance) | Reported Device Performance (from document) |
---|---|
Material Degradation (Polyamide Guides) | Subject polyamide guides can withstand multiple sterilization cycles without degradation and can maintain 85% of their initial tensile strength (a worked check appears below this table). Demonstrates a shelf life of 6 months. (p.10) |
Titanium Device Equivalency | Additively manufactured titanium devices are equivalent to, or better than, titanium devices manufactured using traditional (subtractive) methods, leveraging data from reference device K163579. (p.10) |
Biocompatibility | Biocompatibility endpoints (cytotoxicity, sensitization, irritation, chemical/material characterization) for both polyamide and titanium manufactured devices met pre-defined acceptance criteria (leveraged from predicate K181241 and reference K163579). (p.10) |
Sterility Assurance Level (SAL) | Achieved an SAL of 10^-6 for each output device using the biological indicator (BI) overkill method for steam sterilization. Validations for polyamide and titanium leveraged from predicate K181241 and reference K163579. (p.10) |
Pyrogenicity (Endotoxin Levels) | Endotoxin levels were below the USP allowed limit for medical devices and met pyrogen limit specifications, leveraging data from reference device K163579 for titanium. (p.10) |
Software Functionality and Validation | Quality and on-site user acceptance testing provided objective evidence that all software requirements and specifications were implemented correctly and completely and are traceable to system requirements. Testing from risk analysis showed conformity with pre-defined specifications and acceptance criteria. Software documentation demonstrates mitigation of potential risks and performance as intended. (p.11, p.14) |
Safety and Effectiveness in Pediatric Subpopulations | A risk assessment based on FDA guidance and supporting peer-reviewed clinical literature was performed. The conclusion is that expanding the patient population to neonates, infants, and children does not identify new issues of safety or effectiveness and the device is substantially equivalent to the predicate. (p.5, p.11, p.14) Note: This is a risk assessment and literature review, not a new clinical performance study. |
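As a worked check of the tensile-strength retention criterion referenced in the Material Degradation row above (the 48 MPa baseline is a hypothetical value typical of laser-sintered polyamide, not a figure from the submission):

$$ \frac{\sigma_{\text{post-sterilization}}}{\sigma_{\text{initial}}} \ge 0.85, \qquad \text{e.g. } \sigma_{\text{initial}} = 48\ \text{MPa} \;\Rightarrow\; \sigma_{\text{post-sterilization}} \ge 0.85 \times 48\ \text{MPa} \approx 41\ \text{MPa}. $$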
Information NOT Available in the Document (as it pertains to an AI/Algorithm performance study):
- Sample size used for the test set and data provenance: This information is not provided. The study performed was primarily non-clinical (material testing, biocompatibility, sterilization) and a risk assessment for pediatric use, not a clinical trial evaluating algorithm performance on a test set of patient data.
- Number of experts used to establish the ground truth for the test set and their qualifications: Not applicable, as no dedicated test set for evaluating AI/algorithm performance against a ground truth is described. The system relies on physician input and interaction with trained employees/engineers for planning, not on an autonomous algorithmic output that requires expert adjudication for ground truth.
- Adjudication method for the test set: Not applicable for the same reasons as above.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence." (p.11, p.14). The device is a planning system with human-in-the-loop, not an AI intended to improve human readers' performance directly.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: No, a standalone algorithm performance study was not described. The device is presented as a "software system and image segmentation system" where "trained employees/engineers... manipulate data and work with the physician to create the virtual planning session." (p.4, p.13) The core function described is human-assisted planning and production of physical models, not an autonomous algorithmic output.
- The type of ground truth used: Ground truth in the context of an AI algorithm's diagnostic or predictive performance is not relevant here, as no such AI is described. The "ground truth" for the device's function would relate to the accuracy of the generated physical models and plans relative to the patient's anatomy and surgical intent, which is managed through human interaction and validation within the system's intended use.
- The sample size for the training set: Not applicable. The document refers to "commercially off-the-shelf (COTS) software applications" (p.4, p.13), which implies existing, validated software tools are being used, not a newly developed AI model requiring a separate training set.
- How the ground truth for the training set was established: Not applicable, as no new training set for an AI model is described.
(126 days)
The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, splints, and case reports for use in maxillofacial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options.
The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive and orthognathic surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, splints and case reports.
Here's an analysis of the acceptance criteria and study information based on the provided text, focusing on what is present and what is not:
The document (K181241 510(k) Summary) describes the KLS Martin Individual Patient Solutions (IPS) Planning System, which is a software system for image segmentation and pre-operative planning, and also provides physical outputs like anatomical models, guides, and splints for maxillofacial surgery. The submission focuses on demonstrating substantial equivalence to a predicate device (VSP System K120956) and uses several reference devices.
Acceptance Criteria and Study Information:
Based on the provided text, the device itself is a planning system that ultimately produces physical outputs. The "performance" being evaluated relates to the characteristics of these physical outputs (tensile strength, biocompatibility, sterility, pyrogenicity) and the software's functionality. There isn't a direct "device performance" metric in the traditional sense of an AI diagnostic device's sensitivity, specificity, or accuracy against a clinical outcome.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the device (planning system with physical outputs, not a diagnostic AI), the performance metrics are primarily related to safety and manufacturing quality.
Acceptance Criteria Category | Specific Acceptance Criteria (as implied) | Reported Device Performance (as summarized) |
---|---|---|
Tensile & Bending | Polyamide guides maintain 85% of initial tensile strength after multiple sterilization cycles. Demonstrate a 6-month shelf life. Titanium devices are equivalent or better than traditional methods. | Polyamide guides met the criteria, demonstrating resistance to degradation after sterilization and supporting a 6-month shelf life. Titanium test results were leveraged from a reference device (K163579) and confirmed equivalence or superiority to traditional manufacturing. |
Biocompatibility | Meet pre-defined acceptance criteria for cytotoxicity, sensitization, irritation, and chemical/material characterization (according to ISO 10993-1). | All conducted tests (cytotoxicity, sensitization, irritation, chemical/material characterization) for subject devices (polyamide and titanium) were within pre-defined acceptance criteria. Titanium results also leveraged from K163579. |
Sterilization | Achieve a Sterility Assurance Level (SAL) of 10^-6 for each output device using the BI overkill method for steam sterilization (according to ISO 17665-1:2006 for dynamic-air-removal cycle). | All test method acceptance criteria were met, achieving the specified SAL of 10^-6. Validations for titanium were leveraged from K163579. |
Pyrogenicity | Meet pyrogen limit specifications, with endotoxin levels below USP allowed limit for medical devices (according to AAMI ANSI ST72 for LAL endotoxin testing). | The devices contain endotoxin levels below the USP allowed limit for medical devices and meet pyrogen limit specifications. Testing for titanium was leveraged from K163579. |
Software Verification & Validation (V&V) | All software requirements and specifications are implemented correctly and completely, traceable to system requirements. Conformity with pre-defined specifications and acceptance criteria based on risk analysis and impact assessments. Mitigation of potential risks and performance as intended based on user requirements. | Quality and on-site user acceptance testing provided objective evidence of correct and complete implementation of software requirements, traceability, and conformity with specifications and acceptance criteria. Software documentation demonstrated risk mitigation and intended performance. (Note: Specific quantitative metrics for software performance are not provided in this summary). |
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The document does not specify a "test set" in the context of a dataset for evaluating AI performance (e.g., medical images for segmentation accuracy). Instead, "testing" refers to non-clinical performance evaluations of the physical outputs and software.
- For Tensile & Bending, Biocompatibility, Sterilization, and Pyrogenicity, the sample sizes are not explicitly mentioned, but the tests were performed on "the subject polyamide guides" and "titanium" components. The provenance is internal testing performed by the manufacturer, or results leveraged from previous KLS Martin device submissions.
- For Software V&V, no specific numerical "test set" of software inputs is given, but testing was performed on "each individual software application."
- Data Provenance: The data provenance for non-clinical testing is internal to the manufacturer or relied upon previous regulatory submissions for similar materials/processes. It is not patient or country-specific data as would be for clinical studies. The data used by the IPS Planning System itself (CT data) would be patient data, but the evaluation here is of the system's outputs, not its interpretation of patient data in a diagnostic manner. The document states the system "transfers imaging information from a medical scanner such as a CT based system."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This question is not applicable in the context of this 510(k) submission. The "ground truth" for the non-clinical tests (tensile strength, biocompatibility, etc.) is established by standard scientific and engineering methodologies, not by expert medical review of images.
- For the "Software Verification and Validation," the "ground truth" for software functionality is defined by the established software requirements and specifications, validated by internal quality and user acceptance testing, not by external experts in the medical domain. The document mentions "input from the physician" for manipulation of original patient images, suggesting physicians set the clinical goals for the plan, but the validation of the system's performance is not described as involving experts establishing a "ground truth" concerning image interpretation.
4. Adjudication method for the test set
- Not applicable. There is no adjudication method described as would be used for clinical interpretation or diagnostic performance evaluation by multiple experts. The non-clinical tests follow established standards and protocols.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done.
- This device is a planning system for maxillofacial surgery, not a diagnostic AI that assists human readers in image interpretation or diagnosis. It aids in surgical planning and creates physical outputs. The submission explicitly states: "Clinical testing was not necessary for the determination of substantial equivalence."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Given the device's function, it is inherently a "human-in-the-loop" system. The description states: "The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery." And, "The physician provides input for model manipulation and interactive feedback through viewing of digital models...that are modified by the trained employee/engineer during the planning session."
- Therefore, performance of the algorithm without human intervention is not the intended use or focus of this submission. The "software verification and validation" (Section 11) is the closest thing to an "algorithm-only" evaluation, but it's about software functionality, not standalone image interpretation performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical performance tests of the physical outputs (Tensile & Bending, Biocompatibility, Sterilization, Pyrogenicity), the "ground truth" is defined by established engineering and scientific standards (e.g., ISO 10993-1, ISO 17665-1:2006, AAMI ANSI ST72, and internal specifications).
- For Software Verification & Validation, the "ground truth" is adherence to predefined software requirements and specifications (functional and non-functional, related to image transfer, manipulation, and output file generation). It is not based on medical "ground truth" like pathology or clinical outcomes.
8. The sample size for the training set
- This question is not applicable. The KLS Martin IPS Planning System is described as using "validated commercially off-the-shelf (COTS) software applications" for image manipulation. There is no mention of a "training set" in the context of machine learning or AI model development within this summary. It appears to be a rule-based or conventional algorithmic system rather than a deep learning/machine learning model that would require a distinct training set.
9. How the ground truth for the training set was established
- This question is not applicable, as no training set for machine learning was mentioned or identified in the document (see point 8).