Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K210731
    Manufacturer
    Date Cleared
    2022-07-18

    (494 days)

    Product Code
    Regulation Number
    872.4760
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices :

    K943347, K170272, K182789

    Intended Use

    KLS Martin Individual Patient Solutions (IPS) is intended as a pre-operative software tool for simulating / evaluating surgical treatment options and as a software and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data is processed by the IPS software and the result is an output data file that may then be provided as digital models or used as input in an additive manufacturing portion of the system that produces physical outputs including implants, anatomical models, guides, splints, and case reports for use in maxillofacial, midface, and mandibular surgery.

    KLS Martin Individual Patient Solutions (IPS) implant devices are intended for use in the stabilization, fixation, and reconstruction of the maxillofacial / midface and mandibular skeletal regions in children (2 years of age to <12 years of age), adolescents (12 years of age - 21 years of age), and adults.

    Device Description

    KLS Martin Individual Patient Solutions (IPS) is comprised of a collection of software and associated additive manufacturing equipment intended to produce various outputs to support reconstructive and orthognathic surgeries. The system processes the medical images to produce various patient-specific physical and/or digital output devices which include implants, anatomical models, guides, splints, and case reports.

    Patient-specific metallic bone plates are used in conjunction with metallic bone screws for internal fixation of maxillofacial, midface, and mandibular bones. The devices are manufactured based on medical imaging (CT scan) of the patient's anatomy with input from the physician during virtual planning and prior to finalization and production of the device. The physician provides input for model manipulation and interactive feedback by viewing digital models of planned outputs that are modified by trained KLS Martin engineers during the planning session. For each design iteration, verification is performed by virtually fitting the generated output device over a 3D model of the patient's anatomy to ensure its dimensional properties allow an adequate fit.
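    The per-iteration verification described above — virtually fitting the generated device over a 3D model of the patient's anatomy — can be sketched as a nearest-neighbour distance check between mesh point clouds. This is only a minimal illustration with made-up points and a hypothetical 0.5 mm tolerance; the actual IPS fit criteria and tooling are not disclosed in the document.

    ```python
    import numpy as np

    def max_fit_deviation(device_points, anatomy_points):
        """Largest distance from any device vertex to its nearest anatomy vertex.

        Brute-force nearest-neighbour check; production planning software
        would use a KD-tree and true point-to-surface distances instead.
        """
        # Pairwise distance matrix of shape (n_device, n_anatomy).
        diffs = device_points[:, None, :] - anatomy_points[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        return float(dists.min(axis=1).max())

    # Hypothetical check: accept the design iteration if every vertex of the
    # planned implant lies within 0.5 mm of the reconstructed anatomy surface.
    anatomy = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    device = anatomy + 0.1  # implant offset by 0.1 mm along each axis
    deviation = max_fit_deviation(device, anatomy)
    print(deviation <= 0.5)  # True
    ```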

    Implants are provided non-sterile and are manufactured using traditional (subtractive) or additive manufacturing methods from either CP Titanium (ASTM F67) or Ti-6Al-4V (ASTM F136). These patient-specific devices are fixated with previously cleared KLS Martin screws.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study information for the KLS Martin Individual Patient Solutions device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the KLS Martin Individual Patient Solutions device primarily revolve around demonstrating substantial equivalence to predicate devices and ensuring the safety and effectiveness of the device, particularly for the expanded pediatric population and new specifications. The performance is assessed through various non-clinical tests and a review of clinical literature.

    | Acceptance Criteria Category | Specific Criteria/Tests | Reported Device Performance |
    | --- | --- | --- |
    | Material Properties | Biocompatibility (ISO 10993-1) | Cytotoxicity, chemical analysis, sensitization, irritation, and chemical/material characterization leveraged from predicate/reference devices for titanium, synthetic polymers, and acrylic resins. New photopolymer resin for splints passed cytotoxicity, sensitization, irritation, and material-mediated pyrogenicity testing. |
    | Mechanical Properties | Bending Resistance and Fatigue Life (ASTM F382) | Determined to be substantially equivalent to K943347 plates (reference device). New worst-case midface, orbit, and mandible plate designs were tested. |
    | Sterilization | Sterility Assurance Level (SAL) of 10^-6 (ISO 17665-1:2006) | Validations for titanium devices leveraged from K191028. Validations for synthetic polymers and acrylic resins leveraged from K182789. New photopolymer resin for splints also underwent sterilization validation, with acceptance criteria met. |
    | Pyrogenicity | LAL endotoxin testing (AAMI ANSI ST72) | Endotoxin levels below USP allowed limit for medical devices, meeting pyrogen limit specifications. Leveraged from K191028 for titanium devices. |
    | Software Performance | Software Verification and Validation | Objective evidence that all software requirements and specifications were correctly and completely implemented, traceable to system requirements. Demonstrated conformity with predefined specifications and acceptance criteria. |
    | Clinical Performance (Pediatric Expansion) | Risk mitigation assessments (FDA Guidance "Premarket Assessment of Pediatric Medical Devices") and review of peer-reviewed clinical literature | Risk assessments addressed various pediatric risk factors. Six clinical studies (patients 18 months to 18 years) were analyzed to support safety and effectiveness in pediatric subpopulations (2 to <12 years, and 12 to 21 years of age), with noted precautions for growth impact and radiation exposure. |
    | Substantial Equivalence | Differences in technological characteristics do not raise new or different questions of safety and effectiveness | Non-clinical performance testing, clinical performance data review, risk analysis, and incorporation of reference devices demonstrated substantial equivalence. |
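    The SAL of 10^-6 cited for sterilization is the standard overkill target of ISO 17665-style moist-heat validation. As a rough illustration of the underlying log-reduction arithmetic (the bioburden and D-value below are hypothetical, not from the submission):

    ```python
    import math

    def min_exposure_time(bioburden, d_value_min, sal=1e-6):
        """Minimum exposure time (minutes) at the D-value's reference
        temperature to reduce `bioburden` organisms/device to the target SAL.

        Survivors follow N(t) = N0 * 10**(-t / D), so the requirement is
        t >= D * (log10(N0) - log10(SAL)).
        """
        log_reductions = math.log10(bioburden) - math.log10(sal)
        return d_value_min * log_reductions

    # Hypothetical example: 10^3 CFU/device bioburden and D121 = 1.5 min
    # gives 9 log reductions, i.e. 13.5 min at 121 degC before safety margin.
    t = min_exposure_time(1e3, 1.5)
    print(t)  # 13.5
    ```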

    2. Sample Size Used for the Test Set and Data Provenance

    The provided text primarily details non-clinical testing and a review of clinical literature rather than a specific test set for the device's performance in a diagnostic or predictive context.

    • Non-clinical testing: The sample sizes for mechanical, biocompatibility, sterilization, and pyrogenicity testing are not explicitly stated in the provided document. These tests are typically conducted on a representative number of device samples according to established standards.
    • Clinical Literature Review: The clinical performance data comes from 6 clinical studies that analyzed patients aged 18 months to 18 years. The provenance of this data is retrospective, as it involves a review of published literature findings rather than a new prospective clinical trial conducted by KLS Martin. The specific countries of origin for these studies are not mentioned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not applicable in the context of the provided document. The document describes a 510(k) submission for a medical device (surgical planning software and implants), not an AI/ML diagnostic or predictive device that typically requires expert-established ground truth for a test set. The "ground truth" for this device's performance would be the successful outcome of surgical planning and the functional stability of the implants, assessed through non-clinical means and literature review, rather than expert annotation of data.

    4. Adjudication Method for the Test Set

    This information is not applicable for the reasons stated in point 3.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    No, an MRMC comparative effectiveness study was not done for this device. The device is a surgical planning tool and implant system, not an AI-assisted diagnostic tool that would typically be evaluated with MRMC studies to assess human reader improvement.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The device encompasses both software (for pre-operative planning and image segmentation) and physical outputs (implants, models, guides). The software component, KLS Martin Individual Patient Solutions (IPS), is described as a "pre-operative software tool for simulating / evaluating surgical treatment options as a software and image segmentation system." The software itself undergoes "Software Verification and Validation" to ensure it performs as intended based on user requirements and specifications. This suggests the software functionality is evaluated as a standalone component in terms of its technical accuracy and adherence to specifications. However, its effectiveness is intrinsically linked to its use in the context of surgical planning involving human input (physician feedback). Therefore, while the software's functional performance is verified, it is not described as having an 'algorithm only without human-in-the-loop performance' study in the way an AI diagnostic algorithm might be evaluated.

    7. The Type of Ground Truth Used

    • Non-clinical Performance: The "ground truth" for the non-clinical tests (mechanical, biocompatibility, sterilization, pyrogenicity) is defined by established industry standards and regulatory requirements (e.g., ASTM F382, ISO 10993-1, ISO 17665-1:2006, AAMI ANSI ST72). The device's performance is compared against these standards or against predicate devices that have already met these standards.
    • Clinical Performance (Pediatric Expansion): The "ground truth" for supporting the expanded pediatric indications comes from peer-reviewed clinical literature (6 studies cited) that assessed the safety and effectiveness of similar bone plate devices and the subject device's components in pediatric populations, following FDA guidance on "Use of Real-World Evidence."

    8. The Sample Size for the Training Set

    This information is not provided in the document. Software for medical devices, especially those involving image processing and CAD/CAM, often utilizes pre-existing algorithms and models rather than being trained from scratch on large datasets in the way a deep learning AI might be. If any training was involved, the details are not disclosed here.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the document. As stated above, it is unclear if a "training set" in the context of machine learning was used. If the software involves image segmentation or manipulation, it likely relies on validated algorithms rather than a dynamically trained model requiring ground truth from human annotations for training.


    K Number
    K193301
    Device Name
    coDiagnostiX
    Manufacturer
    Date Cleared
    2021-06-21

    (570 days)

    Product Code
    Regulation Number
    872.4120
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices :

    K182789, K112280

    Intended Use

    coDiagnostiX is an implant planning and surgery planning software tool intended for use by dental professionals who have appropriate knowledge in the field of application. The software reads imaging information output from medical scanners such as CBCT or CT scanners.

    It is indicated for pre-operative simulation and evaluation of patient anatomy, dental implant placement, surgical instrument positioning, and surgical treatment options, in edentulous, partial edentulous or dentition situations, which may require a surgical guide. It is further indicated for the user to design such guides for, alone or in combination, the guiding of a surgical path along a trajectory or a profile, or to help evaluate a surgical preparation or step.

    coDiagnostiX software allows for surgical guide export to a validated manufacturing center or to the point of care. Manufacturing at the point of care requires a validated process using CAM equipment (additive manufacturing system, including software and associated tooling) and compatible material (biocompatible and sterilizable). A surgical guide may require to be used with accessories.

    Device Description

    The main uses and capabilities of the coDiagnostiX software are unchanged from the primary predicate version.

    As in the primary predicate version, it is a software for dental surgical treatment planning. It is designed for the evaluation and analysis of 3-dimensional datasets and the precise image-guided and reproducible preoperative planning of dental surgeries.

    The first main steps in its workflow include the patient image data being received from CBCT (Cone Beam Computed Tomography) or CT. The data in DICOM format is then read with the coDiagnostiX DICOM transfer module according to the standard, converted into 3-dimensional datasets and stored in a database.
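    The DICOM-to-3D-dataset step described above amounts to sorting the slice images by position and stacking them into a volume rescaled to Hounsfield units. A minimal sketch, with synthetic arrays standing in for decoded DICOM pixel data (the actual coDiagnostiX transfer module is proprietary; slope/intercept correspond to the DICOM RescaleSlope/RescaleIntercept tags):

    ```python
    import numpy as np

    def slices_to_hu_volume(pixel_arrays, rescale_slope=1.0, rescale_intercept=-1024.0):
        """Stack equally shaped 2D CT slices into a 3D Hounsfield-unit volume.

        `pixel_arrays` must already be sorted by slice position (in real DICOM
        data that ordering comes from ImagePositionPatient); slope/intercept
        convert stored pixel values to HU.
        """
        volume = np.stack(pixel_arrays, axis=0).astype(np.float32)
        return volume * rescale_slope + rescale_intercept

    # Two tiny fake 2x2 slices: stored value 1024 maps to 0 HU (water).
    slices = [np.full((2, 2), 1024), np.full((2, 2), 1024)]
    vol = slices_to_hu_volume(slices)
    print(vol.shape)  # (2, 2, 2)
    ```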

    The pre-operative planning is performed by the computation of several views (such as a virtual Orthopantomogram or a 3-dimensional reconstruction of the image dataset), by the analysis of the image data, and the placement of surgical items (i.e. sleeves, implants) upon the given views. The pre-operative planning is then followed as decided by the design of a corresponding surgical guide that reflects the assigned placement of the surgical items.

    Additional functions are available to the user for refinement of the preoperative planning, such as:

    • Active measurement tools, length and angle, for the assessment of surgical treatment options;
    • Nerve module to assist in distinguishing the nervus mandibularis canal;
    • 3D sectional views through the jaw for fine adjustment of surgical treatment options;
    • Segmentation module for coloring several areas inside the slice dataset, e.g., jawbone, native teeth, or types of tissue such as bone or skin, and creating a 3D reconstruction for the dataset;
    • Parallelizing function for the adjustment of adjacent images; and
    • Bone densitometry assessment, with a density statistic in areas of interest.
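    The length and angle measurement tools in the list above reduce to standard vector geometry on user-placed 3D landmarks. A minimal sketch of that geometry (not the coDiagnostiX implementation, whose internals are not public):

    ```python
    import numpy as np

    def length_mm(p, q):
        """Euclidean distance between two 3D landmark points, in mm."""
        return float(np.linalg.norm(np.asarray(q, float) - np.asarray(p, float)))

    def angle_deg(vertex, a, b):
        """Angle at `vertex` between rays vertex->a and vertex->b, in degrees."""
        u = np.asarray(a, float) - np.asarray(vertex, float)
        v = np.asarray(b, float) - np.asarray(vertex, float)
        cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        # Clip guards against tiny floating-point overshoot outside [-1, 1].
        return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

    print(length_mm([0, 0, 0], [3, 4, 0]))             # 5.0
    print(angle_deg([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # 90.0
    ```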

    All working steps are automatically saved to the patient file, which may contain multiple surgical treatment plan proposals, allowing the user to choose the ideal surgical treatment plan. The output file of the surgical guide and/or the guided surgery plan is then generated from the final surgical treatment plan.

    coDiagnostiX software allows for surgical guide export to a validated manufacturing center or to the point of care. Manufacturing at the point of care requires a validated process using CAM equipment (additive manufacturing system, including software and associated tooling) and compatible material (biocompatible and sterilizable). A surgical guide may require to be used with accessories.

    AI/ML Overview

    The provided document is a 510(k) Premarket Notification from the FDA for the coDiagnostiX dental implant planning and surgery planning software. It details the device's indications for use, comparison to predicate/reference devices, and non-clinical performance data used to demonstrate substantial equivalence.

    However, the document does not contain specific acceptance criteria for a device's performance (e.g., accuracy metrics or thresholds), nor does it describe a comparative study that proves the device meets such criteria with detailed quantitative results. The section on "Non-Clinical Performance Data" broadly discusses verification and validation, but lacks the granular data requested.

    Therefore, many of the requested points cannot be answered from the provided text.

    Here's an attempt to answer based on the available information, noting where information is absent:

    Device: coDiagnostiX (K193301)

    1. Table of Acceptance Criteria and Reported Device Performance

    Information Not Provided in Document: The document does not specify quantitative acceptance criteria or provide a table of reported device performance metrics against such criteria. It states that "The acceptance criteria are met" for sterilization validation and "Expected results are met" for process performance qualifications, but does not provide details on what those criteria or results actually were.

    2. Sample Size and Data Provenance for Test Set

    Information Not Provided in Document: The document mentions "software verification and validation" and "Biocompatibility testing" but does not specify sample sizes for any test sets (e.g., number of patient scans, number of manufactured guides). There is also no explicit mention of data provenance (e.g., country of origin, retrospective/prospective collection).

    3. Number and Qualifications of Experts for Ground Truth Establishment

    Information Not Provided in Document: The document states "software verification and validation is conducted to assure requirements and specifications as well as risk mitigations (design inputs) are correctly and completely implemented and traceable to design outputs." However, it does not specify if experts were involved in establishing ground truth for a test set, their number, or their qualifications. The device is intended for "dental professionals who have appropriate knowledge in the field of application."

    4. Adjudication Method for Test Set

    Information Not Provided in Document: No information regarding an adjudication method (e.g., 2+1, 3+1, none) for a test set is provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Information Not Provided in Document: The document focuses on demonstrating substantial equivalence primarily through technological characteristics and non-clinical data, rather than through an MRMC comparative effectiveness study involving human readers. There is no mention of such a study or an effect size for human reader improvement with AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    Information Not Provided in Document: While the document refers to "software verification and validation" to demonstrate that "the software performs as intended" and "the base accuracy is identical as compared to the predicate device," it does not provide details of a standalone (algorithm only) performance study with specific metrics, such as sensitivity, specificity, or accuracy values. It implies the software's capabilities (CAD type, image sources, output files) are unchanged from the predicate, and therefore its base accuracy is "identical."

    7. Type of Ground Truth Used

    Information Not Provided in Document: The document does not specify the type of ground truth used for any testing. It mentions "evaluation and analysis of 3-dimensional datasets" from CBCT or CT scans, but not how ground truth (e.g., for specific anatomical features, surgical path accuracy, etc.) was established for validation purposes.

    8. Sample Size for Training Set

    Information Not Provided in Document: The document does not provide any information about a training set or its sample size. This is a 510(k) submission primarily comparing the device to a predicate, not detailing the development or training of an AI algorithm from scratch. The changes described are primarily feature expansions to an existing software.

    9. How Ground Truth for Training Set Was Established

    Information Not Provided in Document: As no information on a training set is provided, there is no information on how its ground truth was established.


    Summary of Document's Approach to Meeting Requirements:

    The document primarily relies on demonstrating substantial equivalence to an existing predicate device (K130724 coDiagnostiX Implant Planning Software) and reference devices rather than presenting a de novo performance study with specific acceptance criteria and detailed quantitative results.

    The key arguments for substantial equivalence are:

    • The device has the "same intended use" as the primary predicate device.
    • It has "similar technological characteristics" (software, interface, inputs, outputs are either identical or considered similar with addressed impacts).
    • Changes are described as "feature expansions" and "minor updates" to an existing, cleared software.
    • Non-clinical data, including software verification and validation, biocompatibility testing, sterilization validation, and manufacturing process qualifications, are stated to have met acceptance criteria and demonstrated that the device "performs as intended" and is "safe and effective."

    The non-clinical data section broadly implies that the software's core accuracy functions are maintained from the predicate device, since its fundamental CAD capabilities, image sources, and output file functions are unchanged. The new features ("planning of a surgical path along a trajectory," "planning of a surgical path along a profile," and "planning to help evaluate surgical preparation or step") are leveraged from the predicate's general implant planning and surgical planning indications. The document asserts that these changes "do not change the intended use or the applicable fundamental technology, and do not raise any new questions of safety or effectiveness."


    K Number
    K201052
    Manufacturer
    Date Cleared
    2020-08-31

    (132 days)

    Product Code
    Regulation Number
    882.4310
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices :

    KLS Martin Individual Patient Solutions (IPS) Planning System (K182789), Stryker PEEK Customized Cranial

    Intended Use

    The KLS Martin Individual Patient Solutions (IPS) Planning System is intended for use as a software system and image segmentation system for the transfer of imaging information from a computerized tomography (CT) medical scan. The input data file is processed by the IPS Planning System and the result is an output data file that may then be provided as digital models or used as input to a rapid prototyping portion of the system that produces physical outputs including anatomical models, guides, and case reports for use in the marking and cutting of cranial bone in cranial surgery. The IPS Planning System is also intended as a pre-operative software tool for simulating / evaluating surgical treatment options. Information provided by the software and device output is not intended to eliminate, replace, or substitute, in whole or in part, the healthcare provider's judgment and analysis of the patient's condition.

    Device Description

    The KLS Martin Individual Patient Solutions (IPS) Planning System is a collection of software and associated additive manufacturing (rapid prototyping) equipment intended to provide a variety of outputs to support reconstructive cranial surgeries. The system uses electronic medical images of the patients' anatomy (CT data) with input from the physician, to manipulate original patient images for planning and executing surgery. The system processes the medical images and produces a variety of patient specific physical and/or digital output devices which include anatomical models, guides, and case reports for use in the marking and cutting of cranial bone in cranial surgery.
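    The image segmentation step at the front of this workflow (isolating bone from the CT data before any model manipulation) can be illustrated with a simple Hounsfield-unit threshold. This is a crude stand-in only: the submission describes interactive segmentation by trained engineers using commercial tools, and the 300 HU cutoff below is an assumption for illustration.

    ```python
    import numpy as np

    def segment_bone(hu_volume, threshold_hu=300.0):
        """Binary bone mask from a CT volume expressed in Hounsfield units.

        Cortical bone typically sits above a few hundred HU; a fixed global
        threshold is far simpler than the interactive segmentation performed
        in real planning workflows, but shows the basic idea.
        """
        return hu_volume >= threshold_hu

    # Synthetic volume: soft tissue around 40 HU, one "bone" voxel at 1200 HU.
    vol = np.full((2, 2, 2), 40.0)
    vol[0, 0, 0] = 1200.0
    mask = segment_bone(vol)
    print(int(mask.sum()))  # 1
    ```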

    AI/ML Overview

    The provided text is a 510(k) summary for the KLS Martin Individual Patient Solutions (IPS) Planning System. It details the device, its intended use, and comparisons to predicate and reference devices. However, it does not describe specific acceptance criteria and a study dedicated to proving the device meets those criteria in the typical format of a diagnostic AI/ML device submission.

    Instead, the document primarily focuses on demonstrating substantial equivalence to a predicate device (K182889) and leveraging existing data from that predicate, as well as two reference devices (K182789 and K190229). The "performance data" sections describe traditional medical device testing (tensile, biocompatibility, sterilization, software V&V) and a simulated design validation testing and human factors and usability testing rather than a clinical study evaluating the accuracy of an AI/ML algorithm's output against a ground truth.

    Specifically, there is no mention of:

    • Acceptance criteria for an AI/ML model's performance (e.g., sensitivity, specificity, AUC).
    • A test set with sample size, data provenance, or ground truth establishment details for AI/ML performance evaluation.
    • Expert adjudication methods, MRMC studies, or standalone algorithm performance.

    The "Simulated Design Validation Testing" and "Human Factors and Usability Testing" are the closest sections to a performance study for the IPS Planning System, but they are not framed as an AI/ML performance study as requested in the prompt.

    Given this, I will extract and synthesize the information available regarding the described testing and attempt to structure it to address your questions, while explicitly noting where the requested information is not present in the provided document.


    Acceptance Criteria and Device Performance (as inferred from the document)

    The document primarily states that the device passes "all acceptance criteria" for various tests, but the specific numerical acceptance criteria (e.g., minimum tensile strength, maximum endotoxin levels) and reported performance values are generally not explicitly quantified in a table format. The closest to "performance" is the statement that "additively manufactured titanium devices are equivalent or better than titanium devices manufactured using traditional (subtractive) methods."

    Since the document doesn't provide a table of acceptance criteria and reported numerical performance for an AI/ML model's accuracy, I will present the acceptance criteria and performance as described for the tests performed:

    | Test Category | Acceptance Criteria (as described) | Reported Device Performance (as described) |
    | --- | --- | --- |
    | Tensile & Bending Testing | Polyamide guides can withstand multiple sterilization cycles without degradation and can maintain 85% of initial tensile strength. Titanium devices must be equivalent or better than those manufactured using traditional methods. | Polyamide guides meet criteria. Additively manufactured titanium devices are equivalent or better than traditionally manufactured ones. |
    | Biocompatibility Testing | All biocompatibility endpoints (cytotoxicity, sensitization, irritation, chemical/material characterization, acute systemic, material-mediated pyrogenicity, indirect hemolysis) must be within pre-defined acceptance criteria. | All conducted tests were within pre-defined acceptance criteria, adequately addressing biocompatibility. |
    | Sterilization Testing | Sterility Assurance Level (SAL) of 10^-6 for dynamic-air-removal cycle. All test method acceptance criteria must be met. | All test method acceptance criteria were met. |
    | Pyrogenicity Testing | Endotoxin levels must be below the USP allowed limit for medical devices that have contact with cerebrospinal fluid (< 2.15 EU/device) and meet pyrogen limit specifications. | Devices contain endotoxin levels below the USP allowed limit (< 2.15 EU/device) and meet pyrogen limit specifications. |
    | Software Verification and Validation | All software requirements and specifications are implemented correctly and completely, traceable to system requirements. Conformity with pre-defined specifications and acceptance criteria. Mitigation of potential risks. Performs as intended based on user requirements and specifications. | All appropriate steps have been taken to ensure mitigation of any potential risks, and the software performs as intended based on the user requirements and specifications. |
    | Simulated Design Validation Testing | "Passed all acceptance criteria regardless of age or size" for representative cranial case extrapolated to six age ranges. Manufacturable at a high and acceptable level of fidelity, independent of feature size, age of patient, and device size. | Demonstrated that the subject devices passed all acceptance criteria regardless of age or size. Confirms manufacturability at a high and acceptable level of fidelity, independent of feature size, age of patient, and device size. |
    | Human Factors and Usability Testing | No potential risks or concerns, outside of those previously raised and mitigated in the IFU, are found. Clinical experts confirm testing and outputs are applicable to real life situations and can be used to effectively execute a planned cranial procedure. | No potential risks or concerns were found (outside of those mitigated in IFU). All clinical experts confirmed the testing and outputs were applicable to real life situations and could be used to effectively execute a planned cranial procedure (pediatric or adult patients). |
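    Two of the criteria above are simple numerical pass/fail checks and can be sketched directly: the 85% tensile-strength retention for polyamide guides and the 2.15 EU/device endotoxin limit for CSF-contact devices. The measurement values below are hypothetical; only the thresholds come from the document.

    ```python
    def retains_tensile_strength(initial_mpa, post_cycles_mpa, min_fraction=0.85):
        """Polyamide guide criterion: retain >= 85% of initial tensile
        strength after repeated sterilization cycles."""
        return post_cycles_mpa >= min_fraction * initial_mpa

    def below_csf_endotoxin_limit(measured_eu_per_device, limit_eu=2.15):
        """Pyrogenicity criterion: LAL endotoxin result below the USP limit
        for devices contacting cerebrospinal fluid (2.15 EU/device)."""
        return measured_eu_per_device < limit_eu

    # Hypothetical measurements:
    print(retains_tensile_strength(50.0, 45.0))  # True (90% retained)
    print(below_csf_endotoxin_limit(1.2))        # True
    ```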

    Detailed Study Information (Based on available text):

    1. Sample size used for the test set and the data provenance:

      • Test Set for Simulated Design Validation Testing: A "representative cranial case" was "extrapolated to six (6) distinct age ranges for input data (CT scan) equals output data validation." This implies 6 simulated cases were tested, but no further details on the number of actual CT scans or patients are provided.
      • Test Set for Human Factors and Usability Testing: "Eighteen (18) cases were analyzed" (6 distinct age ranges, with outputs sent to 3 clinical experts, meaning 6 (age ranges) x 3 (experts) = 18 cases analyzed in total by the experts).
      • Data Provenance: Not specified for the "representative cranial case" in simulated design validation. For human factors, it implicitly used outputs derived from the "six (6) distinct age ranges" based on the system's processing. The document does not specify if the data was retrospective or prospective, or the country of origin.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Simulated Design Validation Testing: Not explicitly stated that experts established ground truth for this test. It seems to be a technical validation against the design specifications.
      • Human Factors and Usability Testing: "Three separate clinical experts" were used to review the outputs. Their qualifications are not specified beyond being "clinical experts." Their role was to analyze for potential use problems and make recommendations, and confirm applicability to real-life situations. This is not the establishment of ground truth in the sense of a diagnostic classification.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • No adjudication method is described for either the simulated design validation or human factors/usability testing. The human factors testing involved reviews by multiple experts, but no process for reconciling disagreements or establishing a consensus "ground truth" among them is mentioned; they each provided independent feedback.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • An MRMC comparative effectiveness study was not conducted or described in this document. The device is not presented as an AI-assisted diagnostic tool that improves human reader performance in the traditional sense. It's a pre-operative planning system that processes CT data to create physical/digital outputs. The "Human Factors and Usability Testing" involved multiple readers (clinical experts) and multiple cases, but it was for usability assessment rather than a comparative effectiveness study of AI assistance.
    5. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:

      • The document describes "Software Verification and Validation" which is a form of standalone testing for the software applications used. It states that "all software requirements and specifications were implemented correctly and completely." However, this is a validation of the software's functionality and adherence to specifications, not a performance study of an AI/ML algorithm's accuracy in a diagnostic context. The system is explicitly described as requiring "trained employees/engineers who utilize the software applications to manipulate data and work with the physician to create the virtual planning session," indicating a human-in-the-loop process for generating the final outputs.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • Simulated Design Validation Testing: The "ground truth" appears to be the "initial input data (.STL)" and the design specifications; the test verifies that the processed output data matches the CT-derived input (the document's phrasing, "output data (CT scan) equals output data validation," likely means input equals output).
      • Human Factors and Usability Testing: The "ground truth" is effectively the "expert opinion" of the three clinical experts regarding the usability and applicability of the outputs, rather than a definitive medical diagnosis.
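The simulated design validation described above amounts to checking that geometry survives the processing pipeline intact. A minimal sketch of such an input/output geometry comparison is below; the triangle representation, helper names, and sample meshes are hypothetical illustrations, not KLS Martin's actual validation procedure.

```python
# Illustrative sketch only: one way a simulated design validation could
# confirm that mesh geometry (e.g., from an .STL file) passes through a
# processing pipeline unchanged. Helpers and triangles are hypothetical.

def canonical_mesh(triangles):
    """Order-independent canonical form of a triangle mesh.

    Each triangle is a tuple of three (x, y, z) vertices. Vertex order
    within a triangle and triangle order within the mesh are normalized,
    so two meshes describing the same surface compare equal. (Winding
    direction is deliberately ignored in this sketch.)
    """
    return sorted(tuple(sorted(tri)) for tri in triangles)

def meshes_match(input_mesh, output_mesh):
    """True if the output mesh contains exactly the input geometry."""
    return canonical_mesh(input_mesh) == canonical_mesh(output_mesh)

# The same two triangles, listed in different vertex and triangle orders:
mesh_in = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
           ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
mesh_out = [((1, 1, 0), (0, 1, 0), (1, 0, 0)),
            ((0, 1, 0), (0, 0, 0), (1, 0, 0))]
```

A real pipeline would also need tolerances for floating-point vertex coordinates and a check of surface orientation, which this sketch omits.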
    7. The sample size for the training set:

      • The document describes the KLS Martin IPS Planning System as using "commercially off-the-shelf (COTS) software applications" (Materialise Mimics and Geomagic® Freeform PlusTM) for image segmentation and manipulation. This implies that the core algorithms were pre-existing and not developed by KLS Martin as a novel AI/ML model that would require a distinct training set outlined in this submission. Therefore, no information on a training set size is provided for the device.
    8. How the ground truth for the training set was established:

      • Not applicable, as no training set for a novel AI/ML model by KLS Martin is described. The COTS software validation would have been performed by their respective developers.

    K Number
    K191028
    Manufacturer
    Date Cleared
    2019-11-22

    (218 days)

    Product Code
    Regulation Number
    872.4760
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices :

    K173039, K944565, K182789

    Intended Use

    KLS Martin Individual Patient Solutions implant devices are intended for use in the stabilization, fixation, and reconstruction of the maxillofacial / midface and mandibular skeletal regions.

    Device Description

    KLS Martin Individual Patient Solutions is comprised of patient-specific models and metallic bone plates used in conjunction with metallic bone screws for internal fixation of maxillofacial / midface and mandibular bones. The devices are manufactured based on medical imaging (CT scan) of the patient's anatomy with input from the physician during virtual planning and prior to finalization and production of the device. The physician provides input for model manipulation and interactive feedback by viewing digital models of planned outputs that are modified by trained KLS Martin engineers during the planning session. For each design iteration, verification is performed by virtually fitting the generated implant over a 3D model of the patient's anatomy to ensure its dimensional properties allow an adequate fit.

    Implants are provided non-sterile, range in thickness from 0.3 mm - 10 mm, and are manufactured using traditional (subtractive) methods from either CP Titanium (ASTM F67) or Ti-6Al-4V (ASTM F136) materials or additive methods from Ti-6Al-4V. These patient-specific devices are fixated with previously cleared KLS Martin screws.

    AI/ML Overview

    The provided text describes the performance testing of the KLS Martin Individual Patient Solutions device, primarily focusing on non-clinical bench testing to demonstrate substantial equivalence to predicate devices, rather than a clinical study establishing device performance against acceptance criteria in human subjects.

    Therefore, many of the requested details regarding clinical study design (e.g., sample size for test set, expert adjudication, MRMC study, standalone performance, ground truth establishment for training/test sets) are not applicable as they relate to clinical studies that were explicitly stated as "not necessary for the substantial equivalence determination."

    However, I can extract information related to the non-clinical performance testing and the implicit acceptance criteria derived from comparison to predicate devices and established standards.

    Here's a breakdown of the available information:

    Acceptance Criteria and Reported Device Performance (Non-Clinical)

    The acceptance criteria are generally implied to be meeting or exceeding the performance and safety profiles of the predicate devices and relevant ASTM/ISO standards. The "reported device performance" is framed as demonstrating substantial equivalence rather than specific numerical metrics for a clinical outcome.

    Acceptance Criteria (Implied) and Reported Device Performance:

      • Mechanical Performance (Tensile & Bending): Equivalent or superior bending resistance and fatigue life to predicate devices (specifically K944565) as per ASTM F382.
        Reported: "The bending resistance and fatigue life of the subject devices was determined to be substantially equivalent to the K944565 plates."
      • Biocompatibility: Meet ISO 10993-1 standards and be equivalent to predicate (K163579) for titanium devices.
        Reported: "Biocompatibility endpoints were evaluated in accordance with ISO 10993-1... The battery of cytotoxicity, chemical analysis, sensitization and irritation, and chemical/material characterization testing was leveraged from K163579 for titanium devices. The subject devices are identical to the primary predicate devices in material formulations, manufacturing methods and processes, and sterilization methods. No other chemicals have been added..."
      • Sterilization: Achieve a sterility assurance level (SAL) of 10^-6 using the biological indicator (BI) overkill method as per ISO 17665-1:2006.
        Reported: "Steam sterilization validations were performed... All test method acceptance criteria were met. Validations for devices manufactured from titanium were leveraged from the predicate device, KLS Martin Individual Patient Solutions (K163579). Subject titanium devices are identical in formulation, manufacturing processes, and post-processing procedures (cleaning & sterilization) as the predicate device."
      • Pyrogenicity: Contain endotoxin levels below the USP allowed limit for medical devices as per AAMI ANSI ST72.
        Reported: "LAL endotoxin testing was conducted according to AAMI ANSI ST72... The results of the testing demonstrate that the subject devices contain endotoxin levels below the USP allowed limit for medical devices and meet pyrogen limit specifications. LAL endotoxin testing for titanium was leveraged from the predicate device, KLS Martin Individual Patient Solutions (K163579)."
      • Software Verification & Validation: All software requirements and specifications are implemented correctly and completely, traceable to system requirements, and conform to pre-defined specifications and acceptance criteria.
        Reported: "Quality and on-site user acceptance testing provide objective evidence that all software requirements and specifications were implemented correctly and completely and are traceable to system requirements. Testing required as a result of risk analysis and impact assessments showed conformity with pre-defined specifications and acceptance criteria. Software documentation demonstrates all appropriate steps have been taken to ensure mitigation of any potential risks and performs as intended based on the user requirements and specifications."
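The ASTM F382 result above is reported only qualitatively ("substantially equivalent"). For orientation, the standard beam-theory relations behind a four-point bend test can be sketched as follows; the function names, spans, and loads are invented for illustration and are not values from the submission or the full F382 procedure.

```python
# Standard beam-theory relations underlying a four-point bend test of
# the kind ASTM F382 uses for bone plates. Numbers are illustrative.

def central_bending_moment(total_load, loading_span):
    """Constant bending moment between the inner rollers.

    The total load F is split between two inner rollers, each a distance
    `loading_span` (a) from its outer support, so M = (F / 2) * a.
    """
    return (total_load / 2.0) * loading_span

def bending_stiffness_EI(load_slope, loading_span, support_span):
    """Effective bending stiffness E*I from the linear slope K = F/delta
    of the load-deflection curve. From beam theory, the midspan
    deflection is delta = F * a * (3*L**2 - 4*a**2) / (48 * E * I),
    where L is the support span and a the loading span.
    """
    a, L = loading_span, support_span
    return load_slope * a * (3 * L ** 2 - 4 * a ** 2) / 48.0

# Example: 200 N total load, inner rollers 20 mm from the supports:
moment = central_bending_moment(200.0, 0.020)  # 2.0 N*m
```

Fatigue life, the other quantity compared in the submission, comes from cyclic loading at a fraction of this moment and is characterized empirically rather than by a closed-form expression.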

    Study Information (Non-Clinical Focus)

    1. Sample size used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated in terms of number of unique devices for each mechanical test, but rather described as "bench testing" and "comparative performance testing." It implies sufficient samples were tested to meet standard requirements for ASTM F382 and other bench tests.
      • Data Provenance: The tests were conducted to demonstrate substantial equivalence to predicate devices (K163579, K944565), and some performance data (biocompatibility, sterilization, pyrogenicity) were "leveraged" from previous clearances of the predicate device since the materials and processes are identical. This implies the data originates from the manufacturer's internal testing or contract labs. The country of origin for the data is not specified but is presumably within the regulatory framework acceptable to the FDA. The testing is non-clinical bench testing, not retrospective or prospective human data.
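The leveraged LAL endotoxin results above are judged against "the USP allowed limit for medical devices." As a hedged illustration: the commonly cited USP limits are 20 EU per device for general medical devices and 2.15 EU per device for devices contacting cerebrospinal fluid; the helper below and its measured values are invented for the example, not taken from this submission.

```python
# Illustrative pass/fail check against commonly cited USP endotoxin
# limits for medical devices (AAMI ANSI ST72 / LAL testing context).
# The limits are the widely quoted USP values; measured inputs are
# invented examples, not data from this 510(k).

EU_LIMIT_GENERAL = 20.0   # EU per device, general medical devices
EU_LIMIT_CSF = 2.15       # EU per device, CSF-contacting devices

def meets_pyrogen_limit(measured_eu_per_device, csf_contact=False):
    """True if a device's measured endotoxin load is within the limit."""
    limit = EU_LIMIT_CSF if csf_contact else EU_LIMIT_GENERAL
    return measured_eu_per_device <= limit
```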
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

      • Not Applicable. This pertains to clinical studies involving expert review of diagnostic images or outcomes. The provided text describes non-clinical bench testing and software validation.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not Applicable. This pertains to clinical studies involving human interpretation and ground truth establishment.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without:

      • Not Applicable. No clinical study (MRMC or otherwise) involving human readers or AI assistance was conducted or described for this device, as "Clinical testing was not necessary for the substantial equivalence determination." This device consists of patient-specific implants and the related planning system, not an AI for image interpretation.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Not Applicable. While software verification and validation were performed for the planning software, it's not an "algorithm" in the sense of an AI model for diagnosis. The software is a tool for design and planning, with human input from the physician and KLS Martin engineers. Therefore, a "standalone algorithm performance" as typically defined for AI/ML devices is not relevant here.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For Bench Testing: The "ground truth" for non-clinical performance testing is established by validated test methods (e.g., ASTM, ISO standards) and comparison to the known performance of the predicate devices. For software, the ground truth is its pre-defined specifications and user requirements.
      • Not Applicable for clinical "ground truth" types mentioned (expert consensus, pathology, outcomes data).
    7. The sample size for the training set:

      • Not Applicable. This refers to machine learning models. The device involves patient-specific design based on CT scans, but not a generalizable AI model that requires a training set in the conventional sense. The software's design and functionality are established through traditional software development and validation, not machine learning training.
    8. How the ground truth for the training set was established:

      • Not Applicable. (See point 7).
