
510(k) Data Aggregation

    K Number
    K230211
    Device Name
    CranioXpand
    Manufacturer
    Date Cleared
    2023-11-21 (299 days)

    Product Code
    Regulation Number
    882.5330
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The KLS Martin CranioXpand Spring system is indicated for use in the treatment of cranial conditions such as craniosynostosis and congenital deficiencies in which osteotomies and gradual bone distraction are indicated for the infant pediatric subpopulation (29 days of age). The CranioXpand implants are implantable single-use products intended for temporary stabilization of the bony cranial roof during and after surgery. This device is intended to be removed after bone consolidation.

    Device Description

    The KLS Martin CranioXpand Spring System consists of implantable spring distractors and supporting instruments intended for temporary stabilization and distraction of the bony cranial roof during and after surgery through distraction osteogenesis. The CranioXpand Springs are offered in various sizes. The spring features include rounded atraumatic contours to ensure optimal embedding in soft tissue with curved ends to ensure the devices can securely anchor in the bone. Two springs are provided as part of the CranioXpand system for anterior and posterior placement on the osteotomies. The springs are removed after adequate bone formation or after the bone consolidation phase is complete. These devices are typically left in the implanted location for 3 - 6 months before explantation. The CranioXpand Instruments are accessories used to facilitate spring size selection, and spring insertion and positioning.

    AI/ML Overview

    The provided text is a 510(k) Premarket Notification from the FDA, specifically concerning the KLS-Martin L.P. CranioXpand device. This document focuses on demonstrating substantial equivalence to a predicate device, rather than proving the device meets clinical performance acceptance criteria through the types of studies you've queried (e.g., MRMC studies, standalone AI performance, expert ground truth adjudication).

    The CranioXpand device is a physical implant (spring system) used for cranial conditions in pediatric patients, not an AI or software-based diagnostic tool. Therefore, the types of studies and acceptance criteria you've asked about, which are common for AI-driven image analysis or diagnostic devices, are not applicable to the information contained within this 510(k) submission.

    The "studies" conducted for this device are non-clinical performance bench testing and biocompatibility testing, designed to show that the CranioXpand is as safe and effective as its predicate device.

    However, the information relevant to the acceptance criteria and performance studies that apply to this type of medical device can be extracted. Here is a breakdown based on the provided document:

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the CranioXpand device, as reported in this 510(k), are focused on bench testing to demonstrate performance equivalence to the predicate device and biocompatibility.

    Biocompatibility
    • Specific Criteria: Compliance with ISO 10993-1:2018 for long-term implants in contact with tissue/bone (the springs) and for external communicating devices with limited contact (the instruments), covering endpoints such as cytotoxicity, sensitization, irritation, pyrogenicity, carcinogenicity, implantation, and acute/subacute/subchronic/chronic toxicity.
    • Reported Performance: The CranioXpand device and accessories were evaluated per the FDA guidance "Use of International Standard ISO 10993-1, 'Biological evaluation of medical devices - Part 1: Evaluation and testing within a risk management process'" and found to comply with the requirements of ISO 10993-1:2018; they are therefore considered biocompatible.

    Spring Testing (Performance Bench Test)
    • Specific Criteria: Force measurements during cyclical testing (compressing to 10 mm, holding 5 s, decompressing, repeated 6 times) must show performance comparable to the predicate device. The exact quantitative criteria for "comparable" are not explicitly stated.
    • Reported Performance: "A comparison of the performance of the subject and predicate springs via force measurements during cyclical testing was conducted... The acceptance criteria of the test were met, thus demonstrating that the performance of the subject device is substantially equivalent to that of the predicate device." Conclusion: Pass.

    Insertion Instruments Testing
    • Specific Criteria: The instrument must appropriately open, close, and pick up the spring, with measurements verifying that the instrument can compress the spring legs sufficiently (<17 mm).
    • Reported Performance: "A visual and holding inspection was performed of whether the instrument could appropriately open, close, and pick up the spring. Measurements were conducted to verify that the instrument could compress the spring legs sufficiently (<17mm)." Conclusion: Pass.

    Selection Instrument Testing (Static)
    • Specific Criteria: Force output via static testing (load applied at 1.0 mm/s until the distance between the legs is 15 mm, held 5 s, then unloaded). The specific force values required for acceptance are not provided.
    • Reported Performance: "The force output of the selection instrument via static testing was determined by recording the force while first applying a load on the instrument (1.0mm/s until distance between legs is 15mm), holding the position (5 seconds), then unloading the instrument (same speed)." Conclusion: Pass.

    Selection Instrument Testing (Dynamic)
    • Specific Criteria: Withstand 1000 load cycles at continuous load (load applied at 0.5 mm/s to 45 mm, held 1 s, unloaded, repeated 1000 times).
    • Reported Performance: "Verification of whether the selection instrument could withstand 1000 load cycles at continuous load via dynamic testing was determined by recording the force while repeating the following steps until 1000 cycles are reached: first applying a load on the instrument (45mm at test speed of 0.5mm/s), holding the position (1 second), then unloading the instrument (same speed)." Conclusion: Pass.
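    The cyclical spring test above amounts to logging force over repeated compress-hold-decompress cycles and comparing the subject springs against the predicate. The sketch below is illustrative only: the submission publishes no raw force data and no numeric equivalence limit, so the force values, the peak-force comparison, and the 10% tolerance are hypothetical placeholders, not KLS Martin specifications.

```python
# Hypothetical sketch of a cyclical bench-test comparison.
# All numbers and the tolerance are illustrative assumptions.

def peak_forces(force_log, n_cycles=6):
    """Return the peak recorded force (N) of each test cycle."""
    return [max(cycle) for cycle in force_log[:n_cycles]]

def substantially_equivalent(subject_peaks, predicate_peaks, tolerance=0.10):
    """Pass if the subject's mean peak force is within `tolerance`
    (fractional) of the predicate's mean peak force."""
    mean_subject = sum(subject_peaks) / len(subject_peaks)
    mean_predicate = sum(predicate_peaks) / len(predicate_peaks)
    return abs(mean_subject - mean_predicate) / mean_predicate <= tolerance

# Placeholder force logs (N) for six compress-hold-decompress cycles.
subject_log = [[4.8, 5.1, 5.0], [4.9, 5.2, 5.0], [4.7, 5.0, 4.9],
               [4.8, 5.1, 5.0], [4.9, 5.1, 5.0], [4.8, 5.0, 4.9]]
predicate_log = [[5.0, 5.3, 5.1], [5.0, 5.2, 5.1], [4.9, 5.1, 5.0],
                 [5.0, 5.2, 5.1], [5.0, 5.2, 5.0], [4.9, 5.1, 5.0]]

print(substantially_equivalent(peak_forces(subject_log),
                               peak_forces(predicate_log)))  # True for these placeholder data
```

    In practice the equivalence criterion could equally be defined on force-displacement curves or per-cycle degradation; the submission does not say which metric was used.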

    2. Sample Size and Data Provenance

    • Test Set Sample Size: The document does not specify exact sample sizes for the bench testing beyond implying "samples" of the subject and predicate springs and instruments were tested. For such physical device testing, sample sizes are typically determined by engineering standards and statistical confidence levels relevant to manufacturing variability, rather than patient-based data.
    • Data Provenance: The data provenance is from non-clinical laboratory bench testing of the physical devices (springs and instruments). This is not patient data, nor is there any mention of country of origin for such data, as it's likely conducted internally or by contract labs. It is inherently prospective in the sense that the tests are designed and performed to demonstrate specific performance characteristics.
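    One common way such bench-test sample sizes are justified, when zero failures are observed, is the binomial "success-run" formula n = ln(1 - C) / ln(R). The submission does not state how its sample sizes were chosen, so this is a general illustration, not the method used here.

```python
import math

def success_run_sample_size(confidence, reliability):
    """Minimum zero-failure sample size from the binomial success-run
    formula: n = ln(1 - C) / ln(R), rounded up to a whole test article."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_sample_size(0.95, 0.90))  # 29 articles for 95% confidence / 90% reliability
```

    The familiar "95/90" result of 29 articles follows directly; tightening reliability to 95% roughly doubles the required sample.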

    3. Number of Experts and Qualifications for Ground Truth

    • Not applicable. This FDA submission is for a physical medical device (implant) and its instruments, not an AI or diagnostic software. Therefore, there is no "ground truth" derived from expert image interpretation or clinical diagnosis in the context of the requested AI-related study types. The "ground truth" for these tests is the physical measurement of force, distance, and visual inspection by testing personnel, adhering to established engineering and quality control standards.

    4. Adjudication Method for the Test Set

    • Not applicable, as there are no human expert interpretations of data (such as images) that would require adjudication to establish ground truth.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    • No. An MRMC comparative effectiveness study was not conducted as this is a physical implant, not a diagnostic or image-reading AI device. The comparison here is between the subject device's physical performance and the physical performance of a predicate device, as demonstrated through bench testing.

    6. Standalone (Algorithm Only) Performance

    • Not applicable. There is no algorithm or software for "standalone" performance to be evaluated.

    7. Type of Ground Truth Used

    • Physical Measurement and Engineering Specifications: The "ground truth" in this context refers to the expected physical properties and performance characteristics of the device (e.g., force specifications, dimensions, functional operation). This is established through engineering design, material specifications, and the performance characteristics of the legally marketed predicate device.

    8. Sample Size for the Training Set

    • Not applicable. There is no "training set" in the context of this 510(k) submission, as it does not involve machine learning algorithms. Design and manufacturing processes are iteratively refined, but this is distinct from an AI training set.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable. As noted above, there is no training set because no AI model is involved.

    In summary: The provided 510(k) document is for a physical medical implant (CranioXpand) and demonstrates substantial equivalence through biocompatibility testing and engineering bench tests comparing its physical performance to a predicate device. It does not involve AI, image analysis, or clinical studies characteristic of AI-driven diagnostic devices where concepts like MRMC studies, expert ground truth, and training/test sets are relevant.
