510(k) Data Aggregation: Search Results

Found 2 results

    K Number: K180539
    Date Cleared: 2018-08-13 (166 days)
    Regulation Number: 888.3030
    Reference Devices: K141078, K152171, K140550

    Intended Use

    The Deformity Analysis and Correction Software (DACS) and Instrumentation are intended to be used as components of the Smith & Nephew Taylor Spatial Frame external fixation system that is indicated for the following: post-traumatic joint contracture which has resulted in loss of range of motion; fractures and disease which generally may result in joint contractures or loss of range of motion and fractures requiring distraction; open and closed fracture fixation; pseudoarthrosis of long bones; limb lengthening by epiphyseal distraction; correction of bony or soft tissue deformities; correction of bony or soft tissue defects; joint arthrodesis; infected fractures or nonunions.

    Device Description

    The Deformity Analysis and Correction Software (DACS) and Instrumentation is an optional software component used to assist the physician in calculating the lengths of the struts connecting the rings in order to manipulate the bone fragments. The software receives inputs from the physician and allows the physician to visualize the moving bone position. The program computes the strut lengths necessary to implement any translation and/or rotation prescribed by the surgeon. The instrumentation includes Radiopaque Fiducial Markers, which are attached to the Smith & Nephew Taylor Spatial Frame external fixator.
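
    The strut-length computation the summary describes is, in essence, the inverse kinematics of a hexapod (Stewart platform) frame: each strut length is the distance between its proximal-ring attachment point and the distal-ring attachment point after the prescribed rigid-body transform is applied to the moving fragment. The following is a minimal sketch of that geometry, assuming generic ring dimensions, attachment angles, and function names; none of these values are taken from the DACS submission.

```python
# Minimal sketch of generic hexapod strut-length geometry (illustrative only;
# not the DACS implementation). Each strut length is the distance between a
# proximal-ring joint and the corresponding distal-ring joint after applying
# the desired rotation and translation to the moving fragment.
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Rotation about the x, y and z axes (radians), composed as Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def strut_lengths(proximal_pts, distal_pts, translation_mm, rotation_deg):
    """Lengths of the six struts after moving the distal fragment.

    proximal_pts, distal_pts: (6, 3) arrays of strut attachment points in mm.
    translation_mm: desired (dx, dy, dz) of the moving fragment in mm.
    rotation_deg: desired (rx, ry, rz) rotation of the moving fragment in degrees.
    """
    R = rotation_matrix(*np.radians(rotation_deg))
    moved = distal_pts @ R.T + np.asarray(translation_mm, dtype=float)
    return np.linalg.norm(moved - proximal_pts, axis=1)

# Illustrative 155 mm rings, 120 mm apart, with staggered attachment points.
prox_angles = np.radians([0, 60, 120, 180, 240, 300])
dist_angles = np.radians([30, 90, 150, 210, 270, 330])
proximal = np.column_stack([77.5 * np.cos(prox_angles), 77.5 * np.sin(prox_angles), np.zeros(6)])
distal = np.column_stack([77.5 * np.cos(dist_angles), 77.5 * np.sin(dist_angles), np.full(6, -120.0)])

print(strut_lengths(proximal, distal, translation_mm=(5.0, 0.0, 10.0), rotation_deg=(0.0, 0.0, 8.0)))
```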

    AI/ML Overview

    The provided text is a 510(k) summary for the Deformity Analysis and Correction Software (DACS) and Instrumentation. It details the device, its intended use, and a comparison to predicate devices, but it does not contain a specific section outlining detailed acceptance criteria and a study that explicitly proves the device meets those criteria in a quantitative manner.

    Instead, the document focuses on demonstrating substantial equivalence to predicate devices through qualitative comparisons and general statements about performance and accuracy testing.

    However, based on the information provided, we can infer some aspects and construct a table and description as requested, noting where specific details are absent.

    Here's an analysis of the document to answer your questions:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of "acceptance criteria" with quantitative performance metrics. It generally states that "Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations." It also states that testing "demonstrated that the Deformity Analysis and Correction Software (DACS) and Instrumentation is capable of successfully correcting the variety of deformities it may encounter in the clinical setting."

    Without explicit pass/fail criteria or quantitative results such as mean error, standard deviation, or accuracy ranges, it's impossible to create a strict "acceptance criteria" table from this document. However, based on the description of the testing, the implied acceptance criteria were that the software would "produce correct results" and "successfully correct deformities."

    Here's a table based on the implied performance and accuracy from the document:

    Acceptance Criteria (Inferred) vs. Reported Device Performance (as stated in the document):

    • Ability to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations.
      Reported performance: "Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations."

    • Capability of successfully correcting the variety of deformities encountered in clinical settings.
      Reported performance: "Testing with these image pairs demonstrated that the Deformity Analysis and Correction Software (DACS) and Instrumentation is capable of successfully correcting the variety of deformities it may encounter in the clinical setting."

    • Functionality and safety in comparison to predicate devices.
      Reported performance: "From the evidence submitted in this 510(k), the Deformity Analysis and Correction Software (DACS) and Instrumentation demonstrates that the device is as safe, as effective, and performs as well as or better than the legally marketed device predicates." The summary adds that the comparison "…confirmed that any differences between the subject device and predicate software do not render the device NSE as there is not a new intended use; and any differences in technological characteristics do not raise different questions of safety and effectiveness than the predicate device."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The document mentions that testing was "executed against a variety of CAD-generated image sets and a Smith & Nephew Taylor Spatial Frame x-ray image set." The exact number (sample size) of these image sets is not specified.
    • Data Provenance:
      • Source: "CAD-generated image sets" (simulated data) and "a Smith & Nephew Taylor Spatial Frame x-ray image set" (likely real patient data, but source country is not specified).
      • Retrospective or Prospective: The use of "CAD-generated image sets" implies simulated, non-patient-specific laboratory data. The "Smith & Nephew Taylor Spatial Frame x-ray image set" could be retrospective (drawn from pre-existing clinical cases) or acquired specifically for this testing; the document does not say.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not specify the number or qualifications of experts used to establish ground truth for the test set. Given the use of "CAD-generated image sets" where "known inputs" were available, the ground truth for these would be inherent in the CAD model parameters rather than established by human experts. For the "Smith & Nephew Taylor Spatial Frame x-ray image set," it's unclear how ground truth was established or if experts were involved.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication method for the test set. The validation approach appears to be a direct comparison of software-calculated results against "known inputs" for simulated data and (presumably) against accepted clinical measurements or calculations for real image data, rather than an expert consensus process.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted or described in this document. The DACS is described as software that "assists the physician in calculating the lengths of the struts" and allows "the physician to visualize the moving bone position." It computes strut lengths based on physician inputs. There is no mention of an AI component that would assist human readers in interpretation or diagnosis, nor any study comparing human performance with and without such assistance. The software is a calculation and visualization tool, not an AI-based diagnostic aid that would typically be evaluated in an MRMC study.

    6. If standalone performance testing (i.e. algorithm only, without a human in the loop) was done

    The performance testing mentioned ("Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results...") appears to be a form of standalone testing where the software's output was compared to known or expected values. The document states: "The known inputs for each image (device types and strut settings) was compared to the results calculated by the Deformity Analysis and Correction Software (DACS) and Instrumentation." This suggests testing the algorithm's direct output against a reference.
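
    To make the nature of that comparison concrete, the check reduces to recomputing results for each test case and confirming agreement with the known inputs within a tolerance. The sketch below is purely illustrative: the case identifier, tolerance, and function are assumptions, not values from the submission.

```python
# Hypothetical standalone check: compare the software's computed strut
# settings to the known settings defined for each CAD-generated test case.
TOLERANCE_MM = 0.5  # assumed tolerance for illustration; not from the submission

def validate_case(case_id, known_settings_mm, computed_settings_mm, tol=TOLERANCE_MM):
    """Return (case_id, passed, max_error_mm) for one test image set."""
    errors = [abs(k - c) for k, c in zip(known_settings_mm, computed_settings_mm)]
    max_error = max(errors)
    return case_id, max_error <= tol, max_error

# Example with made-up values for two of the six struts of one case.
print(validate_case("CAD-001", [118.2, 131.7], [118.3, 131.6]))
```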

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used appears to be:

    • "Known inputs" from CAD-generated images: For simulated data, the ground truth is the inherent parameters or "true" values defined within the CAD models. This is a form of engineered or definitional ground truth.
    • For the "Smith & Nephew Taylor Spatial Frame x-ray image set," the ground truth is presumably established clinical measurements or calculations associated with those images, although this is not explicitly detailed. It does not mention expert consensus, pathology, or outcomes data specifically.

    8. The sample size for the training set

    The document does not mention a training set or its sample size. This is a 510(k) summary for software that appears to be deterministic (performing calculations based on input parameters) rather than a machine learning/AI model that would typically require a training set. The phrase "Deformity Analysis and Correction Software (DACS)" itself suggests a rule-based or algorithmic system, not necessarily one that learns from data.

    9. How the ground truth for the training set was established

    As no training set is mentioned (see point 8), there is no information provided on how ground truth for a training set was established.


    K Number: K143125
    Date Cleared: 2014-12-10 (40 days)
    Regulation Number: 888.3030
    Reference Devices: K141078

    Intended Use

    The TL-HEX System is intended for limb lengthening by metaphyseal or epiphyseal distractions, fixation of open and closed fractures, treatment of non-union or pseudoarthrosis of long bones and correction of bony or soft tissue defects or deformities. Within this range, indications include:
    • Post-traumatic joint contracture which has resulted in loss of range of motion
    • Fractures and disease which generally may result in joint contractures or loss of range of motion and fractures requiring distraction
    • Open and closed fracture fixation
    • Pseudoarthrosis of long bones
    • Limb lengthening by epiphyseal or metaphyseal distraction
    • Correction of bony or soft tissue deformities
    • Correction of bony or soft tissue defects
    • Joint arthrodesis
    • Infected fractures or non-unions

    Device Description

    The Subject device is a multilateral external fixation system. The System can also be used with a web-based software component designed to assist the physician in creating a patient adjustment schedule for adjusting the six struts (the schedule computation is sketched after the component list below). The System can also be used with other existing Orthofix external fixation components and screws (such as TrueLok or X Caliber).

    Components of the system include:
    Full, 5/8 and 3/8 aluminum Rings
    Double Row Footplates
    Adjustable struts
    Aluminum strut clips (number and direction)
    Stainless steel instrumentation such as hex drivers, wrenches, and pliers
    Implantable components such as half pins and wires
    Web-based software
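
    As a rough illustration of what such an adjustment schedule amounts to, the core task is to step each strut from its current to its target length over enough days that no strut changes faster than a chosen daily limit. The rate limit and values in the sketch below are assumptions for illustration, not Orthofix parameters.

```python
# Hypothetical sketch of a strut adjustment schedule: linear daily steps from
# the current to the target length for each of the six struts, with the number
# of days chosen so no strut changes by more than the daily limit.
import math

def adjustment_schedule(current_mm, target_mm, max_daily_mm=1.0):
    """Return per-day strut settings (a list of six lengths in mm for each day)."""
    deltas = [t - c for c, t in zip(current_mm, target_mm)]
    days = max(1, math.ceil(max(abs(d) for d in deltas) / max_daily_mm))
    return [
        [round(c + d * day / days, 1) for c, d in zip(current_mm, deltas)]
        for day in range(1, days + 1)
    ]

schedule = adjustment_schedule(
    current_mm=[120.0, 118.5, 122.0, 119.0, 121.5, 120.5],
    target_mm=[128.0, 121.0, 117.5, 125.0, 119.0, 126.0],
)
for day, settings in enumerate(schedule, start=1):
    print(f"Day {day}: {settings}")
```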

    AI/ML Overview

    This document is a 510(k) premarket notification for a medical device called the Orthofix TL-HEX True Lok Hexapod System (TL-HEX) V1.3. It's a submission to the FDA to demonstrate that the new version of the device is substantially equivalent to a previously cleared predicate device.

    The prompt asks for details about acceptance criteria and a study proving the device meets these criteria as if this were an AI/software device. However, this document is for a mechanical external fixation system and its associated software for calculating adjustments, not an AI or diagnostic imaging device. Therefore, many of the requested categories (like "human readers improve with AI," "standalone performance," "ground truth type," "training set size") are not applicable to this type of device and submission.

    The "study" in this context refers to non-clinical performance testing to demonstrate the safety and effectiveness of the mechanical components and the proper functioning of the software.

    Here's an attempt to answer the questions based on the provided document, adapting where necessary given the nature of the device:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria (based on tests and standards) vs. Reported Device Performance:

    • Mechanical performance: withstand expected loads without failure (conformance to ASTM F 1541-02 (2007)e1, "Standard Specification and Test Methods for External Skeletal Fixation Devices").
      Reported performance: "A review of the mechanical data indicates that the hardware components of the Subject device are capable of withstanding expected loads without failure." "All testing met or exceeded the requirements as established by the test protocols and applicable standards."

    • Software performance: software functions as intended (conformance with FDA's guidance document "General Principles of Software Validation; Final Guidance for Industry and FDA Staff," including Software IQ, OQ, and PQ).
      Reported performance: "Additionally, software verification and validation testing was completed in conformance with [FDA guidance]. The results of software testing indicate that the software performed as intended."

    • Risk management: potential hazards evaluated and controlled (conformance with a Risk Management Plan).
      Reported performance: "The potential hazards have been evaluated and controlled through a Risk Management Plan."

    • Substantial equivalence: maintain intended use, indications for use, technological characteristics, and labeling compared to the predicate device, considering the extended range of components and the software update. These are the underlying "acceptance criteria" for a 510(k) in general.
      Reported performance: "Documentation was provided to demonstrate that the subject device is substantially equivalent to the Predicate Orthofix TL-HEX True Lok Hexapod System (TL-HEX) (K141078). The subject device is substantially equivalent to the predicate device in intended use, indications for use, technological characteristics, and labeling. The subject device includes an extended range of rings, footplates, struts, and a consequent update of the software." "Based on the indications for use, technological characteristics, materials, and comparison to predicate devices, the Subject Orthofix TL-HEX True Lok Hexapod System (TL HEX) has been shown to be substantially equivalent to legally marketed predicate device, and is safe and effective for its intended use."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not explicitly stated as a "sample size" in the context of patient data, as this is primarily a mechanical and software validation. The mechanical testing would have involved a specific number of device components tested per protocol (e.g., several samples of each new component type). Software testing would involve comprehensive test cases to cover all functionalities.
    • Data Provenance: Not applicable in the sense of patient data origin or clinical study type. The reference to "ASTM F 1541-02 (2007)e1" for mechanical testing implies standard laboratory testing. Software testing is internal verification and validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • This question is not applicable to this type of device submission. "Ground truth" in a diagnostic or AI context typically refers to clinical diagnosis or expert annotations. For this submission, standards and protocols (like ASTM F 1541-02 for mechanical, and FDA guidance for software validation) served as the "ground truth" for evaluating performance and compliance. The experts involved would be engineers conducting the tests and quality/regulatory personnel reviewing the results against the established acceptance criteria.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not applicable. This type of submission does not involve adjudication of clinical cases or expert opinions. Performance is assessed against predefined engineering and software validation criteria.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it

    • Not applicable. This is not an AI diagnostic imaging device. It's an external fixation system with software to assist in calculating strut adjustments for limb reconstruction. Clinical data was explicitly stated as "not needed to support the safety and effectiveness" for this 510(k) submission, as substantial equivalence was demonstrated through non-clinical testing.

    6. If standalone performance testing (i.e. algorithm only, without a human in the loop) was done

    • The software component of the device is described as "designed to be used to assist the physician in creating a patient adjustment schedule." This implies it's an assistive tool for a human-in-the-loop process. The software's performance was validated in a standalone capacity (i.e., its calculations and functionalities were verified independently) as part of "software verification and validation testing" to ensure it performs "as intended." However, it's not an "algorithm only" in the sense of a fully automated diagnostic or treatment decision system.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • For mechanical testing: The "ground truth" is adherence to the specified performance requirements of the ASTM standard (ASTM F 1541-02 (2007)e1), which define acceptable load-bearing capacities and failure modes for skeletal fixation devices.
    • For software testing: The "ground truth" is that the software performs its calculations correctly, adheres to its functional specifications, and meets the requirements of the FDA's "General Principles of Software Validation" guidance. This would be established through a series of defined test cases with known expected outputs.
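
    A defined test case of that kind might look like the sketch below: a fixed input with a pre-established expected output, checked to a small tolerance. The routine under test, the values, and the tolerance are illustrative assumptions, not the actual TL-HEX software or its validation records.

```python
# Hypothetical verification test case: run the routine under test on a fixed
# input and assert that its output matches a pre-established expected result.
import math

def compute_strut_lengths(ring_diameter_mm, frame_height_mm):
    """Stand-in deterministic calculation used only to make the test runnable."""
    radius = ring_diameter_mm / 2.0
    return [round(math.hypot(radius, frame_height_mm), 2)] * 6

def test_known_configuration():
    expected = [138.92] * 6  # pre-established expected output for this input
    result = compute_strut_lengths(ring_diameter_mm=140.0, frame_height_mm=120.0)
    assert all(abs(r - e) <= 0.05 for r, e in zip(result, expected)), result

test_known_configuration()
print("test_known_configuration passed")
```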

    8. The sample size for the training set

    • Not applicable. This is not an AI/machine learning device that requires a "training set" in the conventional sense. The software's logic is deterministic (calculating adjustments based on user input and pre-programmed algorithms), not learned from data.

    9. How the ground truth for the training set was established

    • Not applicable, as there is no "training set" for this device. The software "ground truth" for its development would be based on biomechanical principles, engineering equations, and clinical requirements for hexapod system adjustments.
