Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K221792
    Date Cleared: 2022-08-25 (65 days)
    Product Code: OSN
    Regulation Number: 888.3030
    Why did this record match? Product Code: OSN

    Intended Use
    1. Fractures and disease which generally may result in joint contractures or loss of range of motion and fractures requiring distraction
    2. Open and closed fracture fixation
    3. Pseudarthrosis of long bones
    4. Limb lengthening by distraction
    5. Correction of bony or soft tissue deformities
    6. Joint arthrodesis
    7. Infected fractures
    8. Nonunions
    Device Description

    The Smith & Nephew Acute QC Strut and Components consist of multiple sized struts (e.g., X Short, Short, Medium, and Long) and components such as nuts, bands, and washers to build an external fixation construct. The proposed devices incorporate design features similar to those currently incorporated in previously cleared Smith & Nephew SMART Taylor Spatial Frame and ILIZAROV™ External Fixation System. The Acute QC Strut and Components will be manufactured from aluminum, composite, and stainless-steel material, which is identical to that of the predicate devices of the SMART Taylor Spatial Frame System (e.g., K210953, S.E. 07/29/2021).

    AI/ML Overview

    This document is a 510(k) Premarket Notification for a medical device (Acute QC Strut and Components). The acceptance criteria and supporting studies described in such documents are typically related to proving the substantial equivalence of the new device to existing predicate devices, primarily through engineering performance testing and material compatibility, rather than clinical performance (like improved diagnostic accuracy).

    Therefore, the provided text does not contain the requested information about acceptance criteria tied to clinical performance metrics, such as sensitivity, specificity, or human reader improvement with AI assistance, or about a study demonstrating that such criteria were met. This type of information is typically found in submissions for AI/ML-enabled diagnostic devices, not for mechanical orthopedic devices like external fixation systems.

    The document primarily focuses on demonstrating:

    • Substantial Equivalence: The new device has the same intended use, indications for use, similar design, materials, and performance characteristics as legally marketed predicate devices.
    • Bench Testing: Mechanical, biological, and MR safety testing to ensure the device performs as expected and is safe.

    Given this, I cannot construct the table and detailed answers about clinical performance studies as the information is not present in the provided text.

    However, I can interpret the provided text in the context of what was done for this specific device.

    What is present in the document:

    • Acceptance Criteria & Performance (Implicit - Mechanical/Material): The acceptance criteria are implicitly met by demonstrating that the mechanical performance, biological safety, and MR safety of the Acute QC Strut and Components are "substantially equivalent" or do not present "greater risk" than the predicate devices. The document states:

      • "The subject Acute QC Struts were determined to not present a greater biological risk than the cleared worst-case representative SMART FX STRUT..."
      • "Performance testing was conducted on the subject, implantable devices in comparison against one or more of the previously cleared predicate devices... A review of the mechanical data in the submission indicates that the Acute QC Strut and Components are substantially equivalent to the previously cleared predicate devices."
    • Study That Proves the Device Meets Acceptance Criteria: This was demonstrated primarily through non-clinical performance testing (bench testing) and comparison to predicate devices, rather than through a clinical study of diagnostic accuracy.

      The performance testing reviewed included:

      • Fully Reversed Compressive Fatigue Loading of the Construct
      • Continuous Compressive Static Loading of the Construct
      • Static Compressive Bending Load on the Ball Joint Assembly
      • Compressive Fatigue Loading of the Ball Joint Assembly
      • Torsional Strength Testing of the Subject Half Pin Washer
      • Biological risk assessment (ISO 10993-1: 2018)
      • MR safety technical memo comparing to predicate devices.

    Based on the provided text, the requested information (related to clinical performance of an AI/ML device) is explicitly not present.

    The document states: "Clinical data was not needed to support the safety and effectiveness of the subject devices." This confirms that no clinical studies as you've described (e.g., MRMC, standalone AI performance, expert ground truth adjudication) were conducted or deemed necessary for this 510(k) submission.

    Therefore, I cannot fill out the requested table or answer the specific questions about clinical performance studies accurately from the provided text.


    K Number: K180539
    Date Cleared: 2018-08-13 (166 days)
    Product Code: OSN
    Regulation Number: 888.3030
    Why did this record match? Product Code: OSN

    Intended Use

    The Deformity Analysis and Correction Software (DACS) and Instrumentation are intended to be used as components of the Smith & Nephew Taylor Spatial Frame external fixation system that is indicated for the following: post-traumatic joint contracture which has resulted in loss of range of motion; fractures and disease which generally may result in joint contractures or loss of range of motion and fractures requiring distraction; open and closed fracture fixation; pseudoarthrosis of long bones; limb lengthening by epiphyseal distraction; correction of bony or soft tissue deformities; correction of bony or soft tissue defects; joint arthrodesis; infected fractures or nonunions.

    Device Description

    The Deformity Analysis and Correction Software (DACS) and Instrumentation is an optional software component and is used to assist the physician in calculating the lengths of the struts connecting the rings to manipulate the bone fragments. The software receives inputs from the physician and allows the physician to visualize the moving bone position. The program computes the strut lengths necessary to implement any desired translation and/or rotation required by the surgeon. The instrumentation includes Radiopaque Fiducial Markers which are attached to the Smith & Nephew Taylor Spatial Frame external fixator.
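    Geometrically, this computation is a Stewart-platform inverse-kinematics problem: six struts connect a fixed ring to a moving ring, and once the desired correction transform is known, each strut length is simply the distance between its two mounting points. As an illustrative sketch only — the document does not disclose the actual DACS algorithm, and the ring radius, mounting angles, and function names below are assumptions — the core calculation might look like:

```python
# Hypothetical sketch of hexapod strut-length computation for a
# Taylor-Spatial-Frame-style fixator (not Smith & Nephew's actual code).
import numpy as np

def strut_lengths(base_pts, ring_pts, rotation, translation):
    """base_pts, ring_pts: (6, 3) strut mounting points on the fixed ring
    and on the moving ring (the latter in the moving ring's own frame).
    rotation (3, 3) and translation (3,) give the desired pose of the
    moving ring relative to the fixed ring."""
    moved = ring_pts @ rotation.T + translation      # pose applied to moving ring
    return np.linalg.norm(moved - base_pts, axis=1)  # one length per strut

# Assumed geometry: mounting points every 60 degrees on 80 mm-radius rings.
ang = np.deg2rad(np.arange(6) * 60.0)
base = np.stack([80 * np.cos(ang), 80 * np.sin(ang), np.zeros(6)], axis=1)
ring = base.copy()

# Neutral pose: moving ring 150 mm above the fixed ring, no rotation,
# so every strut length equals the ring separation.
L = strut_lengths(base, ring, np.eye(3), np.array([0.0, 0.0, 150.0]))
```

    Any rotation or translation of the moving ring changes the six lengths independently; the software's job is to report those lengths to the surgeon so the desired correction can be dialed in at the struts.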

    AI/ML Overview

    The provided text is a 510(k) summary for the Deformity Analysis and Correction Software (DACS) and Instrumentation. It details the device, its intended use, and a comparison to predicate devices, but it does not contain a specific section outlining detailed acceptance criteria and a study that explicitly proves the device meets those criteria in a quantitative manner.

    Instead, the document focuses on demonstrating substantial equivalence to predicate devices through qualitative comparisons and general statements about performance and accuracy testing.

    However, based on the information provided, we can infer some aspects and construct a table and description as requested, noting where specific details are absent.

    Here's an analysis of the document to answer your questions:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present a table of "acceptance criteria" with quantitative performance metrics. It generally states that "Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations." It also states that testing "demonstrated that the Deformity Analysis and Correction Software (DACS) and Instrumentation is capable of successfully correcting the variety of deformities it may encounter in the clinical setting."

    Without explicit pass/fail criteria or quantitative results such as mean error, standard deviation, or accuracy ranges, it's impossible to create a strict "acceptance criteria" table from this document. However, based on the description of the testing, the implied acceptance criteria were that the software would "produce correct results" and "successfully correct deformities."

    Here's a table based on the implied performance and accuracy from the document:

    • Acceptance Criterion (Inferred): Ability to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations.
      Reported Device Performance: "Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results under different variations of bone deformities, anatomical orientations, and device combinations."
    • Acceptance Criterion (Inferred): Capability of successfully correcting the variety of deformities encountered in clinical settings.
      Reported Device Performance: "Testing with these image pairs demonstrated that the Deformity Analysis and Correction Software (DACS) and Instrumentation is capable of successfully correcting the variety of deformities it may encounter in the clinical setting."
    • Acceptance Criterion (Inferred): Functionality and safety in comparison to predicate devices.
      Reported Device Performance: "From the evidence submitted in this 510(k), the Deformity Analysis and Correction Software (DACS) and Instrumentation demonstrates that the device is as safe, as effective, and performs as well as or better than the legally marketed device predicates." The submission also "confirmed that any differences between the subject device and predicate software do not render the device NSE as there is not a new intended use; and any differences in technological characteristics do not raise different questions of safety and effectiveness than the predicate device."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: The document mentions that testing was "executed against a variety of CAD-generated image sets and a Smith & Nephew Taylor Spatial Frame x-ray image set." The exact number (sample size) of these image sets is not specified.
    • Data Provenance:
      • Source: "CAD-generated image sets" (simulated data) and "a Smith & Nephew Taylor Spatial Frame x-ray image set" (likely real patient data, but source country is not specified).
      • Retrospective or Prospective: The use of "CAD-generated image sets" implies simulated, non-patient-specific, or laboratory data. The "Smith & Nephew Taylor Spatial Frame x-ray image set" could be retrospective, either from pre-existing clinical cases or data specifically acquired for testing, but the document does not clarify.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not specify the number or qualifications of experts used to establish ground truth for the test set. Given the use of "CAD-generated image sets" where "known inputs" were available, the ground truth for these would be inherent in the CAD model parameters rather than established by human experts. For the "Smith & Nephew Taylor Spatial Frame x-ray image set," it's unclear how ground truth was established or if experts were involved.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe any adjudication method for the test set. The validation approach appears to be a direct comparison of software-calculated results against "known inputs" for simulated data and (presumably) against accepted clinical measurements or calculations for real image data, rather than an expert consensus process.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted or described in this document. The DACS is described as software that "assists the physician in calculating the lengths of the struts" and allows "the physician to visualize the moving bone position." It computes strut lengths based on physician inputs. There is no mention of an AI component that would assist human readers in interpretation or diagnosis, nor any study comparing human performance with and without such assistance. The software is a calculation and visualization tool, not an AI-based diagnostic aid that would typically be evaluated in an MRMC study.

    6. If standalone performance testing (i.e. algorithm only, without human-in-the-loop) was done

    The performance testing mentioned ("Performance and accuracy testing were performed to test the ability of the Deformity Analysis and Correction Software (DACS) and Instrumentation to produce correct results...") appears to be a form of standalone testing where the software's output was compared to known or expected values. The document states: "The known inputs for each image (device types and strut settings) was compared to the results calculated by the Deformity Analysis and Correction Software (DACS) and Instrumentation." This suggests testing the algorithm's direct output against a reference.
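    That validation approach can be pictured as a small standalone test harness: each synthetic case carries the known parameters it was generated from, and the software's output is checked against them within a tolerance. The sketch below is hypothetical — the document does not describe the actual test protocol, and the 0.1 mm tolerance and helper names are invented for illustration:

```python
# Hypothetical standalone-verification harness (not the actual DACS protocol):
# compare algorithm output against the known parameters each synthetic
# (CAD-generated) test case was built from.

def verify(cases, compute, tol_mm=0.1):
    """cases: (inputs, expected_strut_settings) pairs, where 'expected'
    holds the known values the synthetic image was generated from.
    compute: the algorithm under test.  Returns indices of failing cases."""
    failures = []
    for i, (inputs, expected) in enumerate(cases):
        result = compute(inputs)
        if any(abs(r - e) > tol_mm for r, e in zip(result, expected)):
            failures.append(i)
    return failures

# Toy example: an identity "algorithm" reproduces its inputs exactly,
# so only the case with a 0.2 mm discrepancy in its expected values fails.
cases = [([150.0, 151.0], [150.0, 151.0]),
         ([160.0, 158.0], [160.0, 158.2])]
print(verify(cases, lambda x: x))  # → [1]
```

    Because the reference values are built into the test cases rather than judged by readers, this style of check needs no expert adjudication, which is consistent with the absence of any adjudication method in the document.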

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth used appears to be:

    • "Known inputs" from CAD-generated images: For simulated data, the ground truth is the inherent parameters or "true" values defined within the CAD models. This is a form of engineered or definitional ground truth.
    • For the "Smith & Nephew Taylor Spatial Frame x-ray image set," the ground truth is presumably established clinical measurements or calculations associated with those images, although this is not explicitly detailed. It does not mention expert consensus, pathology, or outcomes data specifically.

    8. The sample size for the training set

    The document does not mention a training set or its sample size. This is a 510(k) summary for software that appears to be deterministic (performing calculations based on input parameters) rather than a machine learning/AI model that would typically require a training set. The phrase "Deformity Analysis and Correction Software (DACS)" itself suggests a rule-based or algorithmic system, not necessarily one that learns from data.

    9. How the ground truth for the training set was established

    As no training set is mentioned (see point 8), there is no information provided on how ground truth for a training set was established.


    K Number: K140550
    Date Cleared: 2014-08-25 (174 days)
    Product Code: OSN
    Regulation Number: 888.3030
    Why did this record match? Product Code: OSN

    Intended Use

    The OrthoHub External Fixator Software is used with Smith & Nephew Taylor Spatial Frame (TSF) rings and struts for the treatment of traumatic or reconstructive tibia deformities. It is used to generate a prescription of strut adjustments to provide to the patient.

    The OrthoHub External Fixator Software is used to assist the clinician in adjusting the Smith & Nephew Taylor Spatial Frame (TSF) External Fixator by creating a patient adjustment schedule. The OrthoHub software receives x-ray images of the deformity and installed fixator hardware and produces a prescription recommending adjustments to the fixator that define a correction path for the deformity. The software is used as an accessory to the commercially available hardware as detailed in the Instructions for Use, specifically, Smith and Nephew's Taylor Spatial Frame rings and struts.

    Device Description

    The OrthoHub External Fixator Software is a software program used on a Macintosh Computer. The software is used to generate a prescription which details adjustments required for the treatment of traumatic or reconstructive tibia deformities when using the Smith and Nephew Taylor Spatial Frame External Fixator hardware. Users input orthogonal x-ray images of a patient's deformity taken after installation of the fixator hardware on the patient. The software creates a colored graphical representation of the bones and orthopedic fixator hardware shown in the x-rays, and the user adjusts this graphical representation so that it best matches the underlying x-rays. The user then defines a correction rate, and the software generates a prescription of strut adjustments to correct the deformity. This prescription is provided to the patient for strut adjustment during the prescription period.
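    The prescription step described above amounts to interpolating each strut from its current length to its target length over the user-chosen correction period. The sketch below is a hypothetical illustration only — the document does not disclose OrthoHub's actual method, and the linear daily schedule and 0.25 mm adjustment increment are assumptions:

```python
# Hypothetical prescription generator (not OrthoHub's actual algorithm):
# divide each strut's total change evenly across the correction period,
# rounding every setting to an assumed smallest hardware increment.

def prescription(current_mm, target_mm, days, click_mm=0.25):
    """Return a list of per-day strut settings (mm), one entry per day."""
    schedule = []
    for day in range(1, days + 1):
        frac = day / days  # fraction of the correction completed by this day
        settings = [round((c + (t - c) * frac) / click_mm) * click_mm
                    for c, t in zip(current_mm, target_mm)]
        schedule.append(settings)
    return schedule

# Example: lengthen strut 1 from 150 mm to 162 mm over 12 days (1 mm/day),
# leaving the other five struts unchanged.
plan = prescription([150.0] * 6,
                    [162.0, 150.0, 150.0, 150.0, 150.0, 150.0],
                    days=12)
```

    Rounding to the smallest click keeps each day's setting physically achievable, and the final day lands exactly on the target length (162 mm here).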

    AI/ML Overview

    The provided document, an FDA 510(k) summary for the OrthoHub External Fixator Software, describes the device's intended use and a general verification and validation process. However, it does not contain specific acceptance criteria, detailed study designs, or performance metrics in the way requested by the prompt for an AI/CAD-like system (e.g., sensitivity, specificity, FROC analysis).

    Based on the available text, here's an attempt to answer the questions, highlighting where information is missing:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Acceptance Criterion: Software performs as intended — the software should accurately generate prescriptions for strut adjustments based on input x-rays and user-defined parameters for treating traumatic or reconstructive tibia deformities using Smith & Nephew Taylor Spatial Frame (TSF) rings and struts.
      Reported Device Performance: "Results of the testing confirmed that the software performs as intended." "The software produced recommended adjustments as appropriate for the inputs (patient x-rays) and user entered information (fixator hardware, time for prescription/treatment)."
    • Acceptance Criterion: Substantial equivalence — the software's function and output should be comparable to the predicate devices (Smith & Nephew Spatialframe V4.1 Web-based Software) in providing recommended adjustments for external hardware.
      Reported Device Performance: "Results of performance testing through the bench testing and software verification and validation process demonstrate that the OrthoHub External Fixator Software functions as intended and is substantially equivalent to the predicate devices." "The OrthoHub External Fixator Software has the same intended use and indications and utilizes the same technology as the predicates."

    Missing Information: Specific quantitative performance metrics (e.g., accuracy of measurement, deviation from gold standard, success rate of deformity correction) are not provided in this document. The "acceptance criteria" identified are general statements of functionality and equivalence rather than precise, measurable targets.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified. The document mentions "inputs (patient x-rays)" but does not quantify how many cases or patients were used for testing.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). It implicitly uses "patient x-rays" which suggests clinical data, but details are absent.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. The document refers to "software verification & validation testing" and "mechanical bench top side by side comparison testing," but does not detail how ground truth or performance assessment was adjudicated.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Study: No, an MRMC comparative effectiveness study is not indicated as having been done. The document describes "bench testing and software verification and validation process" and comparison to predicate devices, but not a study involving human readers with and without AI assistance to measure improvement.

    6. If Standalone Performance Testing (i.e. Algorithm Only, Without Human-in-the-Loop) was done

    • Standalone Performance: Yes, the described "Software Verification & Validation Testing" and "Mechanical Bench Top Side by Side Comparison Testing" evaluate the software's ability to 'produce recommended adjustments as appropriate for the inputs' and 'function as intended'. This implies a standalone evaluation of the algorithm's output, without explicitly detailing a human-in-the-loop component for the testing itself, though the device's ultimate use is to assist the clinician.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Not explicitly stated as "ground truth." However, the implicit ground truth for assessing the software's accuracy would likely be:
      • Validated anatomical measurements/deformity definitions: For the software to correctly "match the underlying x-rays" and "generate a prescription."
      • Engineering/biomechanical principles: Against which the "recommended adjustments" are compared to determine if they are "appropriate."
      • Predicate device outputs: The software's output is compared to the outputs of established predicate devices to demonstrate substantial equivalence.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable/not specified. The document describes the software as performing "mathematical calculations" and creating a "graphical representation" based on user input and x-ray images. It does not suggest the use of machine learning or deep learning that would involve a "training set" in the conventional AI/CAD sense. The software appears to be rule-based or calculation-based, interpreting user inputs (x-rays, fixator hardware, correction rate) rather than learning from a dataset.

    9. How the Ground Truth for the Training Set was Established

    • Ground Truth for Training Set: Not applicable, as there is no indication of a training set for a machine learning model.

    Summary of Missing Information:

    This 510(k) summary provides a high-level overview focused on establishing substantial equivalence to predicate devices, rather than a detailed performance study typically associated with AI/CAD systems. Key details regarding specific quantitative acceptance criteria, test set sizes, ground truth establishment methodologies (expert qualifications, adjudication), and the use of training data (as would be relevant for machine learning) are not present in this document. The assessment appears to be based on functional verification and validation and a qualitative comparison to existing technology.

