K Number
K102251
Device Name
DASH KNEE
Manufacturer
Date Cleared
2011-05-17

(281 days)

Product Code
Regulation Number
882.4560
Panel
OR
Reference & Predicate Devices
Intended Use

DASH knee is intended to be an intraoperative image guided localization system to enable minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system to virtual computer image space on an individual 3D-model of the patient's bone, which is generated through acquiring multiple landmarks on the bone surface. The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, a long bone, or vertebra, can be identified relative to the anatomy. The system aids the surgeon to accurately navigate a knee prosthesis to the intraoperatively planned position. Ligament balancing and measurements of bone alignment are provided by DASH knee.

Example orthopedic surgical procedures include but are not limited to:

  • Total Knee Replacement
  • Ligament Balancing
Device Description

DASH is an image guided surgery system for total knee replacement surgery based on landmark-based visualization of the femur and tibia. It is intended to enable operative navigation in orthopedic surgery. It links a surgical instrument, tracked by passive markers, to virtual computer image space on an individual 3D model of the patient's bone, which is generated by acquiring multiple landmarks on the bone surface. DASH knee uses the registered landmarks to navigate the femoral and tibial cutting guides to the optimal position.

DASH knee software registers the patient data needed for navigating the surgery intraoperatively. No preoperative CT-scanning is necessary.
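The 510(k) does not disclose the registration algorithm itself. As a hypothetical illustration only, mapping acquired bone-surface landmarks (probe space) onto the corresponding points of a patient model is commonly posed as paired-point rigid registration, solvable in closed form with the Kabsch/SVD method. All names below are illustrative, not from the document:

```python
import numpy as np

def register_landmarks(model_pts, probe_pts):
    """Fit a rigid transform (rotation R, translation t) mapping probe-space
    landmark positions onto paired model landmarks via the Kabsch/SVD method.
    Both inputs are (N, 3) arrays of corresponding points, N >= 3."""
    model_pts = np.asarray(model_pts, dtype=float)
    probe_pts = np.asarray(probe_pts, dtype=float)
    mc, pc = model_pts.mean(axis=0), probe_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (probe_pts - pc).T @ (model_pts - mc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mc - R @ pc
    return R, t  # maps a probe-space point x to R @ x + t in model space
```

With noise-free, non-degenerate landmarks this recovers the exact transform; in practice the residual mismatch after the fit is what a registration-accuracy check would report.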

AI/ML Overview

The provided text describes the K102251 510(k) Summary for the BrainLAB DASH knee system. This document outlines the device, its intended use, comparisons to predicate devices, and verification/validation activities. However, it does not contain specific acceptance criteria, reported device performance metrics in a quantitative manner, or detailed study results that would typically be presented in a table format with numerical values.

Therefore, I cannot directly extract and fill the table for acceptance criteria and reported device performance, nor can I provide answers for all the requested study design details as the information is not present in the given text.

Here's what can be inferred or stated based on the provided text, along with the limitations:

1. Table of Acceptance Criteria and Reported Device Performance

  • Acceptance Criteria: The document implies that the system must function correctly according to its specifications and ensure safety. It mentions "correct system functionality as it has been specified" and "correct behavior of the system for all possible procedures," but no specific numerical targets or thresholds are given.
  • Reported Device Performance: The document states that "All tests have been successfully completed" and that the device was found "substantially equivalent" to predicate devices. This indicates that the device met its internal performance benchmarks, but these benchmarks or the numerical results are not disclosed.
| Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
|---|---|
| Correct system functionality during registration/navigation | All tests successfully completed; system functions as specified. |
| Correct software algorithm behavior | All tests successfully completed; software functions as specified. |
| Correct behavior for all possible workflows | All tests successfully completed; system functions as specified for all procedures. |
| Safety measures against defined risks are effective | Safety measures tested and found effective. |
| Substantial equivalence to predicate devices | Device found substantially equivalent to K073615 and K014256. |

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

  • Sample Size for Test Set: Not specified. The document mentions "usability workshops (use labs)" with "plastic bones (sawbones)" and "cadaver lab" testing, but no specific number of instruments, tests, or cadavers is provided.
  • Data Provenance: Not explicitly stated. The testing appears to be primarily in a lab setting ("usability workshops," "cadaver lab") and therefore would be considered prospective in nature, generating new simulated or ex vivo data. There is no mention of patient data or country of origin for the data.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • Number of Experts & Qualifications: Not specified. The document notes that during cadaver testing the "testing persons went through same procedure like for the non clinical use lab sessions," implying that individuals simulated surgical procedures. However, their number, qualifications, and role in establishing a formal "ground truth" (beyond following specified procedures) are not detailed.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Adjudication Method: Not specified. The document describes a series of verification and validation steps, including comparing "registration values... to external measured reference values" and testing for "correct behavior." This implies a comparison to a predefined standard or ideal outcome, but no formal adjudication process for discrepancies is mentioned.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?

  • No MRMC comparative effectiveness study involving human readers and AI assistance is described. The DASH knee is an image-guided surgery system, not an AI diagnostic tool that assists human readers in interpreting images. Its purpose is to guide surgical instruments, not to enhance human interpretation of images.

6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

  • Yes, a form of standalone testing was performed for the software algorithms. The document states: "After the verification of the instruments in combination with the software the verification of the software algorithms itself has been performed." This implies that the core algorithms were tested independently of full human interaction, likely against defined computational outputs or expected results.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • The ground truth appears to be based on external measured reference values for registration accuracy and predefined specifications/expected behavior for software functionality and workflow correctness. In the cadaver and sawbone labs, the ground truth would likely be the anatomically correct positions or optimal surgical trajectories as determined by established surgical principles and perhaps simulated by experienced users. It is not expert consensus from reviewing patient data, nor is it pathology or long-term outcomes data as these are pre-market non-clinical validations.
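The document states only that registration values were compared to "external measured reference values," without naming the metric. A minimal sketch of one plausible check, assuming the comparison is a root-mean-square (RMS) distance between navigated and reference landmark positions (the metric name and any tolerance are my assumptions, not from the 510(k)):

```python
import numpy as np

def registration_rms_error(measured_pts, reference_pts):
    """RMS Euclidean distance (e.g., in mm) between navigated landmark
    positions and externally measured reference positions.
    Both inputs are (N, 3) arrays of paired points."""
    d = np.asarray(measured_pts, dtype=float) - np.asarray(reference_pts, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```

A verification run would compare this value against a predefined specification limit; the actual limit used for DASH knee is not disclosed in the summary.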

8. The sample size for the training set

  • Not applicable as this is not an AI/machine learning device in the contemporary sense that requires a "training set" for model development. The system uses "landmark based visualization" and "acquiring multiple landmarks on the bone surface" to generate a 3D model. Its functionality is based on established algorithms in image guidance and tracking, not on learning from a large dataset.

9. How the ground truth for the training set was established

  • Not applicable, for the same reason as point 8.

§ 882.4560 Stereotaxic instrument.

(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).