K Number: K243810
Date Cleared: 2025-06-04 (175 days)
Regulation Number: 892.2050
Panel: RA
Intended Use

TraumaCad Neo is indicated for assisting healthcare professionals to analyze orthopedic conditions and to plan orthopedic procedures by overlaying on relevant radiological images visual information such as measurements and prosthesis templates. Clinical judgment and experience are required to properly use the software. The software is not intended for primary radiological image interpretation or radiological appraisal. The device is not intended for use on mobile phones.

Device Description

TraumaCad Neo 1.1 allows surgeons to evaluate digital images and perform pre-operative surgical planning on screen. The software enables surgeons to plan operations, execute measurements, and maintain a film-less orthopedic practice. TraumaCad Neo 1.1 also allows post-operative review and assessment of X-ray images obtained after the surgical procedure, with a feature for automatic surgery outcome analysis of postoperative total hip arthroplasty images. The program features an extensive, regularly updated library of digital templates from leading manufacturers. TraumaCad Neo supports DICOM and communicates with Quentry®, Brainlab's proprietary web-based cloud service, as well as with other healthcare data platforms such as PACS solutions. Through these platforms, medical staff can upload images and plan expected results prior to the procedure, creating a smooth surgical workflow from start to finish.
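As an illustration of that DICOM-based workflow, the sketch below is a minimal example, not Brainlab's implementation; the pydicom package and the file name are assumptions. It loads an X-ray and reads the pixel spacing that converts on-screen measurements to millimeters:

    # Minimal sketch, assuming pydicom is installed; the file path is hypothetical.
    import pydicom

    ds = pydicom.dcmread("pelvis_xray.dcm")
    # Projection radiographs may carry ImagerPixelSpacing instead of PixelSpacing;
    # both hold (row spacing, column spacing) in mm per pixel.
    spacing = getattr(ds, "PixelSpacing", None) or ds.ImagerPixelSpacing
    row_mm, col_mm = (float(v) for v in spacing)
    print(f"{ds.Columns} x {ds.Rows} px at {col_mm} x {row_mm} mm/px")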

AI/ML Overview

Here is a breakdown of the acceptance criteria and the study used to demonstrate that the device meets them, based on the provided FDA 510(k) clearance letter for TraumaCad Neo (1.1):

1. Table of Acceptance Criteria and Reported Device Performance

Device: TraumaCad Neo (1.1)

| Feature Tested | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Implant presence detection | Not explicitly stated as a percentage, but implied by the overall landmark detection criteria; should be highly accurate. | The machine learning algorithm correctly determined implant presence 99% of the time. |
| 2D landmark detection accuracy | At least 80% of the analyzed femur and implant stem shafts within 4 mm of their ground-truth landmark annotations. | 92% of landmarks were detected automatically within 4 mm of their corresponding ground-truth annotations; the criterion was met. |
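To make the 4 mm criterion concrete, the following is a minimal sketch of how such a success rate could be computed. It assumes NumPy and landmark coordinates already converted to millimeters via the image's pixel spacing; the actual evaluation code is not published in the summary.

    import numpy as np

    def landmark_success_rate(pred_mm, gt_mm, tol_mm=4.0):
        # Fraction of landmarks whose Euclidean distance to the
        # ground-truth annotation is at most tol_mm.
        dists = np.linalg.norm(pred_mm - gt_mm, axis=-1)
        return float((dists <= tol_mm).mean())

    # Hypothetical (n_landmarks, 2) arrays of (x, y) positions in mm.
    pred = np.array([[101.2, 54.0], [88.9, 61.5]])
    gt = np.array([[100.0, 55.0], [85.0, 60.0]])
    print(landmark_success_rate(pred, gt))  # acceptance: rate >= 0.80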

2. Sample Size Used for the Test Set and Data Provenance

  • Total Data Pool: 349 original X-ray images.
  • Implant Detection Evaluation: All 349 images from 186 patients were used.
  • Landmark Detection Evaluation: 184 images from 184 patients were used. Each of these images was augmented 3 times, leading to a sample size of over 1000 images for landmark detection testing.
  • Data Provenance:
    • Images from multiple clinical sites.
    • Wide variety of scanner models, implants, and patient characteristics.
    • Includes images from seven unique X-ray device manufacturers with 11 unique X-ray device models.
    • An independent test dataset (comprising approximately 28% of the test images) from an independent clinical site and X-ray manufacturer was allocated to test generalizability to unseen data.
    • All images are standing pelvic X-rays.
    • Pixel spacing is between 0.1 and 0.2 mm in both the x and y axes (see the conversion sketch after this list).
    • Approximately 57% of the test set images are from females, while 43% are from male patients.
    • All patients are adults (≥18 years old), predominantly between the ages of 50 and 80 (68% of the entire test set).
    • Balanced distribution of implant laterality.
    • Contains Cup and Stem Implants from multiple manufacturers in a range of sizes.
  • Retrospective or Prospective: Not explicitly stated, but the description of separate training and testing data and the use of "original X-ray images" (as opposed to images acquired specifically for the study) suggests this was a retrospective study.
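As a purely illustrative aside (Python arithmetic, not from the summary), the reported pixel spacing relates the 4 mm landmark tolerance to image resolution as follows:

    # With 0.1-0.2 mm per pixel, a 4 mm tolerance spans roughly 20-40 pixels.
    for spacing_mm in (0.1, 0.2):
        print(f"{spacing_mm} mm/px -> {4.0 / spacing_mm:.0f} px tolerance")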

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

  • Number of Experts: Not explicitly stated as a specific number. The document mentions "ground-truth annotations done by qualified and trained personnel."
  • Qualifications of Experts: The document states "qualified and trained personnel," but does not specify their medical or technical qualifications (e.g., radiologist with X years of experience).

4. Adjudication Method for the Test Set

  • The document mentions "ground-truth annotations done by qualified and trained personnel." It does not specify an adjudication method such as 2+1 or 3+1 (where multiple experts independently annotate and discrepancies are resolved by a third or majority vote). Therefore, the adjudication method is not explicitly described. It could be that a single qualified expert provided the ground truth, or a consensus method was used but not detailed.
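For reference, a 2+1 scheme for a single landmark could look like the sketch below. This is purely illustrative and assumes NumPy and coordinates in millimeters; the submission does not describe any such procedure.

    import numpy as np

    def adjudicate_2plus1(a1, a2, a3, agree_mm=2.0):
        # Average the two primary annotations if they agree within
        # agree_mm; otherwise defer to the third (adjudicating) expert.
        a1, a2, a3 = (np.asarray(a, dtype=float) for a in (a1, a2, a3))
        if np.linalg.norm(a1 - a2) <= agree_mm:
            return (a1 + a2) / 2.0
        return a3

    print(adjudicate_2plus1([10.0, 5.0], [10.5, 5.2], [11.0, 5.0]))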

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

  • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted or reported in this 510(k) summary. The evaluation focuses on the standalone performance of the AI/ML algorithm.
  • Effect Size: Since no MRMC study was performed, there is no reported effect size on how much human readers improve with AI vs. without AI assistance. The device is indicated for "assisting healthcare professionals," implying a human-in-the-loop context, but the performance evaluation is solely on the automated components.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

  • Yes, a standalone performance study was clearly conducted. The performance evaluation section explicitly describes the training, testing, and validation of "AI/ML models" for "automatic postoperative total hip arthroplasty analysis" by identifying implants and detecting landmarks. The acceptance criteria and reported results are solely for the algorithm's performance against ground truth.
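For the implant-presence component, standalone accuracy reduces to a simple agreement rate with the expert labels. A minimal sketch with hypothetical inputs follows; the summary reports 99% on the actual test set.

    def presence_accuracy(pred, truth):
        # Fraction of images where the boolean prediction matches the
        # expert ground-truth label.
        assert len(pred) == len(truth)
        return sum(p == t for p, t in zip(pred, truth)) / len(truth)

    print(presence_accuracy([True, True, False, True],
                            [True, True, False, False]))  # 0.75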

7. The Type of Ground Truth Used

  • The ground truth used was expert annotations. The document states, "Accuracy of implant presence and 2D landmark detection have been tested against ground-truth annotations done by qualified and trained personnel."

8. The Sample Size for the Training Set

  • The document explicitly states that the "training data was totally separate from the performance testing data." However, the sample size for the training set is not provided. It only mentions that "The AI/ML models were trained with supervision on X-ray image data from multiple clinical sites."

9. How the Ground Truth for the Training Set Was Established

  • The document implies a similar process for establishing ground truth for the training set as for the test set, stating that "The AI/ML models were trained with supervision on X-ray image data..." This suggests that human expert annotations were also used to establish the ground truth for the training set, just as they were for the test set. However, the specific details (number of experts, qualifications, adjudication) are not provided for the training set ground truth.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).