K Number
K251629
Date Cleared
2025-08-07 (71 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The UNiD™ Spine Analyzer is intended for assisting healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows surgeons and service providers to perform generic as well as spine-related measurements on images, and to plan surgical procedures. The device also includes tools for measuring anatomical components for placement of surgical implants. Clinical judgment and experience are required to properly use the software.

Device Description

The UNiD™ Spine Analyzer is a web-based application developed to perform preoperative and postoperative patient image measurements and to simulate preoperative planning steps for spine surgery. It is used to take measurements on a patient image, simulate a surgical strategy, and draw patient-specific rods or choose from a pre-selection of standard implants. The UNiD™ Spine Analyzer allows the user to:

  1. Measure radiological images using generic tools and "specialty" tools
  2. Plan and simulate aspects of surgical procedures
  3. Estimate the compensatory effects of the simulated surgical procedure on the patient's spine

Surgical planning is performed by Medtronic as part of its pre-operative planning service. The surgical plan may then be used to assist in designing patient-specific implants. Surgeons must validate the surgical plan before Medtronic manufactures any implant.

The UNiD™ Spine Analyzer interface is accessible in either standalone mode or connected mode.

AI/ML Overview

The following is a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) clearance letter for the UNiD™ Spine Analyzer.

Overview of Device and Study Focus:

The UNiD™ Spine Analyzer is a web-based application designed to assist healthcare professionals in viewing and measuring images and in planning orthopedic spine surgeries. This 510(k) submission focuses primarily on an update to the AI-enabled Degenerative Predictive model, and the study aims to demonstrate that the new version is non-inferior to the version in the predicate device.


1. Table of Acceptance Criteria and Reported Device Performance

The core of the performance evaluation for this AI-enabled software function is focused on demonstrating non-inferiority of the updated "Degenerative Predictive model" to the predicate version.

| Acceptance Criteria | Reported Device Performance | Comments |
| --- | --- | --- |
| AI-enabled Device Software Functions (AI-DSF) | — | This section specifically concerns the updated Degenerative Predictive model; the acceptance criterion is non-inferiority compared to the predicate device. |
| Non-inferiority of the subject device (Degenerative Predictive model) vs. the predicate device (previous Degenerative Predictive model), using one-tailed paired t-tests for non-inferiority | "The results from the degenerative predictive model performance testing met the defined acceptance criterion. The model showed non-inferiority compared to its predicate and is considered acceptable for use." | Confirms that the new AI model met the pre-defined non-inferiority threshold. The metric was based on the "MAEs (Mean Absolute Errors) obtained with the subject device and the ones obtained with the predicate device"; the exact MAE values and the non-inferiority margin are not specified in this document. The statistical parameters were an alpha of 0.025 and at least 90% power, implying that the MAE of the subject device was not significantly worse than that of the predicate device. |
| Software verification (adherence to design specifications) | Software verification was conducted on the UNiD™ Spine Analyzer in accordance with IEC 62304 through code review, unit testing, integration testing, and system-level integration testing. | A standard software development and quality assurance process; specific test pass rates or metrics are not provided in this summary. |
| Software validation (satisfaction of requirements and user needs) | Software validation was performed through user acceptance testing in accordance with IEC 82304-1. | A standard software quality assurance process, ensuring the software functions as intended for the user; user acceptance test outcomes are not provided in this summary. |
| Cybersecurity testing (integrity, confidentiality, availability) | Cybersecurity testing was conducted in accordance with ANSI/AAMI SW96 and IEC 81001-5-1, including security risk assessment, threat modeling, vulnerability assessment, and penetration testing. | Standard cybersecurity validation to ensure data and system security; specific findings or metrics are not provided. |
| Usability evaluation (software ergonomics, safety, and effectiveness) | Usability evaluation was conducted according to IEC 62366-1 to assess software ergonomics and ensure no significant risks. | Standard usability validation to ensure ease of use and minimize user-related errors; specific findings are not provided. |
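
The summary names the statistical framework (one-tailed paired t-tests on per-case error differences, an alpha of 0.025, and at least 90% power) but discloses neither the per-case errors nor the non-inferiority margin. As a minimal sketch, assuming simulated per-case absolute errors and a hypothetical margin, such a test could look like:

```python
# Hypothetical sketch of the one-tailed paired non-inferiority test described
# above. The per-case errors, the margin, and the units are illustrative
# assumptions; the 510(k) summary does not disclose them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-case absolute errors (e.g., degrees) for the 274 test cases.
abs_err_subject = rng.gamma(shape=2.0, scale=1.5, size=274)    # updated model
abs_err_predicate = rng.gamma(shape=2.0, scale=1.6, size=274)  # predicate model

MARGIN = 1.0   # assumed non-inferiority margin on the mean error difference
ALPHA = 0.025  # one-sided significance level stated in the summary

# Paired differences: positive values mean the subject device erred more.
diff = abs_err_subject - abs_err_predicate

# H0: mean(diff) >= MARGIN (subject inferior); H1: mean(diff) < MARGIN.
t_stat, p_value = stats.ttest_1samp(diff - MARGIN, popmean=0.0,
                                    alternative="less")

print(f"MAE subject:   {abs_err_subject.mean():.3f}")
print(f"MAE predicate: {abs_err_predicate.mean():.3f}")
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")
print("non-inferior" if p_value < ALPHA else "non-inferiority not shown")
```

The stated 90% power requirement would have driven the test-set sample size during study design; the sketch above only performs the hypothesis test itself.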

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 274 patient surgery cases.
  • Data Provenance:
    • Country of Origin: US only.
    • Retrospective/Prospective: The document states "Preoperative and post operative images from 1050 patient surgery cases were collected." This implies pre-existing data, making it a retrospective collection; a sketch of the resulting train/test partition follows this list.
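
Since the split sizes are reported (776 training cases and 274 test cases out of 1050 collected), a minimal sketch of how such a retrospective dataset might be partitioned, with hypothetical case identifiers, is:

```python
# Illustrative split of the 1,050 retrospectively collected cases into the
# 776-case training set and 274-case test set reported in the summary.
# The case IDs and the random seed are hypothetical placeholders.
from sklearn.model_selection import train_test_split

case_ids = [f"case_{i:04d}" for i in range(1050)]  # placeholder identifiers

train_ids, test_ids = train_test_split(
    case_ids,
    test_size=274,    # matches the reported test-set size
    random_state=42,  # fixed seed for a reproducible split
    shuffle=True,
)

assert len(train_ids) == 776 and len(test_ids) == 274
```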

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

  • Number of Experts: Not stated; the document refers instead to "highly trained Medtronic measurement technicians" without giving a count.
  • Qualifications of Experts: "Highly trained Medtronic measurement technicians, operating within a quality-controlled environment." Their professional background (e.g., radiologist, orthopedist) and years of experience are not provided. They were responsible for vetting image viability and performing measurements.

4. Adjudication Method for the Test Set

The document does not explicitly describe an adjudication method (like 2+1 or 3+1 for consensus). It states that "After the images were collected, they were then provided to and measured by highly trained Medtronic measurement technicians, operating within a quality-controlled environment." This suggests a single evaluation per case by these technicians, which then forms the basis for the ground truth. There's no mention of multiple technicians independently measuring and then adjudicating discrepancies.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

  • No. A formal MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not mentioned or described in this document. The study focused specifically on the AI model's performance (algorithm only) relative to its previous version, not on its impact on human reader performance.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

  • Yes, a standalone (algorithm-only) performance study was done. The "AI-enabled device software functions (AI-DSF)" section describes evaluating the new Degenerative Predictive model's output against the ground truth and comparing its performance (MAEs) directly to the predicate AI model; this evaluates the algorithm itself. A sketch of such a comparison follows below.
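
As a minimal sketch of such a standalone comparison, assuming hypothetical predicted and technician-measured angles (none of the actual values are disclosed in the summary):

```python
# Hypothetical standalone (algorithm-only) evaluation: compare each model's
# predictions against technician-measured ground truth via Mean Absolute
# Error. All array names and values below are illustrative assumptions.
import numpy as np

ground_truth = np.array([52.0, 38.5, 61.2, 45.7])    # technician measurements
pred_subject = np.array([51.2, 39.1, 60.0, 46.3])    # updated model output
pred_predicate = np.array([50.5, 40.2, 59.1, 47.5])  # predicate model output

mae_subject = np.mean(np.abs(pred_subject - ground_truth))
mae_predicate = np.mean(np.abs(pred_predicate - ground_truth))

print(f"subject MAE:   {mae_subject:.2f}")    # lower is better
print(f"predicate MAE: {mae_predicate:.2f}")
```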

7. The Type of Ground Truth Used

  • Derived from Measured Images by Technicians: "Ground truth was derived from the measured images." These measurements were performed by the "highly trained Medtronic measurement technicians." The reference standard is therefore trained-reader measurement on images, performed by technicians rather than clinicians; it is not pathology or outcomes data.

8. The Sample Size for the Training Set

  • Training Set Sample Size: 776 patient surgery cases.

9. How the Ground Truth for the Training Set Was Established

  • The document implies the ground truth for the training set was established in the same manner as the test set: through measurements performed by "highly trained Medtronic measurement technicians." The statement "Ground truth was derived from the measured images" applies to the overall data collection process before splitting into training and testing sets.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).