The UNiD Spine Analyzer is intended for assisting healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows surgeons and service providers to perform generic as well as spine related measurements on images, and to plan surgical procedures. The device also includes tools for measuring anatomical components for placement of surgical implants. Clinical judgment and experience are required to properly use the software.
The purpose of this submission is to update the UNiD Spine Analyzer with a new software feature: "Data base of implants". This component will allow a user to draw implants (cages, screws, and rods) selected from a range of MEDICREA INTERNATIONAL implants previously cleared in K08009, K083810, and K163595, as well as to design custom-made implants specific to a unique patient. A catalog of these implants is provided in this submission.
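The summary does not describe the software's internal design, so any data model is conjecture. As a purely illustrative sketch (the class, field, and SKU names below are hypothetical, not taken from the submission or the catalog), an entry in such an implant database might carry an implant type, nominal dimensions, and a traceability reference to the clearance under which the implant line was marketed:

```python
from dataclasses import dataclass
from enum import Enum


class ImplantType(Enum):
    CAGE = "cage"
    SCREW = "screw"
    ROD = "rod"


@dataclass(frozen=True)
class CatalogImplant:
    """One entry in a hypothetical implant database."""
    sku: str                  # invented catalog reference
    implant_type: ImplantType
    length_mm: float          # nominal length from the catalog
    diameter_mm: float        # nominal diameter from the catalog
    clearance_ref: str        # 510(k) under which the implant line was cleared


# Illustrative values only; not taken from the MEDICREA catalog.
pedicle_screw = CatalogImplant(
    sku="PS-6545",
    implant_type=ImplantType.SCREW,
    length_mm=45.0,
    diameter_mm=6.5,
    clearance_ref="K163595",
)
```

A real planning application would carry far more geometry (3D models, curvature parameters for patient-specific rods), but a traceability field of this kind is what ties a catalog entry back to a cleared product line.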
The provided text is a 510(k) summary for the UNiD Spine Analyzer. It states that the submission is to add a new software feature, "Data base of implants," to an already cleared device (UNiD Spine Analyzer, K170172). Therefore, the acceptance criteria and performance data described in this document relate to the new feature and its integration, rather than a full study of the entire device's performance from scratch.
However, the 510(k) summary does not contain specific acceptance criteria tables or detailed performance study results (like sensitivity, specificity, AUC, or other quantitative measures typically found in standalone AI/ML device studies). It primarily focuses on demonstrating substantial equivalence by comparing features and outlining the type of testing performed.
Based on the information provided, here's what can be extracted and what is NOT available:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as quantitative metrics (e.g., "accuracy > X%"). The document implies acceptance based on successful "verification and validation activities" for the new software feature. For a medical device, this typically means:
- The software correctly performs the functions it's designed for (e.g., implants are drawn accurately, catalog is accessible).
- The new feature doesn't introduce new safety or effectiveness issues.
- The software meets industry standards for medical device software development (e.g., IEC 62304).
- Reported Device Performance: No quantitative performance metrics (such as accuracy or precision) are provided for the new "Data base of implants" feature. The document states only that "Performance data for the modified UNiD Spine Analyzer consisted of verification and validation activities" and that "The addition of the database of implants creates additional tools which were also tested, and documentation was provided" (a sketch of what such functional verification could look like follows this list).
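Since the document reports only that verification and validation activities occurred, without protocols or results, here is a minimal sketch of the kind of unit test such activities commonly include. The pixel spacing, coordinates, and tolerance are assumptions chosen for illustration, not values from the submission:

```python
import math

# Hypothetical image calibration: millimeters per pixel.
PIXEL_SPACING_MM = 0.2


def drawn_length_mm(p1, p2, pixel_spacing_mm=PIXEL_SPACING_MM):
    """Length in mm of an implant drawn between two pixel coordinates."""
    return math.dist(p1, p2) * pixel_spacing_mm


def test_drawn_screw_matches_catalog_length():
    # A 45 mm screw on a 0.2 mm/pixel image should span 225 pixels.
    endpoints = ((100.0, 100.0), (100.0, 325.0))
    assert math.isclose(drawn_length_mm(*endpoints), 45.0, abs_tol=0.1)
```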
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified. The document only mentions "verification and validation activities" for the software feature itself, not a clinical data set for performance evaluation.
- Data Provenance: Not specified. Since this is about adding a database of implants and related drawing tools, it's less about analyzing patient image data for diagnosis and more about the software's functional correctness.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of experts & Qualifications: Not applicable/not specified. The "ground truth" for this specific submission likely relates to the accuracy of implant representation and placement tools, which would be verified against design specifications, engineering standards, and potentially input from orthopedic surgeons during development, rather than a clinical ground truth established by diagnosing cases.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not applicable/not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- MRMC Study: No, an MRMC study was not described. The device, UNiD Spine Analyzer, assists healthcare professionals in viewing, measuring, and planning orthopedic surgeries. The specific update in this submission is the addition of an implant database. This generally falls under medical image management/measurement software (PACS-like functionality) rather than an AI/ML diagnostic or prognostic tool that would typically undergo MRMC studies. The software is explicitly stated to require "Human Intervention for interpretation and manipulation of images."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Study: Not explicitly described in terms of clinical performance. The "verification and validation activities" confirm the software's functionality, but these are not presented as a standalone clinical performance study typically seen for AI algorithms making diagnostic interpretations. The device is a tool for human use, not an autonomous diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: Not specified in the provided text. For a feature involving an implant database and drawing tools, ground truth would likely be based on design specifications, physical accuracy of implant models, and functional correctness according to orthopedic surgical planning principles, rather than clinical outcomes or pathology.
8. The sample size for the training set
- Training Set Sample Size: Not applicable. This document does not describe the development of a machine learning algorithm that learns from a training set of data. It describes the addition of a database and associated software tools.
9. How the ground truth for the training set was established
- Training Set Ground Truth Establishment: Not applicable.
Summary of what the document focuses on regarding device acceptance:
The document leverages the concept of "substantial equivalence" to a previously cleared version of the same device (K170172). The acceptance criteria for the new feature (database of implants) are implicit in the statement that "The addition of this new component (i.e., data base of cleared implants) to the UNiD Spine Analyzer does not raise new issues of safety or effectiveness compared to the previously cleared version of the UNiD Spine Analyzer." This implies that the testing (verification and validation) confirmed:
- The implant database functions as intended (one testable form this could take is sketched after this list).
- The drawing tools work correctly.
- The new feature does not adversely affect the safety or performance of the existing cleared functionalities of the UNiD Spine Analyzer.
- The software development followed appropriate guidelines for medical device software ("Guidance for Industry and FDA Staff, 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'").
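Purely as an illustration of what "functions as intended" could mean in testable terms (the check, SKUs, and data layout below are assumptions, not the manufacturer's actual test suite), a validation pass might confirm that every catalog entry traces back to one of the 510(k) clearances cited in the submission:

```python
# Hypothetical traceability check over a minimal catalog representation.
# The 510(k) numbers are those cited in the submission; the SKUs are invented.
CLEARED_REFS = {"K08009", "K083810", "K163595"}


def untraceable_entries(catalog):
    """Return SKUs whose clearance reference is not among the cited 510(k)s."""
    return [e["sku"] for e in catalog if e["clearance_ref"] not in CLEARED_REFS]


catalog = [
    {"sku": "PS-6545", "clearance_ref": "K163595"},  # traceable
    {"sku": "RD-0001", "clearance_ref": "K999999"},  # deliberately untraceable
]
assert untraceable_entries(catalog) == ["RD-0001"]
```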
Essentially, for this 510(k) (which is an update to an existing device), the "proof" for acceptance is the demonstration that the change does not negatively impact safety or effectiveness, and the new feature itself is functionally sound, rather than a de novo clinical performance study against specific acceptance criteria.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).