The UNiD™ Spine Analyzer is intended for assisting healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows surgeons and service providers to perform generic as well as spine-related measurements on images, and to plan surgical procedures. The device also includes tools for measuring anatomical components for the placement of surgical implants. Clinical judgment and experience are required to properly use the software.
The MEDICREA UNiD Spine Analyzer was developed to perform preoperative and postoperative patient image measurements and to simulate preoperative planning steps for spine surgery. This web-based Software as a Medical Device (SaMD) application is designed to simulate a surgical strategy, make measurements on a patient image, draw patient-specific rods or choose from a pre-selection of standard implants, and order the patient-specific rods. The UNiD Spine Analyzer allows the user to:
- Measure radiological images using generic tools and "specialty" tools (one such measurement is sketched after this list)
- Plan and simulate aspects of surgical procedures
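For illustration, below is a minimal sketch of one common "specialty" spine measurement, the Cobb angle between two annotated vertebral endplates. The function names, inputs, and coordinates are hypothetical and do not reflect the UNiD Spine Analyzer's actual internals.

```python
# Hypothetical sketch of a "specialty" spine measurement: the Cobb angle
# between two vertebral endplates, each annotated as a pair of (x, y)
# points on the radiograph. Names and coordinates are illustrative only.
import math

Point = tuple[float, float]

def endplate_angle(p1: Point, p2: Point) -> float:
    """Angle (degrees) of the line through an annotated endplate."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def cobb_angle(upper: tuple[Point, Point], lower: tuple[Point, Point]) -> float:
    """Cobb angle: absolute difference between the two endplate lines,
    folded into the 0-90 degree range conventionally reported."""
    diff = abs(endplate_angle(*upper) - endplate_angle(*lower)) % 180.0
    return min(diff, 180.0 - diff)

# Example: a mild coronal curve annotated in pixel coordinates.
print(cobb_angle(((10, 20), (60, 28)), ((12, 90), (58, 74))))  # ~28.3 degrees
```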
The purpose of this submission is to request clearance for the UNiD Spine Analyzer v4.0. The changes introduced are as follows:
- Addition of the Degenerative Predictive Model, which corresponds to a type of adult spinal fusion degenerative construct, trained with a retrospective longitudinal patient dataset.
- Update to the existing Adult Predictive Model, consisting of three predictive model modules trained with retrospective longitudinal patient datasets: one included in Adult Deformity Model 1 (TKA-12) and two included in Adult Deformity Model 2 (PTA-12 and PTA-34).
- Update to the existing Pediatric Predictive Model, consisting of two predictive model modules trained with retrospective longitudinal patient datasets (PediaLL and PediaPT).
- Addition of the display of a Predicted Value, derived from a static machine-learning-based model, when the user views simulated quantitative radiographic parameters of a planned surgery; the value is generated when the Degenerative, Adult, or Pediatric Predictive Models are used (a schematic sketch of such a locked model follows this list).
- Addition of implant templates drawn from a preselected database of Medtronic standard implants cleared in the following 510(k)s: K073291, K083026, K091813, K110543, K113528, K120368, K150135, K152277, K172199, K172328, and K201267.
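The submission describes the model only as "static", i.e., locked at release rather than learning in the field. As a hedged illustration of how a locked model could map planned radiographic parameters to a displayed Predicted Value, the sketch below uses a linear module with fixed coefficients; every parameter name and coefficient value is an assumption, not taken from the submission.

```python
# Illustrative sketch of a "static" (locked) predictive module: coefficients
# are fixed at release and are not updated from field data. All parameter
# names and coefficient values below are hypothetical assumptions, not the
# device's actual model.
from dataclasses import dataclass

@dataclass(frozen=True)
class LockedLinearModule:
    intercept: float
    weights: dict[str, float]  # radiographic parameter name -> fixed weight

    def predict(self, planned: dict[str, float]) -> float:
        """Predicted postoperative value from preop and planned parameters."""
        return self.intercept + sum(
            w * planned[name] for name, w in self.weights.items()
        )

# Hypothetical module predicting postoperative pelvic tilt (PT).
pt_module = LockedLinearModule(
    intercept=5.0,
    weights={"planned_LL": -0.20, "preop_PT": 0.60, "PI": 0.15},
)

plan = {"planned_LL": 55.0, "preop_PT": 28.0, "PI": 52.0}
# Displayed alongside the simulated value of the same parameter.
print(f"Predicted PT: {pt_module.predict(plan):.1f} deg")  # Predicted PT: 18.6 deg
```

A locked model of this kind is deterministic, which is consistent with the submission's comparison to the "display of reference and normative data": the same inputs always yield the same displayed value.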
The provided text describes the UNiD™ Spine Analyzer, a medical image management and processing system, and its submission for FDA 510(k) clearance. Here's information extracted regarding acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics in a quantitative manner for the "Predictive Models" (Degenerative Predictive Model, Adult Predictive Model, Pediatric Predictive Model). Instead, it states that these additions are "similar to the display of reference and normative data, and does not raise new questions of safety and effectiveness when considered with existing methods of managing spinal compensation."
For the software as a whole, the acceptance criteria are described indirectly through the validation activities:
| Acceptance Criteria Category | Reported Device Performance (as stated in the document) |
| --- | --- |
| Software Functionality | "The software was tested against the established Software Design Specifications for each of the test plans to assure the device performs as intended." |
| Risk Management | "The device Hazard analysis was completed per ISO 14971, Application of Risk Management to Medical Devices, and IEC 62304, Medical Device Software – Software Life-Cycle Processes, and risk control implemented to mitigate identified hazards." |
| Overall Software Performance | "The testing results support that all the software specifications have met the acceptance criteria of each module and interaction of processes." and "The MEDICREA UNiD Spine Analyzer device passed all testing and supports the claims of substantial equivalence and safe operation." |
| Usability (Human Factors) | "Validation activities included a usability study of the UNiD Spine Analyzer under actual use." The study demonstrated: comprehension of the health care professional with the UNiD Spine Analyzer; appropriate human factors related to the UNiD Spine Analyzer; and ease of use of the UNiD Spine Analyzer. |
2. Sample size used for the test set and the data provenance:
- Predictive Models: The predictive models (Degenerative, Adult, and Pediatric) were "trained with retrospective longitudinal patient datasets." No specific sample size for these datasets or their provenance (country of origin) is provided.
- Software Validation/Verification: The document does not specify a separate "test set" sample size for the software validation activities beyond stating that "the software was tested against the established Software Design Specifications."
- Usability Study: No specific sample size (number of users) is mentioned for the usability study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided in the document. The document refers to the predictive models being "trained with retrospective longitudinal patient datasets" but does not detail how the ground truth for these training sets or any potential test sets was established, nor does it mention the involvement or qualifications of experts in this process for external validation. The clinical judgment of healthcare professionals is explicitly stated as required for proper software use, but not for ground truth establishment.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
A multi-reader, multi-case (MRMC) comparative effectiveness study was not conducted or reported for this submission. The document explicitly states: "There was no human clinical testing required to support the medical device as the indications for use is identical to the predicate device." The new features (predictive models) are presented as an "additional tool" similar to "display of reference and normative data" and are not claimed to improve human reader performance with a measurable effect size.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The document mentions "Addition of the display of a Predicted Value derived from a static machine-learning based model" when the user views simulated quantitative radiographic parameters. This implies a standalone algorithmic prediction output. However, there are no specific performance metrics or a standalone study reported for the algorithm itself (e.g., accuracy of predictions against ground truth without human intervention).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
For the predictive models, the ground truth for the training data was derived from "retrospective longitudinal patient datasets." The specific nature of this ground truth (e.g., direct surgical measurements, post-operative imaging, clinical outcomes) is not explicitly stated, beyond it being used to train models for "predicted spinal compensation."
8. The sample size for the training set:
The document states that the predictive models were "trained with retrospective longitudinal patient datasets" but does not specify the sample size for these training sets.
9. How the ground truth for the training set was established:
The document states the predictive models were trained using "retrospective longitudinal patient datasets." However, it does not detail the specific methodology for how the ground truth within these datasets was established (e.g., whether it was based on expert review of images, surgical records, or patient outcomes).
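Although the document does not describe the dataset, a retrospective longitudinal dataset of this kind is typically organized so that the measurement observed on follow-up imaging serves as the training label. The sketch below is purely illustrative of that structure; all field names are assumptions.

```python
# Purely illustrative: one way a retrospective longitudinal record could be
# structured so that the measurement observed on follow-up imaging serves
# as the training label. The document does not specify the actual schema;
# every field name here is an assumption.
from dataclasses import dataclass

@dataclass
class LongitudinalRecord:
    patient_id: str
    preop_params: dict[str, float]    # measured on preoperative imaging
    plan_params: dict[str, float]     # surgeon's simulated correction
    postop_params: dict[str, float]   # measured on follow-up imaging (label)

record = LongitudinalRecord(
    patient_id="anon-0001",
    preop_params={"PT": 28.0, "LL": 42.0, "PI": 52.0},
    plan_params={"planned_LL": 55.0},
    postop_params={"PT": 19.0},  # observed outcome the model learns to predict
)

# One training pair: features from preop measurements plus the plan,
# label from the postoperative follow-up measurement.
features = {**record.preop_params, **record.plan_params}
label = record.postop_params["PT"]
```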
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).