The UNiD Spine Analyzer is intended to assist healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows surgeons or service providers to perform generic as well as spine-related measurements on images, and to plan surgical procedures. The device also includes tools for measuring anatomical components for placement of surgical implants. Clinical judgement and experience are required to properly use the software.
UNiD Spine Analyzer is a software solution developed for the medical community. It is intended to be used to view images, perform spine-related measurements, and plan surgical procedures. Surgical planning can be done either by MEDICREA as part of its service of designing patient-specific implants (surgeons must validate the planning submitted by MEDICREA before any implants are manufactured) or by the surgeon directly. Supported image formats include the standard formats (JPEG, PNG, GIF). Measurements (generic, measuring, and surgical tools) can be overlaid on each image. UNiD Spine Analyzer offers the ability to plan certain surgical procedures, such as osteotomies of the spine, and to template implants (screws, cages, and rods). Patient-specific rods can be ordered for manufacture by MEDICREA. UNiD Spine Analyzer is a web-based software application.
Here's a breakdown of the acceptance criteria and the study that demonstrates the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text doesn't explicitly state formal "acceptance criteria" with upper or lower bounds. Instead, it presents performance metrics of the subject device (UNiD Spine Analyzer) and compares them to its predicate (Surgimap 2.0), aiming to demonstrate substantial equivalence. The implication is that if the UNiD Spine Analyzer's performance is sufficiently comparable to the predicate, it meets the unstated acceptance criteria for safety and effectiveness.
| Performance Metric | Acceptance Criteria (Implicit: Comparable to Predicate) | UNiD Spine Analyzer Reported Performance |
|---|---|---|
| Distance Measurement Accuracy | Comparable to Surgimap | Mean error: 0.23 mm; standard deviation: 0.42 mm |
| Angle Measurement Accuracy | Comparable to Surgimap | Mean error: 0.2°; standard deviation: 0.4° |
| Surgical Wedge Tool Accuracy | Comparable to Surgimap | Mean error: 0.25°; standard deviation: 0.44° |
| Surgical Cage Tool Accuracy | Comparable to Surgimap | Mean error: 0.4°; standard deviation: 0.5° |
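Purely as an illustration (the filing does not state how the mean error and standard deviation were computed, e.g., whether errors were signed or absolute), the sketch below shows one way such summary statistics could be derived from paired tool readings and known synthetic values. The function name and sample numbers are hypothetical.

```python
import statistics

def summarize_errors(measured, truth):
    """Mean and standard deviation of absolute errors between tool
    readings and known synthetic values (an assumed convention)."""
    errors = [abs(m - t) for m, t in zip(measured, truth)]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical distance readings (mm) scored against known synthetic lengths.
mean_err, sd = summarize_errors([10.2, 25.1, 49.8], [10.0, 25.0, 50.0])
print(f"mean error: {mean_err:.2f} mm, SD: {sd:.2f} mm")
```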
2. Sample Size Used for the Test Set and Data Provenance
The text states:
- "For basic measurement testing (angles and distances), random lines and angles have been drawn and measured by two different tools..."
- "For surgical tools (wedge and cage), sets of images were created with the wedge(s) or cage(s) to apply."
This indicates that the test set consisted of artificially generated lines, angles, and images with applied surgical tools, rather than a clinical dataset of patient images. Therefore:
- Sample Size: Not explicitly stated as a number of "cases" or "patients." The text says only that "several configurations and values were tested" for basic measurements and that "sets of images were created" for surgical tools; the exact number of configurations/images is not provided.
- Data Provenance: The data was synthetically generated or created for the purpose of testing, not derived from real-world patient data. There is no country of origin or retrospective/prospective designation, as this is not a clinical dataset (see the sketch after this list for how such synthetic data can carry its own ground truth).
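The filing does not describe how these configurations were produced. Purely as an illustration, the sketch below shows one way synthetic test segments could be generated so that the true length is known by construction; the pixel spacing, value ranges, and function name are hypothetical.

```python
import math
import random

# Hypothetical calibration: pixels per millimetre is chosen up front,
# so every generated segment has an exactly known true length.
PX_PER_MM = 10.0

def make_test_segment():
    """Return a random line segment (pixel coordinates) together with
    its true length in mm, known by construction."""
    true_len_mm = random.uniform(5.0, 200.0)   # illustrative range
    x1, y1 = random.uniform(0, 500), random.uniform(0, 500)
    theta = random.uniform(0, 2 * math.pi)
    x2 = x1 + true_len_mm * PX_PER_MM * math.cos(theta)
    y2 = y1 + true_len_mm * PX_PER_MM * math.sin(theta)
    return (x1, y1, x2, y2), true_len_mm

segment, truth_mm = make_test_segment()
# A measurement tool's reading of this segment can be scored against
# truth_mm without any expert adjudication.
```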
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
Based on the description, the "ground truth" for the test set was not established by human experts in the traditional sense of clinical assessment. Instead, for basic measurements, it appears the "true" values for the drawn lines and angles were known due to their synthetic generation. For surgical tools, the "true" placement or effect of the wedge/cage was inherently known from their application to the created images.
- No human experts were used to establish the "ground truth" for the test set; the ground truth was inherent in the synthetic generation of the test data.
4. Adjudication Method for the Test Set
Since the ground truth was inherent in the synthetic generation of the test data and not based on expert interpretation, no adjudication method was used. The comparison was between the measurements/applications of the UNiD Spine Analyzer and the predicate device (Surgimap), and the known synthetic values.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly stated or described. The study focused on the performance of the algorithm itself (standalone) and its comparison to the predicate software, not on how human readers perform with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The testing evaluated the measurements and tool applications performed by the UNiD Spine Analyzer (and Surgimap) on the test data, without human interpretation as part of the performance assessment. The human intervention the software requires is for "interpretation and manipulation of images", which is a general function of such software rather than part of the performance evaluation method described.
7. The Type of Ground Truth Used
The ground truth used was synthetic/known values. For basic measurements (angles and distances), the values were known because "random lines and angles have been drawn". For surgical tools (wedge and cage), the effect was known because "sets of images were created with the wedge(s) or cage(s) to apply." This is not expert consensus, pathology, or outcomes data.
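For instance, an angle drawn at a chosen value carries its own ground truth before any tool measures it. A minimal sketch follows; the helper below is hypothetical and is not the device's algorithm.

```python
import math

def angle_between(p0, p1, p2):
    """Angle at vertex p1 (degrees) formed by the rays to p0 and p2 --
    the kind of read-back a planning tool might report."""
    a = math.atan2(p0[1] - p1[1], p0[0] - p1[0])
    b = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    deg = abs(math.degrees(a - b)) % 360
    return min(deg, 360 - deg)

# An angle constructed at exactly 30 degrees: the ground truth is known
# by construction, so any tool reading can be scored directly against it.
vertex = (0.0, 0.0)
p0 = (100.0, 0.0)
p2 = (100.0 * math.cos(math.radians(30)), 100.0 * math.sin(math.radians(30)))
print(angle_between(p0, vertex, p2))  # ~30.0
```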
8. The Sample Size for the Training Set
The provided text does not mention a training set or its sample size. The description of the performance data focuses solely on verification and validation activities and testing against the predicate device using artificially generated data. This suggests that if the device uses machine learning, the details of its training were not disclosed in this section.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, no information is provided on how its ground truth was established.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).