K Number
K221632
Device Name
Spine CAMP™
Date Cleared
2022-10-18

(134 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
KIMAX QMA®
Intended Use

Spine CAMP™ is a fully-automated software that analyzes X-ray images of the spine to produce reports that contain static and/or motion metrics. Spine CAMP™ can be used to obtain metrics from sagittal plane radiographs of the lumbar and/or cervical spine and it can be used to visualize intervertebral motion via an image registration method referred to as "stabilization". The radiographic metrics can be used to characterize and assess spinal health in accordance with established guidance. For example, common clinical uses include assessing spinal stability, alignment, degeneration, fusion, motion preservation, and implant performance. The metrics produced by Spine CAMP™ are intended to be used to support qualified and licensed professional healthcare practitioners in clinical decision-making for skeletally mature patients of age 18 and above.
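
The submission does not describe how the "stabilization" registration is implemented. Purely as a rough illustration of the general idea, the sketch below uses a standard Kabsch (Procrustes) rigid fit on hypothetical landmark coordinates to align one vertebra across a flexion/extension pair so that it appears stationary; none of the names or values come from Spine CAMP™ itself.

```python
# Minimal sketch of landmark-based rigid "stabilization": estimate the rotation and
# translation that map one vertebra's landmarks in the extension view onto the
# flexion view, then apply that transform so the chosen vertebra appears stationary.
# The landmark coordinates are hypothetical; the 510(k) summary does not disclose
# Spine CAMP's actual registration algorithm.
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Kabsch estimate of 2D rotation R and translation t with dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical corner landmarks (x, y in pixels) of the same vertebra in two views.
flexion = np.array([[102.0, 240.0], [158.0, 236.0], [104.0, 268.0], [160.0, 265.0]])
extension = np.array([[110.0, 251.0], [165.0, 240.0], [115.0, 279.0], [170.0, 268.0]])

R, t = rigid_transform(extension, flexion)
stabilized = extension @ R.T + t          # extension landmarks mapped into the flexion frame
print(np.round(stabilized - flexion, 2))  # residual misalignment after stabilization
```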

Device Description

Spine CAMP™ is a fully-automated image processing software device. It is designed to be used with X-ray images and is intended to aid medical professionals in the measurement and assessment of spinal parameters. Spine CAMP™ is capable of calculating distances, angles, linear displacements, angular displacements, and mathematical combinations of these metrics to characterize the morphology, alignment, and motion of the spine. These analysis results are presented in the form of reports, annotated images, and visualizations of intervertebral motion to support their interpretation.
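
The summary does not give the exact geometric definitions behind these metrics. As a hedged sketch only, the example below shows one conventional way an intervertebral angle and a disc-height distance could be derived from calibrated four-corner vertebral landmarks; the coordinates, names, and formulas are illustrative assumptions, not Spine CAMP™'s actual algorithm.

```python
# Illustrative only: deriving an intervertebral angle and an anterior disc height
# from four-corner vertebral landmarks. The coordinates and definitions are
# hypothetical, not the device's actual formulas.
import numpy as np

def endplate_angle_deg(anterior: np.ndarray, posterior: np.ndarray) -> float:
    """Orientation (degrees) of the line through two endplate landmarks."""
    dx, dy = posterior - anterior
    return float(np.degrees(np.arctan2(dy, dx)))

# Hypothetical landmarks (x, y in mm after image calibration) at one motion segment:
# inferior endplate of the upper vertebra and superior endplate of the lower vertebra.
upper_inferior = {"ant": np.array([31.0, 52.0]), "post": np.array([62.0, 47.0])}
lower_superior = {"ant": np.array([30.0, 60.0]), "post": np.array([61.0, 58.0])}

upper_deg = endplate_angle_deg(upper_inferior["ant"], upper_inferior["post"])
lower_deg = endplate_angle_deg(lower_superior["ant"], lower_superior["post"])
intervertebral_angle = upper_deg - lower_deg          # segmental lordosis/kyphosis
anterior_disc_height = float(np.linalg.norm(upper_inferior["ant"] - lower_superior["ant"]))

print(f"intervertebral angle: {intervertebral_angle:.1f} deg, "
      f"anterior disc height: {anterior_disc_height:.1f} mm")
```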

AI/ML Overview

The Spine CAMP™ device uses automated software to analyze X-ray images of the spine and produce reports containing static and/or motion metrics. It can be used to obtain metrics from sagittal plane radiographs of the lumbar and/or cervical spine and visualize intervertebral motion. The metrics characterize and assess spinal health, including stability, alignment, degeneration, fusion, motion preservation, and implant performance. These metrics support clinical decision-making for skeletally mature patients aged 18 and above.

Here's an analysis of the acceptance criteria and the study that demonstrates the device meets them:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state quantitative acceptance criteria in terms of thresholds (e.g., "accuracy > X%"). Instead, it focuses on demonstrating statistical correlation and equivalence with the predicate device, KIMAX QMA®. The performance is evaluated by comparing the outputs of Spine CAMP™ with those of the predicate device.

  • Acceptance criterion (implied): Functional equivalence; the device functions as intended.
    Reported performance: "The software functioned as intended and all results observed were as expected."
  • Acceptance criterion (implied): Correlation and statistical equivalence with the predicate device.
    Reported performance: "Statistical correlations and equivalence tests were performed by directly comparing vertebral landmark coordinates, image calibration, and radiographic metrics between Spine CAMP™ and the predicate device. This analysis demonstrated correlation and statistical equivalence for all variables evaluated." This indicates that Spine CAMP™'s automated measurements (vertebral landmark coordinates, image calibration, and radiographic metrics) are highly consistent with, and statistically indistinguishable from, those produced manually by experienced operators using the predicate device. The implicit acceptance criterion is that these correlations and equivalences meet appropriate statistical thresholds for clinical interchangeability.
  • Acceptance criterion (implied): No new or different questions of safety or effectiveness.
    Reported performance: "The minor differences between the subject and predicate devices (i.e., methods by which the inputs to the results calculator are produced) do not raise new or different questions regarding safety and effectiveness when used as labeled."
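
The submission does not report which statistical tests or equivalence margins were applied. As a minimal sketch of one conventional approach consistent with "correlation and statistical equivalence," the example below runs a Pearson correlation and a paired two one-sided test (TOST) on simulated stand-in data with an arbitrary margin; none of the numbers reflect the actual study.

```python
# Sketch of a correlation-plus-equivalence analysis on paired measurements
# (automated device vs. manual predicate). Data and the equivalence margin are
# simulated placeholders, not values from the submission.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
predicate = rng.normal(10.0, 3.0, size=200)              # manual measurements (stand-in)
automated = predicate + rng.normal(0.0, 0.4, size=200)   # automated measurements (stand-in)

r, _ = stats.pearsonr(automated, predicate)

margin = 1.0                                 # hypothetical equivalence margin (degrees)
diff = automated - predicate
t_low = stats.ttest_1samp(diff, -margin, alternative="greater")
t_high = stats.ttest_1samp(diff, margin, alternative="less")
p_tost = max(t_low.pvalue, t_high.pvalue)    # TOST: both one-sided tests must reject

print(f"Pearson r = {r:.3f}, TOST p = {p_tost:.4f} (equivalence claimed if p < 0.05)")
```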

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 215 lateral cervical spine radiographs and 232 lateral lumbar spine radiographs.
  • Data Provenance: The document does not explicitly state the country of origin. It indicates that the data was previously analyzed by experienced operators using the predicate device, suggesting it is retrospective data.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

  • Number of Experts: Five experienced operators.
  • Qualifications: The document states "experienced operators" without further specific qualifications (e.g., radiologist, years of experience). However, their role was to generate ground truth using the predicate device.

4. Adjudication Method for the Test Set

The document does not specify an adjudication method like 2+1 or 3+1. The ground truth for the test set was established by having "five experienced operators using the predicate device." It's implied that these outputs from the predicate device (manual analysis) were directly used as the ground truth. There's no mention of a consensus process among the five operators to define a single ground truth per case if their results differed.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No. A traditional MRMC comparative effectiveness study, in which human reader performance is measured with and without AI assistance, was not done. The study focused on demonstrating that the device itself performs comparably to the predicate device, which is operated manually by humans. The comparison is between Spine CAMP™ (AI-driven automated measurements) and the predicate device (human-driven manual measurements). An effect size for how much human readers improve with AI versus without AI assistance is therefore not applicable to the design of this particular study.

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

Yes, the primary study described is a standalone performance evaluation. Spine CAMP™ is "fully-automated software," and the study directly compared its automated outputs to the manually generated ground truth from the predicate device. While the device's output is intended to "support qualified and licensed professional healthcare practitioners in clinical decision-making," the performance assessment itself is of the algorithm's standalone capabilities.
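
For context, standalone agreement of this kind is often summarized with a bias and limits-of-agreement calculation alongside equivalence testing. The sketch below is a generic Bland-Altman summary on simulated stand-in data and is not drawn from the submission.

```python
# Generic Bland-Altman summary: mean bias and 95% limits of agreement between the
# algorithm's output and an operator-derived reference. Placeholder data only.
import numpy as np

def bland_altman(algorithm: np.ndarray, reference: np.ndarray):
    diff = algorithm - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(1)
reference = rng.normal(15.0, 4.0, size=215)            # stand-in for operator measurements
algorithm = reference + rng.normal(0.1, 0.5, size=215) # stand-in for automated measurements

bias, lower, upper = bland_altman(algorithm, reference)
print(f"bias = {bias:.2f}, 95% limits of agreement = [{lower:.2f}, {upper:.2f}]")
```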

7. The Type of Ground Truth Used

The ground truth used was expert assessment using a predicate device. Specifically, it was derived from "experienced operators using the predicate device" (KIMAX QMA®) who manually obtained measurements.

8. The Sample Size for the Training Set

The document does not specify the sample size for the training set. It mentions that "The data labels used to train Spine CAMP™'s AI models were derived directly from the KIMAX QMA® technology," but no numbers are provided for this training data.

9. How the Ground Truth for the Training Set Was Established

The ground truth for the training set was established by manual analysis using the predicate device, KIMAX QMA®: "The data labels used to train Spine CAMP™'s AI models were derived directly from the KIMAX QMA® technology." This implies that experienced users of the predicate device generated the ground-truth labels that were then used to train the AI models within Spine CAMP™.
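
The submission does not describe the training pipeline beyond this statement. Purely as a hypothetical illustration, the sketch below pairs image filenames with predicate-derived landmark coordinates from an exported table to form keypoint training labels; the table layout, column names, and landmark names are invented.

```python
# Hypothetical illustration of turning predicate-derived analyses into training
# labels for a landmark/keypoint model. The CSV layout and names are invented;
# the submission does not describe the actual training pipeline.
import csv
import io

# Stand-in for a landmark table exported from the predicate analysis software.
qma_export = io.StringIO(
    "image,landmark,x_px,y_px\n"
    "case001_flexion.png,L4_ant_inf,412.3,901.7\n"
    "case001_flexion.png,L4_post_inf,688.0,884.2\n"
    "case001_flexion.png,L5_ant_sup,405.9,955.1\n"
    "case001_flexion.png,L5_post_sup,681.4,941.8\n"
)

def load_landmark_labels(table) -> dict:
    """Return {image_filename: {landmark_name: (x, y)}} from an exported table."""
    labels: dict = {}
    for row in csv.DictReader(table):
        point = (float(row["x_px"]), float(row["y_px"]))
        labels.setdefault(row["image"], {})[row["landmark"]] = point
    return labels

labels = load_landmark_labels(qma_export)
print(labels["case001_flexion.png"])   # keypoint training targets for one image
```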

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).