K Number
K231917
Device Name
VEA Align
Manufacturer
Date Cleared
2024-01-05

(190 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

This cloud-based software is intended for orthopedic applications in both pediatric and adult populations. 2D X-ray images acquired on EOS imaging's imaging systems are the foundation used to display the interactive landmarks overlaid on the frontal and lateral images. These landmarks are available for users to assess patient-specific global alignment. For additional assessment, alignment parameters compared to published normative values may be available. This product serves as a tool to aid in the analysis of spinal deformities, degenerative diseases, and lower limb alignment disorders and deformities through precise angle and length measurements. It is suitable for use with adult and pediatric patients aged 7 years and older. Clinical judgment and experience are required to properly use the software.
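The intended use notes that alignment parameters may be compared to published normative values. As an illustration only, a minimal sketch of such a check is shown below; the parameter names, units, and normative ranges are hypothetical and are not taken from the 510(k) summary.

```python
# Hypothetical sketch: flagging computed alignment parameters that fall outside
# published normative ranges. The parameter names and ranges below are
# placeholders, not values from the 510(k) summary.

NORMATIVE_RANGES = {                      # (low, high) -- hypothetical
    "pelvic_tilt_deg": (5.0, 25.0),
    "sagittal_vertical_axis_mm": (-20.0, 40.0),
}

def flag_out_of_range(parameters: dict) -> dict:
    """Return True for each parameter lying outside its normative range."""
    flags = {}
    for name, value in parameters.items():
        low, high = NORMATIVE_RANGES[name]
        flags[name] = not (low <= value <= high)
    return flags

print(flag_out_of_range({"pelvic_tilt_deg": 31.2,
                         "sagittal_vertical_axis_mm": 12.0}))
# -> {'pelvic_tilt_deg': True, 'sagittal_vertical_axis_mm': False}
```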

Device Description

VEA Align is software indicated for assisting healthcare professionals with global alignment assessment through the computation of clinical parameters. The product uses biplanar 2D X-ray images, generated exclusively by EOS imaging's EOS (K152788) and EOSedge (K202394) systems, and generates an initial placement of the patient's anatomic landmarks on the images using a machine learning-based algorithm. The user may adjust the landmarks to align with the patient's anatomy, and landmark locations require user validation. The clinical parameters communicated to the user are inferred from the landmarks and are recalculated as the user adjusts the landmarks. The product is hosted on a cloud infrastructure and relies on VEA Portal for support capabilities, such as user access control and data access. 2D X-ray image transmissions from healthcare institutions to the cloud are managed by VEA Portal, which is a Class I 510(k)-exempt device (LMD).
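The description implies a simple loop: the machine learning model proposes landmark coordinates on the biplanar images, the user may move them, and the clinical parameters are recomputed from the current landmark positions. The sketch below illustrates that recompute step under assumed 2D landmark coordinates and an angle-type parameter; the landmark names and the parameter definition are hypothetical, since the summary does not specify them.

```python
import math

# Minimal sketch of recomputing an angle-type clinical parameter from 2D
# landmarks. The landmark names and the parameter definition are hypothetical;
# the 510(k) summary does not disclose them.

def angle_between(p1, p2, q1, q2) -> float:
    """Angle in degrees between the line p1->p2 and the line q1->q2."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (q2[0] - q1[0], q2[1] - q1[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

# Initial placement proposed by the ML model (hypothetical pixel coordinates).
landmarks = {"femoral_head": (120.0, 410.0), "knee_center": (132.0, 780.0),
             "ankle_center": (138.0, 1105.0)}

def recompute(lm):
    # e.g. a femoro-tibial-style angle from three landmarks (illustrative only)
    return angle_between(lm["femoral_head"], lm["knee_center"],
                         lm["knee_center"], lm["ankle_center"])

print(round(recompute(landmarks), 1))          # parameter from initial placement
landmarks["knee_center"] = (128.0, 778.0)      # user adjusts a landmark
print(round(recompute(landmarks), 1))          # parameter is recalculated
```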

AI/ML Overview

The provided text describes the VEA Align device and its performance testing to support its substantial equivalence to a predicate device. However, it does not contain a detailed table of acceptance criteria with reported device performance metrics that would typically be found in a comprehensive study report. It states that "Direct comparison between skeletal landmark locations between the subject VEA Align device and predicate sterEOS Workstation (K172346) met acceptance criteria for algorithm performance," but it does not quantify these criteria or the specific performance results.

Therefore, some of the requested information cannot be directly extracted from the provided text. I will provide what is available and note what is missing.

Here's the breakdown of the information:

1. Table of Acceptance Criteria and Reported Device Performance

The document states: "Direct comparison between skeletal landmark locations between the subject VEA Align device and predicate sterEOS Workstation (K172346) met acceptance criteria for algorithm performance." However, the specific quantitative acceptance criteria (e.g., maximum allowable error for landmark placement) and the actual numerical performance results (e.g., mean absolute error) are not provided in this text.

| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Quantified acceptance criteria for the landmark location comparison are not specified. | Met acceptance criteria for algorithm performance in the direct comparison of skeletal landmark locations with the predicate device. Specific metrics (e.g., mean error, standard deviation) are not provided. |
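Although the summary does not state the metric or threshold used, a direct comparison of landmark locations against a predicate device's output typically reduces to a point-to-point distance measure checked against a predefined limit. The sketch below is a hypothetical illustration of that kind of check; the metric (mean Euclidean distance) and the threshold are assumptions, not disclosed values.

```python
import math

# Hypothetical sketch of a landmark-location comparison against a predicate
# device's output. The metric (mean Euclidean distance) and the threshold
# are illustrative; the actual acceptance criteria are not disclosed.

def mean_landmark_distance(subject: dict, predicate: dict) -> float:
    """Mean Euclidean distance (in coordinate units) over shared landmarks."""
    dists = [math.dist(subject[name], predicate[name]) for name in subject]
    return sum(dists) / len(dists)

subject   = {"L1_center": (101.0, 240.0), "S1_endplate": (118.0, 530.0)}
predicate = {"L1_center": (102.5, 241.0), "S1_endplate": (117.0, 529.0)}

THRESHOLD_MM = 3.0  # hypothetical acceptance threshold
print(mean_landmark_distance(subject, predicate) <= THRESHOLD_MM)  # -> True
```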

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 555 patients.
  • Data Provenance: The images were acquired from "EOS (K152788) and EOSedge (K202394) systems." The country of origin and whether the data was retrospective or prospective are not explicitly stated.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

This information is not provided in the text. The document refers to the predicate device's workflow, in which the user manually deforms a 3D model through control points to match X-ray contours, which implies expert interaction, but it does not describe how ground truth was established for the 555-patient test set for the VEA Align device.

4. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set

This information is not provided in the text.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs. without AI assistance.

An MRMC comparative effectiveness study involving human readers with and without AI assistance is not mentioned in the provided text. The performance testing focuses on the standalone algorithm's comparison to the predicate device.

6. If a Standalone (i.e., Algorithm Only, Without Human-in-the-Loop) Performance Assessment Was Done:

Yes, a standalone performance assessment was done. The text states:
"Standalone performance assessment of the machine learning algorithm. The testing dataset consisted of 555 patients... Direct comparison between skeletal landmark locations between the subject VEA Align device and predicate sterEOS Workstation (K172346) met acceptance criteria for algorithm performance."

7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

The ground truth for the standalone performance assessment appears to be based on the "skeletal landmark locations" derived from the predicate sterEOS Workstation (K172346). This implies that the predicate's output, which involved manual deformation by users ("The 3D model is deformed manually by the user through control points up to matching accurately the X-ray contours. This deformation is performed by using the common linear least squares estimation algorithm."), served as the reference for the VEA Align's automated landmark placement. It is not explicitly stated that an independent expert consensus or pathology was used directly for the 555-patient test set for the standalone evaluation of VEA Align, but rather conformance to the predicate's output.
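For context, the "linear least squares estimation algorithm" cited for the predicate's manual model deformation is a standard fitting technique. The sketch below shows one generic way such a fit could look, mapping model contour points toward user-indicated image contour points with a least-squares affine transform; the parameterization and the point sets are hypothetical, as the summary does not describe the predicate's actual model.

```python
import numpy as np

# Hypothetical sketch of the kind of linear least squares estimation mentioned
# for the predicate's model deformation: fitting a 2D affine transform that
# maps model contour points onto user-indicated X-ray contour points.
# The actual model parameterization is not described in the summary.

model_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 20.0], [0.0, 20.0]])
image_pts = np.array([[1.2, 0.5], [11.0, 1.0], [10.5, 21.3], [0.4, 20.8]])

# Design matrix for x' = a*x + b*y + tx and y' = c*x + d*y + ty.
A = np.hstack([model_pts, np.ones((len(model_pts), 1))])
params, *_ = np.linalg.lstsq(A, image_pts, rcond=None)  # shape (3, 2)

fitted = A @ params
print(np.round(fitted, 2))   # model points mapped toward the image contours
```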

8. The Sample Size for the Training Set

The sample size for the training set is not explicitly stated in the provided text. It mentions that the machine learning algorithm was "trained from data generated by EOS Imaging's imaging systems", but it doesn't quantify the size of this training dataset.

9. How the Ground Truth for the Training Set Was Established

The text states that the machine learning algorithm learns to generate "an initial placement of the patient anatomic landmarks on the images" and that "The user may adjust the landmarks to align with the patient's anatomy." For the predicate device, it mentions "identification of anatomical landmarks" or "a model of bone structures derived from an a priori image data set from 175 patients (91 normal patients, 47 patients with moderate idiopathic scoliosis and 37 patients with severe idiopathic scoliosis), and dry isolated vertebrae data for spine modeling."

While this implies that human interaction and potentially pre-existing models established the ground truth used for training, the specific methodology, and who established the ground truth labels for the VEA Align training set, are not detailed. The statement that the machine learning algorithm was "trained from data generated by EOS Imaging's imaging systems" suggests leveraging existing data from those systems and prior approaches (potentially similar to the predicate's).

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).