The sterEOS Workstation is intended for use in the fields of musculoskeletal radiology and orthopedics in both pediatric and adult populations as a general PACS device for acceptance, transfer, display, storage, and digital processing of 2D x-ray images of the musculoskeletal system including interactive 2D measurement tools.
When using 2D X-ray images obtained with the Biospace EOS System (K071546), the sterEOS Workstation provides interactive 3D measurement tools to aid in the analysis of scoliosis and related disorders and deformities of the spine in adult patients as well as pediatric patients 16 years and older. The 3D measurement tools include interactive analysis based on a model of bone structures derived from an a priori image data set from 175 patients (91 normal patients, 47 patients with moderate idiopathic scoliosis, and 37 patients with severe idiopathic scoliosis), and from dry isolated vertebrae data. The model of bone structures is not intended for use in patients with a Cobb's angle > 50 degrees and is not intended for use to assess individual vertebral abnormalities.
The sterEOS Workstation is a general picture archiving and communications system (PACS) for acceptance, transfer, display, storage, and digital processing of 2D x-ray images of the musculoskeletal system, including interactive 2D measurement tools.
When used with 2D X-ray images obtained with the Biospace EOS System (K071546), the sterEOS Workstation provides interactive 3D measurement tools to aid in the analysis of scoliosis and related disorders and deformities of the spine.
The provided FDA 510(k) summary for Biospace med's sterEOS Workstation does not explicitly state acceptance criteria or provide a detailed study report with specific performance metrics against those criteria. Instead, it describes a "comparative study" to demonstrate accuracy and equivalent performance. Therefore, I will extract what is available and highlight what is missing.
Here's the information based on the provided document:
Acceptance Criteria and Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics. It generally states: "A comparative study was conducted in a clinical setting to demonstrate accuracy of clinical parameters calculated in the 3D space. The results of this study validate the 3D reconstruction software and demonstrate the equivalent performance of the device."
Without explicit acceptance criteria (e.g., "3D reconstruction accuracy must be within X mm of ground truth for Y% of cases"), it's impossible to create the requested table.
Study Details
- Sample size used for the test set and the data provenance:
  - The document mentions an "a priori image data set from 175 patients (91 normal patients, 47 patients with moderate idiopathic scoliosis and 37 patients with severe idiopathic scoliosis)," which was used to derive the model of bone structures for the 3D measurement tools. This dataset appears to be part of the training/development of the model, not explicitly a test set for evaluating the final device's performance.
  - The document does not specify the sample size of the "clinical setting" comparative study used to "demonstrate accuracy of clinical parameters calculated in the 3D space."
  - Data provenance: Not specified (e.g., country of origin, retrospective or prospective).
- Number of experts used to establish the ground truth for the test set, and their qualifications: Not specified.
- Adjudication method for the test set: Not specified.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:
  - An MRMC study is not explicitly mentioned. The "comparative study" focuses on the accuracy of device-calculated parameters rather than on human reader improvement with assistance.
  - Therefore, no effect size for human reader improvement is provided.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
  - The "comparative study" appears to assess the accuracy of the device's 3D measurement tools, which implies a standalone evaluation of the algorithm's output (measurements) against some form of ground truth. However, the details are sparse.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - The document implies that the "accuracy of clinical parameters calculated in the 3D space" was demonstrated. This suggests the ground truth was likely established by clinical experts (e.g., orthopedic surgeons, radiologists) using established manual measurement techniques, or potentially by comparison with more advanced imaging modalities (though this is not specified).
  - The mention of "dry isolated vertebrae data" contributing to the model could imply some physical-measurement ground truth for model development, but not necessarily for the clinical performance test set.
- The sample size for the training set:
  - The "model of bone structures" used by the 3D measurement tools was "derived from an a priori image data set from 175 patients (91 normal patients, 47 patients with moderate idiopathic scoliosis and 37 patients with severe idiopathic scoliosis), and dry isolated vertebrae data." This dataset of 175 patients (plus the dry vertebrae data) serves as the primary training/development data for the 3D model.
- How the ground truth for the training set was established:
  - The document states the model was "derived from an a priori image data set." It does not explicitly state how the ground truth (e.g., true 3D shape, vertebral parameters) for these 175 patients or for the dry isolated vertebrae was established. Presumably these were measured or reconstructed using high-fidelity methods during the model development phase.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).