K Number: K202519
Manufacturer:
Date Cleared: 2020-10-27 (56 days)
Product Code:
Regulation Number: 892.2050
Panel: RA (Radiology)
Reference & Predicate Devices:
Intended Use

The OrthoNext™ Platform system is indicated for assisting healthcare professionals in preoperative planning of orthopedic surgery. The device allows for overlaying of Orthofix Product templates on radiological images, and includes tools for performing measurements on the image and for positioning the template. Clinical judgments and experience are required to properly use the software.
The OrthoNext™ Platform system is not to be used for mammography.

Device Description

The OrthoNext™ Platform is a web-based, modular platform that allows surgeons to evaluate digital images during pre-operative treatment planning, image evaluation, and post-operative treatment planning. The software application enables surgeons to import radiological images, display various 2D views of those images, overlay templates of Orthofix devices, simulate treatment options, and generate parameters and/or measurements to be verified or adjusted by the surgeon based on clinical judgment.
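
The summary does not describe the software's internals, but the core operation it names, converting points marked on a calibrated radiograph into a real-world measurement, is easy to illustrate. Below is a minimal sketch assuming DICOM-style pixel spacing; the function name and calibration model are illustrative assumptions, not OrthoNext internals.

```python
import math

# Minimal sketch of a calibrated radiographic measurement, the kind of tool
# the device description refers to. measure_mm and the (row, col) spacing
# model are assumptions for illustration, not OrthoNext internals.

def measure_mm(p1: tuple[float, float],
               p2: tuple[float, float],
               pixel_spacing_mm: tuple[float, float]) -> float:
    """Distance in millimetres between two image points, given mm-per-pixel spacing."""
    dy = (p2[0] - p1[0]) * pixel_spacing_mm[0]  # row direction
    dx = (p2[1] - p1[1]) * pixel_spacing_mm[1]  # column direction
    return math.hypot(dx, dy)

# Two landmarks 1200 px apart on a radiograph calibrated at 0.143 mm/px:
print(round(measure_mm((100, 100), (100, 1300), (0.143, 0.143)), 1))  # 171.6
```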

AI/ML Overview

The provided document is a 510(k) summary for the OrthoNext™ Platform System. This type of submission generally focuses on demonstrating substantial equivalence to a predicate device rather than presenting a full clinical study with specific acceptance criteria and detailed performance metrics, as would be found in a PMA or De Novo submission.

Based on the document, here's what can be extracted and what is not explicitly provided:

1. A table of acceptance criteria and the reported device performance

The document does not explicitly state quantitative acceptance criteria or detailed reported device performance in a table format. Instead, it relies on demonstrating equivalence to the predicate device and successful non-clinical testing. The "Performance Analysis" section states: "Subject device has similar configuration, and operating principle as the predicate device. Non-clinical software testing on operative treatment planning of orthopedic surgery using OrthoNext™ Platform system produces results comparable to planning using acetate overlays but with the additional advantages of digital planning and simulations including ease of use, library, case documentation, access to a wider array of tools, and secure accessibility."

The "Conclusion" section indirectly describes the "performance" by stating that "The successful non-clinical testing demonstrates the safety and effectiveness of the OrthoNext ™ Platform system when used for the defined indications for use and demonstrates that the subject device, for which this Traditional 510(k) is submitted, performs as well as or better than the legally marketed predicate devices."

The types of testing performed are listed: "Unit, System/Integration and Acceptance test levels. Testing included also security, negative testing, error message handling, stress testing, platform testing, workflow testing, functional testing, multi-user/external access testing, data integrity testing, compatibility testing, load testing, regression testing, and hazard mitigation testing." However, specific acceptance criteria for each of these tests are not provided.
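
As a sense of what the "Unit" level of that list could contain, here is a hedged sketch of a unit test against the measurement function sketched above; the expected values and the input-validation behavior are invented for illustration and are not Orthofix's actual verification suite.

```python
import math
import unittest

def measure_mm(p1, p2, pixel_spacing_mm):
    # Input validation of the kind exercised by "negative testing".
    if min(pixel_spacing_mm) <= 0:
        raise ValueError("pixel spacing must be positive")
    dy = (p2[0] - p1[0]) * pixel_spacing_mm[0]
    dx = (p2[1] - p1[1]) * pixel_spacing_mm[1]
    return math.hypot(dx, dy)

class MeasurementUnitTests(unittest.TestCase):
    def test_known_distance(self):
        # 300 px at 0.2 mm/px must read exactly 60.0 mm.
        self.assertAlmostEqual(measure_mm((0, 0), (0, 300), (0.2, 0.2)), 60.0)

    def test_invalid_spacing_raises(self):
        # Negative pixel spacing is rejected rather than silently misused.
        with self.assertRaises(ValueError):
            measure_mm((0, 0), (0, 300), (-0.2, 0.2))

if __name__ == "__main__":
    unittest.main()
```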

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

This information is not provided in the document. The filing focuses on non-clinical software testing: "Non-clinical software testing on operative treatment planning of orthopedic surgery using OrthoNext™ Platform system produces results comparable to planning using acetate overlays..." The nature of this testing does not appear to involve a "test set" of patient data in the sense of a clinical trial.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

This information is not provided. The document mentions that the device is "indicated for assisting healthcare professionals in preoperative planning of orthopedic surgery" and that "Clinical judgments and experience are required to properly use the software." However, it does not detail any expert review process for a test set or ground truth establishment.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

This information is not provided.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

An MRMC comparative effectiveness study was not performed, or at least is not reported, in this 510(k) summary. The document states: "The review of clinical literatures on similar devices support the clinical performance of the Subject device with no additional clinical data." This indicates that no new clinical study (such as an MRMC study) was conducted for this submission.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

A standalone (algorithm-only) performance study was not described, and no standalone performance metrics are reported. The nature of the device, which "assists healthcare professionals" and requires "clinical judgments and experience," implies human-in-the-loop interaction as its primary mode of use. The software testing mentioned is "non-clinical software testing," which would assess the software's functionality and accuracy in performing its intended tasks (e.g., measurements, template overlay) rather than standalone diagnostic performance.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

For the non-clinical software testing, the document suggests the "ground truth" or reference for comparison was "planning using acetate overlays." This implies that the accuracy of the software's measurements and template positioning was compared against established practices using physical overlays. No mention of expert consensus, pathology, or outcomes data for establishing ground truth is made for the device's performance evaluation in this document.
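
A minimal sketch of what such a comparison could look like, assuming paired digital and acetate-overlay measurements and a tolerance chosen purely for illustration (none of these values appear in the 510(k) summary):

```python
# Hypothetical paired measurements (digital vs. acetate overlay), in mm.
# The values and the 1.0 mm tolerance are invented for illustration only.
pairs_mm = [
    (171.6, 171.0),
    (88.2, 88.5),
    (45.9, 46.0),
]

TOLERANCE_MM = 1.0
worst = max(abs(digital - acetate) for digital, acetate in pairs_mm)
print(f"max deviation: {worst:.1f} mm -> {'PASS' if worst <= TOLERANCE_MM else 'FAIL'}")
```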

8. The sample size for the training set

This information is not provided. The document describes "non-clinical software testing," and given the nature of the device (a planning and measurement tool, not a diagnostic AI), a "training set" in the machine-learning sense is most likely not applicable or not disclosed. The device performs functions such as overlaying templates and performing measurements, which are rule-based software operations that do not typically require a training set in the AI sense.

9. How the ground truth for the training set was established

As a training set is likely not applicable or not disclosed, the method for establishing its ground truth is also not provided.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).