K Number
K153612
Date Cleared
2016-03-15 (89 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
OrthoVis Web Portal (predicate)
Intended Use

The ArthrexVIP Web Portal is intended for use as a software interface for imaging information from a medical scanner, such as a CT scanner. It is also intended as software for displaying and editing implant placement and surgical treatment options that were generated in the OrthoVis Desktop Software by trained COS technicians. The ArthrexVIP Web Portal is intended for use with the Arthrex Glenoid Intelligent Reusable Instrument System (Arthrex Glenoid IRIS) and with the Arthrex OrthoVis Preoperative Plan. It is indicated for use with the following glenoid implant lines: Arthrex Univers Apex, Arthrex Univers II, and Arthrex Univers Revers.
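
The summary does not describe how the portal ingests or validates scanner output. Since CT studies are conventionally exchanged as DICOM objects (the special controls for 21 CFR 892.2050, quoted at the end of this page, cite the DICOM standard), the sketch below shows the kind of upload check such a portal might perform, using the pydicom library. The function name and acceptance policy are assumptions for illustration, not Arthrex's implementation.

```python
# Hypothetical upload check: accept a file only if it parses as DICOM and
# its Modality is CT. This is an illustration, not the portal's actual logic.
import pydicom
from pydicom.errors import InvalidDicomError


def is_ct_image(path: str) -> bool:
    """Return True if `path` holds a readable DICOM object with CT modality."""
    try:
        # Header-only read; pixel data is not needed to check the modality.
        ds = pydicom.dcmread(path, stop_before_pixels=True)
    except (InvalidDicomError, FileNotFoundError):
        return False
    return getattr(ds, "Modality", None) == "CT"
```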

Device Description

The ArthrexVIP Web Portal consists of software intended to facilitate the upload of medical images, preoperative planning, and approval of the planned placement and orientation of total shoulder joint replacement components. Each surgeon user's uploaded images are associated with specific cases and with that surgeon's profile. Uploaded images can be downloaded from the portal by COS technicians and used to create preoperative plans (see 510(k) K151568) in the OrthoVis Desktop Software. The surgeon user is then able to log in to the ArthrexVIP Web Portal to review the plan and either approve or modify the location and/or orientation of the joint replacement component. The approved plan is then downloaded by COS technicians for production (see 510(k)s K151500 and K151568) as part of the Arthrex Glenoid IRIS device.
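
The description above implies a simple case lifecycle: a surgeon uploads images, a COS technician drafts a plan in the OrthoVis Desktop Software, and the surgeon then approves or modifies it before production. The summary does not disclose the portal's data model; the following is a minimal sketch of that lifecycle as a state machine, with every name (CaseStatus, PlanningCase, and so on) hypothetical.

```python
# Hypothetical sketch of the case lifecycle implied by the device description.
# All names are illustrative; the actual data model is not disclosed.
from dataclasses import dataclass, field
from enum import Enum, auto


class CaseStatus(Enum):
    IMAGES_UPLOADED = auto()  # surgeon has uploaded scanner images
    PLAN_DRAFTED = auto()     # COS technician created a plan in OrthoVis
    PLAN_MODIFIED = auto()    # surgeon changed placement/orientation
    PLAN_APPROVED = auto()    # surgeon approved; ready for production


# Allowed transitions; anything else is rejected.
_TRANSITIONS = {
    CaseStatus.IMAGES_UPLOADED: {CaseStatus.PLAN_DRAFTED},
    CaseStatus.PLAN_DRAFTED: {CaseStatus.PLAN_MODIFIED, CaseStatus.PLAN_APPROVED},
    CaseStatus.PLAN_MODIFIED: {CaseStatus.PLAN_APPROVED},
    CaseStatus.PLAN_APPROVED: set(),
}


@dataclass
class PlanningCase:
    surgeon_id: str
    status: CaseStatus = CaseStatus.IMAGES_UPLOADED
    history: list = field(default_factory=list)

    def advance(self, new_status: CaseStatus) -> None:
        """Move the case to `new_status`, enforcing the lifecycle above."""
        if new_status not in _TRANSITIONS[self.status]:
            raise ValueError(
                f"cannot go from {self.status.name} to {new_status.name}"
            )
        self.history.append(self.status)
        self.status = new_status
```

For example, a case would move IMAGES_UPLOADED → PLAN_DRAFTED → PLAN_APPROVED, with PLAN_MODIFIED as an optional detour when the surgeon changes the component's location or orientation.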

AI/ML Overview

The provided document is a 510(k) summary for the ArthrexVIP Web Portal, which is a software device intended for use in preoperative planning for shoulder joint replacement. This document primarily focuses on demonstrating substantial equivalence to a predicate device and does not contain detailed information about a study proving the device meets specific acceptance criteria in the format requested.

Here's an attempt to answer the questions based on the limited information available in the document, along with an explanation of why certain information cannot be provided:

1. A table of acceptance criteria and the reported device performance

The document does not provide a table of acceptance criteria or specific reported device performance metrics tied to such criteria. The submission focuses on demonstrating "substantial equivalence" to a predicate device through non-clinical testing, rather than establishing performance against defined criteria.

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

The document does not specify a test set sample size or data provenance for any clinical performance evaluation. The non-clinical testing performed includes "Software verification and validation," "Regression Testing," "Unit Testing," "Code reviews and checks," and "Integration Testing." These are software development and quality assurance activities, not studies involving human subjects or medical image data in a clinical context.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

No information is provided regarding experts, ground truth establishment, or clinical test sets.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

No information is provided regarding adjudication methods, as no clinical test set is described.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

No MRMC study was mentioned or performed. The device is a "software interface" for displaying/editing implant placement and surgical treatment options, not an AI-assisted diagnostic tool that would typically involve improving human reader performance.

6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

No information is provided about a standalone algorithm performance study. The device is described as a "software interface" and a tool for displaying/editing options generated by "trained COS technicians" and reviewed by "surgeon users," indicating a human-in-the-loop design.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The document does not describe the establishment of a "ground truth" in a clinical sense for performance evaluation. The testing performed ("Software verification and validation," etc.) would involve internal quality metrics and ensuring the software functions as designed, rather than comparison to a clinical ground truth.

8. The sample size for the training set

No training set is mentioned. This device is not described as an AI/ML algorithm that learns from data in a training set. It is a software interface and planning tool.

9. How the ground truth for the training set was established

Not applicable, as no training set for an AI/ML algorithm is described.

Summary of what can be extracted from the document regarding acceptance criteria and studies:

The document states:

  • Non-Clinical Testing: "The following testing was performed to demonstrate substantial equivalency of the ArthrexVIP Web Portal to the OrthoVis Web Portal: Software verification and validation, Regression Testing, Unit Testing, Code reviews and checks, Integration Testing, Dimensional Validation (performed on predicate device and code has not changed for the subject device)."
  • Clinical Testing: "Clinical testing was not necessary to determine substantial equivalence to the predicate."

This 510(k) submission relies on non-clinical software validation and verification activities to establish substantial equivalence to a predicate device, rather than explicit clinical performance criteria with associated studies involving patient data or experts. Therefore, most of the requested information regarding acceptance criteria, sample sizes, ground truth, and human reader performance is not present in this document.
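
To make the non-clinical activities listed above concrete, the following is a hypothetical illustration of what one of them ("Unit Testing") looks like in practice, using Python's built-in unittest module. The function under test and its ±10° limit are invented for the example; the actual test suite and code are not disclosed.

```python
# Hypothetical unit test; the portal's real functions and tests are not
# disclosed in the 510(k) summary. The clamp limit is an invented example.
import unittest


def clamp_version_angle(angle_deg: float, limit: float = 10.0) -> float:
    """Illustrative function under test: restrict a planned implant
    version angle to the range [-limit, +limit] degrees."""
    return max(-limit, min(limit, angle_deg))


class ClampVersionAngleTest(unittest.TestCase):
    def test_within_limit_is_unchanged(self):
        self.assertEqual(clamp_version_angle(5.0), 5.0)

    def test_above_limit_is_clamped(self):
        self.assertEqual(clamp_version_angle(25.0), 10.0)

    def test_below_negative_limit_is_clamped(self):
        self.assertEqual(clamp_version_angle(-25.0), -10.0)


if __name__ == "__main__":
    unittest.main()
```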

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
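
As a toy illustration of the simplest image-processing capability named in paragraph (a), image segmentation, the sketch below thresholds a CT-like volume by intensity using NumPy. The ~150 HU cutoff is a rough, commonly cited lower bound for bone; real systems in this device class use far more sophisticated methods.

```python
# Toy intensity-threshold segmentation, only to make the regulation's term
# "image segmentation" concrete; not representative of clinical algorithms.
import numpy as np


def threshold_segment(volume: np.ndarray, hu_min: float = 150.0) -> np.ndarray:
    """Return a boolean mask of voxels at or above `hu_min` Hounsfield units."""
    return volume >= hu_min


# Demo on a synthetic volume of Hounsfield-like values.
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.0, scale=200.0, size=(64, 64, 64))
mask = threshold_segment(volume)
print(f"segmented {mask.sum()} of {mask.size} voxels")
```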