K Number
K171068
Device Name
OrthoView 7.2
Date Cleared
2017-10-18 (191 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

OrthoView is indicated for use when a suitably licensed and qualified healthcare professional requires access to medical images with the intention of using such images to plan or review a surgical procedure. OrthoView provides a set of tools and templates (representing prosthetic and fixation devices) to assist the healthcare professional in planning their surgery. The device is not to be used for mammography.

Device Description

OrthoView 7.2 is dedicated digital pre-operative planning and templating software used to create detailed pre-operative plans quickly and easily from digital x-ray images. OrthoView 7.2 is software to be used for medical purposes, performing these purposes without being part of a hardware medical device. The device provides one or more capabilities relating to the acceptance, transfer, display, storage, and digital processing of medical images.
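To make the image handling concrete, the following is a minimal, hypothetical sketch of the kind of DICOM loading and calibration-based scaling that templating software of this type performs. It is illustrative only and not OrthoView's implementation; the `pydicom` library, the file name, and the 25 mm calibration-marker size are all assumptions.

```python
# Minimal sketch: load a digital x-ray and derive a mm-per-pixel scale
# factor for templating. Illustrative only -- not OrthoView code.
# Assumes the pydicom library (with numpy) and a calibration marker of
# known physical size placed in the radiograph.

import pydicom


def load_xray(path: str):
    """Read a DICOM x-ray and return its pixel data plus pixel spacing."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array  # 2D numpy array of raw intensities
    # PixelSpacing is (row spacing, column spacing) in mm, when present.
    spacing = getattr(ds, "PixelSpacing", None)
    return pixels, spacing


def scale_from_marker(marker_px: float, marker_mm: float = 25.0) -> float:
    """Derive mm-per-pixel from a calibration marker of known diameter.

    marker_px : measured diameter of the marker in image pixels.
    marker_mm : true diameter (a 25 mm calibration ball is assumed here).
    """
    return marker_mm / marker_px


if __name__ == "__main__":
    pixels, spacing = load_xray("pelvis_ap.dcm")  # hypothetical file
    mm_per_px = scale_from_marker(marker_px=120.5)
    print(f"Image {pixels.shape}, spacing tag: {spacing}")
    print(f"Calibrated scale: {mm_per_px:.4f} mm/pixel")
```

A scale factor of this kind is what lets prosthesis templates, which are defined in millimetres, be overlaid on the image at the correct apparent size.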

AI/ML Overview

The provided text is a 510(k) Summary for OrthoView 7.2, pre-operative planning and templating software. This type of submission focuses on demonstrating substantial equivalence to a predicate device rather than providing extensive clinical study data against predefined acceptance criteria.

The document does not contain explicit quantitative acceptance criteria or a dedicated study section with specific performance metrics (e.g., accuracy, sensitivity, specificity) for the device's clinical use. Instead, it relies on demonstrating substantial equivalence through a comparison of technological characteristics, intended use, and a history of safe and effective use of the predicate device.

Here's an analysis based on the information available:

1. Table of Acceptance Criteria and Reported Device Performance:

As noted, the document does not present quantitative acceptance criteria or specific performance metrics as typically found in a clinical performance study. The "acceptance criteria" can be inferred as the demonstration of "substantial equivalence" to the predicate device, OrthoView 4, by showing that OrthoView 7.2 performs as intended and is at least as safe and effective as its predecessor.

| Category | Acceptance Criteria (Inferred from Substantial Equivalence and Testing) | Reported Device Performance |
|---|---|---|
| Functional Equivalence | Device functionality (image viewing, manipulation, templating, measurements, reporting, saving) should be identical or improved compared to the predicate, within the scope of intended use. | OrthoView 7.2's core functions (Image Loading, Image Manipulation, Scaling, Analysis Methods, Landmarks, Contours, Cut Positions, Reduction, Measurements, Reporting, Saving/Commit, Image Storage) are reported as "IDENTICAL" to OrthoView 4. Minor extensions (e.g., horizontal/vertical alignment tools, online template access, improved analysis display, Active Directory integration) are noted as improvements. |
| Safety | No new safety concerns should be raised compared to the predicate device. | OrthoView has been in commercial distribution since 2001, has never been subject to a recall or medical device report, and has proven safe in clinical use. Risk analysis (ISO 14971:2007) indicates the same risk profile as the predicate. |
| Effectiveness | The device should perform as intended for pre-operative planning and templating, similar to the predicate. | Each release, including 7.2, underwent thorough testing, and clinical features were evaluated by a surgeon (within a non-clinical environment). Testing verified that accuracy and performance are adequate and as intended. |
| Technical Compliance | Conformance to relevant medical device standards and guidance documents. | The device complies with ISO 14971:2012, NEMA PS 3.1–3.20 (2016) (DICOM), IEC 62304:2006, IEC 62366-1:2015, ISO 15223-1:2012, ISO 14155:2011, and applicable FDA guidance documents for software and image management devices. |

2. Sample Size Used for the Test Set and Data Provenance:

  • Sample Size for Test Set: The document mentions "procedure-specific images" and "a fully configured system installed on hospital representative environments," but does not specify a numerical sample size for the test images.
  • Data Provenance: The document states that "All manual testing is performed... using procedure-specific images to emulate as close as possible intended use." It does not explicitly state the country of origin of the test images or whether they were retrospective or prospective. Given the submitter's location (UK), the testing environment or images may reflect UK practice, but this is not confirmed.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

  • Number of Experts: The document states, "All new features are checked by a surgeon to verify clinical performance." For non-clinical tests, it notes, "Each release over time has experienced thorough testing and each new release has had its clinical features evaluated by a surgeon (within a non-clinical environment)." This implies at least one surgeon was involved in evaluating clinical performance.
  • Qualifications of Experts: The experts are referred to simply as "a surgeon" or "a suitably licensed and qualified healthcare professional." No further details on their specific qualifications (e.g., years of experience, subspecialty) are provided.

4. Adjudication Method for the Test Set:

  • The document does not describe a formal adjudication method (e.g., 2+1, 3+1). The evaluation mentioned ("checked by a surgeon") appears to be a single-reader assessment for clinical performance verification.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

  • The document does not mention or describe an MRMC comparative effectiveness study in which human readers' performance with and without AI assistance was evaluated. The focus is on demonstrating equivalence to a previous version of the software, not on quantifying human performance improvement with AI.

6. Standalone (Algorithm Only Without Human-in-the-Loop Performance):

  • The performance assessment described is a standalone evaluation only in the sense that the software's functions were tested and its "clinical features" were verified by a surgeon in a non-clinical environment. The document confirms that "OrthoView 7.2 is software to be used for medical purposes, performing these purposes without being part of a hardware medical device." However, the intended use is to assist a "licensed and qualified healthcare professional," so the device is designed to operate with a human in the loop for surgical planning and review. The clinical evaluation verified the algorithm's functional correctness; it did not compare algorithm-only decisions against expert decisions.

7. Type of Ground Truth Used:

  • The ground truth for the "clinical performance" verification appears to be expert consensus/opinion from a surgeon. The surgeon checks new features to verify their clinical performance, implying their judgment forms the basis of the "ground truth" for the functional correctness and clinical utility of these features. There is no mention of pathology, outcomes data, or other objective ground truth types.

8. Sample Size for the Training Set:

  • The document does not specify a sample size for any training set. OrthoView 7.2 is described as pre-operative planning and templating software that primarily uses tools and templates, rather than a machine learning or AI algorithm that would typically require a training set of images with established ground truth for learning. The improvements noted are more about extended functionality and user interface rather than a new AI model.

9. How the Ground Truth for the Training Set Was Established:

  • Since no training set is mentioned or implied for an AI/ML model, this information is not applicable and not provided. The software's capabilities are based on established geometric calculations, image manipulation techniques, and templating logic rather than a model learned from data; a schematic example of such a calculation is sketched below.
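As a concrete illustration of the deterministic, non-learned calculations described above, here is a small hypothetical sketch of a landmark-based angle measurement of the kind a templating tool might offer. The landmark coordinates, function name, and the neck-shaft-style framing are invented for illustration and are not OrthoView's logic.

```python
# Illustrative sketch of a deterministic geometric calculation of the
# kind planning software performs (no learned model involved). The
# landmarks below are hypothetical pixel positions, not real data.

import math


def angle_at_vertex(p1, vertex, p2) -> float:
    """Angle (degrees) formed at `vertex` by rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))


# Example: a neck-shaft-style angle from three picked landmarks (pixels,
# image y-axis pointing down). Interpretation is left to the planner.
shaft_point = (400.0, 820.0)       # point along the femoral shaft axis
neck_head_center = (500.0, 556.0)  # point along the neck axis
intersection = (400.0, 640.0)      # where the two axes meet

angle = angle_at_vertex(shaft_point, intersection, neck_head_center)
print(f"Measured angle: {angle:.1f} deg")  # ~130.0 deg for these points
```

Because such measurements follow directly from user-placed landmarks and fixed trigonometry, their verification is a matter of functional testing rather than training-data ground truth, consistent with the submission's approach.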

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).