Found 3 results

510(k) Data Aggregation

    K Number: K171068
    Device Name: OrthoView 7.2
    Date Cleared: 2017-10-18 (191-day review)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    OrthoView is indicated for use when a suitably licensed and qualified healthcare professional requires access to medical images with the intention of using such images to plan or review a surgical procedure. OrthoView provides a set of tools and templates (representing prosthetic and fixation devices) to assist the healthcare professional in planning their surgery. The device is not to be used for mammography.

    Device Description

    OrthoView 7.2 is a dedicated, digital pre-operative planning and templating software application used to create detailed pre-operative plans quickly and easily from digital x-ray images. OrthoView 7.2 is software intended for medical purposes, performing those purposes without being part of a hardware medical device. The device provides one or more capabilities relating to the acceptance, transfer, display, storage, and digital processing of medical images.
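    The "digital processing" at the heart of templating software like this is scaling: converting pixel distances on a radiograph into true anatomical distances. A minimal sketch of that idea, assuming a calibration marker of known diameter is visible in the image (the function names and values here are illustrative, not taken from OrthoView):

```python
def mm_per_pixel(marker_diameter_mm: float, marker_diameter_px: float) -> float:
    """Derive the radiograph's scale from a calibration marker of known size."""
    if marker_diameter_px <= 0:
        raise ValueError("marker must span at least one pixel")
    return marker_diameter_mm / marker_diameter_px


def measure_mm(length_px: float, scale_mm_per_px: float) -> float:
    """Convert an on-image distance (pixels) to an anatomical distance (mm)."""
    return length_px * scale_mm_per_px


# A 25 mm calibration ball imaged at 100 px gives a 0.25 mm/px scale,
# so a 600 px span on the image corresponds to 150 mm of anatomy.
scale = mm_per_pixel(25.0, 100.0)
print(measure_mm(600.0, scale))  # 150.0
```

    Once the scale is known, every measurement and template overlay in the plan can be expressed in real-world units.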

    AI/ML Overview

    The provided text is a 510(k) Summary for OrthoView 7.2, a pre-operative planning and templating software. This type of submission focuses on demonstrating substantial equivalence to a predicate device rather than providing extensive clinical study data for acceptance criteria.

    The document does not contain explicit quantitative acceptance criteria or a dedicated study section with specific performance metrics (e.g., accuracy, sensitivity, specificity) for the device's clinical use. Instead, it relies on demonstrating substantial equivalence through a comparison of technological characteristics, intended use, and a history of safe and effective use of the predicate device.

    Here's an analysis based on the information available:

    1. Table of Acceptance Criteria and Reported Device Performance:

    As noted, the document does not present quantitative acceptance criteria or specific performance metrics as typically found in a clinical performance study. The "acceptance criteria" can be inferred as the demonstration of "substantial equivalence" to the predicate device, OrthoView 4, by proving that OrthoView 7.2 performs as intended and is safe and effective in a non-inferior manner to its predecessor.

    Functional Equivalence
      • Acceptance Criteria: Device functionality (image viewing, manipulation, templating, measurements, reporting, saving) should be identical or improved compared to the predicate, within the scope of intended use.
      • Reported Performance: OrthoView 7.2's core functions (Image Loading, Image Manipulation, Scaling, Analysis Methods, Landmarks, Contours, Cut Positions, Reduction, Measurements, Reporting, Saving/Commit, Image Storage) are reported as "IDENTICAL" to OrthoView 4. Some minor extensions (e.g., horizontal/vertical alignment tools, online template access, improved analysis display, Active Directory integration) are mentioned as improvements.

    Safety
      • Acceptance Criteria: No new safety concerns should be raised compared to the predicate device.
      • Reported Performance: OrthoView has been in commercial distribution since 2001, has never been subject to a recall or medical device report, and has proven safe in clinical usage. Risk analysis (ISO 14971:2007) indicates the same risk profile as the predicate.

    Effectiveness
      • Acceptance Criteria: The device should perform as intended for pre-operative planning and templating, similar to the predicate.
      • Reported Performance: Each new release (including 7.2) underwent thorough testing, and clinical features were evaluated by a surgeon (within a non-clinical environment). Testing verified that accuracy and performance are adequate and as intended.

    Technical Compliance
      • Acceptance Criteria: Conformance to relevant medical device standards and guidance documents.
      • Reported Performance: The device complies with ISO 14971:2012, NEMA PS 3.1 - 3.20 (2016) (DICOM), IEC 62304:2006, IEC 62366:2015, ISO 15223-1:2012, ISO 14155:2011, and FDA guidance documents for software and image management devices.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size for Test Set: The document mentions "procedure-specific images" and "a fully configured system installed on hospital representative environments," but does not specify a numerical sample size for the test images.
    • Data Provenance: The document states that "All manual testing is performed... using procedure-specific images to emulate as close as possible intended use." It doesn't explicitly state the country of origin for these test images or whether they were retrospective or prospective. Given the submitter's location (UK), it's plausible the testing environment or images might be related to UK practices, but this is not confirmed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    • Number of Experts: The document states, "All new features are checked by a surgeon to verify clinical performance." For non-clinical tests, it notes, "Each release over time has experienced thorough testing and each new release has had its clinical features evaluated by a surgeon (within a non-clinical environment)." This implies at least one surgeon was involved in evaluating clinical performance.
    • Qualifications of Experts: The experts are referred to simply as "a surgeon" or "a suitably licensed and qualified healthcare professional." No further details on their specific qualifications (e.g., years of experience, subspecialty) are provided.

    4. Adjudication Method for the Test Set:

    • The document does not describe a formal adjudication method (e.g., 2+1, 3+1). The evaluation mentioned ("checked by a surgeon") appears to be a single-reader assessment for clinical performance verification.
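    For reference, the "2+1" scheme mentioned above works like this: two readers label each case independently, and a third reader adjudicates only when the first two disagree. A minimal illustrative sketch (hypothetical; the submission describes no such process):

```python
def adjudicate_2plus1(reader_a: str, reader_b: str, adjudicator: str) -> str:
    """2+1 adjudication: when the two primary readers agree, their call
    stands; otherwise the third (adjudicating) reader's call is used."""
    return reader_a if reader_a == reader_b else adjudicator


print(adjudicate_2plus1("size 5", "size 5", "size 6"))  # size 5 (agreement)
print(adjudicate_2plus1("size 5", "size 6", "size 6"))  # size 6 (adjudicated)
```

    A "3+1" scheme generalizes this to majority vote among three readers, with a fourth breaking any residual tie.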

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • The document does not mention or describe an MRMC comparative effectiveness study in which human readers' performance with and without AI assistance was evaluated. The focus is on demonstrating equivalence to a previous version of the software, not on quantifying human performance improvement with AI.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance):

    • The performance assessment described appears to be a standalone (algorithm only) evaluation in the sense that the software's functions were tested, and its "clinical features" were verified by a surgeon in a non-clinical environment. The document confirms "OrthoView 7.2 is software to be used for medical purposes, performing these purposes without being part of a hardware medical device." However, the device's intended use is to assist a "licensed and qualified healthcare professional," implying it's designed to be used with a human in the loop for surgical planning and review. The clinical evaluation isn't comparing algorithm-only decisions to expert decisions, but rather the algorithm's functional correctness.

    7. Type of Ground Truth Used:

    • The ground truth for the "clinical performance" verification appears to be expert consensus/opinion from a surgeon. The surgeon checks new features to verify their clinical performance, implying their judgment forms the basis of the "ground truth" for the functional correctness and clinical utility of these features. There is no mention of pathology, outcomes data, or other objective ground truth types.

    8. Sample Size for the Training Set:

    • The document does not specify a sample size for any training set. OrthoView 7.2 is described as pre-operative planning and templating software that primarily uses tools and templates, rather than a machine learning or AI algorithm that would typically require a training set of images with established ground truth for learning. The improvements noted are more about extended functionality and user interface rather than a new AI model.

    9. How the Ground Truth for the Training Set Was Established:

    • Since no training set is mentioned or implied for an AI/ML model, this information is not applicable and not provided. The software's capabilities are based on established geometric calculations, image manipulation techniques, and templating logic, rather than a learned model from data.

    K Number: K063327
    Device Name: ORTHOVIEW
    Date Cleared: 2006-11-22 (19-day review)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    Orthoview™ is indicated for use when a suitably licensed and qualified healthcare professional requires access to medical images with the intention of using such images, in conjunction with templates for prosthetic and fixation devices, for the purposes of choosing the nature and characteristics of the prosthetic/fixation device to be used when planning a potential surgical procedure. In addition, Trauma and Osteotomy modules and Trauma Templates are provided to extend the range of functionality available to the healthcare professional.

    Device Description

    Orthoview™ is intended to provide the following for the Operator (a suitably qualified and trained healthcare professional):

    • To be downloaded from the Internet and to be unlocked using a Meridian Technique Ltd provided key.
    • Grant access rights only to authorized users (via PC password system).
    • Receive X-Ray images in a digital format from third party X-Ray machines/X-Ray digitisers or PACS systems.
    • Process such images securely with respect to patient confidentiality, patient identification and image integrity.
    • Allow the image to be retrieved for processing as follows:
      • Scaling of the image.
      • Selection of appropriate prosthetic and fixing device manufacturer and size range templates.
      • Overlaying the template on the image and permitting selection of appropriate size of prosthetic/fixing device.
      • Provide additional functionality in the form of Trauma and Osteotomy modules.
      • Print and archive appropriate reports.
      • Receive and store templates for prostheses and fixations supplied by Meridian Technique Ltd for particular manufacturers' ranges of products.
      • Provide traceability of operator, date and decisions made.
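    The scaling and overlay steps in the list above fit together: once the image scale is known, the template (whose physical dimensions come from the manufacturer) must be rendered at a matching pixel size before it is overlaid. A hedged sketch of that calculation, with illustrative names and values not taken from the Orthoview™ documentation:

```python
def template_extent_px(template_w_mm: float, template_h_mm: float,
                       scale_mm_per_px: float) -> tuple:
    """Pixel dimensions at which a prosthesis template must be rendered so
    its overlay matches a radiograph of known scale."""
    return (round(template_w_mm / scale_mm_per_px),
            round(template_h_mm / scale_mm_per_px))


# At a 0.2 mm/px image scale, a 50 x 120 mm stem template is rendered
# at 250 x 600 px so that it overlays the anatomy at true size.
print(template_extent_px(50.0, 120.0, 0.2))  # (250, 600)
```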
    AI/ML Overview

    The provided text describes the Orthoview™ device, a Picture Archiving and Communications System (PACS). However, it explicitly states that clinical testing was considered unnecessary and that non-clinical verification and validation were performed instead. The provided text therefore contains no acceptance criteria based on clinical performance, and no study demonstrating that the device meets such criteria in a clinical setting.

    The information below is derived from the non-clinical assessment as described in the provided text.

    1. Table of Acceptance Criteria and Reported Device Performance

    Non-clinical verification confirmed that each of the following functions operates according to its specified requirements:

      • Patient and Procedure Selection
      • Image Scaling
      • Procedure Planning
      • Templating and Trauma (Fracture) Reduction
      • Osteotomy and alleviation of congenital deformity
      • Committing and Saving operating session data
      • Compilation and Printing of associated Reports

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set" in terms of patient data or images. The testing was non-clinical verification and validation of software functions. It does not mention the number of images or cases used for this internal testing, nor their provenance. The device "Receives X-Ray images in a digital format from third party X-Ray machines/ X-Ray digitisers or PACS systems," but the testing described does not detail specific datasets used.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. The ground truth for the non-clinical functional testing appears to be the "specified requirements" of the software itself. There is no mention of experts establishing ground truth for a clinical test set because no clinical testing was performed.

    4. Adjudication method for the test set

    Not applicable. There was no clinical test set requiring adjudication. The non-clinical verification and validation involved confirming software functions operated according to specified requirements.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study was performed. The device is a PACS system with templating tools, and the submission explicitly states clinical testing was unnecessary.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The context describes "non-clinical tests" where "Verification and Validation of Orthoview™ indicates that the requirements for intended use and associated performance characteristics are satisfied." This would inherently involve testing the algorithm's functions (e.g., image scaling, templating overlay, data saving) without a human-in-the-loop for clinical decision-making evaluation. The device's intended use, however, always involves a "suitably licensed and qualified healthcare professional" for interpretation and planning.

    7. The type of ground truth used

    For the non-clinical testing, the "ground truth" was the pre-defined software requirements and specifications. The tests aimed to confirm that the software functions (e.g., patient selection, image scaling, templating, data saving/printing) operated precisely as designed and expected by the developers.

    8. The sample size for the training set

    Not applicable. The document does not describe the use of machine learning or AI models requiring a training set in the contemporary sense. The device is described as a PACS system with templating and processing capabilities, not a diagnostic AI algorithm.

    9. How the ground truth for the training set was established

    Not applicable, as there was no described training set.


    K Number: K032401
    Device Name: ORTHOVIEW
    Manufacturer:
    Date Cleared: 2003-08-14 (10-day review)
    Product Code:
    Regulation Number: 892.2050
    Intended Use

    Orthoview™ is indicated for use when a suitably licensed and qualified healthcare professional requires access to medical images with the intention of using such images, in conjunction with templates for prosthetic devices, for the purposes of choosing the nature and characteristics of the prosthetic device to be used when planning a potential surgical procedure.

    Device Description

    Orthoview™ is a software device that permits the orthopedic surgeon to pre-plan surgical procedures by permitting image viewing and manipulation and prosthetic template overlay within a PACS workstation or standalone environment.

    AI/ML Overview

    The Orthoview™ device, a software for pre-operative surgical planning using digital prosthetic template overlay, was evaluated for its performance.

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for Orthoview™ were established implicitly through a comparison to traditional hand-scoring templating methods. The device was deemed acceptable if it demonstrated "similar accuracy" to these traditional methods.

    Acceptance Criterion: Similar accuracy to traditional templating methods.
      • Reported Performance: The comparison concluded that "Orthoview™ provides an accurate alternative to traditional templating methods" and offers "similar accuracy" to these methods for determining prosthetic size.

    Acceptance Criterion: Provide an alternative to traditional templating.
      • Reported Performance: The study confirms that Orthoview™ "provides an accurate alternative to traditional templating methods."
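    A "similar accuracy" comparison of this kind amounts to measuring agreement between the prosthetic sizes chosen by hand templating and by the software. A minimal sketch of such an analysis; the submission reports no numbers, so the data below are invented purely for illustration:

```python
def size_agreement(hand_sizes, software_sizes, tolerance=0):
    """Fraction of paired cases where the software's chosen prosthetic size
    falls within `tolerance` size increments of the hand-templated size."""
    if len(hand_sizes) != len(software_sizes):
        raise ValueError("paired samples required")
    hits = sum(abs(h - s) <= tolerance
               for h, s in zip(hand_sizes, software_sizes))
    return hits / len(hand_sizes)


# Invented example data only (the 510(k) reports no case-level results):
hand = [4, 5, 5, 6, 7, 6]
soft = [4, 5, 6, 6, 7, 5]
print(size_agreement(hand, soft))               # exact-match rate
print(size_agreement(hand, soft, tolerance=1))  # within-one-size rate
```

    Reporting both exact-match and within-one-size rates is a common convention in templating studies, since adjacent sizes are often clinically interchangeable.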

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: Not explicitly stated. The document mentions "a retrospective technique of comparing the performance of 'templating' using a hand-scoring method versus the scoring achieved by Orthoview™," implying a test set of some size was used but the specific number of cases or images is not provided.
    • Data Provenance: Retrospective. The study utilized a "retrospective technique." The country of origin for the data is not specified.

    3. Number of Experts and Qualifications for Ground Truth of Test Set

    • Number of Experts: Not explicitly stated, but the assessment was "carried out by experienced healthcare professionals."
    • Qualifications of Experts: Described as "experienced healthcare professionals." More specific qualifications (e.g., number of years of experience, specific specialty like orthopedic surgeon) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document describes a comparison between "hand-scoring method versus the scoring achieved by Orthoview™," implying a direct comparison without detailing a specific adjudication process for discrepancies.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • An MRMC comparative effectiveness study was not explicitly reported. The document focuses on the performance of the device itself compared to traditional methods, rather than on human readers using AI vs. without AI assistance.

    6. Standalone (Algorithm Only) Performance

    • A standalone performance study was implicitly done. The study compared the "scoring achieved by Orthoview™" against a hand-scoring method, suggesting an evaluation of the algorithm's output independently, even if human interaction is required for its use. The device is a "software device" that permits "image viewing and manipulation and prosthetic template overlay," implying the algorithm generates the templating suggestion.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth was established by "the actual assessment and determination of prosthetic size using traditional templating methods using X-Ray film and template overlay" (referred to as "hand-scoring"). This can be categorized as expert consensus based on traditional methods.

    8. Sample Size for the Training Set

    • Sample Size: Not applicable/not provided. As a templating and overlay software, and given the release date (2003), it's highly probable that Orthoview™ relied on predefined algorithms and templates rather than a machine learning model requiring a distinct training set. The descriptions focus on its functionality and comparison to traditional methods. If an internal development set was used, it is not mentioned.

    9. How Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established: Not applicable/not provided, as no training set is mentioned or implied for an AI/ML model for this device. The software likely implements rules and calculations for scaling and overlay, with templating designs provided by prosthetic manufacturers.