
510(k) Data Aggregation

    K Number: K990868
    Date Cleared: 1999-03-30 (14 days)
    Product Code:
    Regulation Number: 882.4560
    Intended Use

    The ViewPoint is intended for use as a device which uses diagnostic images of the patient acquired specifically to assist the physician with presurgical planning and to provide orientation and reference information during intra-operative procedures.

    The ViewPoint is indicated for use in:

    • Intra-cranial surgical procedures involving space occupying lesions or malformations (including soft tissue, vascular and osseous)
    • Spinal surgical procedures involving spinal stabilization, neural decompression, or resection of spinal neoplasms.
    Device Description

    The Passive Tool Option for the ViewPoint system allows for optical tracking of wireless tools. The position sensor assembly provided with this option emits an infrared signal that is reflected off reflective markers mounted on the tools.
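    The two-detector stereo arrangement described above recovers a marker's 3D position by triangulation: each optical detector defines a line of sight toward the reflected infrared signal, and the marker lies where those rays (nearly) intersect. The sketch below is a hypothetical illustration of that geometric principle, not code from the submission; the detector positions and marker location are invented for the example.

```python
import numpy as np

def triangulate(origin1, dir1, origin2, dir2):
    """Return the 3D point closest to two rays (midpoint method).

    Each ray models the line of sight from one optical detector
    toward a reflective marker: origin* is the detector position,
    dir* points toward the observed marker.
    """
    d1 = dir1 / np.linalg.norm(dir1)
    d2 = dir2 / np.linalg.norm(dir2)
    w0 = origin1 - origin2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # zero only if the rays are parallel
    s = (b * e - c * d) / denom    # parameter along ray 1
    t = (a * e - b * d) / denom    # parameter along ray 2
    p1 = origin1 + s * d1          # closest point on ray 1
    p2 = origin2 + t * d2          # closest point on ray 2
    return (p1 + p2) / 2           # midpoint of closest approach

# Two detectors 0.5 m apart, both sighting a marker at (0.1, 0.2, 1.0)
marker = np.array([0.1, 0.2, 1.0])
cam1, cam2 = np.array([-0.25, 0.0, 0.0]), np.array([0.25, 0.0, 0.0])
est = triangulate(cam1, marker - cam1, cam2, marker - cam2)
```

    In a real tracker the ray directions come from calibrated detector images rather than the known marker position, and the residual distance between the two rays serves as a quality check on the fix.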

    AI/ML Overview

    This document describes the ViewPoint Passive Tool Option, a Class II Image Assisted Surgery Device. The submission focuses on demonstrating substantial equivalence to a predicate device (ViewPoint 3.0 Software, K970604) rather than presenting a standalone study with detailed performance metrics and acceptance criteria in the conventional sense (e.g., sensitivity, specificity, accuracy against a gold standard).

    Here's an analysis based on the provided text, addressing the requested points:


    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the ViewPoint Passive Tool Option are implicitly defined by demonstrating substantial equivalence to the predicate device, K970604. Specifically, the key performance parameter directly mentioned and compared is "Average Tool Accuracy."

    | Acceptance Criteria (from Predicate Device K970604) | Reported Device Performance (ViewPoint Passive Tool Option) | Met? |
    |---|---|---|
    | Average Tool Accuracy: 2.0 - 5.0 mm | "Same" as predicate device | Yes |
    | Tools: a long and short tool with a minimum of four IREDs per tool | A long and short wireless tool with a minimum of three reflective markers per tool | Yes (conceptually equivalent in function) |
    | Type of Detector: infrared signals emitted from diodes on a hand-held tool are detected by a Position Sensor Assembly (PSA) with two optical detectors | Infrared signals emitted from the PSA are reflected off reflective markers mounted on the tool; the reflected signal is detected by the PSA's two optical detectors | Yes (conceptually equivalent in function) |
    | Active Digitizer Volume: silo shape comprised of a 0.5 m radius hemisphere and a cylinder with 0.5 m radius and 0.5 m length | "Same" | Yes |
    | Registration Technique: scanned fiducials and anatomical fiducials | "Same" | Yes |
    | Operating Software Structure: UNIX environment with three major processes (Import, Surgery Application, and Foot Switch); uses a graphical user interface | "Same" | Yes |
    | Image Manipulation: MPR and surface rendering | "Same" | Yes |
    | Other Features: detector positioning feature | "Same" | Yes |
    | Intended Use: as stated for the predicate device | "Same" | Yes |
    | Indications for Use: as stated for the predicate device | "Same" | Yes |

    Note: The primary "acceptance criterion" here is substantial equivalence. The document does not describe specific numerical thresholds for new standalone performance tests beyond comparison to the predicate. The "Average Tool Accuracy" is the only direct numerical performance metric mentioned.
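    The silo-shaped active digitizer volume in the table (a 0.5 m radius cylinder of 0.5 m length, capped by a 0.5 m radius hemisphere) lends itself to a simple containment test. The sketch below is hypothetical and only illustrates the geometry the submission describes; the coordinate convention (cylinder axis along z, base at z = 0) is an assumption.

```python
import math

RADIUS = 0.5      # m, shared by cylinder and hemisphere
CYL_LENGTH = 0.5  # m, cylinder runs along z from 0 to CYL_LENGTH

def in_digitizer_volume(x, y, z):
    """True if (x, y, z) lies inside the silo-shaped volume:
    a cylinder from z = 0 to z = 0.5 m, capped by a hemisphere."""
    r2 = x * x + y * y
    if 0.0 <= z <= CYL_LENGTH:        # cylindrical section
        return r2 <= RADIUS ** 2
    if z > CYL_LENGTH:                # hemispherical cap
        dz = z - CYL_LENGTH
        return r2 + dz * dz <= RADIUS ** 2
    return False                      # below the base plane

# Total tracking volume: cylinder plus hemisphere, about 0.65 m^3
volume = math.pi * RADIUS**2 * CYL_LENGTH + (2.0 / 3.0) * math.pi * RADIUS**3
```

    A tracker would apply a test like this to flag tools that leave the calibrated working volume, where accuracy claims no longer hold.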


    2. Sample Size Used for the Test Set and the Data Provenance

    The document does not describe a separate "test set" in the context of a clinical study or performance evaluation with a specific number of instances. The evaluation is primarily a comparative analysis against a predicate device's specifications.

    Therefore:

    • Sample size for the test set: Not applicable, as no new test set (e.g., patient cases) for performance evaluation is described.
    • Data provenance: Not applicable. The "data" here refers to the design specifications and performance claims of the predicate device, not patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    Not applicable. As there is no described test set of patient cases requiring ground truth establishment, no experts were used for this purpose in the context of this submission. The ground truth for the predicate device's performance would have been established during its own clearance process, but that information is not provided here.


    4. Adjudication Method for the Test Set

    Not applicable. No new test set for performance evaluation is described.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    Not applicable. This device is an "Image Assisted Surgery Device Option" and not an AI/CADe system designed for interpreting diagnostic images. It provides orientation and reference information during surgery, not interpretive assistance for image diagnosis. Therefore, an MRMC study comparing human reader performance with and without "AI assistance" is not relevant to this device's function or the information provided.


    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done

    The device itself is a "Passive Tool Option" for an "Image Assisted Surgery Device," meaning it's a component designed to be used with a human surgeon for intra-operative guidance. Its function is to track tools. The "Average Tool Accuracy" of 2.0 - 5.0 mm represents a standalone performance characteristic of the tool tracking system itself, independent of a specific surgical outcome. However, this is a technical specification, not an "algorithm-only" performance in the sense of an AI model's diagnostic output.

    So, while the tool accuracy is a standalone metric, it's not a standalone clinical performance study in the way an AI diagnostic algorithm would be evaluated. The document states the "Average Tool Accuracy" for the predicate device is 2.0-5.0mm, and the new device performs the "Same."


    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    For the key performance metric of "Average Tool Accuracy" (2.0 - 5.0 mm), the ground truth would typically be established by precise, independent measurement systems in a controlled laboratory or phantom setting using known reference points. This is a technical specification, not a clinical ground truth related to patient disease. The document does not specify how the predicate device's accuracy was measured, only that the new device's accuracy is "Same."
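    Phantom-based accuracy testing of this kind typically compares tracked tool-tip positions against reference points whose locations are known precisely. The following sketch is hypothetical (the submission does not describe its test method); the fiducial coordinates and error offsets are invented to show how an average tool accuracy figure could be computed.

```python
import numpy as np

def average_tool_accuracy(measured, reference):
    """Mean Euclidean distance (mm) between tracked tool-tip
    positions and known phantom reference points."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    errors = np.linalg.norm(measured - reference, axis=1)
    return errors.mean()

# Known phantom fiducial locations (mm) and simulated tracked readings
reference = np.array([[0.0, 0.0, 0.0],
                      [50.0, 0.0, 0.0],
                      [0.0, 50.0, 0.0]])
measured = reference + np.array([[1.0, 0.0, 0.0],   # 1 mm error
                                 [0.0, 2.0, 0.0],   # 2 mm error
                                 [0.0, 0.0, 3.0]])  # 3 mm error
acc = average_tool_accuracy(measured, reference)  # mean of 1, 2, 3 -> 2.0 mm
```

    A result of 2.0 mm would fall at the favorable end of the 2.0 - 5.0 mm range quoted for the predicate device.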


    8. The Sample Size for the Training Set

    Not applicable. This device is a hardware option for an image-assisted surgery system and its performance is evaluated through engineering comparisons. It does not appear to be an AI/machine learning device that requires a training set in the conventional sense.


    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no described training set.
