
510(k) Data Aggregation

    K Number: K200534
    Device Name: VisualEyes
    Manufacturer:
    Date Cleared: 2020-08-12 (163 days)
    Product Code:
    Regulation Number: 882.1460
    Reference Devices: K964646, K131681

    Intended Use

    The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information then can be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for VisualEyes system is 5 years of age and above.

    Device Description

    VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), the "515" system has a subset of the "525" features, and the "505" system provides a simple video recording mode.
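
    The paragraph above describes what the software does but not how; the 510(k) summary does not disclose the underlying algorithms. Purely as an illustrative sketch of the kind of eye-movement analysis a VNG system performs (not the device's actual method), the snippet below estimates nystagmus slow-phase velocity from a tracked horizontal eye-position trace; the sampling rate and fast-phase threshold are assumed values.

    ```python
    import numpy as np

    def slow_phase_velocity(eye_pos_deg, fs_hz=100.0, fast_phase_thresh_deg_s=100.0):
        """Illustrative slow-phase velocity (SPV) estimate for a nystagmus trace.

        eye_pos_deg             -- 1-D array of horizontal eye position in degrees
        fs_hz                   -- camera sampling rate in Hz (assumed value)
        fast_phase_thresh_deg_s -- velocity above which a sample is treated as a
                                   fast-phase (saccadic) beat and excluded (assumed)
        """
        # Differentiate position to get velocity in degrees/second.
        velocity = np.gradient(np.asarray(eye_pos_deg, dtype=float)) * fs_hz
        # Keep only slow-phase samples (velocity below the fast-phase threshold).
        slow = velocity[np.abs(velocity) < fast_phase_thresh_deg_s]
        # Mean slow-phase velocity is a common summary measure in VNG reports.
        return float(slow.mean()) if slow.size else 0.0
    ```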

    AI/ML Overview

    The provided text describes the acceptance criteria and a study to demonstrate the substantial equivalence of the VisualEyes 505/515/525 system to its predicate devices. However, it does not detail specific quantitative acceptance criteria or a traditional statistical study with performance metrics like sensitivity, specificity, or AUC as might be done for an AI/algorithm-only performance study.

    Instead, the study aims to show substantial equivalence by verifying that the new software generates the same clinical findings as the predicate devices.

    Here's a breakdown of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide a table of quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or agreement thresholds) in the way one might expect for a standalone AI performance evaluation.

    Instead, the acceptance criterion for the comparison study was to demonstrate "negligible statistical difference beneath the specified acceptance criteria" between the new VisualEyes software and the predicate devices. The "reported device performance" is simply the conclusion that this criterion was met.

    Acceptance Criterion: Demonstrate that VisualEyes 505/515/525 produces a "negligible statistical difference beneath the specified acceptance criteria" compared to the predicate devices for clinical findings.
    Reported Device Performance:
    • "all data sets showed a negligible statistical difference beneath the specified acceptance criteria."
    • "There were no differences found in internal bench testing comparisons or the external beta testing statistical comparisons."
    • "all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): Not explicitly stated. The document mentions "various groups in different geographical locations externally" for beta testing and that "the same subject" was tested on both the new and predicate devices. However, the exact number of subjects or cases is not provided.
    • Data Provenance: Retrospective, in the sense that data were collected sequentially from the same subjects on both the new and predicate devices after the new software was developed. The beta testing was conducted at "external sites that had either MMT or IA existing predicate devices," implying a real-world clinical setting across multiple external geographical locations; specific countries are not mentioned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Two.
    • Qualifications of Experts: "licensed internal clinical audiologists." No specific experience level (e.g., "10 years of experience") is provided.

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method in the traditional sense of multiple readers independently assessing cases and then resolving discrepancies. Instead, the two clinical audiologists reviewed and compared the test results, stating: "It is the professional opinion of both clinical reviewers of the validation that all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." This suggests a consensus-based review rather than a formal adjudication process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or performed to measure the improvement of human readers with AI assistance versus without. The study focused on the substantial equivalence of the new software as a device, not on AI-assisted human performance improvement.

    6. Standalone (Algorithm-Only) Performance Study

    Yes, in effect. The study evaluates the VisualEyes 505/515/525 software, which is described as a "software program that analyzes eye movements." The comparison is between the output and findings generated by the new software and those generated by the predicate software. Although the software is part of a system with goggles and cameras, the evaluation focuses on the analytical software component as a standalone entity and its ability to produce equivalent clinical findings.

    7. Type of Ground Truth Used

    The "ground truth" implicitly referred to here is the "clinical findings" and "test results" generated by the predicate devices. The new software's output was compared to these established predicate device results to determine equivalence. It's a "comparison to predicate" truth rather than an independent gold standard like pathology or long-term outcomes.

    8. Sample Size for the Training Set

    Not applicable/Not provided. The VisualEyes 505/515/525 is described as an "update/change" and "software program that analyzes eye movements," and "the technological principles for VisualEyes 3 is based on refinements from VisualEyes 2." It's not presented as a machine learning model that undergoes explicit "training" with a separate dataset. It's more of a software update with algorithm refinements.

    9. How the Ground Truth for the Training Set was Established

    Not applicable/Not provided, as there is no mention of a separate training set or machine learning training process. The software's development likely involved engineering and refinement based on existing knowledge and the performance of previous versions (VisualEyes 2 and the reference devices).
