510(k) Data Aggregation
(163 days)
The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information can then be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for the VisualEyes system is 5 years of age and above.
VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change, replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), while the "515" system has a subset of the "525" features. "505" is a simple video recording mode.
The provided text describes the acceptance criteria and a study to demonstrate the substantial equivalence of the VisualEyes 505/515/525 system to its predicate devices. However, it does not detail specific quantitative acceptance criteria or a traditional statistical study with performance metrics like sensitivity, specificity, or AUC as might be done for an AI/algorithm-only performance study.
Instead, the study aims to show substantial equivalence by verifying that the new software generates the same clinical findings as the predicate devices.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a table of quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or agreement thresholds) in the way one might expect for a standalone AI performance evaluation.
Instead, the acceptance criterion for the comparison study was to demonstrate "negligible statistical difference beneath the specified acceptance criteria" between the new VisualEyes software and the predicate devices. The "reported device performance" is simply the conclusion that this criterion was met.
| Acceptance Criterion | Reported Device Performance |
| --- | --- |
| Demonstrate that VisualEyes 505/515/525 produces "negligible statistical difference beneath the specified acceptance criteria" compared to predicate devices for clinical findings. | "all data sets showed a negligible statistical difference beneath the specified acceptance criteria." |
| | "There were no differences found in internal bench testing comparisons or the external beta testing statistical comparisons." |
| | "all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." |
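The summary quotes a criterion of "negligible statistical difference" without naming the statistical method used. One common way to formalize such a claim for paired device measurements is an equivalence check in the style of two one-sided tests (TOST). The sketch below is illustrative only: the equivalence margin `delta`, the normal-approximation cutoff, and the slow-phase-velocity numbers are all assumptions, not the manufacturer's analysis.

```python
# Illustrative TOST-style equivalence check on paired device measurements.
# NOT the manufacturer's method; margin and data are hypothetical.
import math
from statistics import mean, stdev

def equivalent(new, old, delta, z=1.645):
    """Paired differences are 'equivalent' if the ~90% CI of the mean
    difference lies entirely within (-delta, +delta).
    Uses a normal approximation; a real analysis would use a t distribution."""
    diffs = [a - b for a, b in zip(new, old)]
    half = z * stdev(diffs) / math.sqrt(len(diffs))
    lo, hi = mean(diffs) - half, mean(diffs) + half
    return -delta < lo and hi < delta

# Hypothetical paired slow-phase-velocity readings (deg/s), same subjects
# measured on the predicate device and the new (subject) device:
predicate = [30.1, 28.4, 25.9, 33.2, 29.7, 31.0, 27.5, 26.8]
subject = [30.3, 28.1, 26.2, 33.0, 29.9, 30.8, 27.7, 26.5]
print(equivalent(subject, predicate, delta=1.0))
```

A tighter margin makes the same data fail the check, which is the point of an equivalence test: the margin, not the data alone, encodes what "negligible" means.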
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not explicitly stated. The document mentions "various groups in different geographical locations externally" for beta testing and that "the same subject" was tested on both the new and predicate devices. However, the exact number of subjects or cases is not provided.
- Data Provenance: Likely prospective, since data were collected from subjects on both the new and predicate devices after the new software was developed, specifically for this comparison. The beta testing was conducted at "external sites that had either MMT or IA existing predicate devices," implying a real-world clinical setting. The sites are described only as "different geographical locations externally," implying a multi-site study, but specific countries are not named.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two.
- Qualifications of Experts: "licensed internal clinical audiologists." No specific experience level (e.g., "10 years of experience") is provided.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method in the traditional sense of multiple readers independently assessing cases and then resolving discrepancies. Instead, the two clinical audiologists reviewed and compared the test results, stating: "It is the professional opinion of both clinical reviewers of the validation that all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." This suggests a consensus-based review rather than a formal adjudication process.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or performed to measure the improvement of human readers with AI assistance versus without. The study focused on the substantial equivalence of the new software as a device, not on AI-assisted human performance improvement.
6. Standalone (Algorithm-Only) Performance Study
Yes, in a way. The study evaluates the "VisualEyes 505/515/525 software" which is described as a "software program that analyzes eye movements." The comparison is between the output and findings generated by the new software versus the predicate software. While it's part of a system with goggles and cameras, the evaluation focuses on the analytical software component as a standalone entity in its ability to produce equivalent clinical findings.
7. Type of Ground Truth Used
The "ground truth" implicitly referred to here is the "clinical findings" and "test results" generated by the predicate devices. The new software's output was compared to these established predicate device results to determine equivalence. It's a "comparison to predicate" truth rather than an independent gold standard like pathology or long-term outcomes.
8. Sample Size for the Training Set
Not applicable/Not provided. The VisualEyes 505/515/525 is described as an "update/change" and "software program that analyzes eye movements," and "the technological principles for VisualEyes 3 is based on refinements from VisualEyes 2." It's not presented as a machine learning model that undergoes explicit "training" with a separate dataset. It's more of a software update with algorithm refinements.
9. How the Ground Truth for the Training Set was Established
Not applicable/Not provided, as there is no mention of a separate training set or machine learning training process. The software's development likely involved engineering and refinement based on existing knowledge and the performance of previous versions (VisualEyes 2 and the reference devices).
(168 days)
The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information can then be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for the VisualEyes system is 5 years of age and above.
VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change, replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), while the "515" system has a subset of the "525" features. "505" is a simple video recording mode. The VisualEyes 505/515/525 software is designed to perform the following vestibular tests: Spontaneous Nystagmus Test, Gaze Test, Smooth Pursuit Test, Saccade Test, Optokinetic Test, Dix-Hallpike, Positional Test, Caloric Test, SHA, Step, Visual VOR, VOR Suppression, VisualEyes 505. The system consists of a head-mounted goggle/mask, a camera unit and a software application running on a standard PC.
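The document names the vestibular tests but gives no detail on the nystagmus-analysis algorithm itself. For orientation, the core VNG measurement, slow-phase velocity (SPV), can be sketched in a generic textbook style: differentiate the eye-position trace, crudely reject fast phases (the quick resetting saccades) with a velocity threshold, and summarize the remaining samples. The threshold and the synthetic sawtooth trace below are assumptions; this is not the IA Curve Tracker algorithm.

```python
# Generic textbook-style SPV estimate; NOT the VisualEyes algorithm.
from statistics import median

def slow_phase_velocity(position_deg, fs_hz, fast_thresh=100.0):
    """Estimate slow-phase velocity (deg/s) from eye-position samples.
    Fast phases are rejected with a simple velocity threshold (assumed)."""
    vel = [(b - a) * fs_hz for a, b in zip(position_deg, position_deg[1:])]
    slow = [v for v in vel if abs(v) < fast_thresh]
    return median(slow)

# Synthetic sawtooth nystagmus at 100 Hz: slow drift of +20 deg/s for
# 0.45 s, then a fast resetting phase of -180 deg/s for 0.05 s.
fs = 100
pos, p = [], 0.0
for beat in range(4):
    for _ in range(45):
        pos.append(p)
        p += 20 / fs
    for _ in range(5):
        pos.append(p)
        p -= 180 / fs

print(round(slow_phase_velocity(pos, fs), 1))
```

Real trackers must also handle blinks, noise, and non-conjugate artifacts, which is why a simple threshold is only a starting point.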
The provided text describes a 510(k) submission for the Interacoustics VisualEyes device. The focus of the performance tests is on demonstrating substantial equivalence to predicate devices rather than proving the device meets specific acceptance criteria for a novel AI/ML algorithm.
Therefore, many of the typical acceptance criteria and study details relevant to AI/ML device performance (like sensitivity, specificity, AUC, human-in-the-loop performance, expert ground truth establishment for a novel algorithm, etc.) are not explicitly mentioned in this document. The document focuses on showing that the new version of VisualEyes (revision 2) performs similarly to its predecessor (revision 1) and another cleared predicate device (VIDEO EYE TRAKKER).
However, I can extract the relevant information from the provided text, interpreting the "acceptance criteria" in this context as the demonstration of substantial equivalence through comparable performance.
Here's the breakdown based on the provided document:
Acceptance Criteria and Reported Device Performance for Substantial Equivalence
Since this is a 510(k) submission demonstrating substantial equivalence to a predicate device, the "acceptance criteria" can be interpreted as the demonstration that the "key algorithms for detecting and analysing nystagmus" are similar between the new device and the predicate devices. The reported performance is the qualitative finding of "equivalence."
Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion (Implicit) | Reported Device Performance |
| --- | --- |
| Functional Equivalence: The device's eye movement analysis algorithms (IA Curve tracker) should perform comparably to those in the predicate devices. | "Same – An algorithm comparison evaluation has been performed. This evaluation shows high correlation between the algorithm used in the predicate device and the new as the algorithms are the same." |
| | "All results showed equivalence between the predicates and the subject, this means that results processed in predicates are showing equivalence to results from the subject device." |
| | "We have performed a comparison validation between VisualEyes 505/515/ 525 and the predicate devices. All similarities and differences have been discussed. We trust that the results of these comparisons demonstrate that the VisualEyes 505/515/ 525 is substantially equivalent to the marketed predicate devices." |
| Clinical Performance Equivalence: The device should perform as specified, safely and effectively, in clinical comparisons. | "We have performed clinical comparisons between the three systems. These activities, testing and validation show that VisualEyes 505/515/ 525 perform as specified and is safe and effective." |
| No Essential/Major Differences: No differences should exist that adversely affect safety and effectiveness. | "We did not find any essential or major differences between the devices." |
| | "Any deviations between VisualEyes 505/515/ 525 and predicate devices are appraised to have no adverse effect on the safety and effectiveness of the device." |
Study Details for Demonstrating Substantial Equivalence
1. Sample size used for the test set and the data provenance:
   - The document states: "The demonstration was carried out as a side by side comparison where the same patient was analysed by the subject device and the predicate device simultaneously."
   - It also mentions: "All tests were performed on test subjects with conjugate eye movements."
   - However, the exact number of patients/subjects in the test set is not specified.
   - Data provenance (country of origin, retrospective/prospective): Not specified.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
   - The document does not describe ground truth established by experts in the typical sense for AI/ML performance validation (e.g., for disease diagnosis).
   - Instead, the "ground truth" for this substantial equivalence study appears to be the output of the predicate device: the goal was to show that the new device's processing ("IA Curve Tracker") yielded "high correlation" with and "equivalence" to the predicate device's output. The predicate device itself is implicitly treated as the reference for comparative purposes.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
   - Not applicable, as the comparison was made directly between the output of the subject device and the predicate device, not against a human-adjudicated ground truth.
4. Multi-reader multi-case (MRMC) comparative effectiveness study, and the effect size of human reader improvement with versus without AI assistance:
   - No MRMC study was described. The study focused on the algorithmic comparison between devices, not human reader performance with or without AI assistance. The device's purpose is to assist a trained medical professional, but validation rested on comparing the algorithms' outputs.
5. Standalone (algorithm-only, without human-in-the-loop) performance:
   - Yes, this appears to be the primary method of validation for the algorithm. The document states: "One camera recorded the left eye and was processed in the predicate device and the other recorded the right eye and was processed in subject device." This indicates a direct comparison of algorithmic output, separate from human interpretation or human-in-the-loop performance.
6. Type of ground truth used (expert consensus, pathology, outcomes data, etc.):
   - For the algorithmic comparison, the "ground truth" was effectively the output of the predicate device. The study aimed to show that the subject device's processing produced equivalent results to the cleared predicate device for the "key algorithms for detecting and analysing nystagmus."
   - For overall clinical performance, the document states: "These activities, testing and validation show that VisualEyes 505/515/ 525 perform as specified and is safe and effective," which implies a broader clinical assessment but gives no details on how "truth" was established for clinical outcomes beyond direct comparison to the predicate.
7. Sample size for the training set:
   - Not applicable/Not mentioned. This document describes a 510(k) submission for an update to an existing device, focusing on substantial equivalence testing rather than the development or training of a de novo AI/ML model. The algorithm is stated to be "the same" as the predicate's "IA Curve tracker."
8. How the ground truth for the training set was established:
   - Not applicable/Not mentioned, for the reasons stated above.
(153 days)
The VisualEyes 515/VisualEyes 525 system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information can then be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for the VisualEyes 515/525 system is 5 years of age and above.
VisualEyes 515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 515/525 is replacing the existing Micromedical Technologies Spectrum vestibular testing software system and the Interacoustics VN415 and VO425 vestibular testing software (510(k) cleared under K964646 and K072254). The software system will work with the existing Micromedical VisualEyes Goggle and Interacoustics VN415/VO425 Goggle. The goggle hardware is not part of this submission and is still assumed covered by K964646 and K072254. The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), while the "515" system has a limited number of tests (indicated with a * below).
VNG in general is used to record nystagmus during oculomotor tests such as saccades, pursuit and gaze testing, optokinetics, and also calorics. The VisualEyes 515/525 software performs the following standard vestibular tests: *Spontaneous Nystagmus, Gaze, Smooth Pursuit, Saccade, Optokinetic, *Positionals, *Dix-Hallpikes and *Caloric tests. These are exactly the same standard tests that are performed by the predicate devices and are described in the ANSI standard (ANSI S3.45-1999, "American National Standard Procedures for Testing Basic Vestibular Function"). There are no differences in any settings or parameters in these default tests in any of the devices. The clinical validation tests showed that each test was performed in exactly the same manner and resulted in similar findings when comparing VE525 to the predicate devices.
The provided text describes the Interacoustics VisualEyes 515/525 system, a video nystagmography (VNG) device, and its substantial equivalence determination to predicate devices. However, it does not contain the specific details required to fully address all aspects of your request, particularly regarding quantitative acceptance criteria, statistical study design, and sample sizes for training and test sets.
Here's an analysis based on the information provided, highlighting what is present and what is missing:
Acceptance Criteria and Study Details for VisualEyes 515/525
The provided document describes a substantial equivalence (SE) determination based on a comparison to predicate devices, rather than a clinical trial with predefined quantitative acceptance criteria and a detailed statistical analysis of performance metrics. Therefore, a table of acceptance criteria with reported device performance, a sample size for the test set, detailed ground truth establishment, and MRMC study details cannot be fully constructed from the given text.
The core of the "study" described is a side-by-side comparison to demonstrate the algorithms for detecting and analyzing nystagmus were similar to the predicate devices.
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, no numerical acceptance criteria (e.g., sensitivity, specificity, accuracy targets) are explicitly stated, nor are specific quantitative performance metrics reported for the VisualEyes 515/525. The "acceptance criteria" appear to be based on demonstrating "equivalence" in functionality and results compared to predicate devices.
| Acceptance Criterion (Inferred from SE determination) | Reported Device Performance (Inferred) |
| --- | --- |
| Similar algorithms for detecting and analyzing nystagmus to predicate devices | "All results showed equivalence between the predicates and the subject, this means that results processed in predicates are showing equivalence to results from the subject device." |
| Performance as specified | "All these activities, testing and validation show that VisualEyes 515/ 525 perform as specified and is safe and effective." |
| No essential or major differences affecting safety and effectiveness compared to predicate devices | "We did not find any essential or major differences between the devices." |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "The demonstration was carried out as a side by side comparison where the same patient was analysed by the subject device and the predicate device simultaneously."
- Sample Size for Test Set: Not explicitly stated. The text refers to "the same patient" in a singular sense, but then mentions "test subjects" in plural. It is unclear how many patients/subjects were included in this comparison.
- Data Provenance: Not explicitly stated (e.g., country of origin). The study involved simultaneous analysis of "the same patient" with conjugate eye movements. It is a prospective comparison in the sense that data was collected specifically for this validation, but not necessarily a "clinical trial" as commonly understood.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not provided in the document. The study described is a comparison between devices, not primarily against an independent expert-established ground truth. The implication is that the predicate devices' outputs serve as a reference for equivalence.
4. Adjudication Method for the Test Set
This information is not provided. Given the side-by-side comparison, it's possible the "adjudication" was a direct functional comparison of the outputs, but no formal adjudication method is outlined.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not explicitly described. The document focuses on device-to-device equivalence rather than human-reader performance with or without AI assistance. The device assists trained medical professionals, but its impact on human reader improvement or an effect size is not discussed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The described "performance tests" involving the side-by-side comparison with predicate devices can be considered a form of standalone algorithm evaluation when comparing the output signals and processed data directly. However, the device is explicitly intended to "provide information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders," implying a human-in-the-loop for diagnosis. The study focused on the equivalence of the raw measurements and processed data between the new and predicate devices.
7. The Type of Ground Truth Used
The "ground truth" for the performance comparison in this context appears to be the output of the legally marketed predicate devices (Micromedical Technologies Spectrum and Interacoustics VN415/VO425 systems). The goal was to show that the VisualEyes system produced "similar findings" and "equivalence" to these established devices.
"One camera recorded the left eye and was processed in the predicate device and the other recorded the right eye and was processed in subject device. All results showed equivalence between the predicates and the subject..."
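The side-by-side design quoted above (left eye processed by the predicate, right eye by the subject device, under conjugate eye movements) lends itself to a simple signal-level check: since conjugate eyes follow the same waveform, the two processed traces should correlate highly. A minimal sketch with synthetic traces (not device data; the small high-frequency term is an assumed stand-in for tracker noise):

```python
# Correlation check between two eye-position traces; synthetic data only.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic horizontal eye-position traces (degrees), 2 s sampled at 100 Hz.
t = [i / 100 for i in range(200)]
left_eye = [10 * math.sin(2 * math.pi * 0.5 * s) for s in t]  # -> predicate
right_eye = [10 * math.sin(2 * math.pi * 0.5 * s)             # -> subject,
             + 0.2 * math.sin(2 * math.pi * 7 * s) for s in t]  # plus "noise"

r = pearson(left_eye, right_eye)
print(round(r, 4))
```

A correlation near 1.0 would support the document's "high correlation" claim, though the 510(k) summary reports no numeric threshold for it.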
8. The Sample Size for the Training Set
This information is not provided. The document makes no mention of a "training set" for the algorithms, which is typical for machine learning-based devices. The algorithms likely rely on established biomechanical models for eye movement rather than a distinct training phase.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned (see point 8), this information is not provided.
Summary of Missing Information:
The provided 510(k) summary is focused on establishing substantial equivalence to previously cleared predicate devices, emphasizing functional and technological similarity. It does not contain the detailed statistical analysis, quantitative performance metrics, and specific study design elements (like explicit sample sizes for testing/training, ground truth establishment by experts, or MRMC studies) that would typically be found in direct performance studies or clinical trials for novel devices or AI/ML-driven diagnostics. The "study" described is a technical comparison rather than a full-fledged clinical validation with predefined endpoints.