Search Results
Found 35 results
510(k) Data Aggregation
(266 days)
Springs, Colorado 80918
Re: K243709
Trade/Device Name: NeuroEars-Anna™
Regulation Number: 21 CFR 882.1460
Classification Name: Nystagmograph, vestibular analysis
Classification & Regulation: Class II per 21 CFR §882.1460
Predicate Device: I-Portal Video Oculography Eye Tracking System (VOG) (K143607)
The predicate I-Portal device functions as a nystagmograph, defined by 21 CFR 882.1460 as "a device used to measure, record, or visually display the involuntary movements (nystagmus) of the eyeball."
The NeuroEars-Anna™ system provides information to assist in the nystagmographic evaluation, diagnosis, and documentation of vestibular disorders. Nystagmus of the eye is recorded using a head-mounted display equipped with eye-tracking cameras. These images are measured, recorded, displayed, and stored in the software. This information can then be used by trained medical professionals to assist in diagnosing vestibular disorders.
The NeuroEars-Anna™ system is intended for use in individuals aged 12 years and older, based on the physical compatibility of the FOVE VR headset (FOVE Inc., Japan). While the ANSI S3.45 standard does not define age-based limitations, the FOVE VR headset is generally suitable for individuals aged 12 and above. For improved fit, soft materials such as sponge pads may be used in cases where the headset does not conform properly to the user's face. This applies to both pediatric and adult patients. Any additional padding should be used only if it does not interfere with eye-tracking performance or measurement accuracy and must follow the manufacturer's instructions for proper use.
NeuroEars-Anna™ is a standalone software device that analyzes eye movements to assist medical professionals in the nystagmographic evaluation, diagnosis, and documentation of vestibular disorders. The NeuroEars-Anna™ software is intended to be used with off-the-shelf hardware, including the HMD, PC, and monitors.
The NeuroEars-Anna™ software is designed to perform the following vestibular tests:
- Spontaneous Nystagmus Test
- Gaze-Evoked Nystagmus Test
- Head Shaking Nystagmus Test
- Fistula Nystagmus Test
- Dix-Hallpike Test
- Positional Test
- Smooth Pursuit Test
- Random Saccade Test
- Saccadometry Test
- Optokinetic Nystagmus Test
- Subjective Visual Vertical/Subjective Visual Horizontal (SVV/SVH)
- Caloric Test
- Video Frenzel
NeuroEars-Anna™ is a software program that analyzes eye movements recorded from an eye-tracking camera mounted on a head-mounted display (HMD) with eye-tracking specifications suggested by ANSI/ASA S3.45-2009 (reaffirmed by ANSI on April 16, 2024). The HMD devices used can be commercial products such as the FOVE0 (powered by FOVE Inc., Japan), which meet these minimum eye-tracking specifications. The software is intended to run on a Microsoft Windows PC platform.
Here's a breakdown of the acceptance criteria and study information for NeuroEars-Anna™, based on the provided FDA 510(k) Clearance Letter:
1. Table of Acceptance Criteria and Reported Device Performance
Performance Test | Acceptance Criteria | Reported Device Performance | Pass/Fail |
---|---|---|---|
Eye Tracking Camera Frame Rate | Minimum 60 Hz | Hardware specification standard: 120 Hz | Pass |
Eye Tracking Accuracy | Horizontal error: 0.1° to 1.0°; vertical error: 0.4° to 1.0° | Hardware specification standard: 1.15° median accuracy for uniform distribution across screen (…) | |
(203 days)
Taastrup, DK-2630 Denmark
Re: K242198
Trade/Device Name: ICS Dizcovery (1091)
Regulation Number: 21 CFR 882.1460
ICS Dizcovery is used in the assessment of the vestibular-ocular reflex (VOR) and nystagmus by measuring, recording, displaying, and analyzing eye and head movements.
The 1091 ICS Dizcovery is used in the assessment of patients with complaints of a vestibular nature such as dizziness, disequilibrium, and vertigo. The ICS Dizcovery type 1091 does not treat or diagnose the patient; the diagnosis is determined by the credentialed physician.
The ICS Dizcovery system is intended to be used by qualified medical personnel. Typical device users are Neurologists, Ears, Nose, and Throat specialists (ENTs), Audiologists, Physical Therapists, and Technicians supervised by one of the four mentioned typical users. Users are professionals with knowledge of diagnosing balance disorders and are assumed to have prior knowledge of the medical and scientific facts underlying the procedures offered by the ICS Dizcovery system.
The intended patient population is children aged 10 to 18 and adults aged 18 to 99 years with a complaint of dizziness, balance disorder, or vestibular disease.
The ICS Dizcovery device is a portable video-oculography (VOG) device. The device shall be indicated for videonystagmography for the assessment of patients with complaints of dizziness, disequilibrium, and vertigo. It provides an assessment of the vestibular-ocular reflex (VOR) and nystagmus by measuring, recording, displaying, and analyzing eye and head movements.
The ICS Dizcovery is a wearable measurement system in the form of goggles, with built-in cameras and a motion sensor that simultaneously track eye and head movement, respectively. The goggles have two cameras that collect the eye movement video. The goggle device is connected to a PC running the Otosuite Vestibular PC software (through a USB cable). The Otosuite Vestibular software functions by processing the binocular camera data. The collected eye movement data are analyzed with respect to the various stimuli presented throughout testing.
The ICS Dizcovery system is made up of the following components:
- A pair of binocular video goggles for testing, controlled through the Otosuite Vestibular PC software
- Otosuite Vestibular PC software for controlling tests, displaying test data for the various tests, reviewing and printing test results, as well as managing patients and patient data, users, and test devices.
- A set of Shadeshift® Vision Denied panels; these panels deny the patient's vision throughout testing whilst allowing the clinician to continue tracking the eyes in darkness. Vision-denied testing is needed throughout the test workflow.
- A set of LCD panels; these automatically shift between dark states to facilitate a user-friendly approach to Skew Deviation testing. This removes the need for manually denying vision of each eye separately with the use of a hand or a paddle.
- A face cushion which inserts into the goggle frame to promote comfort whilst wearing and allow a better fit to the face.
Here's a breakdown of the acceptance criteria and study information for the ICS Dizcovery device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Test Methods | | |
Biocompatibility (Cytotoxicity, Sensitization, Irritation) | No issues found upon testing | No issues were found during biocompatibility testing. |
Electrical Safety (IEC 60601-1:2005+AMD1:2012+AMD2:2020) | Compliance with standard | The ICS Dizcovery system was tested to and complies with this standard. |
EMC (IEC 60601-1-2:2014+AMD1:2020, IEC 60601-4-2 Ed. 1.0 (2016)) | Compliance with standards | The ICS Dizcovery system was tested to and complies with these standards. |
Usability (IEC 60601-1-6:2010+A1:2013+A2:2020, IEC 62366-1:2015+AMD1:2020) | Compliance with standards | The ICS Dizcovery system was tested to and complies with these standards. |
Software Lifecycle (IEC 62304:2006 + A1:2015) | Compliance with standard and FDA guidance | Software Verification and Validation testing were conducted, and Basic Documentation Level was provided as recommended by FDA's Guidance. |
Laser Safety (IEC 60825-1:2014+ISH1:2017+ISH2:2017) | Compliance with standard | The ICS Dizcovery system was tested to and complies with this standard. |
Photobiological Safety (IEC 62471:2006) | Compliance with standard | The ICS Dizcovery system was tested to and complies with this standard. |
Basic Vestibular Function Testing (ANSI S3.45-2009 (Reaffirmed 2019)) | Compliance with standard | The ICS Dizcovery system was tested to and complies with this standard. |
Mechanical Testing (Goggle stimuli projection system, Scratch resistance of mirror, Head strap pull cycle, USB cable bend and pull cycle) | Performance not degraded over useful lifetime | The ICS Dizcovery successfully underwent mechanical testing to ensure that the performance of the device is not degraded by wear and tear over its useful lifetime. |
Design Verification & Validation | All tests meet required acceptance criteria | All tests were verified to meet the required acceptance criteria. The performance testing demonstrated that the differences in the design and performance do not affect intended use or raise new questions on safety and effectiveness. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the sample size for any specific test sets. However, it does refer to "Design Verification & Validation activities" and "Reliability testing... conducted... by a third party." It does not provide information about the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The document mentions that a "credentialed physician will use this data to make a diagnosis," but this refers to the intended use of the device, not the establishment of ground truth for testing.
4. Adjudication Method for the Test Set
This information is not provided in the document.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
No MRMC comparative effectiveness study is mentioned in the provided text. The document focuses on demonstrating substantial equivalence to a predicate device through non-clinical performance data and technical comparisons.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The document implies standalone performance testing as part of the "Software Verification and Validation testing" and "Design Verification & Validation activities." The ICS Dizcovery system "functions by processing the binocular camera data collection. The collection of eye movement data is analysed to eye movement in respect to various stimuli throughout testing." However, the results (e.g., specific metrics for eye movement analysis accuracy) of this standalone algorithm performance are not detailed beyond a general statement of compliance and meeting acceptance criteria.
7. The Type of Ground Truth Used
The document does not explicitly detail the type of ground truth used for specific tests. For the overall validation, it implicitly relies on established engineering and safety standards (e.g., IEC, ANSI) as the "ground truth" against which the device's performance is measured. The "Design Verification & Validation activities" would have involved testing against predefined specifications.
8. The Sample Size for the Training Set
This information is not provided in the document. The text indicates that the device has "software" that processes and analyzes data, implying an algorithm that might have been trained, but no details on training data are given.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the document.
(115 days)
Uniti 1/3 Padova, 35127 Italy
Re: K242726
Trade/Device Name: Synapsys VHIT Regulation Number: 21 CFR 882.1460
Device Trade Name: Synapsys VHIT
Common Name: Nystagmograph
Classification Name: Nystagmograph
Regulation Number: 882.1460
The Synapsys VHIT is a medical device designed to assess the vestibular-ocular-reflex (VOR) by measuring, recording, displaying, and analyzing eye and head movements.
Synapsys VHIT (Video Head Impulse Test) allows clinicians to assess the vestibular-ocular reflex (VOR) by measuring and analyzing eye and head movements. Synapsys VHIT does not require the patient to wear goggles, since it consists of a remote camera placed 90 cm from the subject, framing the subject's eyes and head. The device features real-time eye detection and tracking, thanks to a built-in infrared illuminator that lights up the subject. Head displacements are retrieved using algorithms that extract motion data from real-time images recorded by the camera.
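As background for the gain comparisons discussed below, here is a minimal sketch of how a VOR gain could be computed from synchronized head and eye velocity traces. The area-ratio convention and all names here are illustrative assumptions; the summary does not disclose Inventis's actual algorithm:

```python
import numpy as np

def vor_gain(head_velocity, eye_velocity, impulse_window):
    """Illustrative VOR gain over a head-impulse window.

    head_velocity, eye_velocity: arrays in deg/s on a common clock.
    impulse_window: boolean mask selecting the impulse samples.
    The area-ratio convention used here is one common choice; the
    Synapsys VHIT's actual algorithm is not disclosed in the summary.
    """
    head = head_velocity[impulse_window]
    eye = -eye_velocity[impulse_window]  # VOR is compensatory, opposite in sign
    return np.trapz(eye) / np.trapz(head)

# Example: a perfectly compensatory response yields a gain of 1.0.
t = np.linspace(0, 0.15, 150)
head = 250 * np.sin(2 * np.pi * t / 0.3)  # synthetic impulse profile
print(vor_gain(head, -head, head > 10))   # -> ~1.0
```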
The provided document is an FDA 510(k) summary for the Inventis Synapsys VHIT device. It details a comparative analysis with a predicate device, ICS Impulse, rather than a standalone study of the Synapsys VHIT's performance against defined acceptance criteria for accuracy or clinical effectiveness. The information is limited regarding the specific details requested about statistical methodology and ground truth establishment for a standalone performance study.
Based on the provided text, here's what can be extracted and inferred regarding the "acceptance criteria" and "study that proves the device meets the acceptance criteria":
General Approach and "Acceptance Criteria"
The study described is a comparative analysis rather than a direct performance assessment against a pre-defined absolute accuracy (e.g., Sensitivity/Specificity targets). The core "acceptance criterion" appears to be the agreement between the Synapsys VHIT and the predicate device (ICS Impulse) in VOR Gain measurements.
- Acceptance Criteria for VOR Gain Measurement: The primary criterion for the comparative analysis was that "the average VOR Gain computed by the two devices fell within a confidence interval of ±0.1."
Study Details Related to "Acceptance Criteria" and "Device Performance"
Here's the information organized as requested, with details extracted from the document:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (for Comparative Study) | Reported Device Performance |
---|---|
Average VOR Gain (Synapsys VHIT) vs. Average VOR Gain (ICS Impulse) difference within a confidence interval of ±0.1. | "The results of the comparative analysis demonstrate that the VOR gain differences fall within the predefined confidence interval of ±0.1." |
Note: The document primarily focuses on demonstrating substantial equivalence to a predicate device through VOR gain agreement, not on establishing independent accuracy metrics like sensitivity or specificity against a definitive ground truth of vestibular function.
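To make the criterion concrete, here is a minimal sketch of the ±0.1 agreement check on hypothetical paired gains; the summary reports neither the sample size nor the individual values, so the numbers below are purely illustrative:

```python
import numpy as np

# Hypothetical paired per-subject VOR gains from the two devices.
gain_synapsys = np.array([0.95, 0.88, 1.02, 0.61, 0.79])
gain_ics      = np.array([0.97, 0.85, 1.05, 0.58, 0.83])

mean_diff = (gain_synapsys - gain_ics).mean()
# Criterion from the summary: average VOR gain difference within ±0.1.
print(f"mean difference = {mean_diff:+.3f} ->",
      "within ±0.1" if abs(mean_diff) <= 0.1 else "outside ±0.1")
```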
2. Sample size used for the test set and the data provenance
- Sample Size: Not explicitly stated. The document mentions tests on "pathological and nonpathological patients" but does not provide the number of patients or the number of tests performed.
- Data Provenance: Not explicitly stated (e.g., country of origin). The study used "pathological and nonpathological patients," implying real patient data. It is a prospective comparison of measurements from two devices on the same subjects.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The "ground truth" in this comparative study is the measurement obtained from the predicate device (ICS Impulse) and consistency in eye/head movement trajectories between the two devices, rather than an independent expert-derived clinical diagnosis.
4. Adjudication method for the test set
This information is not provided. As this was a comparative measurement study rather than a diagnostic performance study where human readers would adjudicate results, an adjudication method in the traditional sense (e.g., for image interpretation) is unlikely to have been applied. The comparison was statistical on the quantitative VOR gain values and qualitative assessment of trajectories.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. An MRMC study was not performed as described. The device is a diagnostic tool for measuring VOR; it is not an AI-powered image analysis tool that assists human readers in making diagnoses. The study performed a direct comparison of quantitative measurements from two medical devices.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, in spirit. The comparison was between the measurements generated by the Synapsys VHIT device (which uses "software algorithms for eye tracking and head pose estimation") and the predicate device. While human operators perform the tests, the analysis of VOR gain is an algorithmic output. The study's focus was on the agreement of the device's algorithmic output with that of the predicate. It did not involve assessing the device's diagnostic performance (e.g., sensitivity/specificity for a specific condition) against an independent clinical ground truth in a standalone manner.
7. The type of ground truth used
The "ground truth" in this study was the measurements provided by the legally marketed predicate device (ICS Impulse). Additionally, the consistency of "trajectories of head and eye movements" observed by both devices served as a secondary form of 'truth' or agreement. This is a common approach for demonstrating substantial equivalence for quantitative measurement devices.
8. The sample size for the training set
Not applicable/Not Provided. The document describes a verification and validation study for a medical device that includes "software algorithms." It does not provide details of an AI/ML model's training set as one might find for a deep learning algorithm. The "software algorithms" mentioned are likely deterministic signal processing or computer vision algorithms for eye and head tracking, rather than a machine learning model that requires a "training set" in the conventional sense of supervised learning. If there was an ML component, the training data details are not disclosed.
9. How the ground truth for the training set was established
Not applicable/Not Provided. As mentioned above, the document does not describe a training set for an AI/ML model. Therefore, how its "ground truth" was established is not relevant or not provided.
(273 days)
Taichung City, 411 Taiwan
Re: K223047
Trade/Device Name: NeuroSwift Pro Regulation Number: 21 CFR 882.1460
Classification Panel: Neurology
Product Code: GWN
Regulation Number: 882.1460
NeuroSwift Pro is intended for viewing and recording eye movements in support of identifying vestibular disorders. The system is to be used by trained healthcare personnel in an appropriate healthcare setting. This system provides no diagnosis and does not provide diagnostic recommendations. The target population is 12+ years of age.
The NeuroSwift Pro is intended for viewing and recording eye movements in support of identifying vestibular disorders. The system is to be used by trained healthcare personnel in an appropriate healthcare setting. This system provides no diagnosis and does not provide diagnostic recommendations.

NeuroSwift Pro, model NS01-2, contains goggles and software. The NeuroSwift Pro is a combination of hardware and software, designed to provide information for clinicians as a supplement in clinical decision-making based on eye movements. The NeuroSwift Pro goggle is a see-through binocular video goggle with a pair of light reflectors and a 3-meter, nylon-braided USB cable connecting to the computer interface. Accompanying components include a stable and sturdy head strap, a face cushion, and a lightweight eye cover. The hardware provides high-definition video recording capability.

The NeuroSwift Pro software is a computer interface designed to record eye movement videos and simultaneously display the visual target(s) on the computer screen. The software provides vestibular test modes and calibration functions. Initially, the software instructs the users to follow calibration functions. Then, the examiner can observe spontaneous nystagmus using the eye cover. The examiner can perform oculomotor tests with the goggles, while the patient follows visual targets on the screen. During gaze tests, the patient fixates on stationary white spots positioned at center, right, left, up, and down. In saccade tests, the patient is asked to stare at a target moving in a horizontal, vertical, or mixed pattern. In pursuit tests, the patient's ability to track a target that moves in a sinusoidal or triangular pattern across the screen is assessed. The optokinetic function provides a large moving checkerboard pattern on the screen. Patients can change into various body positions as directed by the clinician.

The software measures the eye movement via slow phase velocity and measures the eye position shift and trace. The recorded video and test protocols are processed in the NeuroSwift Pro software. The NeuroSwift Pro generates traces of eye movements, eye velocities, and analytical data, which allow the clinician to determine the response of the patient according to the test functions.
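Slow phase velocity, mentioned above, is the standard quantitative measure of nystagmus. Here is a minimal sketch of one conventional estimation approach, assuming a sampled eye-position trace; the threshold and method are illustrative assumptions, not NeuroSwift Pro's disclosed algorithm:

```python
import numpy as np

def slow_phase_velocity(position_deg, fs_hz, fast_thresh_deg_s=60.0):
    """Estimate slow-phase velocity (deg/s) from an eye-position trace.

    Differentiates position, masks out fast phases (saccadic beats)
    by a velocity threshold, and returns the median slow-phase speed.
    Threshold and method are illustrative defaults, not the device's.
    """
    velocity = np.gradient(position_deg) * fs_hz           # deg/s
    slow = velocity[np.abs(velocity) < fast_thresh_deg_s]  # drop fast phases
    return np.median(np.abs(slow)) if slow.size else 0.0
```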
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The provided document details a performance comparison test to demonstrate substantial equivalence to a predicate device. The acceptance criteria are implicit in the conclusion that the device "meets the pre-specified acceptance criteria" and that all criteria generated a "Pass" status.
Table of Acceptance Criteria and Reported Device Performance
Description | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|
Calibration | Equivalent to predicate device | Met (indicated as "same") |
Spontaneous nystagmus | Equivalent to predicate device | Met (indicated as "same") |
Gaze | Equivalent to predicate device | Met (indicated as "same") |
Saccade | Equivalent to predicate device | Met (indicated as "same") |
Pursuit | Equivalent to predicate device | Met (indicated as "same") |
Optokinetic | Equivalent to predicate device | Met (indicated as "same") |
Positional | Equivalent to predicate device | Met (indicated as "same") |
Calibration ability | Pass | Passed |
Eye movement direction | Pass | Passed |
Deming regression results | Within 95% CI agreement with predicate device | Passed (analyzed and plotted for comparison) |
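The Deming regression referenced in the table above fits a line while allowing measurement error in both devices' readings, unlike ordinary least squares. A minimal sketch under the common equal-error-variance assumption (delta = 1), with hypothetical paired data; the submission does not disclose the actual software or settings used:

```python
import numpy as np

def deming_slope_intercept(x, y, delta=1.0):
    """Deming regression (errors in both variables).
    delta = ratio of y-error variance to x-error variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical paired measurements (predicate vs. NeuroSwift Pro).
x = np.array([2.1, 3.4, 4.0, 5.2, 6.8])
y = np.array([2.0, 3.6, 4.1, 5.0, 7.0])
print(deming_slope_intercept(x, y))  # close agreement -> slope ~1, intercept ~0
```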
Study Details
1. Sample Size and Data Provenance:
- Test Set Sample Size: 10 subjects.
- Data Provenance: The document does not explicitly state the country of origin. It appears to be a prospective study as subjects underwent testing with both devices specifically for this evaluation.
2. Number and Qualifications of Experts for Ground Truth:
- Number of Experts: Two (2) healthcare professionals (referred to as "Evaluators").
- Qualifications of Experts: They are described as "experienced healthcare professionals." More specific qualifications (e.g., audiologist, neurologist, years of experience) are not provided.
3. Adjudication Method for the Test Set:
- The evaluation involved "two experienced healthcare professionals (Evaluators)" who were responsible for "observing the eye movements, generating test reports for each vestibular function test, and comparing the results between the two devices."
- A "data analyzer utilized the Deming regression method (95% CI) to analyze and plot the data for comparison."
- Finally, "The evaluators then assessed and rated the 'Pass/Fail' status for all criteria of all tested subjects, including the calibration ability, eye movement direction, and Deming regression results."
- This suggests a consensus-based adjudication method where both evaluators assessed and rated the results, likely agreeing on the "Pass/Fail" status after reviewing the data and Deming regression analysis. It's not a specified 2+1 or 3+1 method; rather, it appears to be a joint assessment and rating by the two evaluators, supported by quantitative analysis.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was it done?: No, a traditional MRMC study comparing human readers with and without AI assistance was not conducted. This study's purpose was to demonstrate the equivalence of the new NeuroSwift Pro device to a predicate device, not the improvement of human reader performance with AI. The device itself (NeuroSwift Pro) is for "viewing and recording eye movements in support of identifying vestibular disorders" and "provides no diagnosis and does not provide diagnostic recommendations." Therefore, it's not an AI-assisted diagnostic tool in the typical sense that would necessitate an MRMC "human + AI vs human only" study.
- Effect Size: N/A, as an MRMC study comparing human readers with AI assistance was not performed.
5. Standalone Performance (Algorithm Only):
- Was it done?: Yes, to an extent. While not explicitly framed as "algorithm only performance," the study conducted a performance comparison test where the device's output (eye movements, test reports, and quantitative data like slow phase velocity, eye position shift, and trace) was compared to that of the predicate device. The Deming regression analysis specifically evaluated the agreement of quantitative measurements between the two devices. This could be considered a form of standalone performance evaluation in the context of device function and measurement accuracy, rather than diagnostic accuracy.
6. Type of Ground Truth Used:
- The ground truth in this comparative study was the performance of the legally marketed predicate device (VisualEyes 505/515/525) and the expert assessment of the evaluators. The study aimed to demonstrate that the NeuroSwift Pro's measurements and qualitative assessments ("Pass/Fail" for calibration, eye movement direction) were equivalent to those obtained from the predicate device, as interpreted by experienced healthcare professionals. It's a comparative ground truth rather than an independent gold standard like pathology or long-term clinical outcomes.
7. Sample Size for Training Set:
- The document does not provide information on a training set for any machine learning components. This is likely because the device primarily functions as an eye movement recording and analysis system (nystagmograph), not explicitly an AI/ML diagnostic algorithm that would require a separate training dataset for model development. The focus is on the accuracy and equivalence of its measurements and recording capabilities.
8. How Ground Truth for Training Set was Established:
- N/A, as information on a training set is not provided.
(265 days)
Reference Devices: RightEye Vision System (K181771)
Classification: Class II (21 CFR 882.1460), Nystagmograph (GWN)
The Retitrack™ is intended for recording, viewing, and analyzing temporal characteristics of fixation and saccadic responses when viewing a visual stimulus. The Retitrack™ is intended for use by healthcare practitioners in healthcare settings (e.g., physician's office, clinic, laboratory).
The Retitrack™ is a monocular, bench-top saccadometer that incorporates scanning laser ophthalmoscope (SLO) technology and eye tracking software to record, view, measure, and analyze eye motion. The Retitrack™ is comprised of an optical head containing an illumination system and an optical system; a base unit with a computer, electronics, and a power distribution system; connections for external input/output devices (e.g., monitor, keyboard, mouse, and storage media); a patient forehead and chin rest; and operational software.
The Retitrack™ interacts with the patient by directing light from an infrared (840 nm) superluminescent diode (SLD) into the patient's eye. The only parts of the device that contact the patient are the forehead and chin rest with adjustable temple pads and an optional attachable head strap to stabilize the patient's head.
The Retitrack™ uses the SLD light to scan the patient's retina in two dimensions while the patient is viewing a visual stimulus. The optical imaging system detects the reflected (or returned) light from the retina and creates high-resolution, digital retinal video sequences over time. The eye tracking software uses eye motion corrected frames to measure the translational retinal movement over time. The device displays the analysis of the eye motion results and saves the retinal video and a report. The Retitrack™ does not provide a diagnosis or treatment recommendation.
The Retitrack™ has separate tests that measure fixation stability (including microsaccades and drift) and visually guided horizontal saccade tracking. The Retitrack™ can be programmed by the user with specific visual stimuli presentations, including a single fixed stimulus to measure fixation stability or two alternating stimuli in different orientations to measure horizontal saccades. For the fixation stability test, the Retitrack™ analyzes the fixation responses, including microsaccade amplitude, microsaccade frequency, microsaccade velocity, drift velocity, and drift ratio over time. For the saccade tracking tests, the Retitrack™ analyzes the saccadic responses, including duration, amplitude, target accuracy, latency, and velocity.
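For context on the saccadic measures listed above (latency, amplitude, velocity), here is a minimal sketch of how such metrics could be derived from an eye-position trace and a stimulus onset time. The velocity threshold and function names are illustrative assumptions, not details from the 510(k) summary:

```python
import numpy as np

def saccade_metrics(t, eye_deg, stim_onset_s, vel_thresh=30.0):
    """Latency, amplitude, and peak velocity of the first saccade after stimulus.

    t: time stamps (s); eye_deg: horizontal eye position (deg);
    vel_thresh: velocity criterion (deg/s) marking saccade onset -- an
    illustrative default, not a value from the 510(k) summary.
    """
    vel = np.gradient(eye_deg, t)  # deg/s
    after = np.where((t >= stim_onset_s) & (np.abs(vel) > vel_thresh))[0]
    if after.size == 0:
        return None
    onset = after[0]
    # Saccade ends when velocity falls back under threshold.
    below = np.where(np.abs(vel[onset:]) < vel_thresh)[0]
    offset = onset + (below[0] if below.size else len(vel) - 1 - onset)
    return {
        "latency_s": t[onset] - stim_onset_s,
        "amplitude_deg": eye_deg[offset] - eye_deg[onset],
        "peak_velocity_deg_s": np.abs(vel[onset:offset + 1]).max(),
    }
```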
The provided text describes the Retitrack™ device and its performance testing to demonstrate substantial equivalence to a predicate device. However, it does not explicitly state "acceptance criteria" in the format of a table or provide specific values for the device to meet. Instead, it describes various performance tests and their outcomes, implying that successful completion of these tests serves as the criteria for acceptance.
Therefore, the following information is extracted and presented based on what is available in the text, and where specific acceptance criteria are not provided, the reported performance is described as the outcome of the validation.
Acceptance Criteria and Device Performance
Since explicit quantitative acceptance criteria for all aspects are not provided, the "Acceptance Criteria" column will describe the objective of the test, and the "Reported Device Performance" will detail the findings.
Acceptance Criteria (Objective of Test) | Reported Device Performance |
---|---|
Verify compliance with safety standards (e.g., IEC 60601-1, IEC 60601-1-2, IEC 60825-1, ANSI Z80.36) | Device demonstrated compliance with all listed standards, including IEC 60601-1:2005 + AMD1:2012 + AMD2:2020, IEC 60601-1-2:2014, IEC 60825-1:2014, and ANSI Z80.36-2021. It is classified as a Group 1 scanning instrument (light hazard ≤ 1.32 mW at the eye) and a Class 1 laser product. |
Software verification and validation (function, GUI, analysis algorithm, usability) | Software functions, graphical user interface (GUI), analysis algorithm, and usability were verified and validated with representative intended users in a simulated use environment. (No specific metrics provided, but implied successful). |
Eye movement measurement accuracy and tracking performance (bench testing) | Demonstrated accuracy and tracking performance (no specific metrics provided, but implied successful). Reported 200 videos for fixation stability and >300 videos for horizontal saccade tracking. |
1. Sample Size Used for the Test Set and Data Provenance:
- **Data Provenance:** The document does not specify the country of origin for the human subject data. It also does not explicitly state whether the study was retrospective or prospective, but the description of "human subjects" and "recorded... while pupil videos were recorded simultaneously" implies a prospective data collection.
2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- The document does not provide information on the number of experts used or their qualifications for establishing ground truth for the test set. The ground truth appears to be based on the device's ability to accurately measure expected responses or on comparative analysis with another tracking method, rather than expert consensus on a diagnosis or interpretation.
3. Adjudication Method for the Test Set:
- The document does not specify any adjudication method for the test set. The validation seems to rely on quantitative measurement comparisons and correlations rather than subjective interpretations requiring adjudication.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI vs. without AI assistance. The device is an "Eye Movement Monitor" and the studies reported focus on its measurement accuracy and equivalence to other tracking methods, not on assisting human interpretation of images or data.
5. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance):
- Yes, standalone performance was assessed. The device itself is an automated measurement tool. The performance tests described (e.g., "Fixation and saccade measurements were successfully measured for all subjects," "linear relationship... found between the expected response and the measured retinal response," "good agreement between the pupil and retinal tracking methods") refer to the algorithm's direct measurement capabilities without human interpretation as part of the primary output.
6. Type of Ground Truth Used:
- The ground truth appears to be based on:
  - Expected responses: For saccade amplitude and velocity, the device's measurements were compared against the "expected response" (likely defined by the stimulus presented).
  - Comparative method: For retinal vs. pupil tracking, the ground truth for comparison was the "pupil videos... processed with a standalone pupil tracking algorithm."
- This is not typical "expert consensus" or "pathology" ground truth as might be seen for diagnostic imaging devices. It's an engineering and physiological measurement validation.
7. Sample Size for the Training Set:
- The document does not provide information on the sample size used for the training set.
8. How the Ground Truth for the Training Set Was Established:
- The document does not provide information on how the ground truth for an implied training set (if any, for the analysis algorithm's development) was established.
(30 days)
Jersey 07059
Re: K203082
Trade/Device Name: Insight Infrared Video Goggles Regulation Number: 21 CFR 882.1460
Infrared Video Goggles, Video Frenzels, Video Goggles
Classification Name - Nystagmograph (21 CFR § 882.1460)
The Insight Infrared Video Goggles are intended for viewing and recording eye movements in support of identifying vestibular disorders in patients. The device is intended for use only by a trained healthcare professional in an appropriate healthcare setting. This device provides no diagnoses nor does it provide diagnostic recommendations. The target population is 12+ years of age.
The Insight Infrared Video Goggles system displays and records eye movements on a computer from cameras mounted to goggles worn by a patient. The eye movements, called nystagmus, are part of the body's balance system and can be analyzed by a trained clinician to provide objective information during a vestibular exam. The goggles are designed to block all external light from the patient's eyes so they are unable to fixate on anything in their visual field. This is an important performance characteristic of the goggles, since the eyes can suppress abnormal nystagmus when not occluded. The goggles have a durable plastic shell that houses two (2) cameras, infrared LED lights, two (2) switch-driven visible lights, and a face cushion. The goggles connect to the computer with a 4 m USB cable and are designed to be worn by the patient for 10 to 15 minutes on average in various body positions as directed by the clinician.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for the Insight Infrared Video Goggles (K203082):
Device: Insight Infrared Video Goggles
Intended Use: Viewing and recording eye movements in support of identifying vestibular disorders in patients. For use by trained healthcare professionals, target population 12+ years of age. Provides no diagnoses or diagnostic recommendations.
1. Table of Acceptance Criteria and Reported Device Performance
The core performance claims for this device revolve around its ability to clearly view and record eye movements for vestibular assessment, mirroring the capabilities of the predicate device. The study design reflects this by focusing on the subjective assessment of eye movement visibility by trained clinicians.
Acceptance Criteria | Reported Device Performance |
---|---|
Ability to view eye movements clearly enough for assessment during relevant Vestibular Function Tests (spontaneous nystagmus, gaze-evoked nystagmus, positional/positional nystagmus). | "YES" for all criteria for all subjects tested by both clinicians. |
Performance present (i.e., eye movements visible and clear for assessment). | 100% PASS |
Demonstrated equivalent performance to the VisualEyes Video Eye Monitor (K964325) for shared vestibular tests. | Results "demonstrate equivalent performance." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 5 subjects.
- Data Provenance: Not explicitly stated regarding country of origin, but implied to be a prospective study conducted specifically for this submission. The testing was comparative against a predicate device.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Two trained clinicians.
- Qualifications of Experts: Described as "trained clinicians." No specific years of experience or board certifications (e.g., radiologist) are provided in this document, but their role is to assess the visibility and clarity of eye movements, not to make diagnostic calls. The indication for use specifies "trained healthcare professional," and the predicate comparisons mention a similar level of training for the intended operator (audiologists, ENT doctors, physicians, vestibular rehabilitation specialists, or licensed healthcare personnel).
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly described as a formal adjudication process. The document states that "two trained clinicians viewed the eye movements." Since the Insight Infrared Video Goggles were rated "YES" for all criteria by both clinicians for all subjects tested, perfect concordance was achieved, negating the need for a specific adjudication rule like 2+1 or 3+1.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a formal MRMC comparative effectiveness study in the sense of human readers improving with AI vs. without AI assistance was not performed.
- This device is not an AI-assisted diagnostic tool; it's a hardware device for viewing and recording eye movements.
- The comparative performance testing focused on whether the device itself could provide clear visibility of eye movements, equivalent to a predicate device, as assessed by human clinicians. It does not measure the improvement of human readers' diagnostic accuracy with or without the device, but rather the device's functional equivalence.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)
- Standalone Performance: Not applicable in the context of an algorithm's (AI) performance. This device is a hardware component (goggles with cameras) that provides visual data for human interpretation. The "performance" being evaluated is the clarity of the video output as perceived by human clinicians, not an automated algorithm's output. The device "provides no diagnoses nor does it provide diagnostic recommendations."
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus on the visibility and clarity of eye movements offered by the device. The "ground truth" here is the subjective assessment by the trained clinicians that the eye movements were adequately visible for performing the vestibular function tests. It is not based on pathology, outcomes data, or a definitive clinical diagnosis.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. This device is a hardware system for capturing video, not an AI/ML algorithm that requires a training set. The "testing" described is performance validation against basic functionality, not machine learning model training.
9. How Ground Truth for Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no training set for this type of device.
(163 days)
Alle 1 Middelfart, 5500 Denmark
Re: K200534
Trade/Device Name: VisualEyes Regulation Number: 21 CFR 882.1460
Classification Name: Nystagmograph, apparatus, vestibular analysis
Product Code: GWN
Classification Panel: Neurology
Device Class: Class II (according to 21 CFR 882.1460)
The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information then can be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for VisualEyes system is 5 years of age and above.
VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change, replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below) while the "515" system has a subset of the "525" features. "505" is a simple video recording mode.
The provided text describes the acceptance criteria and a study to demonstrate the substantial equivalence of the VisualEyes 505/515/525 system to its predicate devices. However, it does not detail specific quantitative acceptance criteria or a traditional statistical study with performance metrics like sensitivity, specificity, or AUC as might be done for an AI/algorithm-only performance study.
Instead, the study aims to show substantial equivalence by verifying that the new software generates the same clinical findings as the predicate devices.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a table of quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or agreement thresholds) in the way one might expect for a standalone AI performance evaluation.
Instead, the acceptance criterion for the comparison study was to demonstrate "negligible statistical difference beneath the specified acceptance criteria" between the new VisualEyes software and the predicate devices. The "reported device performance" is simply the conclusion that this criterion was met.
Acceptance Criterion | Reported Device Performance |
---|---|
Demonstrate that VisualEyes 505/515/525 produces "negligible statistical difference beneath the specified acceptance criteria" compared to predicate devices for clinical findings. | "all data sets showed a negligible statistical difference beneath the specified acceptance criteria." |
"There were no differences found in internal bench testing comparisons or the external beta testing statistical comparisons." | |
"all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not explicitly stated. The document mentions "various groups in different geographical locations externally" for beta testing and that "the same subject" was tested on both the new and predicate devices. However, the exact number of subjects or cases is not provided.
- Data Provenance: Retrospective, as it involved collecting data sequentially from subjects using both the new and predicate devices after the new software was developed. The beta testing was conducted in "external sites that had either MMT or IA existing predicate devices," implying a real-world clinical setting. The geographical locations are described as "different geographical locations externally," implying a multi-site study, but specific countries are not mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two.
- Qualifications of Experts: "licensed internal clinical audiologists." No specific experience level (e.g., "10 years of experience") is provided.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method in the traditional sense of multiple readers independently assessing cases and then resolving discrepancies. Instead, the two clinical audiologists reviewed and compared the test results, stating: "It is the professional opinion of both clinical reviewers of the validation that all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." This suggests a consensus-based review rather than a formal adjudication process.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or performed to measure the improvement of human readers with AI assistance versus without. The study focused on the substantial equivalence of the new software as a device, not on AI-assisted human performance improvement.
6. Standalone (Algorithm-Only) Performance Study
Yes, in a way. The study evaluates the "VisualEyes 505/515/525 software" which is described as a "software program that analyzes eye movements." The comparison is between the output and findings generated by the new software versus the predicate software. While it's part of a system with goggles and cameras, the evaluation focuses on the analytical software component as a standalone entity in its ability to produce equivalent clinical findings.
7. Type of Ground Truth Used
The "ground truth" implicitly referred to here is the "clinical findings" and "test results" generated by the predicate devices. The new software's output was compared to these established predicate device results to determine equivalence. It's a "comparison to predicate" truth rather than an independent gold standard like pathology or long-term outcomes.
8. Sample Size for the Training Set
Not applicable/Not provided. The VisualEyes 505/515/525 is described as an "update/change" and "software program that analyzes eye movements," and "the technological principles for VisualEyes 3 is based on refinements from VisualEyes 2." It's not presented as a machine learning model that undergoes explicit "training" with a separate dataset. It's more of a software update with algorithm refinements.
9. How the Ground Truth for the Training Set was Established
Not applicable/Not provided, as there is no mention of a separate training set or machine learning training process. The software's development likely involved engineering and refinement based on existing knowledge and the performance of previous versions (VisualEyes 2 and the reference devices).
(109 days)
Pittsburgh, Pennsylvania 15238
Re: K192186
Trade/Device Name: I-Norm 100 18-45 Regulation Number: 21 CFR 882.1460
Common/Usual Name: Nystagmograph
Classification Name: Nystagmograph
Product Classification: Class II, § 882.1460
I-Norm 100_18-45 is a quantitative tool for the comparison of patient measurements to a database of known normal subjects ages 18-45. I-Norm 100_18-45 presents data for interpretation by qualified medical personnel trained in vestibular, neurotologic, and neuro-ophthalmic diagnostic testing. I-Norm 100_18-45 is applicable only to individuals in the 18-45 age range.
The proposed intended use for I-Norm 100_18-45 is as follows:
I-Norm 100_18-45 is designed to be used with the family of I-Portal® devices:
- I-Portal® Neuro Otologic Test Center (NOTC) (cleared by K083603, K143607)
- I-Portal® Video-Nystagmography System (VNG) SVNG-2 (cleared by K143607)
- I-Portal® Portable Assessment System™ Nystagmograph (I-PAS™) (cleared by K171884)
The I-Portal® devices with I-Norm 100_18-45 are nystagmograph devices intended for use in vestibular and neurotologic diagnostic testing. The addition of the I-Norm 100_18-45 does not change the intended use or the indication for use for these devices from their original clearances listed above.
I-Norm 100_18-45 is a software package that contains normative data for the segment of population ages 18 to 45, for oculomotor, vestibular, reaction time and cognitive (OVRT-C) tests delivered on the family of I-Portal® devices described below.
All I-Portal® devices function as nystagmographs, defined by 21 CFR 882.1460 as "devices used to measure, record, or visually display the involuntary movements (nystagmus) of the eyeball." Through their nystagmograph functionality, the I-Portal® devices are indicated for use as a measurement tool to assist trained physicians in their analysis of vestibular and neurotologic disorders, requiring a separation of central and peripheral nervous system deficits. The I-Portal® devices are used in an institutional environment on the order of a physician.
The I-Portal® NOTC features a rotational chair, optokinetic (OKN) optical stimulus, Pursuit Tracker (PT) laser target generator, I-Portal Video Oculography (VOG), I-Portal® and VEST™ software, and a test enclosure equipped with a communication system.
The I-Portal® VNG offers a subset of the NOTC tests and additional vestibular tests through a device with a smaller physical footprint. The VNG has many of the same elements used in the NOTC configuration: OKN optical stimulus, PT laser target generator, VOG, I-Portal and VEST™ software platforms.
The I-PAS™ is a portable, compact, 3D, head-mounted display system with integrated eye tracking that offers a set of tests equivalent to the VNG and additional vestibular and oculomotor tests, but with a smaller physical footprint. Stimuli are presented on a high-resolution digital screen (2560 x 1440 pixels) mounted in a virtual reality-like goggle system.
Software: All I-Portal® devices use VEST™ software to deliver a battery of oculomotor, vestibular, reaction time and cognitive (OVRT-C) tests and I-Portal® software to record eye movements and reaction time responses during these tests. The analysis of OVRT-C tests performed by VEST™ software provides the clinician with a number of physiological measurements to aid in the assessment of vestibular and neurotologic disorders. This application seeks to add I-Norm 100_18-45, a normative oculomotor, vestibular, reaction time and cognitive test database for ages 18-45 to the VEST™ software.
This document describes the I-Norm 100_18-45 software, a quantitative tool that compares patient measurements to a database of known normal subjects aged 18-45. It is designed to be used with Neuro Kinetics, Inc.'s I-Portal® devices (NOTC, VNG, I-PAS™).
Here's an analysis of the acceptance criteria and the study proving the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The core acceptance criterion for the I-Norm 100_18-45 software is the establishment and presentation of a normative oculomotor and vestibular database for ages 18-45 across various tests. The device performance is demonstrated by the calculation of 95% Reference Intervals (RI) with 90% Confidence Intervals (CI) for a wide range of variables across 16 different oculomotor, vestibular, reaction time, and cognitive (OVRT-C) tests.
The document does not explicitly state pre-defined "acceptance criteria" in a numerical target format (e.g., "accuracy > X%"). Instead, the performance is demonstrated by the successful collection, analysis, and presentation of these normative ranges. The implication is that the derived normative ranges themselves, with their associated confidence intervals, represent the "performance" for establishing what is considered "normal" for the target population.
Table 5-2 (pages 8-10 of the input) serves as the primary evidence of the reported device performance, providing the calculated normative ranges. A representative sample from Table 5-2 is shown below to illustrate the type of reported performance:
Representative Performance Data from Table 5-2 (I-Norm 100_18-45)
Test | Variable | RI Lower Limit | RI Upper Limit | 90% CI for Lower Limit RI | 90% CI for Upper Limit RI |
---|---|---|---|---|---|
1. Saccade Random, Horizontal | Latency (sec) | n/a | 0.22 | n/a | 0.21 - 0.22 |
 | Accuracy (%) | 81 | 103 | 80 - 82 | 101 - 105 |
 | Peak Velocity (deg/sec) for eye displacement of 30 (deg) | 356 | n/a | 355 - 374 | n/a |
3. Smooth Pursuit Horizontal 0.1 Hz | Velocity gain | 0.78 | 1.07 | 0.76 - 0.80 | 1.07 - 1.08 |
 | Asymmetry (%) | -8.80 | 7.53 | (-9.91) - (-8.17) | 7.26 - 7.91 |
5. OKN 20 deg/s | Average eye velocity for CCW stimuli (deg/sec) | -20.05 | -12.15 | (-20.31) - (-19.84) | (-13.00) - (-11.37) |
12. Visual Reaction Time | Latency (msec) | n/a | 343 | n/a | 335 - 350 |
16. Antisaccades | Error Rate (%) = % of pro-saccade errors | 0 | 50 | 0 - 0 | 50.00 - 50.00 |
Note: "n/a" indicates a limit not of clinical interest for that specific variable (e.g., only an upper limit for latency, or only a lower limit for velocities/gains).
The "acceptance criteria" can be implicitly understood as the successful generation of these statistically robust normative ranges, allowing the device to perform its intended function of comparison. The software verification and validation testing mentioned under "Non-Clinical Performance Data" also implies acceptance criteria related to software functionality and display of norms.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The study collected data from a total of 466 subjects (males and females).
- Data Provenance:
- Country of Origin: United States. Subjects were recruited at 3 different sites: the University of Miami, Naval Medical Center San Diego, and Madigan Army Medical Center.
- Retrospective or Prospective: The data collection described appears to be prospective, as subjects were "recruited" and "tested" with a battery of tests.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This study is focused on establishing normative data for a specific age range, not on classifying or diagnosing specific conditions based on expert consensus. Therefore, the concept of "experts establishing ground truth" in the traditional sense (e.g., for disease diagnosis) isn't directly applicable here.
Instead, the "ground truth" for the normative data is derived from a carefully selected healthy population (ages 18-45) that met strict inclusion/exclusion criteria designed to ensure they were free from conditions that could impact oculomotor and vestibular tests. The establishment of this "healthy" ground truth was implicitly overseen by the Institutional Review Boards (IRBs) that approved the protocols and the researchers conducting the study, who are qualified in the field of vestibular, neurotologic, and neuro-ophthalmic diagnostic testing. The specific number and qualifications of these researchers/clinicians supervising data collection and subject selection are not detailed, but the general context implies medical and scientific expertise.
4. Adjudication Method for the Test Set
Since the study aims to establish normative ranges from healthy individuals rather than diagnose conditions requiring interpretation of ambiguous data, an explicit "adjudication method" for the test set (like 2+1 or 3+1 consensus) was not performed or needed. The raw data from the healthy subjects themselves, after passing the exclusion criteria, formed the basis for the normative ranges.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was mentioned or performed. This study's purpose was to establish normative data for the device, not to assess human reader improvement with AI assistance. The I-Norm 100_18-45 provides a quantitative comparison tool, but it does not act as an AI that assists in interpreting complex images or signals in a way that typically necessitates an MRMC study to show human performance improvement. It's a reference database rather than an assistive AI interpretation tool.
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Performance
The device, I-Norm 100_18-45, is a software package that contains normative data. Its "performance" is its ability to accurately store and present these normative ranges and to allow comparison of patient data to these norms. It is not an algorithm that independently diagnoses or makes decisions, but rather a reference tool.
The "standalone performance" is demonstrated by the calculation of the 95% Reference Intervals (RI) and 90% Confidence Intervals (CI) presented in Table 5-2. These numerical values are a direct output of the algorithm's statistical analysis of the collected data, independent of human interpretation for their derivation. The software then displays these values and visually indicates if patient data fall within or outside these norms. The "Software Verification and Validation Testing" also confirms that the software correctly displays these norms as intended.
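The norm-comparison step itself is a simple range check against the stored reference interval limits. A minimal sketch of that logic, using illustrative limits rather than code or values from the cleared device:

```python
def flag_against_norms(value, ri_lower, ri_upper):
    """Label a patient measurement relative to a 95% reference interval."""
    return "within norms" if ri_lower <= value <= ri_upper else "outside norms"

# Illustrative limits for a velocity-type variable (hypothetical numbers)
print(flag_against_norms(-15.0, -20.05, -12.15))  # within norms
print(flag_against_norms(-25.0, -20.05, -12.15))  # outside norms
```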
7. Type of Ground Truth Used
The ground truth used was normative data derived from a large cohort of healthy individuals (ages 18-45) who met specific inclusion/exclusion criteria. This is essentially "healthy population data" as a form of ground truth for establishing what constitutes "normal" ranges for the specific physiological measurements obtained by the I-Portal® devices. There was no "expert consensus" on individual cases of disease, pathology, or outcomes data used to establish this normative ground truth.
8. Sample Size for the Training Set
The document does not explicitly delineate a "training set" and a "test set" in the typical machine learning sense for an AI model that learns a function. Instead, the entire dataset of 466 subjects was used to establish the normative database. This database, once established, acts as the reference for future patient comparisons. In this context, the 466 subjects serve as the data from which the "normative model" (i.e., the reference intervals) was derived.
9. How the Ground Truth for the Training Set Was Established
As noted above, there isn't a "training set" in the conventional AI sense. The ground truth for establishing the normative database (which acts as the reference for the device) was established as follows:
- Subject Selection: Healthy individuals aged 18-45 were recruited from the general population, including nonprofessional athletes, civilians, and military service members.
- Inclusion/Exclusion Criteria: Strict criteria approved by IRBs were used to select subjects. Exclusion criteria were focused on conditions/diseases that could impact oculomotor and vestibular tests (e.g., history of brain injury, severe neuropsychiatric disorders, neurodegenerative disorders of hearing and balance, cerebrovascular disorders, certain ear operations, systemic disorders like renal failure, cirrhosis, and pregnancy).
- Data Collection: Participants underwent a battery of oculomotor, vestibular, reaction time, and cognitive (OVRT-C) tests using I-Portal® NOTC or I-PAS™ devices.
- Statistical Analysis: The collected data from these rigorously selected healthy subjects were analyzed using a univariate general linear model to assess the effect of age, gender, and combined age x gender. Non-parametric methods were then used to calculate the 95% (2.5th and 97.5th) reference interval (RI) with 90% confidence interval (CI) for the lower and upper limits of each variable. This statistical derivation from a verified healthy population constitutes the "ground truth" for the normative data.
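The submission does not state exactly how the confidence intervals on the reference limits were computed. A minimal sketch of one common non-parametric approach (percentile-based limits with bootstrap confidence intervals), run on simulated data rather than the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def reference_interval(values, n_boot=2000):
    """Non-parametric 95% reference interval (2.5th/97.5th percentiles)
    with bootstrap 90% confidence intervals for each limit."""
    values = np.asarray(values, dtype=float)
    lower, upper = np.percentile(values, [2.5, 97.5])
    boot = np.array([
        np.percentile(rng.choice(values, size=values.size, replace=True),
                      [2.5, 97.5])
        for _ in range(n_boot)
    ])
    lower_ci = np.percentile(boot[:, 0], [5, 95])  # 90% CI, lower limit
    upper_ci = np.percentile(boot[:, 1], [5, 95])  # 90% CI, upper limit
    return (lower, upper), lower_ci, upper_ci

# Simulated healthy-cohort sample of the same size as the study (466 subjects)
sample = rng.normal(loc=0.0, scale=4.0, size=466)
(ri_lo, ri_hi), lo_ci, hi_ci = reference_interval(sample)
print(f"95% RI: {ri_lo:.2f} to {ri_hi:.2f}")
print(f"90% CI on lower limit: {lo_ci[0]:.2f} to {lo_ci[1]:.2f}")
print(f"90% CI on upper limit: {hi_ci[0]:.2f} to {hi_ci[1]:.2f}")
```

Rank-based CI methods (as in CLSI EP28-style guidance) are an alternative to the bootstrap; either way, the reference limits themselves come directly from the ordered healthy-subject data.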
Ask a specific question about this device
(87 days)
Maryland 20814
Re: K181771
Trade/Device Name: RightEye Vision System
Regulation Number: 21 CFR 882.1460
Classification Name: Nystagmograph
Regulation Number: 21 CFR 882.1460
Product Code: GWN (21 CFR 882.1460)
Conclusion:
The RightEye Vision System falls within the type of device regulated under 21 CFR 882.1460
The RightEye Vision System is intended for recording, viewing, and analyzing eye movements in support of identifying visual tracking impairment in human subjects.
RightEye Vision System detects involuntary eye movement behavior for the purpose of visual tracking. The RightEye Vision System is designed to provide accurate and reliable information for users to supplement and inform clinical decision-making. RightEye Vision System provides objective metrics, acquired from eye movements measured and recorded by a hardware eye tracker, that are not obtainable through clinical observation alone. Results of each RightEye Vision System assessment are transferred and stored remotely on a web server. All personal health information data are encrypted via the HTTPS (HTTP Secure) protocol. The remote web server software calculates metrics from the assessment data and provides quantitative outputs and supporting graphics. The software can track results over time, showing changes in metrics, trendlines, graphs, visuals, and gaze replay. The user accesses these results by logging into the RightEye web portal.
The RightEye Vision System is designed to run on Windows 10 operating systems and the web portal has been optimized for Chrome. RightEye Vision System is programmed to run on a specific hardware setup, the Tobii Dynavox i15. It is deployed as a pre-loaded system on hardware provided and managed by RightEye.
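As a rough illustration of the client-to-server flow described above, here is a minimal sketch of an HTTPS upload of assessment metrics. The endpoint URL and payload shape are hypothetical, since the actual RightEye web API is not described in the submission:

```python
import json
import urllib.request

# Hypothetical endpoint and payload; the real RightEye API is not documented
# in the 510(k) summary. An https:// URL provides TLS transport encryption,
# matching the stated use of the HTTPS protocol for personal health data.
ENDPOINT = "https://example.invalid/api/assessments"  # placeholder URL

payload = {
    "assessment_id": "demo-001",
    "metrics": {"fixation_stability": 0.92, "pursuit_gain": 0.88},
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib verifies server TLS certificates by default for https:// URLs.
# Uncomment to send against a real endpoint:
# with urllib.request.urlopen(request, timeout=10) as response:
#     print(response.status)
```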
The provided text does not contain detailed acceptance criteria and a study to prove the device meets these criteria in the format requested.
The document is a 510(k) premarket notification summary for the RightEye Vision System, which focuses on demonstrating substantial equivalence to a predicate device (EYE-SYNC K152915).
While it mentions that "Validation testing, including test-retest reliability and accuracy, has confirmed the performance of the RightEye Vision System for its intended use," it does not provide specific acceptance criteria, reported performance metrics, sample sizes, provenance, details about human experts, adjudication methods, or effects of AI assistance.
The information provided about "Supporting Information" and "Conclusion" is general and summarizes that software testing was conducted and performance was confirmed, but it does not detail how that performance was measured against specific, quantifiable acceptance criteria.
Therefore, I cannot populate the table or answer most of the questions based on the provided text.
Ask a specific question about this device
(30 days)
Buffalo, MN 55313
Re: K182214
Trade/Device Name: DizzyDoctor® System 1.0.0
Regulation Number: 21 CFR 882.1460
Eye Movement Monitor
Classification Name: Nystagmograph (21 CFR 882.1460)
The DizzyDoctor® System 1.0.0 Eye Movement Monitor is indicated for use in the medical office, and in the home setting, for monitoring patients with a diagnosis of dizziness caused by peripheral vestibular disorders who are under the supervision of a physician. The device detects abnormal eye movements in standard positional maneuvers by recording, tracking, storing, and displaying vertical and horizontal eye movements. This device provides no diagnosis and does not provide diagnostic recommendations.
Dizziness and postural instability are common in patients in Otolaryngology practice. Accurate diagnosis and choice of treatment are hampered by difficulties in obtaining thorough histories and by the perception that the physical examination is complex. The DizzyDoctor® System 1.0.0 broadens physician access to video recordings of abnormal eye movement disorders with an easily operated device for in-office use by health professionals and for in-home use by patients experiencing exacerbating episodes of dizziness outside the office setting.
Using mobile and web-based technology, the DizzyDoctor® System 1.0.0 allows recording of a patient's abnormal eye movements in response to standard head positions used for monitoring peripheral vestibular disorders such as Benign Paroxysmal Positional Vertigo. It consists of Vertigo Recording Goggles (VRG) with a secure holder for the patient's iPhone; an application providing step-by-step audio instructions for medically recognized Dix-Hallpike maneuvers; gyroscopic feedback to enable correct head positioning; and accurate video recording of eye movements in response to standard head positions used for assessing balance disorders.
The VRG provides a secure docking station for the iPhone, which aligns the camera with the patient's pupil. The VRG has no direct electrical connection with external devices or equipment and uses light from two LEDs during recording sessions. The VRG uses a standard iPhone-compatible macro lens to adjust the focal length of the iPhone camera lens and secures with a flexible headband.
Key components of the patient's iPhone support the DizzyDoctor® System 1.0.0, including: an accelerometer and gyroscope; a video camera; storage of video recordings; an audio voice/speaker system for real-time interaction with the patient; standard software for downloading and playing mobile applications from external App vendors; and software for web-based processes, including uploading stored videos.
The DizzyDoctor® Mobile App provides audio support for step-by-step procedures in recording eye movements in relation to positional changes during self-testing. The DizzyDoctor® System is supported by a comprehensive web-based platform for secure patient and physician registration, as well as for uploading, processing, and downloading videos from professional and self-testing for abnormal eye movements. Processed videos are accessed and viewed by physicians on their desktop office computers.
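As an illustration of how gyroscopic feedback for head positioning might work, here is a minimal sketch of an angle check against a Dix-Hallpike target position. The target angles and tolerance are assumptions for illustration only, not values taken from the submission:

```python
# Hypothetical target for a right Dix-Hallpike position: head turned ~45 deg
# toward the tested ear and extended ~25 deg below horizontal. Angles and
# tolerance are illustrative assumptions, not values from the submission.
TARGET_PITCH_DEG = -25.0
TARGET_YAW_DEG = 45.0
TOLERANCE_DEG = 10.0

def head_position_ok(pitch_deg: float, yaw_deg: float) -> bool:
    """Compare gyroscope/accelerometer-derived head angles to the target."""
    return (abs(pitch_deg - TARGET_PITCH_DEG) <= TOLERANCE_DEG
            and abs(yaw_deg - TARGET_YAW_DEG) <= TOLERANCE_DEG)

# Example feedback step, as might drive the app's real-time audio prompts
pitch, yaw = -22.0, 41.0
print("hold position" if head_position_ok(pitch, yaw) else "keep adjusting")
```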
The provided document is a 510(k) summary for the DizzyDoctor® System 1.0.0, an Eye Movement Monitor. It details the device, its intended use, and comparative performance testing against a predicate device. However, it does not contain a specific table of acceptance criteria for algorithm performance (such as sensitivity, specificity, or AUC) or a dedicated study section proving the device meets such criteria in the manner typically seen for AI/ML device submissions.
Instead, the document focuses on demonstrating substantial equivalence to a predicate device through various types of testing, including:
- Biocompatibility testing
- Software verification and validation
- Usability/human factors (engineering) testing
- Performance, electrical safety, and electromagnetic compatibility (EMC) testing
The "Usability/human factors (engineering) testing" section is the closest to addressing performance with human factors, but it doesn't provide precise quantitative acceptance criteria or detailed results in the format requested for AI/ML performance.
Therefore, I cannot populate a table of acceptance criteria and reported device performance from this document in the format of AI/ML metrics. Similarly, many of the subsequent questions (sample size for the test set, data provenance, number of experts for ground truth, adjudication method, MRMC study, standalone performance, ground truth type for the test and training sets, training set size) cannot be answered directly, because the submission does not describe an AI/ML algorithm being validated in that manner.
Here's what I can extract and infer based on the provided text, particularly focusing on the "Usability/human factors (engineering) testing" section, as it's the most relevant to a performance evaluation of the device in a user context:
Summary of Device Performance and Testing from the Document (as it pertains to functionality and usability, not AI/ML algorithm performance verification):
The DizzyDoctor® System 1.0.0 is an Eye Movement Monitor indicated for detecting abnormal eye movements in standard positional maneuvers by recording, tracking, storing, and displaying vertical, horizontal, and torsional eye movements. It is designed for use in medical offices and in the home setting for patients with dizziness caused by peripheral vestibular disorders. The device provides no diagnosis and does not provide diagnostic recommendations.
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document does not specify acceptance criteria in terms of AI/ML performance metrics (e.g., sensitivity, specificity, AUC). The performance evaluations described are related to usability, safety, and functional equivalence to a predicate device. The closest to "performance" in a clinical context within this document is the usability study with audiologists assessing eye movement recordings.
Performance Aspect (Inferred from Usability Studies) | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
User interface task completion (Study 1, 3) | Competent completion, ability to self-correct if difficulties arise | Subjects completed tasks competently. Two subjects had difficulty but self-corrected and completed setup, self-test, and after-test activities completely and accurately. All subjects accomplished operational tasks after a software revision (Study 3). |
Agreement on pathological nystagmus (Study 2) | 100% agreement between audiologists on presence/absence of pathological nystagmus from DizzyDoctor® System and predicate device recordings | Audiologists agreed 100% of the time with respect to the presence or absence of pathological nystagmus in video recordings from the subject and predicate devices. |
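For context, the 100% figure in Study 2 is simple percent agreement between the two audiologists; a chance-corrected statistic such as Cohen's kappa is often reported alongside it. A minimal sketch with hypothetical ratings:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of cases where two raters assign the same label."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters."""
    po = percent_agreement(r1, r2)
    n = len(r1)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Hypothetical labels: 1 = pathological nystagmus present, 0 = absent
rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 1, 1, 0, 0, 1, 0]
print(percent_agreement(rater_a, rater_b))  # 1.0, i.e., 100% agreement
print(cohens_kappa(rater_a, rater_b))       # 1.0 for perfect agreement
```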
2. Sample Sizes Used for the Test Set and Data Provenance:
- Study 1 (Usability):
- Sample Size: 30 subjects (10 vertiginous and 10 non-vertiginous users, plus an unspecified number for general usability, as implied by the comprehensive task evaluation).
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It appears to be prospective usability testing.
- Study 2 (Clinical Performance/Usability):
- Sample Size: The number of subjects whose eye movement recordings were evaluated is not specified; the text refers only to recordings from "the subject and predicate devices."
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective).
- Study 3 (Usability/Software Revision):
- Sample Size: 5 subjects.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It appears to be prospective usability testing.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Study 2: Two audiologists were used to evaluate eye movement recordings and nystagmographs from both the DizzyDoctor® System and the predicate device.
- Qualifications: "Audiologists" are specified. Further details on their experience (e.g., years of experience, subspecialty) are not provided.
4. Adjudication Method for the Test Set:
- Study 2: The method was a direct comparison of agreement. "The results indicated that the audiologists agreed 100% of the time with respect to the presence or absence of pathological nystagmus in the video recordings from the subject and predicate devices." This implies that they reached 100% agreement, so no specific adjudication method (like 2+1 or 3+1) was necessary to resolve discrepancies, as there were none reported for the specific assessment.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:
- A formal MRMC comparative effectiveness study, in which human reader performance with and without AI assistance is compared, was not conducted.
- However, Study 2 involved two audiologists evaluating recordings, which has elements of a multi-reader study. The comparison was between devices (DizzyDoctor® vs. predicate) and the audiologists' agreement on pathological nystagmus, not an assessment of AI assistance improving human reader performance. The device itself is an "Eye Movement Monitor," not an AI diagnostic tool providing interpretations to a human. There is no mention of an effect size for human readers improving with AI vs. without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- The document describes the DizzyDoctor® System 1.0.0 as an "Eye Movement Monitor" that detects, records, tracks, stores, and displays eye movements. It explicitly states: "This device provides no diagnosis and does not provide diagnostic recommendations." This indicates that the device's function is primarily data capture and display, not an AI algorithm providing a standalone diagnostic output. Therefore, a standalone performance study of an AI algorithm is not relevant or described in this submission.
7. The Type of Ground Truth Used:
- Study 2: The ground truth for the presence or absence of pathological nystagmus was established by the consensus/agreement of two audiologists, who viewed recordings from both the DizzyDoctor® System and the predicate device. This aligns with "expert consensus" as a type of ground truth.
8. The Sample Size for the Training Set:
- The document describes software verification and validation, but it primarily focuses on the device's functional performance, usability, and regulatory compliance, not on an AI/ML model that would require a distinct "training set" in the common sense of machine learning. Therefore, a sample size for a training set is not applicable or provided in this document as it doesn't detail an AI/ML algorithm that learns from data.
9. How the Ground Truth for the Training Set Was Established:
- Not applicable as the document does not describe the training of an AI/ML model for diagnostic or interpretive purposes. The "software revision" mentioned in Study 3 seems to be a general software update validated for usability, not an iterative improvement of an AI model's performance based on ground truth data.
Ask a specific question about this device