The DizzyDoctor® System 1.0.0 Eye Movement Monitor is indicated for use in the medical office and in the home setting for monitoring patients with a diagnosis of dizziness caused by peripheral vestibular disorders who are under the supervision of a physician. The device detects abnormal eye movements in standard positional maneuvers by recording, tracking, storing, and displaying vertical, horizontal, and torsional eye movements. This device provides no diagnosis and does not provide diagnostic recommendations.
Dizziness and postural instability are common in patients in Otolaryngology practice. Accurate diagnosis and choice of treatment are hampered by difficulties in obtaining thorough histories and by perceptions that physical examination is complex. The DizzyDoctor® System 1.0.0 broadens physician access to video recordings of abnormal eye movement disorders with an easily operated device for in-office use by health professionals and for in-home use by patients experiencing exacerbating episodes of dizziness outside the office setting.
Using mobile and web-based technology, the DizzyDoctor® System 1.0.0 allows recording of a patient's abnormal eye movements in response to the standard head positions used for monitoring peripheral vestibular disorders such as Benign Paroxysmal Positional Vertigo. It consists of the Vertigo Recording Goggles (VRG), with a secure holder for the patient's iPhone, and an iPhone application that provides step-by-step audio instructions for medically recognized Dix-Hallpike maneuvers, gyroscopic feedback to enable correct head positioning, and accurate video recording of eye movements in response to the standard head positions used for assessing balance disorders.
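To make the "gyroscopic feedback for head positioning" mechanism concrete, here is a minimal sketch of how an iPhone app could use CoreMotion attitude data to confirm the head reaches a target position during a Dix-Hallpike maneuver. This is not the vendor's implementation; the target angle, tolerance, and class name are illustrative assumptions.

```swift
import CoreMotion

// A minimal sketch, assuming a pitch-based target check; the actual
// DizzyDoctor® feedback logic is not described in the submission.
final class HeadPositionMonitor {
    private let motionManager = CMMotionManager()
    private let targetPitchDegrees: Double = -30.0  // assumed head-hanging target angle
    private let toleranceDegrees: Double = 5.0      // assumed acceptable deviation

    func startMonitoring(onTargetReached: @escaping () -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            let pitchDegrees = attitude.pitch * 180.0 / .pi
            // Signal the app (e.g., to trigger the next audio instruction)
            // once head pitch falls within tolerance of the target position.
            if abs(pitchDegrees - self.targetPitchDegrees) <= self.toleranceDegrees {
                onTargetReached()
            }
        }
    }

    func stopMonitoring() {
        motionManager.stopDeviceMotionUpdates()
    }
}
```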
The VRG provides a secure docking station for the iPhone, aligning the iPhone camera with the patient's pupil. The VRG has no direct electrical connection with external devices or equipment and uses light from two LEDs during recording sessions. It uses a standard iPhone-compatible macro lens to adjust the focal length of the iPhone camera lens and secures with a flexible headband.
Key components of the patient's iPhone support the DizzyDoctor® System 1.0.0, including:
- an accelerometer and gyroscope
- a video camera
- storage of video recordings
- an audio voice/speaker system for real-time interaction with the patient
- standard software for downloading and playing mobile applications from external App vendors
- software for web-based processes, including uploading stored videos
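The video camera and local storage items above imply a standard iOS capture pipeline. The following is a minimal sketch of one such pipeline using AVFoundation; the class name, file name, and camera choice are illustrative assumptions, not details from the submission.

```swift
import AVFoundation

// A minimal sketch, assuming front-camera video capture to a local file
// that the app can later upload for processing.
final class EyeMovementRecorder: NSObject, AVCaptureFileOutputRecordingDelegate {
    private let session = AVCaptureSession()
    private let movieOutput = AVCaptureMovieFileOutput()

    func configureAndStart() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
        session.startRunning()

        // Record to a temporary file; a real app would manage storage and naming.
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("eye-movement-session.mov")
        movieOutput.startRecording(to: url, recordingDelegate: self)
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        // The stored recording is now available for review or upload.
        session.stopRunning()
    }
}
```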
The DizzyDoctor® Mobile App provides audio support for the step-by-step procedures in recording eye movements in relation to positional changes during self-testing. The DizzyDoctor® System is supported by a comprehensive web-based platform for secure patient and physician registration, as well as for uploading, processing, and downloading videos from professional and self-testing for abnormal eye movements. Processed videos are accessed and viewed by physicians on their desktop office computers.
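The upload step from the app to the web platform could look like the sketch below. The endpoint URL, authentication header, and content type are entirely hypothetical; the actual DizzyDoctor® web API is not described in the document.

```swift
import Foundation

// A minimal sketch, assuming a hypothetical HTTPS endpoint and bearer token.
func uploadRecording(fileURL: URL, completion: @escaping (Bool) -> Void) {
    // Hypothetical endpoint; the real service URL is not specified in the source.
    var request = URLRequest(url: URL(string: "https://example.com/api/v1/recordings")!)
    request.httpMethod = "POST"
    request.setValue("video/quicktime", forHTTPHeaderField: "Content-Type")
    request.setValue("Bearer <patient-session-token>", forHTTPHeaderField: "Authorization")

    let task = URLSession.shared.uploadTask(with: request, fromFile: fileURL) { _, response, error in
        let ok = error == nil && (response as? HTTPURLResponse)?.statusCode == 200
        completion(ok)
    }
    task.resume()
}
```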
The provided document is a 510(k) summary for the DizzyDoctor® System 1.0.0, an Eye Movement Monitor. It details the device, its intended use, and comparative performance testing against a predicate device. However, it does not contain a specific table of acceptance criteria for algorithm performance (such as sensitivity, specificity, or AUC) or a dedicated study section proving the device meets such criteria in the manner typically seen for AI/ML device submissions.
Instead, the document focuses on demonstrating substantial equivalence to a predicate device through various types of testing, including:
- Biocompatibility testing
- Software verification and validation
- Usability/human factors (engineering) testing
- Performance, electrical safety, and electromagnetic compatibility (EMC) testing
The "Usability/human factors (engineering) testing" section is the closest to addressing performance with human factors, but it doesn't provide precise quantitative acceptance criteria or detailed results in the format requested for AI/ML performance.
Therefore, I cannot populate a table of acceptance criteria and reported device performance from this document in the format of AI/ML metrics. Similarly, direct answers to many of the subsequent questions (sample size for test set, data provenance, number of experts for ground truth, adjudication method, MRMC study, standalone performance, ground truth type for test set/training set, training set size) are not explicitly stated for an AI/ML component's performance study because the submission does not describe an AI/ML algorithm being validated in that manner for medical device functionality.
Here's what I can extract and infer based on the provided text, particularly focusing on the "Usability/human factors (engineering) testing" section, as it's the most relevant to a performance evaluation of the device in a user context:
Summary of Device Performance and Testing from the Document (as it pertains to functionality and usability, not AI/ML algorithm performance verification):
The DizzyDoctor® System 1.0.0 is an Eye Movement Monitor indicated for detecting abnormal eye movements in standard positional maneuvers by recording, tracking, storing, and displaying vertical, horizontal, and torsional eye movements. It is designed for use in medical offices and in the home setting for patients with dizziness caused by peripheral vestibular disorders. The device provides no diagnosis and does not provide diagnostic recommendations.
1. Table of Acceptance Criteria and Reported Device Performance
As noted, the document does not specify acceptance criteria in terms of AI/ML performance metrics (e.g., sensitivity, specificity, AUC). The performance evaluations described are related to usability, safety, and functional equivalence to a predicate device. The closest to "performance" in a clinical context within this document is the usability study with audiologists assessing eye movement recordings.
| Performance Aspect (Inferred from Usability Studies) | Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|---|
| User interface task completion (Studies 1 and 3) | Competent completion, with the ability to self-correct if difficulties arise | Subjects completed tasks competently. Two subjects had difficulty but self-corrected and completed setup, self-test, and after-test activities completely and accurately. All subjects accomplished operational tasks after a software revision (Study 3). |
| Agreement on pathological nystagmus (Study 2) | 100% agreement between audiologists on the presence/absence of pathological nystagmus in recordings from the DizzyDoctor® System and the predicate device | Audiologists agreed 100% of the time with respect to the presence or absence of pathological nystagmus in video recordings from the subject and predicate devices. |
2. Sample Sizes Used for the Test Set and Data Provenance:
- Study 1 (Usability):
- Sample Size: 30 subjects (10 vertiginous users, 10 non-vertiginous users, plus an unspecified number for general usability, as implied by the comprehensive task evaluation).
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It appears to be prospective usability testing.
- Study 2 (Clinical Performance/Usability):
- Sample Size: The number of subjects whose eye movement recordings were evaluated is not specified; the text refers only to "the subject and predicate devices."
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective).
- Study 3 (Usability/Software Revision):
- Sample Size: 5 subjects.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It appears to be prospective usability testing.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Study 2: Two audiologists were used to evaluate eye movement recordings and nystagmographs from both the DizzyDoctor® System and the predicate device.
- Qualifications: "Audiologists" are specified. Further details on their experience (e.g., years of experience, subspecialty) are not provided.
4. Adjudication Method for the Test Set:
- Study 2: The method was a direct comparison of agreement: "The results indicated that the audiologists agreed 100% of the time with respect to the presence or absence of pathological nystagmus in the video recordings from the subject and predicate devices." Because they reached 100% agreement, no adjudication method (such as 2+1 or 3+1) was needed to resolve discrepancies, as none were reported for this assessment; the agreement figure itself is sketched below.
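For clarity on what the 100% figure measures, here is a minimal sketch of a percent-agreement computation between two raters' binary calls ("pathological nystagmus present?") on the same set of recordings. The rater data shown is illustrative, not the study's data.

```swift
// A minimal sketch, assuming two raters score the same recordings.
func percentAgreement(rater1: [Bool], rater2: [Bool]) -> Double {
    precondition(rater1.count == rater2.count, "Raters must score the same recordings")
    guard !rater1.isEmpty else { return 0 }
    // Count recordings where both raters made the same call.
    let agreements = zip(rater1, rater2).filter { $0.0 == $0.1 }.count
    return Double(agreements) / Double(rater1.count) * 100.0
}

// 100% agreement, as reported, means every paired call matches:
let a = [true, false, true, true]
let b = [true, false, true, true]
print(percentAgreement(rater1: a, rater2: b))  // 100.0
```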
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:
- A formal MRMC comparative effectiveness study (in which human reader performance is measured with and without AI assistance) was not conducted.
- However, Study 2 involved two audiologists evaluating recordings, which has elements of a multi-reader study. The comparison was between devices (DizzyDoctor® vs. predicate) and the audiologists' agreement on pathological nystagmus, not an assessment of AI assistance improving human reader performance. The device itself is an "Eye Movement Monitor," not an AI diagnostic tool providing interpretations to a human. There is no mention of an effect size for human readers improving with AI vs. without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- The document describes the DizzyDoctor® System 1.0.0 as an "Eye Movement Monitor" that detects, records, tracks, stores, and displays eye movements. It explicitly states: "This device provides no diagnosis and does not provide diagnostic recommendations." This indicates that the device's function is primarily data capture and display, not an AI algorithm providing a standalone diagnostic output. Therefore, a standalone performance study of an AI algorithm is not relevant or described in this submission.
7. The Type of Ground Truth Used:
- Study 2: The ground truth for the presence or absence of pathological nystagmus was established by the consensus/agreement of two audiologists, who viewed recordings from both the DizzyDoctor® System and the predicate device. This aligns with "expert consensus" as a type of ground truth.
8. The Sample Size for the Training Set:
- The document describes software verification and validation, but it primarily focuses on the device's functional performance, usability, and regulatory compliance, not on an AI/ML model that would require a distinct "training set" in the common sense of machine learning. Therefore, a sample size for a training set is not applicable or provided in this document as it doesn't detail an AI/ML algorithm that learns from data.
9. How the Ground Truth for the Training Set Was Established:
- Not applicable as the document does not describe the training of an AI/ML model for diagnostic or interpretive purposes. The "software revision" mentioned in Study 3 seems to be a general software update validated for usability, not an iterative improvement of an AI model's performance based on ground truth data.
§ 882.1460 Nystagmograph.
(a) Identification. A nystagmograph is a device used to measure, record, or visually display the involuntary movements (nystagmus) of the eyeball.
(b) Classification. Class II (performance standards).