Search Results
Found 21 results
510(k) Data Aggregation
(163 days)
Interacoustics A/S
The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information then can be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for VisualEyes system is 5 years of age and above.
VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change, replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), while the "515" system has a subset of the "525" features. The "505" is a simple video recording mode.
The provided text describes the acceptance criteria and a study to demonstrate the substantial equivalence of the VisualEyes 505/515/525 system to its predicate devices. However, it does not detail specific quantitative acceptance criteria or a traditional statistical study with performance metrics like sensitivity, specificity, or AUC as might be done for an AI/algorithm-only performance study.
Instead, the study aims to show substantial equivalence by verifying that the new software generates the same clinical findings as the predicate devices.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a table of quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or agreement thresholds) in the way one might expect for a standalone AI performance evaluation.
Instead, the acceptance criterion for the comparison study was to demonstrate "negligible statistical difference beneath the specified acceptance criteria" between the new VisualEyes software and the predicate devices. The "reported device performance" is simply the conclusion that this criterion was met.
Acceptance Criterion | Reported Device Performance |
---|---|
Demonstrate that VisualEyes 505/515/525 produces "negligible statistical difference beneath the specified acceptance criteria" compared to predicate devices for clinical findings. | "all data sets showed a negligible statistical difference beneath the specified acceptance criteria." "There were no differences found in internal bench testing comparisons or the external beta testing statistical comparisons." "all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not explicitly stated. The document mentions "various groups in different geographical locations externally" for beta testing and that "the same subject" was tested on both the new and predicate devices. However, the exact number of subjects or cases is not provided.
- Data Provenance: The data appear to have been collected prospectively, since the same subjects were tested sequentially on both the new and predicate devices after the new software was developed. The beta testing was conducted in "external sites that had either MMT or IA existing predicate devices," implying a real-world clinical setting. The geographical locations are described only as "different geographical locations externally," implying a multi-site study, but specific countries are not mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Two.
- Qualifications of Experts: "licensed internal clinical audiologists." No specific experience level (e.g., "10 years of experience") is provided.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method in the traditional sense of multiple readers independently assessing cases and then resolving discrepancies. Instead, the two clinical audiologists reviewed and compared the test results, stating: "It is the professional opinion of both clinical reviewers of the validation that all the data between the new VisualEyes software and the data collected and analyzed with both predicate devices are substantially equivalent." This suggests a consensus-based review rather than a formal adjudication process.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or performed to measure the improvement of human readers with AI assistance versus without. The study focused on the substantial equivalence of the new software as a device, not on AI-assisted human performance improvement.
6. Standalone (Algorithm-Only) Performance Study
Yes, in a way. The study evaluates the "VisualEyes 505/515/525 software" which is described as a "software program that analyzes eye movements." The comparison is between the output and findings generated by the new software versus the predicate software. While it's part of a system with goggles and cameras, the evaluation focuses on the analytical software component as a standalone entity in its ability to produce equivalent clinical findings.
7. Type of Ground Truth Used
The "ground truth" implicitly referred to here is the "clinical findings" and "test results" generated by the predicate devices. The new software's output was compared to these established predicate device results to determine equivalence. It's a "comparison to predicate" truth rather than an independent gold standard like pathology or long-term outcomes.
8. Sample Size for the Training Set
Not applicable/Not provided. The VisualEyes 505/515/525 is described as an "update/change" and "software program that analyzes eye movements," and "the technological principles for VisualEyes 3 is based on refinements from VisualEyes 2." It's not presented as a machine learning model that undergoes explicit "training" with a separate dataset. It's more of a software update with algorithm refinements.
9. How the Ground Truth for the Training Set was Established
Not applicable/Not provided, as there is no mention of a separate training set or machine learning training process. The software's development likely involved engineering and refinement based on existing knowledge and the performance of previous versions (VisualEyes 2 and the reference devices).
(163 days)
Interacoustics A/S
The Orion rotary chair is an optional accessory for VisualEyes 515 eye movement recording systems.
The VisualEyes™ system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. VNG testing evaluates nystagmus using goggles mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information can then be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for videonystagmography is five years of age and above.
The Orion is a rotary chair designed to assess the Vestibulo-Ocular Reflex (VOR).
The Orion rotary chair includes three variants:
- Orion Reclining
- Orion Auto Traverse
- Orion Comprehensive
The Orion is considered an accessory to vestibular examination system software designated VisualEyes 525 and VisualEyes 515 manufactured by Interacoustics (FDA 510(k) K200534).
The Orion is an update of the System 2000 by Micromedical (FDA 510(k) K922037).
The provided text is a 510(k) summary for a medical device (Orion Rotary Chair). It does not contain details about acceptance criteria or a study proving the device meets those criteria in a quantitative sense.
Instead, the document focuses on demonstrating substantial equivalence to a predicate device (System 2000 by Micromedical). The "performance tests" mentioned are primarily comparative, asserting that the new device performs "as specified" and is "safe and effective" based on these comparisons, rather than against predefined, quantitative acceptance criteria. No clinical tests were performed.
Therefore, many of the requested items cannot be extracted from the provided text.
Here is a summary of what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria or a formal table comparing them against reported performance. Instead, it offers a "Comparison table" between the Orion device and its predicate (System 2000) for various descriptive and technical characteristics, aiming to show equivalence rather than adherence to specific numeric performance thresholds.
Description | Orion Model (Reclining) Performance (Reported as 'Same' or 'Equivalent') | System 2000 Model (Reclining) Performance (Predicate) | Equivalence Justification |
---|---|---|---|
Intended use | The Orion rotary chair is an optional accessory for VisualEyes 525/VisualEyes 515 eye movement recording systems | The System 2000 rotary chair is an optional accessory for eye movement recording systems | Same |
Software to support | VisualEyes 515/525 | VisualEyes 515/525 and Spectrum | Same (Spectrum is the replaced software). Evaluated in K163149. |
Chair and Controller | yes | yes | Same |
Video Goggle connector | yes, USB | yes, FireWire | Equivalent - USB and FireWire are communication protocols |
Patient Weight Maximum | 350 lbs (158 kg) | 350 lbs (158 kg) | Same |
VOR, VVOR, VFX | yes, 0.01–0.64 Hz | yes, 0.01–0.64 Hz | Same |
Step tests | max 200 | max 200 | Same |
VOR Analysis | yes | yes | Same |
Equipment cart | yes | yes | Same |
Oculomotor Tests | yes | yes | Same |
VORTEQ/VHIT/DVA | Option | Option | Same |
EOG | Option | Option | Same |
Mechanical Foot Brake | Electric Lock | yes (manual foot brake) | Equivalent - both provide an immobile state for patient safety; electronic lock allows chair to be immobile before test and prevents rotation during test. |
Sinusoidal Frequency | 0.01 to 0.64 Hz | 0.01 to 0.64 Hz | Same |
Step Velocity (max) | 200 deg/sec | 200 deg/sec | Same |
Acceleration (max) | 100 deg/sec² | 100 deg/sec² | Same |
Auto Traverse / Comprehensive Models Differences | |||
Chair and Controller | Orion: Controller is inside the Chair base | System 2000: Controller is a separate unit | Equivalent - merely a design difference. |
Patient Weight Maximum | 400 lbs | 400 lbs | Same |
Step Velocity (max) | 350 deg/sec | 300 deg/sec | Equivalent - typical Step Velocity test parameters do not exceed 180 d/s; SVV test uses 300 d/s. Higher max velocity is not critical. |
Lateral movement speed | 0.8 cm/sec | 1 cm/sec | Equivalent - speed to go offset is not a critical parameter; lateral offset controls stimulation of Otolith organs, not the speed to attain it. |
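Both chairs share the 0.01–0.64 Hz sinusoidal (SHA) range shown in the table. Rotary-chair SHA protocols conventionally step this range in octaves, i.e. each test frequency is double the previous one; that stepping convention is an assumption here, not something the document states. A minimal sketch:

```python
def octave_series(start_hz, stop_hz):
    """Generate octave-spaced test frequencies (each double the previous)."""
    freqs = []
    f = start_hz
    while f <= stop_hz + 1e-12:  # small tolerance for float rounding
        freqs.append(round(f, 4))
        f *= 2.0
    return freqs

# Seven octave steps span the 0.01-0.64 Hz range from the table.
print(octave_series(0.01, 0.64))
```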
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Not applicable. No formal test set or clinical study is described. The "performance tests" involved comparisons of the physical device features and operational parameters against the predicate device.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. No ground truth was established by experts for a test set, as no clinical study or diagnostic performance assessment involving human interpretation is described.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No test set requiring adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This is a rotary chair for vestibular testing, not an AI-assisted diagnostic imaging device that would typically undergo an MRMC study. No AI component is mentioned.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not applicable. This is a physical medical device (rotary chair) that requires operation by a trained medical professional; it is not a standalone algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. The basis for safety and effectiveness is substantial equivalence to a predicate device, demonstrated through comparative technical specifications and internal performance testing, not against a clinical ground truth.
8. The sample size for the training set
Not applicable. This document describes a physical medical device, not a machine learning model; therefore, there is no "training set."
9. How the ground truth for the training set was established
Not applicable, as there is no training set.
(246 days)
Interacoustics A/S
The TRV is intended to assist in the diagnosis and treatment of balance disorders and vertigo, including benign paroxysmal positional vertigo.
The TRV is a multi-axial chair that can rotate 360° around both the horizontal and vertical axes. With the patient secured by a four-point harness, a head mount, a leg strap, and shoulder pads, an examiner is able to rotate the patient around the plane of each of the 6 semicircular canals and hold the patient in any position for detailed examination of the semicircular canals. The TRV chair is manually handled by the healthcare professional. The axes of rotation are lockable in preset positions. The primary axis has a battery-powered electromagnetic lock controlled by a footswitch and the secondary axis is manually locked with a mechanical lever. A mechanical system with an adjustable counterweight ensures that the weight of the chair and the patient are balanced during the maneuvers.
The provided text describes the TRV device and its substantial equivalence to a predicate device, the Epley Omniax™. However, it does not contain information related to a study that establishes acceptance criteria and then proves the device meets those criteria, especially in the context of an AI-powered device or for clinical accuracy.
Instead, the document details a 510(k) submission for a physical medical device (a multi-axial chair) used in the diagnosis and treatment of balance disorders. The performance data presented focuses on electrical safety, electromagnetic compatibility, and mechanical testing, not on predictive performance, accuracy, or diagnostic effectiveness in the way an AI/software device would be evaluated.
Therefore, many of the requested elements for describing "acceptance criteria and the study that proves the device meets the acceptance criteria" in an AI/software context are not present in the provided document.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and the reported device performance
Acceptance Criteria (General Safety and Performance) | Reported Device Performance |
---|---|
Electrical Safety: Comply with IEC 60601-1 | Passed: Device complies with IEC 60601-1 for electrical safety. |
Electromagnetic Compatibility (EMC): Comply with IEC 60601-1-2 | Passed: Device complies with IEC 60601-1-2 standard for EMC. |
Mechanical Testing: Demonstrate long-term safety and withstand worst-case stress loads. | Passed: Wear observed after testing is considered not to have any effect on the safety of patients or operators. |
Technological Equivalence (to predicate device): Similar indications for use, use environment, user, patient population, test options, maneuvers, movement range, and power supply (with supported differences). | Achieved: Found to be as safe, as effective, and performs comparably to the predicate device, with technological differences supported by non-clinical data. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not Applicable / Not Provided: The document does not describe a "test set" in the context of data for an algorithm. The "testing" refers to physical device safety and mechanical performance. There is no data provenance for a diagnostic or predictive model.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Applicable / Not Provided: No "ground truth" establishment for a diagnostic output is described, as this is a physical chair, not an AI diagnostic tool.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Applicable / Not Provided: No adjudication method for a data test set is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No: A multi-reader multi-case (MRMC) study was not done. The device is a physical chair, not an AI assistant.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- No: This is not an algorithm. The device is a physical chair, and human healthcare professionals manually operate it. Its function is to assist professionals in performing maneuvers, not to make a standalone diagnosis or treatment decision without human input.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not Applicable / Not Provided: See point 3. The "ground truth" in this context pertains to compliance with safety regulations and functional equivalence to a predicate device, not diagnostic accuracy.
8. The sample size for the training set
- Not Applicable / Not Provided: No training set is described, as this is a physical device and not an AI model.
9. How the ground truth for the training set was established
- Not Applicable / Not Provided: No training set or ground truth establishment for it is described.
(54 days)
Interacoustics A/S
The Lyra with DPOAE is intended for use in the audiologic evaluation and documentation of ear disorders using Distortion Product Otoacoustic Emissions. The target population for Lyra with DPOAE includes all ages.
The Lyra with TEOAE is intended for use in the audiologic evaluation of ear disorders using Transient Evoked Otoacoustic Emissions. The target population for Lyra with TEOAE includes all ages.
The Lyra System is to be used by trained personnel only, such as audiologists, ENT surgeons, doctors, hearing healthcare professionals or personnel with a similar level of education. The device should not be used without the necessary knowledge and training to understand its use and how results should be interpreted.
The device is audiometric equipment used to assist in the evaluation of inner ear abnormalities. Lyra features a hardware unit that connects to a PC installed with the IA OAE suite software designated for use with Lyra. The PC software provides a user interface designed to integrate into the standard Microsoft Windows environment. Lyra can be purchased with various licenses allowing you to perform different hearing screening tests.
Distortion product otoacoustic emissions (DPOAE) technology uses pairs of pure tones presented in sequence to screen patients for cochlear hearing loss. Responses to the stimulus are predictable and therefore can be measured via a sensitive microphone placed in the patient's ear canal.
Transient otoacoustic emissions (TEOAE) technology uses a short duration stimulus to screen patients for cochlear hearing loss. Responses to the stimulus are predictable and therefore can be measured via a sensitive microphone placed in the patient's ear canal. The response can be divided into frequency bands for assessment.
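As a concrete illustration of the DPOAE principle described above: the two primary tones f1 < f2 are conventionally presented at a fixed frequency ratio (commonly f2/f1 ≈ 1.22), and the strongest distortion product is expected at 2·f1 − f2. Both the ratio and the use of the cubic distortion product are standard conventions assumed here; the document does not give Lyra's actual parameters.

```python
def dpoae_frequencies(f2_hz, ratio=1.22):
    """For a given f2 test frequency, return the f1 primary and the
    expected cubic distortion product frequency 2*f1 - f2 (in Hz)."""
    f1 = f2_hz / ratio
    dp = 2 * f1 - f2_hz
    return round(f1, 1), round(dp, 1)

for f2 in (1000, 2000, 4000):  # hypothetical test frequencies
    f1, dp = dpoae_frequencies(f2)
    print(f"f2 = {f2} Hz -> f1 = {f1} Hz, distortion product at {dp} Hz")
```

The distortion product lands well below f1, which is why it can be separated from the stimulus tones by the microphone in the ear canal.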
The provided text describes a 510(k) premarket notification for an audiometric device named Lyra. The submission asserts the substantial equivalence of the Lyra device to legally marketed predicate devices (Titan™ TEOAE and Titan™ DPOAE).
However, the documentation does not contain information related to acceptance criteria, a specific study proving the device meets acceptance criteria, sample sizes for test or training sets, data provenance, the number or qualifications of experts, adjudication methods, MRMC studies, standalone algorithm performance, or how ground truth was established beyond general statements about design verification and validation against standards.
The document primarily focuses on establishing substantial equivalence based on:
- Identical Indications for Use: Both Lyra and the predicate devices are intended for audiologic evaluation and documentation of ear disorders using Distortion Product Otoacoustic Emissions (DPOAE) and Transient Evoked Otoacoustic Emissions (TEOAE) for all ages, to be used by trained personnel.
- Similar Technological Characteristics: The document provides detailed comparison tables highlighting that Lyra shares virtually all key technological characteristics (e.g., type, regulation number, product code, target population, intended user, anatomical sites, safety and performance standards, device type, system configuration, stimulus types, frequency ranges, levels, level steps, transducer, probe detection, recording, A/D resolution, artifact reject system, automatic test with display of PASS-REFER) with the predicate devices.
- Non-Clinical Testing Summary: It states that design verification and validation were performed according to current standards (IEC 60601-1 series, IEC 60645 series) to assure the device meets performance specifications, and software verification and validation were conducted.
- Clinical Testing Summary: Crucially, it states "Not applicable. Not required to establish substantial equivalence."
Therefore, based on the provided text, I cannot complete the requested tables and information. The submission for the Lyra device explicitly states that clinical testing was "Not applicable" and "Not required to establish substantial equivalence," meaning the detailed study design elements you requested (like acceptance criteria for performance, sample sizes, ground truth establishment, expert involvement, and comparative effectiveness studies) were not part of this 510(k) submission.
The acceptance criteria for this submission appear to be demonstrating equivalence to established predicate devices through non-clinical performance and technological characteristics, rather than demonstrating a specific level of clinical performance against a clinical ground truth.
In summary, the provided document does not support the specific type of performance study you are asking about, as it relies on substantial equivalence to predicates rather than novel clinical performance demonstration.
(86 days)
Interacoustics A/S
The Sera™ with DPOAE is intended for use in the audiologic evaluation and documentation of ear disorders using Distortion Product Otoacoustic Emissions. The target population for Sera with DPOAE includes all ages.
The Sera™ with TEOAE is intended for use in the audiologic evaluation and documentation of ear disorders using Transient Evoked Otoacoustic Emissions. The target population for Sera with TEOAE includes all ages.
The Sera™ with ABRIS is intended for use in the audiologic evaluation and documentation of ear and nerve disorders using auditory evoked potentials from the inner ear, the auditory nerve and the brainstem. The target population for Sera with ABRIS is newborns.
The Sera™ System is to be used by trained personnel only, such as audiologists, ENT surgeons, doctors, hearing healthcare professionals or personnel with a similar level of education. The device should not be used without the necessary knowledge and training to understand its use and how results should be interpreted.
The device is audiometric equipment used to assist in the evaluation of inner ear and auditory brainstem abnormalities.
Sera™ features a touch-screen display and user-friendly software in a compact hardware design. Sera™ can be purchased with various licenses allowing you to perform different hearing screening tests.
Sera™ uses auditory brainstem response (ABRIS) technology to screen patients for hearing loss. A modified click stimulus, the CE-Chirp, at 35 dB nHL is delivered into the patient's ear while electrodes placed on the patient's head measure EEG activity.
Auditory brainstem response (ABRIS) test produces a short acoustic stimulus and measures via transcutaneous electrodes the auditory evoked potentials from the inner ear, the auditory nerve and the brainstem.
The provided document is a 510(k) summary for the Sera™ audiometric equipment. It indicates that no clinical tests were performed to establish acceptance criteria or demonstrate device performance. The device's safety and effectiveness were affirmed based on its fulfillment of international standards for Otoacoustic Emissions (OAE) and Auditory Brainstem Response (ABR) measurements.
Therefore, the following information cannot be extracted from the document:
- A table of acceptance criteria and reported device performance.
- Sample size used for the test set and data provenance.
- Number of experts used to establish ground truth and their qualifications.
- Adjudication method for the test set.
- Effect size of human readers improvement with AI vs without AI (Multi-reader multi-case comparative effectiveness study was not performed as no clinical testing was done).
- Standalone performance (algorithm only without human-in-the-loop) was not explicitly detailed via clinical study.
- Type of ground truth used.
- Sample size for the training set.
- How ground truth for the training set was established.
Summary of Non-Clinical Testing and Device Performance (as described in the document):
The document states:
"Design verification and validation were performed according to current standards for OAE and ABR to assure the device meets its performance specifications. EMC and Safety was performed in compliance with recognized standards IEC 60601-1 series, Medical Equipment – General requirements for basic safety and essential performance. The product meets the requirements from the international standard for OAE measurements IEC 60645 series. Software verification and validation testing were conducted and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in medical Devices." The software for this device was considered as a 'moderate' level of concern since a malfunction of, or a latent design flaw in, the Software Device could lead to an erroneous diagnosis or a delay in delivery of appropriate medical care that would likely lead to Minor Injury. The OAE and ABRIS measurements were divided into 3 phases. Phase 1 included when optimization occurred and involved feedback to the operator so that they could adjust such as probe fit, electrode impedance, ambient electrical and acoustic noise etc. Once the pre-test conditions were optimized, phase 2 of data collection proceeded as rapid as possible to allow the maximum quantity of good quality data to be collected in the shortest possible time. Phase 3 proceeded into the data assessment and decision stage and this ran concurrently with Phase 2 once the predetermined minimum amount of data had been collected. Phase 3 then went into the algorithm descriptions for each TEOAE, DPOAE and ABRIS measurements modes and is provided in detail in Annex 16D of this submission. No clinical tests were performed, but based on the fulfillment of the international standards for OAE and ABR we believe the device is safe and effective. 
The auditory impedance testing characteristics and safety systems were compared and found to be comparable."
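The three-phase measurement flow quoted above can be sketched as a simple control loop. This is purely illustrative: the function names, the SNR-based stopping rule, and all thresholds are hypothetical and not taken from the submission (the actual algorithms are in its Annex 16D, which is not reproduced here).

```python
def run_screening(pre_test_ok, sample_stream, min_samples, decide):
    """Sketch of the quoted 3-phase flow:
    Phase 1: wait until pre-test conditions (probe fit, electrode
             impedance, ambient noise) are optimized.
    Phase 2: collect good-quality data as quickly as possible.
    Phase 3: assess concurrently once a minimum amount of data exists.
    """
    # Phase 1: operator-feedback loop until conditions are acceptable
    attempts = 0
    while not pre_test_ok(attempts):
        attempts += 1
    # Phases 2 + 3: collect, and start deciding once min_samples reached
    collected = []
    for sample in sample_stream:
        collected.append(sample)
        if len(collected) >= min_samples:
            verdict = decide(collected)  # e.g. "PASS" or None (keep going)
            if verdict is not None:
                return verdict
    return "REFER"  # insufficient evidence -> refer for follow-up

# Hypothetical example: PASS once the running mean SNR exceeds 6 dB.
result = run_screening(
    pre_test_ok=lambda tries: tries >= 2,        # "fixed" after 2 adjustments
    sample_stream=[4.0, 5.5, 7.0, 8.0, 9.0],     # per-sweep SNR values (dB)
    min_samples=3,
    decide=lambda xs: "PASS" if sum(xs) / len(xs) > 6.0 else None,
)
print(result)
```

Note how the decision stage runs inside the collection loop, mirroring the document's statement that Phase 3 "ran concurrently with Phase 2" once the predetermined minimum amount of data had been collected.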
Conclusion regarding acceptance criteria and study in the absence of clinical data:
The acceptance criteria for the Sera™ device and its performance are not directly demonstrated through clinical studies with a defined test set, ground truth established by experts, or statistical performance metrics. Instead, the device's acceptability is based on:
- Compliance with recognized international standards:
- IEC 60601-1 series (Medical Equipment – General requirements for basic safety and essential performance) for EMC and Safety.
- IEC 60645 series (international standard for OAE measurements).
- Software verification and validation: Per FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in medical Devices." The software was deemed "moderate" level of concern.
- Bench testing and design verification/validation: To ensure the device meets its performance specifications according to current standards for OAE and ABR.
- Comparison to a predicate device (easyScreen, K171506): The document asserts substantial equivalence in technological characteristics and indications for use.
The document implies that by meeting these standards and demonstrating comparability to the predicate device through non-clinical means (bench testing, software validation), the device is considered safe and effective, and thus acceptable. The specific 'acceptance criteria' in terms of clinical performance (e.g., sensitivity, specificity for detecting hearing loss) and a study proving it meets these are not presented because clinical testing was not performed.
(168 days)
Interacoustics A/S
The VisualEyes system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information then can be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for VisualEyes system is 5 years of age and above.
VisualEyes 505/515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 505/515/525 is an update/change, replacing the existing VisualEyes 515/525 release 1 (510(k) cleared under K152112). The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below), while the "515" system has a subset of the "525" features. The "505" is a simple video recording mode. The VisualEyes 505/515/525 software is designed to perform the following vestibular tests: Spontaneous Nystagmus Test, Gaze Test, Smooth Pursuit Test, Saccade Test, Optokinetic Test, Dix-Hallpike, Positional Test, Caloric Test, SHA, Step, Visual VOR, VOR Suppression, VisualEyes 505. The system consists of a head-mounted goggle/mask, a camera unit and a software application running on a standard PC.
The provided text describes a 510(k) submission for the Interacoustics VisualEyes device. The focus of the performance tests is on demonstrating substantial equivalence to predicate devices rather than proving the device meets specific acceptance criteria for a novel AI/ML algorithm.
Therefore, many of the typical acceptance criteria and study details relevant to AI/ML device performance (like sensitivity, specificity, AUC, human-in-the-loop performance, expert ground truth establishment for a novel algorithm, etc.) are not explicitly mentioned in this document. The document focuses on showing that the new version of VisualEyes (revision 2) performs similarly to its predecessor (revision 1) and another cleared predicate device (VIDEO EYE TRAKKER).
However, I can extract the relevant information from the provided text as best as possible, interpreting the "acceptance criteria" in this context as the demonstration of substantial equivalence through comparable performance.
Here's the breakdown based on the provided document:
Acceptance Criteria and Reported Device Performance for Substantial Equivalence
Since this is a 510(k) submission demonstrating substantial equivalence to a predicate device, the "acceptance criteria" can be interpreted as the demonstration that the "key algorithms for detecting and analysing nystagmus" are similar between the new device and the predicate devices. The reported performance is the qualitative finding of "equivalence."
Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Functional Equivalence: The device's eye movement analysis algorithms (IA Curve tracker) should perform comparably to those in the predicate devices. | "Same – An algorithm comparison evaluation has been performed. This evaluation shows high correlation between the algorithm used in the predicate device and the new as the algorithms are the same."<br>"All results showed equivalence between the predicates and the subject, this means that results processed in predicates are showing equivalence to results from the subject device."<br>"We have performed a comparison validation between VisualEyes 505/515/ 525 and the predicate devices. All similarities and differences have been discussed. We trust that the results of these comparisons demonstrate that the VisualEyes 505/515/ 525 is substantially equivalent to the marketed predicate devices." |
| Clinical Performance Equivalence: The device should perform as specified, safely and effectively, in clinical comparisons. | "We have performed clinical comparisons between the three systems. These activities, testing and validation show that VisualEyes 505/515/ 525 perform as specified and is safe and effective." |
| No Essential/Major Differences: No differences should exist that adversely affect safety and effectiveness. | "We did not find any essential or major differences between the devices."<br>"Any deviations between VisualEyes 505/515/ 525 and predicate devices are appraised to have no adverse effect on the safety and effectiveness of the device." |
Study Details for Demonstrating Substantial Equivalence
- Sample size used for the test set and the data provenance:
- The document states: "The demonstration was carried out as a side by side comparison where the same patient was analysed by the subject device and the predicate device simultaneously."
- It also mentions: "All tests were performed on test subjects with conjugate eye movements."
- However, the exact number of patients/subjects in the test set is not specified.
- Data provenance (country of origin, retrospective/prospective): Not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not describe ground truth established by experts in the typical sense for AI/ML performance validation (e.g., for disease diagnosis).
- Instead, the "ground truth" for this substantial equivalence study appears to be the output of the predicate device, as the goal was to show that the new device's processing ("IA Curve Tracker") yielded "high correlation" with and "equivalence" to the predicate device's output. The predicate device itself is implicitly considered the "truth" for comparative purposes.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable as the comparison was directly between the output of the subject device and the predicate device, not against a human-adjudicated ground truth.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study was described. The study focused on the algorithmic comparison between devices, not human reader performance with or without AI assistance. The device's purpose is to assist a trained medical professional, but its performance was validated on the algorithm's output comparison.
- Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, this appears to be the primary method of validation for the algorithm. The document states: "One camera recorded the left eye and was processed in the predicate device and the other recorded the right eye and was processed in subject device." This indicates a direct comparison of the algorithmic output, separate from human interpretation or human-in-the-loop performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the algorithmic comparison, the "ground truth" was effectively the output of the predicate device. The study aimed to show that the subject device's processing produced equivalent results to the cleared predicate device for the "key algorithms for detecting and analysing nystagmus."
- For the overall clinical performance, it states "These activities, testing and validation show that VisualEyes 505/515/ 525 perform as specified and is safe and effective," which implies a broader clinical assessment but no specific details on how "truth" was established for clinical outcomes beyond direct comparison to the predicate.
- The sample size for the training set:
- Not applicable/Not mentioned. This document describes a 510(k) submission for an update to an existing device, focusing on substantial equivalence tests, not the development or training of a de novo AI/ML model. The algorithm is stated as being "the same" as the predicate's "IA Curve tracker."
- How the ground truth for the training set was established:
- Not applicable/Not mentioned for the reasons stated above.
(244 days)
INTERACOUSTICS A/S
The Eclipse with VEMP (Vestibular Evoked Myogenic Potential) is intended for vestibular evoked myogenic potential testing to assist in the assessment of vestibular function. The target population for Eclipse with VEMP includes patients aged from 8 years and up.
The device is to be used only by qualified medical personnel with prior knowledge of the medical and scientific facts underlying the procedure.
The Eclipse with VEMP is audiometric equipment intended to perform various Otoacoustic Emissions (OAEs) and Auditory Evoked Potential evaluations. The Eclipse is operated solely from PC based software modules. The Eclipse platform performs the physical measurements. The protocols are created in the software modules. The Eclipse consists of a hardware platform, a preamplifier, stimulation transducers and recording electrodes.
VEMP evaluations are tests of the vestibular portion of the inner ear and acoustic nerve, evoked with an auditory stimulation. The evoked response results in a potential recorded from the sternocleidomastoid (neck) muscles or the inferior oblique (eye) muscles. VEMP is not a test of the neck or eye musculature directly; the clinician is interested in the vestibular anatomy which triggers the response. The cervical Vestibular Evoked Myogenic Potential (cVEMP) is an evoked potential measured from the sternocleidomastoid (SCM) muscle and the ocular VEMP (oVEMP) is an evoked potential measured from the inferior oblique muscle. Both tests are used to assess the otolith organs (saccule and utricle) of the vestibular system and their afferent pathways and assist medical practitioners in the diagnosis of various balance disorders.
Summary: VEMP is an auditory evoked potential, like ABR, that can be obtained using any commercially available EP system. The addition of the VEMP module to Eclipse will make it possible for clinicians to conduct VEMP tests while using EMG (electromyography) monitoring and scaling. The VEMP function of the Eclipse with VEMP does not make a diagnosis. It only assists the medical professional.
Here's a breakdown of the acceptance criteria and study information for the Interacoustics A/S Eclipse with VEMP, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal "acceptance criteria" in terms of predefined numerical thresholds for performance metrics. Instead, it presents the results of internal clinical evaluations (test-retest reliability and reproducibility studies) and compares them to similar reported values in a predicate device. The implicit acceptance criterion is that the device demonstrates comparable or better reliability and reproducibility than the predicate device.
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Eclipse with VEMP) | Predicate Device (K143670) Performance (for comparison) |
|---|---|---|---|
| cVEMP Test-Retest Reliability | High correlation between repeated measurements | Mean: 0.9374<br>Median: 0.9562<br>IQR: 0.0547 | N/A (No direct predicate comparison for test-retest in this doc) |
| oVEMP Test-Retest Reliability | High correlation between repeated measurements | Mean: 0.9095<br>Median: 0.9280<br>IQR: 0.0738 | N/A (No direct predicate comparison for test-retest in this doc) |
| cVEMP Reproducibility (Different Days, Testers, Electrode Placements) | High correlation between measurements on different days | Mean: 0.8794<br>Median: 0.9235<br>IQR: 0.0955 | Mean: 0.9146 (R) / 0.9162 (L) |
| oVEMP Reproducibility (Different Days, Testers, Electrode Placements) | High correlation between measurements on different days | Mean: 0.8923<br>Median: 0.9100<br>IQR: 0.1112 | Mean: 0.926 (R) / 0.93 (L) |
Conclusion from document: The mean and median values for both test-retest reliability and reproducibility are high, indicating good performance. For reproducibility, the Eclipse with VEMP's mean correlation values are "similar to or higher than" those reported by the predicate device (GN Otometrics K143670).
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 14 normal adults (for both cVEMP and oVEMP).
- Data Provenance: The studies were conducted at two internal sites: one in Denmark and one in the United States. They appear to be prospective internal evaluations specifically for this device and its VEMP module.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not mention the use of experts to establish a "ground truth" for the test set in the traditional sense of diagnostic accuracy. The studies evaluated the reliability and reproducibility of the device's measurements, not its diagnostic accuracy against a separate, definitive truth. The VEMP module "does not make a diagnosis" but "only assists the medical professional." Therefore, there isn't a stated ground truth established by experts for these specific studies.
4. Adjudication Method for the Test Set
Not applicable. As noted above, these studies focused on the intrinsic reliability and reproducibility of the device's waveform measurements, not diagnostic outcomes requiring expert adjudication. The analysis involved correlation of waveforms.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, an MRMC comparative effectiveness study involving human readers with/without AI assistance was not mentioned or performed in the provided document. The device itself is an "Evoked Response Auditory Stimulator," a diagnostic tool that assists medical professionals, rather than an AI-powered diagnostic system that interprets images or data.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The studies performed evaluate the device's ability to consistently acquire and output VEMP waveforms. The device itself is a data acquisition system. While it has software for processing signals (e.g., filtering, averaging), the "performance" described is the consistency of the recorded physiological responses, not an algorithm's standalone diagnostic interpretation. The document explicitly states the VEMP module "does not make a diagnosis."
7. The Type of Ground Truth Used
There is no traditional "ground truth" (like pathology or clinical outcomes) used in these studies. The studies aimed to establish the test-retest reliability (consistency of measurements taken repeatedly within a short timeframe) and reproducibility (consistency of measurements taken on different occasions, potentially by different operators with slightly different setups) of the VEMP waveforms produced by the device. The comparison point is the device's own prior measurement or the measurement from a different session.
8. The Sample Size for the Training Set
The document describes internal clinical evaluations, not the training of a machine learning algorithm. Therefore, there is no "training set" in the context of AI/ML.
9. How the Ground Truth for the Training Set was Established
Not applicable, as there is no training set for an AI/ML algorithm mentioned in the document.
(153 days)
INTERACOUSTICS A/S
The VisualEyes 515/VisualEyes 525 system provides information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders. Nystagmus of the eye is recorded by use of a goggle mounted with cameras. These images are measured, recorded, displayed and stored in the software. This information can then be used by a trained medical professional to assist in diagnosing vestibular disorders. The target population for the VisualEyes 515/525 system is 5 years of age and above.
VisualEyes 515/525 is a software program that analyzes eye movements recorded from a camera mounted to a video goggle. A standard Video Nystagmography (VNG) protocol is used for the testing. VisualEyes 515/525 is replacing the existing Micromedical Technologies Spectrum vestibular testing software system and the Interacoustics VN415 and VO425 vestibular testing software (510(k) cleared under K964646 and K072254). The software system will work with the existing Micromedical VisualEyes Goggle and Interacoustics VN415/VO425 Goggle. The goggle hardware is not part of this submission and is still assumed covered by K964646 and K072254. The software is intended to run on a Microsoft Windows PC platform. The "525" system is a full-featured system (all vestibular tests as listed below) while the "515" system has a limited number of tests (indicated with a * below).
VNG in general is used to record nystagmus during oculomotor tests such as saccades, pursuit and gaze testing, optokinetics and also calorics. The VisualEyes 515/525 software performs the following standard vestibular tests: *Spontaneous Nystagmus, Gaze, Smooth Pursuit, Saccade, Optokinetic, *Positionals, *Dix-Hallpikes and *Caloric tests. These are exactly the same standard tests that are performed in the predicate devices and are described in the ANSI standard (ANSI S3.45-1999, "American National Standard Procedures for Testing Basic Vestibular Function"). There are no differences in any settings or parameters in these default tests in any of the devices. The clinical validation tests showed that each test was performed in exactly the same manner and resulted in similar findings when comparing VE525 to the predicate devices.
The provided text describes the Interacoustics VisualEyes 515/525 system, a video nystagmography (VNG) device, and its substantial equivalence determination to predicate devices. However, it does not contain the specific details required to fully address all aspects of your request, particularly regarding quantitative acceptance criteria, statistical study design, and sample sizes for training and test sets.
Here's an analysis based on the information provided, highlighting what is present and what is missing:
Acceptance Criteria and Study Details for VisualEyes 515/525
The provided document describes a substantial equivalence (SE) determination based on a comparison to predicate devices, rather than a clinical trial with predefined quantitative acceptance criteria and a detailed statistical analysis of performance metrics. Therefore, a table of acceptance criteria with reported device performance, a sample size for the test set, detailed ground truth establishment, and MRMC study details cannot be fully constructed from the given text.
The core of the "study" described is a side-by-side comparison to demonstrate the algorithms for detecting and analyzing nystagmus were similar to the predicate devices.
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, no numerical acceptance criteria (e.g., sensitivity, specificity, accuracy targets) are explicitly stated, nor are specific quantitative performance metrics reported for the VisualEyes 515/525. The "acceptance criteria" appear to be based on demonstrating "equivalence" in functionality and results compared to predicate devices.
| Acceptance Criterion (Inferred from SE determination) | Reported Device Performance (Inferred) |
|---|---|
| Similar algorithms for detecting and analyzing nystagmus to predicate devices | "All results showed equivalence between the predicates and the subject, this means that results processed in predicates are showing equivalence to results from the subject device." |
| Performance as specified | "All these activities, testing and validation show that VisualEyes 515/ 525 perform as specified and is safe and effective." |
| No essential or major differences affecting safety and effectiveness compared to predicate devices | "We did not find any essential or major differences between the devices." |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "The demonstration was carried out as a side by side comparison where the same patient was analysed by the subject device and the predicate device simultaneously."
- Sample Size for Test Set: Not explicitly stated. The text refers to "the same patient" in a singular sense, but then mentions "test subjects" in plural. It is unclear how many patients/subjects were included in this comparison.
- Data Provenance: Not explicitly stated (e.g., country of origin). The study involved simultaneous analysis of "the same patient" with conjugate eye movements. It is a prospective comparison in the sense that data was collected specifically for this validation, but not necessarily a "clinical trial" as commonly understood.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not provided in the document. The study described is a comparison between devices, not primarily against an independent expert-established ground truth. The implication is that the predicate devices' outputs serve as a reference for equivalence.
4. Adjudication Method for the Test Set
This information is not provided. Given the side-by-side comparison, it's possible the "adjudication" was a direct functional comparison of the outputs, but no formal adjudication method is outlined.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not explicitly described. The document focuses on device-to-device equivalence rather than human-reader performance with or without AI assistance. The device assists trained medical professionals, but its impact on human reader improvement or an effect size is not discussed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The described "performance tests" involving the side-by-side comparison with predicate devices can be considered a form of standalone algorithm evaluation when comparing the output signals and processed data directly. However, the device is explicitly intended to "provide information to assist in the nystagmographic evaluation, diagnosis and documentation of vestibular disorders," implying a human-in-the-loop for diagnosis. The study focused on the equivalence of the raw measurements and processed data between the new and predicate devices.
7. The Type of Ground Truth Used
The "ground truth" for the performance comparison in this context appears to be the output of the legally marketed predicate devices (Micromedical Technologies Spectrum and Interacoustics VN415/VO425 systems). The goal was to show that the VisualEyes system produced "similar findings" and "equivalence" to these established devices.
"One camera recorded the left eye and was processed in the predicate device and the other recorded the right eye and was processed in subject device. All results showed equivalence between the predicates and the subject..."
8. The Sample Size for the Training Set
This information is not provided. The document makes no mention of a "training set" for the algorithms, which is typical for machine learning-based devices. The algorithms likely rely on established biomechanical models for eye movement rather than a distinct training phase.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned (see point 8), this information is not provided.
Summary of Missing Information:
The provided 510(k) summary is focused on establishing substantial equivalence to previously cleared predicate devices, emphasizing functional and technological similarity. It does not contain the detailed statistical analysis, quantitative performance metrics, and specific study design elements (like explicit sample sizes for testing/training, ground truth establishment by experts, or MRMC studies) that would typically be found in direct performance studies or clinical trials for novel devices or AI/ML-driven diagnostics. The "study" described is a technical comparison rather than a full-fledged clinical validation with predefined endpoints.
(30 days)
INTERACOUSTICS A/S
The Interacoustics AT235 Impedance Audiometer is an electroacoustic test instrument that produces controlled levels of test tones and signals intended for use in conducting diagnostic hearing evaluations and assisting in the diagnosis of possible otologic disorders. It features tympanometry, acoustic reflex and air conduction audiometry.
AT235 is an auditory impedance analyser. The device is intended to change the air pressure in the external auditory canal and measure and graph the mobility characteristics of the tympanic membrane to evaluate the functional condition of the middle ear. The device is used to determine abnormalities in the mobility of the tympanic membrane due to stiffness, flaccidity, or the presence of middle ear pathologies. The device is also used to measure the acoustic reflex threshold, which occurs due to contractions of the stapedial muscle following exposure to a strong stimulus. This test allows differentiation between central and peripheral pathologies and helps identify where the patient's uncomfortable loudness level may reside. The uncomfortable loudness level is useful when providing rehabilitative amplification methods and determining the correct management process for the patient. The AT235 also includes basic audiometry functions. The instrument is software controlled. The software controls the probe (tone and pressure) stimuli, measures the result and presents the result on a built-in display. All functions are set and interpreted by the operator (there are no interpretations of results in the device). The technological characteristics are substantially equivalent to the predicate device. All technological characteristics are in compliance with the consensus standard ANSI S3.39 for auditory impedance testers.
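As a rough illustration of the kind of measurement a tympanometer graphs (not the AT235's internal processing, which the 510(k) summary does not describe), the headline numbers of a tympanogram are the peak compliance and the ear-canal pressure at which it occurs. The helper name and trace values below are hypothetical.

```python
# Hypothetical sketch: extract peak compliance and peak pressure from a
# compliance-vs-pressure tympanogram trace. A peak near 0 daPa with normal
# compliance is the classic normal (Type A) pattern.
def tympanogram_peak(pressures_daPa, compliances_ml):
    """Return (peak_pressure, peak_compliance) of a tympanogram trace."""
    peak_compliance = max(compliances_ml)
    peak_pressure = pressures_daPa[compliances_ml.index(peak_compliance)]
    return peak_pressure, peak_compliance

# Hypothetical trace swept from -200 to +200 daPa.
peak_p, peak_c = tympanogram_peak(
    [-200, -100, 0, 100, 200],
    [0.2, 0.5, 1.1, 0.6, 0.3],
)
```

As the submission notes, the device only presents such measurements; interpretation of the trace is left entirely to the operator.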
The provided document is a 510(k) summary for the Interacoustics AT235 Impedance Audiometer. This document focuses on demonstrating substantial equivalence to a predicate device, rather than providing detailed acceptance criteria and a study proving device performance against those criteria in a typical AI/software-as-a-medical-device (SaMD) context.
Therefore, many of the requested points related to acceptance criteria, specific performance metrics, ground truth establishment, training sets, and MRMC studies are not applicable to this type of regulatory submission. The submission relies on demonstrating that the new AT235 maintains the same safety and effectiveness as its previously cleared version (K994254).
Here's a breakdown based on the available information:
1. A table of acceptance criteria and the reported device performance:
This information is not explicitly stated in the document in the format of specific acceptance criteria and performance results (e.g., sensitivity, specificity, accuracy). The document states that the device was found in compliance with current standards and demonstrated substantial equivalence with the predicate device.
The technological characteristics are stated to be in compliance with the consensus standard ANSI S3.39 for auditory impedance testers. This implicitly means the device meets the performance requirements outlined in that standard, but the specific metrics and targets are not detailed in this summary.
2. Sample size used for the test set and the data provenance:
- Test set sample size: Not applicable. This document refers to device verification and validation activities, not a study involving a test set of patient data with a defined sample size for performance evaluation in the way an AI/SaMD would.
- Data provenance: Not applicable for performance evaluation against a specific test set. The validation activities are described as "following the design control procedure."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
Not applicable. Ground truth establishment for a patient-level test set is not described as part of this submission. The device is an electroacoustic test instrument; its performance is validated against physical and instrument standards, not typically against expert-labeled patient data in this context.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable. There is no mention of an adjudication process for a test set of patient data.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
Not applicable. This device is an audiometer, an electroacoustic test instrument, not an AI-powered diagnostic imaging tool that would typically involve human readers interpreting results.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Not applicable. This device is a measurement instrument. Its "performance" is its ability to accurately measure and present electroacoustic data, not to interpret or diagnose. The document explicitly states: "All functions are set and interpreted by the operator (There are no interpretations of results in the device)."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
The ground truth for this type of device would primarily be instrument calibration standards, physical measurements, and compliance with recognized industry standards (e.g., ANSI S3.39 for auditory impedance testers). The document states: "The device was found in compliance with current standards and demonstrated substantial equivalence with the predicate device."
8. The sample size for the training set:
Not applicable. This device does not use machine learning or AI that requires a training set of data.
9. How the ground truth for the training set was established:
Not applicable. As there is no training set for an AI/ML algorithm, there is no ground truth establishment for such a set.
Summary of what the document does state regarding verification and validation:
The "Nonclinical tests summary" section on page 4-5 clearly states:
- "Following the design control procedure the design verification and validation were performed according to current standards for medical device safety and EMC and performance of impedance tester."
- "The device was found in compliance with current standards and demonstrated substantial equivalence with the predicate device."
- "Clinical tests None applicable"
The "Conclusion" further reiterates:
- "The AT235 as a modification to the predicate device (the previous cleared revision of AT235) uses the same or identical technology and has the same intended use as the predicate device."
- "We trust that the verification and validation activities show substantial equivalence with the predicate device and that the modified AT235 is as safe and effective as the predicate device for its claimed purpose."
In essence, the "study" proving the device meets its acceptance criteria is the comprehensive design verification and validation process against relevant medical device standards (including safety, EMC, and the ANSI S3.39 performance standard) to demonstrate substantial equivalence to its predicate. The specifics of these tests (e.g., specific measurement ranges, accuracy tolerances, etc.) are not detailed in this summary document but would be part of the full 510(k) submission.
(88 days)
INTERACOUSTICS A/S
The EyeSeeCam vHIT is designed to provide information on the performance of the vestibular ocular reflex (VOR) by providing objective measures of eye-velocity response to head-velocity stimulus, showing the VOR gain in the plane of rotation of the head.
The EyeSeeCam vHIT system consists of a head mounted goggle/mask, a camera unit with calibration laser and a software application running on a standard PC. The vHIT goggle generally has one camera (monocular) fixed at the top side(s) of the mask. The camera is held in place mechanically with a spherical ball-and-socket joint. The vHIT goggle supports camera positions for both the left and the right eye; the camera is therefore interchangeable between the left and right eyes. The vHIT goggle supports the camera that is used to record the eye images. This constitutes the major component of the vHIT system. The USB camera uses infrared light (IR), which is not visible to the naked eye. The IR illumination enables sessions to be performed in complete darkness. The vHIT goggle has dichroic mirrors that allow visible light to pass through and infrared light to be reflected towards the cameras. A calibration laser is placed in the center of the goggle. This is used for camera/pupil calibration before testing. The complete system is operated from a standard PC/laptop via a standard USB connection. The PC application software controls the camera recordings and shows the results of the tests. EyeSeeCam utilizes an inertial measurement unit (IMU), which is an accelerometer and gyroscope combined. The IMU is contained inside the camera unit. The camera housing is attached to a lightweight goggle. The camera records eye movements and the IMU records an electronic waveform that is proportional to head angular velocity (deg/sec).
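The VOR gain the system reports is, in essence, a ratio of the eye-velocity response to the head-velocity stimulus. One common textbook definition — not necessarily EyeSeeCam's exact algorithm — is the ratio of the areas under the (desaccaded) eye-velocity and head-velocity curves over the impulse window; a hedged sketch with synthetic data:

```python
# Hedged sketch of a VOR gain computation for one head impulse, defined as
# the ratio of the area under the eye-velocity curve to the area under the
# head-velocity curve. This is NOT necessarily EyeSeeCam's exact algorithm;
# the function name and sampled values are assumptions.
def vor_gain(head_velocity, eye_velocity, dt):
    """Velocities in deg/s sampled every dt seconds. Magnitudes are used,
    since the compensatory eye movement opposes the head movement."""
    head_area = sum(abs(v) for v in head_velocity) * dt
    eye_area = sum(abs(v) for v in eye_velocity) * dt
    return eye_area / head_area

# Synthetic impulse sampled at 250 Hz: a perfect VOR would give gain 1.0;
# here the eye response is 90% of the head stimulus (and sign-inverted).
head = [0.0, 50.0, 100.0, 150.0, 100.0, 50.0, 0.0]
eye = [-0.9 * v for v in head]
gain = vor_gain(head, eye, dt=0.004)
```

Note that the sampling interval cancels in the ratio; it is kept as a parameter only to make the area interpretation explicit.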
The EyeSeeCam vHIT is a device intended to objectively measure the vestibular ocular reflex (VOR) in response to head movements specifically for patients experiencing dizziness or balance problems. The submission states that clinical testing compared results between the EyeSeeCam vHIT and a predicate device (VORTEQ/VISUALEYES) and found substantial equivalence.
Here's a breakdown of the requested information based on the provided document:
Acceptance Criteria and Device Performance
The document does not explicitly state numerical acceptance criteria in terms of sensitivity, specificity, or similar metrics. Instead, "substantial equivalence" to the predicate device, VORTEQ/VISUALEYES, is used as the primary acceptance criterion. The device's performance is demonstrated through this comparison.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Substantial Equivalence to Predicate Device (VORTEQ/VISUALEYES) | "Clinical testing compared test results between the EyeSeeCam vHIT and the Predicate Device. The results showed compliance and substantial equivalence between the two instruments."<br>"We have performed a comparison validation between EyeSeeCam and VORTEQ. All similarities and differences have been discussed. We trust that the results of these comparisons demonstrate that the EyeSeeCam vHIT is substantially equivalent to the marketed predicate device." |
| Functionality (Technical Characteristics) | EyeSeeCam vHIT has similar technology and functionality to the predicate device, with minor differences (e.g., USB vs. Firewire, monocular vs. optional binocular cameras) that are appraised as not having adverse effects on safety and effectiveness. |
| Compliance with Standards | "Design verification and validation were performed according to current standards for medical device safety and EMC. The device was found in compliance with current standards." This includes IEC 60601-1, ANSI/AAMI ES60601-1, CAN/CSA-C22.2 No. 60601.1:08 for Safety; IEC 60601-1-2 for EMC; and IEC 62471-1, IEC 60825-1 for Laser. |
Additional Information:

2. Sample size used for the test set and the data provenance:
   - The document states that "Clinical testing compared test results between the EyeSeeCam vHIT and the Predicate Device" and that "We have performed a comparison validation between EyeSeeCam and VORTEQ." However, the specific sample size (number of patients or cases) for this clinical comparison is not provided.
   - Data provenance is not specified (e.g., country of origin, retrospective or prospective nature of the clinical study).

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
   - The document does not specify the number of experts or their qualifications used to establish ground truth for any test set. The study primarily relies on a comparison between the EyeSeeCam vHIT and a predicate device.

4. Adjudication method for the test set:
   - The document does not describe any adjudication method for a test set. The comparison is between two devices, not an evaluation against a ground truth established by multiple experts.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
   - No MRMC comparative effectiveness study was done. This device is a diagnostic tool for measuring the VOR, not an "AI assistance" tool for human readers in the context of image interpretation. The study focused on device-to-device equivalence.

6. If a standalone (i.e., algorithm only, without human-in-the-loop) performance assessment was done:
   - The EyeSeeCam vHIT operates as a standalone diagnostic device. The clinical comparison evaluated the device's own output (the VOR measurement itself, without human-in-the-loop influence, though human interpretation of the results is implied) against another device. So yes, a standalone performance assessment, in the sense of device function, was done by comparing its output to the predicate device.

7. The type of ground truth used:
   - The "ground truth" for the comparison study appears to be the measurements and results obtained from the predicate device (VORTEQ/VISUALEYES), since the study aimed to demonstrate substantial equivalence to this legally marketed device rather than agreement with an independent gold standard such as pathology or long-term outcomes.

8. The sample size for the training set:
   - The document does not provide information regarding a "training set" or its sample size. This submission is a premarket notification demonstrating device equivalence, which typically does not detail machine-learning training data.

9. How the ground truth for the training set was established:
   - As no information on a training set is provided, how its ground truth would have been established is likewise not detailed.