The Interacoustics EP systems, EP15 and EP25, are intended to assist in the evaluation, documentation, and diagnosis of ear disorders in human beings. The EP15/25 is a 2-channel ABR system; its automatic recording of ABR waveforms makes it well suited for waveform-based screening, while its manual programmability options allow for comprehensive clinical use, ranging from frequency-specific threshold tests to operating-room applications and cochlear-implant tests. The EP15 is a basic unit allowing only recording of the Auditory Brainstem Response (ABR), while the EP25 allows recording of the ABR as well as earlier and later potentials.
The Interacoustics TEOAE25 system is intended for determining cochlear function using Transient Evoked Otoacoustic Emission (TEOAE) click stimuli.
Both of these systems are of particular interest to ear, nose, and throat (ENT) doctors, neurologists, audiologists, and other health professionals concerned with measuring auditory function.
The EP15 and EP25 systems measure auditory brainstem responses picked up by skin electrodes. Data acquisition takes place from surface electrodes mounted at specific recording points on the patient. The analogue ABR recordings are amplified in an external preamplifier connected to the electrodes and then converted into a digital signal by the ADC (analog-to-digital converter) inside the Eclipse. The digital ABR recordings undergo data processing on the PC to improve signal quality, and the processed recordings are displayed on the monitor for the operator for further examination and diagnosis. All ABR recordings are stored on the laptop or desktop computer's hard drive for later examination and diagnosis.
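The PC-side processing described above centers on synchronous averaging: epochs time-locked to the stimulus are averaged so that uncorrelated background EEG and noise cancel out while the evoked response remains. A minimal sketch in Python/NumPy; the sampling rate, epoch length, and simulated signal are illustrative assumptions, not values taken from the 510(k) summary:

```python
import numpy as np

def average_abr_epochs(recording, trigger_indices, epoch_len):
    """Average stimulus-locked epochs to improve ABR SNR.

    Synchronous averaging attenuates uncorrelated noise by roughly
    sqrt(N) for N averaged sweeps, leaving the time-locked response.
    """
    epochs = np.stack([recording[i:i + epoch_len]
                       for i in trigger_indices
                       if i + epoch_len <= len(recording)])
    return epochs.mean(axis=0)

# Simulated example: a small "response" buried in noise across 500 sweeps.
rng = np.random.default_rng(0)
fs = 30000                      # assumed sampling rate (Hz)
epoch_len = 300                 # 10 ms epochs at 30 kHz
response = 0.2 * np.sin(2 * np.pi * 500 * np.arange(epoch_len) / fs)
triggers = np.arange(0, 500 * epoch_len, epoch_len)
recording = np.tile(response, 500) + rng.normal(0.0, 1.0, 500 * epoch_len)
avg = average_abr_epochs(recording, triggers, epoch_len)
print(avg.shape)  # (300,)
```

With 500 sweeps, the residual noise in the averaged trace is roughly 1/sqrt(500) of the single-sweep noise, which is why a response far below the noise floor becomes visible.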
Here's an analysis of the provided text regarding the acceptance criteria and study for the Interacoustics EP15/EP25 device:
This 510(k) summary describes a modification to an existing medical device, the Interacoustics Eclipse EP15/EP25. As such, the acceptance criteria and study focus on demonstrating that the modified device is as safe and effective as, or better than, the predicate device, rather than a de novo validation of a completely new device.
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly list "acceptance criteria" in a quantitative, pass/fail format tied to performance metrics such as sensitivity, specificity, or accuracy for a diagnostic task. Instead, the acceptance criteria are implicitly met by demonstrating that the modified device functions correctly and performs equivalently to, or better than, the predicate across various technical aspects.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| New Features (Functional Correctness): | |
| Correct implementation of EABR protocol setup | "The usefulness of the EABR protocol setup has been validated with a positive result." |
| Correct implementation of new stimuli (chirp, NB chirp, click from wave file) | "The correct implementation of the new stimuli, i.e. chirp, NB chirp and click from wave file, have been verified through bench testing which provided the expected results. The generated electrical output was consistent with the mathematically predicted output." |
| Correct implementation of Bayesian weighting and low pass filtering | "The correct implementation of the Bayesian weighting and low pass filtering has also been verified through bench testing and the results were as predicted." |
| Operating System Compatibility: | |
| Ability to run under Windows Vista® without abnormal behavior | "The EP15/EP25's ability to run under windows Vista® has also been tested and no abnormal behavior was detected." |
| Overall Safety and Effectiveness: | |
| Device is as safe and effective as or better than the predicate | "Based on the design control, the tests performed and the validation of the modified device we conclude that the modified device is both as safe and effective and performs as good as or better than the predicate device." (This is the overarching conclusion, supported by the specific tests above and the comparison table showing equivalence in clinical purpose, patient group, hardware, transducers, and electrodes, while indicating improvements in specific features like stimuli and averaging techniques.) |
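The chirp stimuli in the table are sweeps in which low frequencies lead high frequencies, compensating for cochlear travelling-wave delay; the device's actual chirp waveforms (e.g. the proprietary shaping of a NB chirp) are not described in the document. As a loose illustration only, a basic rising linear chirp can be generated by integrating the instantaneous frequency to obtain phase; all parameters below are assumptions:

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Generate a linear frequency sweep (chirp) from f0 to f1 Hz.

    Illustrative only: clinical ABR chirps shape the sweep to match
    cochlear delay, whereas this sketch is a plain linear sweep.
    """
    t = np.arange(int(duration * fs)) / fs
    # Instantaneous frequency f(t) = f0 + (f1 - f0) * t / duration;
    # the phase is 2*pi times its integral over time.
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t**2)
    return np.sin(phase)

fs = 48000                                  # assumed DAC sampling rate
stim = linear_chirp(200.0, 10000.0, 0.010, fs)  # 10 ms, 200 Hz -> 10 kHz
print(len(stim))  # 480
```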
2. Sample Size Used for the Test Set and the Data Provenance:
The document does not provide specific sample sizes for any of the bench tests or validations mentioned. It describes "bench testing" and "validation," which typically refer to internal engineering and quality assurance tests rather than clinical studies on patient populations.
Regarding data provenance: All tests described appear to be internal, prospective bench tests conducted by Interacoustics. There is no mention of data from a specific country of origin or whether it was retrospective or prospective in a clinical sense.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:
The document does not mention the use of experts to establish ground truth for the test set. The validation seems to rely on comparing the output of the modified device against mathematically predicted outputs or expected results from the predicate device (e.g., electrical consistency, predicted results for Bayesian weighting).
4. Adjudication Method for the Test Set:
There is no mention of an adjudication method. Since the validation primarily involves technical bench testing and comparison against expected outputs, rather than clinical interpretation by multiple human readers, an adjudication method is not applicable in this context.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance:
No, an MRMC comparative effectiveness study was not done. The device (EP15/EP25) is an audiometer that measures auditory brainstem responses and evoked potentials. It assists in evaluation and diagnosis but is not an AI-driven image analysis or decision support system that would typically undergo MRMC studies to assess the impact of AI assistance on human reader performance. Its enhancements are in signal processing and stimulus generation, not AI interpretation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
The "tests" section describes validation activities, which could be considered analogous to standalone testing for the specific new features. For example:
- "The usefulness of the EABR protocol setup has been validated with a positive result."
- "The correct implementation of the new stimuli... have been verified through bench testing which provided the expected results. The generated electrical output was consistent with the mathematically predicted output."
- "The correct implementation of the Bayesian weighting and low pass filtering has also been verified through bench testing and the results were as predicted."
These statements indicate that the algorithmic and functional aspects of the new features were tested independently to ensure they performed as intended, without direct human intervention in the measurement process itself. However, the device's overall intended use still involves human operators for diagnosis and interpretation.
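The document names "Bayesian weighting" but does not define the algorithm. A common reading of weighted averaging in evoked-potential systems is that each sweep's weight is inversely proportional to its estimated noise variance, so artifact-laden sweeps contribute less than quiet ones. A hedged sketch under that assumption (not the device's documented method):

```python
import numpy as np

def bayesian_weighted_average(epochs):
    """Weighted average of sweeps with weights proportional to the
    inverse of each sweep's variance.

    Simplified interpretation of "Bayesian weighting": noisy sweeps are
    down-weighted instead of rejected or equally weighted.
    """
    variances = epochs.var(axis=1)
    weights = 1.0 / np.maximum(variances, 1e-12)  # guard against /0
    weights /= weights.sum()
    return weights @ epochs

# Mix of low-noise sweeps and artifact-laden sweeps around one response.
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
quiet = clean + rng.normal(0.0, 0.1, (50, 100))   # low-noise sweeps
noisy = clean + rng.normal(0.0, 5.0, (10, 100))   # artifact-laden sweeps
epochs = np.vstack([quiet, noisy])
est = bayesian_weighted_average(epochs)
```

Compared with a plain mean over the same sweeps, the weighted estimate suppresses the contribution of the ten noisy sweeps and lands much closer to the underlying response.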
7. The Type of Ground Truth Used:
The ground truth for the technical validations was primarily:
- Mathematically predicted output: For the new stimuli (chirp, NB chirp, click from wave file), the generated electrical output was compared to mathematically predicted output.
- Expected results/Predicted results: For Bayesian weighting and low pass filtering, the results were compared to what was "predicted."
- Lack of abnormal behavior: For Windows Vista® compatibility, the ground truth was the absence of unexpected errors or malfunctions.
There's no mention of pathology, expert consensus on clinical cases, or outcomes data as ground truth because the validation focused on the technical correctness of feature implementation, not diagnostic accuracy in a clinical population.
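A bench test against a "mathematically predicted output" can be pictured as a direct waveform comparison within a tolerance. The tolerance, waveform, and noise level below are hypothetical; the 510(k) summary does not state the actual acceptance threshold:

```python
import numpy as np

def verify_stimulus(generated, predicted, tol=1e-3):
    """Bench-test style check: the measured electrical output must match
    the mathematically predicted waveform within a tolerance.

    The tolerance is a placeholder, not a documented acceptance limit.
    """
    max_err = float(np.max(np.abs(generated - predicted)))
    return max_err <= tol, max_err

fs = 48000
t = np.arange(480) / fs
predicted = np.sin(2 * np.pi * 1000 * t)   # ideal 1 kHz tone segment
# Simulated measurement: prediction plus small measurement noise.
generated = predicted + 1e-4 * np.random.default_rng(2).standard_normal(480)
ok, err = verify_stimulus(generated, predicted)
print(ok)  # True
```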
8. The Sample Size for the Training Set:
The document does not provide any information about a training set. This is expected as the document describes a traditional medical device (audiometer) update, not a machine learning or AI-driven system that would typically involve distinct training and test sets in the context of model development.
9. How the Ground Truth for the Training Set Was Established:
Not applicable, as no training set is mentioned or implied for this device modification.
§ 874.1050 Audiometer.
(a) Identification. An audiometer or automated audiometer is an electroacoustic device that produces controlled levels of test tones and signals intended for use in conducting diagnostic hearing evaluations and assisting in the diagnosis of possible otologic disorders.

(b) Classification. Class II. Except for the otoacoustic emission device, the device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter, if it is in compliance with American National Standards Institute S3.6-1996, “Specification for Audiometers,” and subject to the limitations in § 874.9.