510(k) Data Aggregation
(85 days)
GWJ
The Integrity with VEMP provides testing that is intended to assist in the assessment of vestibular function. The target patient population ranges from school-age children to geriatric adults who can complete the testing tasks. The Integrity with VEMP is intended to be used by a variety of professionals, such as clinical practitioners specializing in balance disorders, audiologists, and otolaryngologists (ENT doctors) with prior medical and scientific knowledge of the VEMP procedure.
VEMP (Vestibular Evoked Myogenic Potential) is a short-latency response generated either from the sternocleidomastoid muscle (cVEMP) or the inferior oblique muscle (oVEMP), typically evoked by high-level acoustic or vibratory stimulation. VEMP is a non-invasive test used in the diagnosis of vestibular disorders such as superior canal dehiscence, Ménière's disease, and vestibular neuritis, among others. VEMP testing can be done in clinics (ENT/audiology) and hospitals, provided that the users have adequate knowledge and background about the underlying process of VEMP and the recording of auditory evoked responses.
The Integrity V500 is indicated for auditory evoked potential testing. "Integrity with VEMP" (this submission) is the addition of the VEMP test modality to the Integrity V500. The Integrity V500 records an acoustically evoked response of the auditory system, while the Integrity with VEMP records an acoustically evoked response of the vestibular system. Since the vestibular system is connected to the auditory system, a loud stimulus to the auditory system also simultaneously stimulates the vestibular system; however, the evoked signals are measured at different electrode sites. It is important to note that the methods of stimulation and data acquisition are the same for the VEMP modality as for the ABR modality (one of the Integrity V500's auditory evoked testing modalities), with the differences being in electrode montage and the patient's physical state. As such, it is possible to also get a VEMP response while testing in the Integrity V500's ABR modality.
Integrity with VEMP is a PC-based device which uses the same hardware and similar software for evoking the stimulus, collecting the response, processing the data, and displaying the outcome on the screen as does the Integrity V500. The hardware comprises: a VivoLink (patient interface device), air and bone conduction transducers (to stimulate the vestibular system), a CV-Amp (bio-amplifier), and a computer. The response from the muscle is picked up by a bio-amplifier attached with electrodes to the neck (for cVEMP) or under the eyes (for oVEMP) and to the forehead. The data is pre-processed in the VivoLink and then transferred to the PC via Bluetooth connection for full processing using the algorithm designed for VEMP processing. Throughout the test, the processed sweeps are filtered using the filter setting defined by the user and the averaged response is shown on the screen.
The primary diagnostic component of a VEMP measurement is the comparison of the amplitude of the primary peak of the VEMP response between the right and left sides, expressed as an asymmetry ratio, where a significant amount of asymmetry is indicative of a vestibular disorder. The amplitude of the vestibular response peaks is also dependent on the contraction level of the muscles involved in the recording. To minimize the muscular response biasing the result, it is important to have equal contraction levels on both sides; it is especially physically challenging to generate the equivalent neck contractions needed for cVEMP. To overcome this issue, two key features were added to the Integrity with VEMP compared to the Integrity V500: a biofeedback EMG monitor (which displays real-time muscular activity) and VEMP response normalization (scaling based on muscular contraction levels). The EMG monitor can be used as guidance for clinicians and patients on the level of muscle contraction. Normalization automatically scales the recorded sweeps based on the energy of the corresponding EMG, which helps to compensate for imbalanced contraction between the two sides.
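The two quantities described above lend themselves to a short illustration. The summary does not give the device's exact formulas, so the sketch below uses the conventionally published asymmetry-ratio definition and a simple EMG-based scaling; the function names and the normalization details are assumptions.

```python
def asymmetry_ratio(amp_right, amp_left):
    # Conventional VEMP asymmetry ratio in percent: difference of the two
    # peak amplitudes over their sum. The 510(k) summary does not state the
    # device's exact formula; this is the commonly published definition.
    larger, smaller = max(amp_right, amp_left), min(amp_right, amp_left)
    return 100.0 * (larger - smaller) / (larger + smaller)

def normalized_amplitude(peak_to_peak_uv, mean_rectified_emg_uv):
    # Scale a VEMP amplitude by the background EMG level, mimicking the
    # normalization feature described above (the exact scaling used by the
    # device is an assumption here).
    return peak_to_peak_uv / mean_rectified_emg_uv

# Example using the Study 1 cVEMP mean amplitudes (right 209.6 µV, left 198.7 µV):
ar = asymmetry_ratio(209.6, 198.7)
```

With equal contraction on both sides a healthy asymmetry ratio is small; clinics typically flag ratios above a site-specific cutoff.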
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly listed as a table of thresholds to be met, but rather are inferred from the clinical studies' conclusions regarding substantial equivalence and test reliability. The reported device performance is presented in tables comparing the device's measurements to a predicate device and to literature norms, as well as its reliability metrics against industry standards.
Acceptance Criterion 1 (Inferred): Substantial equivalence to the predicate device (Interacoustics Eclipse, K162037)

Reported performance against the predicate (Study 1):

cVEMP vs. Eclipse (K162037):
- P1 [ms]: Integrity VEMP (15.6 ± 1.3, 15.8 ± 1.4) vs. Eclipse (14.7 ± 1.2, 15.1 ± 1.5). Both within literature norms (11-17 ms). ICC: 0.72 (right), 0.91 (left).
- N1 [ms]: Integrity VEMP (23.5 ± 1.6, 23.9 ± 1.5) vs. Eclipse (23.6 ± 1.4, 23.7 ± 1.8). Both within literature norms (18-27 ms). ICC: 0.95 (right), 0.80 (left).
- P1-N1 [µV]: Integrity VEMP (209.6 ± 105.3, 198.7 ± 106) vs. Eclipse (195.5 ± 54.4, 168.8 ± 48.8). Both within literature norms (28-300 µV). ICC: 0.67 (right), 0.71 (left).

oVEMP vs. Eclipse (K162037):
- N1 [ms]: Integrity VEMP (11.2 ± 0.8, 11.4 ± 0.9) vs. Eclipse (10.2 ± 1.1, 10.4 ± 1.1). Both within literature norms (9-19 ms). ICC: 0.63 (right), 0.78 (left).
- P1 [ms]: Integrity VEMP (15.9 ± 1.7, 16.3 ± 1.6) vs. Eclipse (15.2 ± 1.7, 15.2 ± 1.6). Both within literature norms (12-22 ms). ICC: 0.89 (right), 0.76 (left).
- N1-P1 [µV]: Integrity VEMP (17.4 ± 8.5, 17.3 ± 8.3) vs. Eclipse (16.7 ± 10.6, 14.1 ± 6.6). Both within literature norms (2-45 µV). ICC: 0.90 (right), 0.91 (left).

Conclusion: Both systems' results are consistent with the literature; statistical analysis shows good to excellent reliability. Substantial equivalence criteria met.

Reported reliability and reproducibility (Study 2):

Test-retest reliability (one session):
- cVEMP (air conduction): mean (μ) = 0.92, median = 0.94, std dev = 0.05, IQR = 0.07. (Meets industry standard: mean ≥ 0.879, IQR ≤ 0.111.)
- oVEMP (air conduction): mean (μ) = 0.89, median = 0.89, std dev = 0.05, IQR = 0.08. (Meets industry standard.)
- cVEMP (bone conduction): mean (μ) = 0.92, median = 0.92, std dev = 0.05, IQR = 0.12. (Mean meets the standard; IQR is slightly high but deemed acceptable due to bone conduction variability.)

Test-retest reproducibility (two sessions, 24-72 hours apart):
- cVEMP (air conduction): mean (μ) = 0.92, median = 0.93, std dev = 0.05, IQR = 0.08. (Meets industry standard: mean ≥ 0.879, IQR ≤ 0.111.)
- oVEMP (air conduction): mean (μ) = 0.89, median = 0.87, std dev = 0.06, IQR = 0.11. (Meets industry standard.)

Conclusion: Reliability and reproducibility values are similar to those from other industry products (Eclipse K162037 document). Good test-retest reliability and reproducibility are concluded.

Acceptance Criterion 2 (Inferred): Test-retest reliability and reproducibility comparable to industry standards — supported by the Study 2 results above.
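The quoted acceptance thresholds (mean ≥ 0.879, IQR ≤ 0.111) are mechanical to check. A minimal sketch, assuming the per-subject correlation coefficients are available as a plain list (the input values here are hypothetical):

```python
import numpy as np

def meets_reliability_criteria(coeffs, mean_min=0.879, iqr_max=0.111):
    # Check a set of test-retest correlation coefficients against the
    # thresholds quoted from the predicate's 510(k) document.
    # Returns (meets, mean, iqr).
    coeffs = np.asarray(coeffs, dtype=float)
    mean = coeffs.mean()
    iqr = np.percentile(coeffs, 75) - np.percentile(coeffs, 25)
    return bool(mean >= mean_min and iqr <= iqr_max), mean, iqr

# Hypothetical per-subject coefficients for one test condition:
ok, mean, iqr = meets_reliability_criteria([0.92, 0.94, 0.90, 0.93])
```

A tight IQR alongside a high mean indicates that the reliability is both strong and consistent across subjects, which is what the quoted thresholds jointly enforce.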
1. Sample sizes used for the test set and the data provenance:
- Study 1 (Substantial Equivalence):
- cVEMP: 13 normal adult participants.
- oVEMP: 9 adult participants.
- Data Provenance: Not explicitly stated (e.g., country of origin, prospective/retrospective). Implied to be prospective for the purpose of this comparative study.
- Study 2 (Reliability & Reproducibility):
- US site: 33 normal hearing adults for cVEMP (AC); 17 normal hearing adults for oVEMP (AC).
- Canada Site: 9 school-age normal hearing children for cVEMP (BC); 8 pre-school age normal hearing children for cVEMP (AC).
- Data Provenance: Prospective, collected from two sites (US and Canada).
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The ground truth in this context is established by the physiological measurements themselves, not by expert interpretation of images or other subjective data. The "normal adult/hearing" participant selection implies a pre-qualification based on typical physiological parameters, but no specific "expert" panel is described as establishing or adjudicating individual VEMP responses for ground truth. The comparison is against established literature norms and a predicate device's measured performance.
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable as this is a device performance study based on objective physiological measurements (latencies, amplitudes) rather than subjective assessments requiring expert consensus or adjudication. The data analysis involved statistical comparisons (mean, standard deviation, ICC, IQR).
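For reference, the ICC mentioned above can be computed from a subjects-by-sessions matrix. The document does not state which ICC form was used, so the sketch below implements one common choice, the two-way random, single-measure ICC(2,1) of Shrout and Fleiss:

```python
import numpy as np

def icc_2_1(ratings):
    # Two-way random, single-measure ICC(2,1). `ratings` is an
    # (n_subjects, k_sessions) array; identical ratings in every
    # session yield 1.0. (Degenerate all-equal input yields nan.)
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-session means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # sessions MS
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Identical ratings across sessions give an ICC of 1; a constant offset between sessions lowers ICC(2,1), because this form treats session bias as error.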
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, this type of study was not performed. The device is not an AI-powered diagnostic tool that assists human readers in interpreting complex data like medical images. It's a medical device that measures physiological responses (VEMP). The study focuses on the device's ability to accurately capture and report these responses, which are then interpreted by clinicians.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The device inherently involves a human-in-the-loop for proper patient setup, transducer placement, monitoring for muscle contraction via biofeedback, and ultimately interpreting the results. The "performance" being evaluated is the system's ability to consistently provide objective VEMP measurements. There isn't an "algorithm only" mode separate from its intended use with a human operator. The software's processing of data from the hardware is integral to its function.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth is based on physiological measurement consistency and comparison to established literature norms and a legally marketed predicate device.
- For Study 1, the ground truth for "normal" VEMP responses is referenced against "Norms from Literature" for P1/N1 latencies and P1-N1 amplitudes. The predicate device's performance also serves as a comparative "ground truth" for substantial equivalence.
- For Study 2, "industry-accepted standards" for test-retest repeatability and reproducibility (derived from the predicate device's 510(k) document) served as the benchmark for reliability.
7. The sample size for the training set:
- This information is not provided. The document describes clinical studies that served as part of the regulatory submission (verification and validation), not studies for training a machine learning model.
8. How the ground truth for the training set was established:
- This information is not provided, as the studies described are related to clinical validation for regulatory clearance, not the development or training of a machine learning model. If any internal models (e.g. for artifact rejection or signal processing) were trained, their training data and ground truth establishment are not detailed in this document.
(115 days)
GWJ
The ALGO Pro Newborn Hearing Screener is a mobile, noninvasive instrument used to screen infants for hearing loss. The screener uses Automated Auditory Brainstem Response (AABR®) for automated analysis of Auditory Brainstem Response (ABR) signals recorded from the patient. The screener is intended for babies between the ages of 34 weeks (gestational age) and 6 months. Babies should be well enough to be ready for discharge from the hospital, and should be asleep or in a quiet state at the time of screening. The screener is simple to operate. It does not require special technical skills or interpretation of results. Basic training with the equipment is sufficient to learn how to screen infants who are in good health and about to be discharged from the hospital. A typical screening process can be completed in 15 minutes or less. Sites appropriate for screening include the well-baby nursery, NICU, mother's bedside, audiology suite, outpatient clinic, or doctor's office.
The ALGO® Pro is a fully automated hearing screening device used to screen infants for hearing loss. It provides consistent, objective pass/refer results. The ALGO Pro device utilizes Auditory Brainstem Response (ABR) as the hearing screening technology, which allows screening of the entire hearing pathway from the outer ear to the brainstem. The ABR signal is evoked by a series of acoustic broadband transient stimuli (clicks) presented to a subject's ears using acoustic transducers and recorded by sensors placed on the skin of the patient. The ALGO Pro generates each click stimulus and presents it to the patient's ear using acoustic transducers attached to disposable acoustic earphones. The click stimulus elicits a sequence of distinguishable electrophysiological signals produced as a result of signal transmission and neural responses within the auditory nerve and brainstem of the infant. Disposable sensors applied to the infant's skin pick up this evoked response, and the signal is transmitted to the screener via the patient electrode leads. The device employs advanced signal processing technology, such as amplification, digital filtering, artifact rejection, noise monitoring, and noise-weighted averaging, to separate the ABR from background noise and from other brain activity. The ALGO Pro uses a statistical algorithm based on binomial statistics to determine if there is a response to the stimulus that matches the ABR template of a normal hearing newborn. If a response is detected that is consistent with the ABR template derived from normal hearing infants (automated auditory brainstem response technology, AABR), the device provides an automated 'Pass' result. A 'Refer' result is automatically generated if the device cannot detect an ABR response with sufficient statistical confidence or one that is consistent with the template.
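Of the signal-processing steps listed, noise-weighted averaging is easy to illustrate. The sketch below uses generic inverse-noise-power weighting so that quiet sweeps contribute more to the average; the ALGO's actual weighting scheme and noise estimator are not disclosed in this summary, so both are assumptions.

```python
import numpy as np

def noise_weighted_average(sweeps):
    # Average ABR sweeps with weights inversely proportional to each
    # sweep's estimated noise power, so that quiet sweeps dominate.
    # Generic inverse-variance weighting sketch; the device's actual
    # weighting and noise estimation are not published in this summary.
    sweeps = np.asarray(sweeps, dtype=float)      # shape (n_sweeps, n_samples)
    noise_power = sweeps.var(axis=1) + 1e-12      # crude per-sweep noise estimate
    weights = 1.0 / noise_power
    return (weights[:, None] * sweeps).sum(axis=0) / weights.sum()
```

Compared with a plain mean, this kind of weighting reduces the influence of sweeps contaminated by myogenic or acoustic noise without discarding them outright.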
Here's a breakdown of the acceptance criteria and study information for the ALGO Pro Newborn Hearing Screener, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA submission primarily focuses on demonstrating substantial equivalence to a predicate device (ALGO 5) rather than setting specific, numerical acceptance criteria for a new clinical performance study. The "acceptance criteria" here are implied through the comparison with the predicate device's established performance and the demonstration that the ALGO Pro performs comparably.
Acceptance Criterion (Implied) | Reported Device Performance (ALGO Pro / Comparative) |
---|---|
Safety | Complies with: IEC 60601-1 Ed. 3.2, IEC 60601-2-40 Ed. 2.0, IEC 60601-1-6, IEC 62366-1, IEC 62304, IEC 62133-2, IEC 60601-1-2 Ed. 4.1, IEC 60601-4-2, FCC Part 15. |
Biocompatibility | Passed Cytotoxicity, Sensitization, and Irritation tests (ISO 10993-1:2018 for limited contact). |
Mechanical Integrity | Passed drop and tumble, cable bend cycle, electrode clip cycle, power button cycle, connector mating cycle, bassinet hook cycle, and docking station latch/pogo pin cycle testing. |
Effectiveness (AABR Algorithm Performance) | Utilizes the exact same AABR algorithm as predicate ALGO 5. |
Algorithmic Sensitivity | 99.9% for each ear (using binomial statistics, inherited from ALGO AABR algorithm). |
Overall Clinical Sensitivity | 98.4% (combined results from independent, peer-reviewed clinical studies using the ALGO AABR algorithm, e.g., Peters (1986), Herrmann et al. (1995)). |
Specificity | 96% to 98% (from independent, peer-reviewed clinical studies using the ALGO AABR algorithm). |
Performance Equivalence to Predicate | Bench testing confirmed equivalence of acoustic stimuli, recording of evoked potentials, and proper implementation of ABR template and algorithm, supporting device effectiveness. |
Software Performance | Software Verification and Validation testing conducted, Basic Documentation Level provided. |
Usability | Formative and summative human factors/usability testing conducted, no concerns regarding safety and effectiveness raised. |
2. Sample Size Used for the Test Set and Data Provenance
No new clinical "test set" was used for the ALGO Pro in the context of a prospective clinical trial. The performance data for the AABR algorithm (sensitivity and specificity) are derived from previously published, peer-reviewed clinical studies that validated the underlying ALGO AABR technology.
- Sample Size for AABR Algorithm Development: The ABR template, which forms the basis of the ALGO Pro's algorithm, was determined by superimposing responses from 35 neonates to 35 dB nHL click stimuli.
- Data Provenance for ABR Template: The data for the ABR template was collected at Massachusetts Eye and Ear Infirmary during the design and development of the original automated infant hearing screener.
- Data Provenance for Clinical Performance (Sensitivity/Specificity): The studies cited (Peters, J. G. (1986) and Herrmann, Barbara S., Aaron R. Thornton, and Janet M. Joseph (1995)) are generally long-standing research from various institutions. The document doesn't specify the exact country of origin for the studies cited beyond the development of the template in the US. These studies would be retrospective relative to the ALGO Pro submission, as they describe the development and validation of the original ALGO technology.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided in the document for the studies that established the ground truth for the ABR template or the clinical performance of the ALGO AABR algorithm. The template was derived from "normal hearing" neonates, implying a clinical assessment of their hearing status, but the specifics of how that ground truth was established (e.g., specific experts, their qualifications, or methods other than the ABR itself) are not detailed within this submission summary.
4. Adjudication Method for the Test Set
Not applicable, as no new clinical "test set" requiring adjudication for the ALGO Pro itself was conducted or reported in this submission. The historical studies developing the AABR algorithm would have defined their own ground truth and validation methods, but these are not detailed here.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
Not applicable. The ALGO Pro is an automated hearing screener that provides a "Pass" or "Refer" result without requiring human interpretation of the ABR signals themselves. It is not an AI-assisted human reading device, but rather a standalone diagnostic aid.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done
Yes, a standalone performance assessment of the AABR algorithm (which is essentially the "algorithm only" component) was done indirectly through historical studies and directly through bench testing.
- The core AABR algorithm has a 99.9% algorithmic sensitivity (based on binomial statistics).
- Historically, independent clinical studies (cited) showed an overall clinical sensitivity of 98.4% and specificity of 96% to 98% for the ALGO AABR technology when used in clinical settings.
- For the ALGO Pro specifically, bench testing was performed to confirm the equivalence of the acoustical stimuli, recording of evoked potentials, and proper implementation of the ABR template and algorithm between the ALGO Pro and its predicate device (ALGO 5). This bench testing effectively confirmed the standalone performance of the ALGO Pro's algorithm against the established performance of the predicate.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For ABR Template Development: The ABR template was based on the morphology of ABR waveforms from normal hearing neonates. This implies a ground truth established by clinical assessment of "normal hearing" status.
- For Clinical Performance (Sensitivity/Specificity): The clinical studies cited (Peters, Herrmann et al.) would have established ground truth for hearing status through follow-up diagnostic audiologic evaluations, which could include behavioral audiometry, auditory steady-state response (ASSR) testing, or other objective measures (likely expert consensus based on these diagnostic tests). The document does not specify the exact ground truth methodology of these historical studies.
8. The Sample Size for the Training Set
The document states that the ABR template, which underpins the algorithm, was derived by superimposing responses from 35 neonates. This set of 35 neonates effectively served as the "training set" or foundational data for the ABR template.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the "training set" (the 35 neonates used to derive the ABR template) was established based on their status as "normal hearing" infants. This implies a determination of their hearing status through established clinical methods for neonates at the time (e.g., standard audiologic evaluation to confirm normal hearing), though the specific details of these diagnostic methods are not provided in this summary.
(197 days)
GWJ
The QSCREEN device is a hand-held, portable hearing screener intended for recording and automated evaluation of Otoacoustic Emissions (OAE) and Auditory Brainstem Responses (ABR). Distortion Product Otoacoustic Emission (DPOAE) and Transient Evoked Otoacoustic Emission (TEOAE) tests are applicable to obtain objective evidence of peripheral auditory function. ABR tests are applicable to obtain objective evidence of peripheral and retro-cochlear auditory function including the auditory nerve and the brainstem. QSCREEN is intended to be used in subjects of all ages. It is especially indicated for use in testing individuals for whom behavioral results are deemed unreliable.
The QScreen is a hand-held, portable audiometric examination device offering test methods for the measurement of Otoacoustic Emissions (OAE), including transient evoked otoacoustic emissions (TEOAE) and distortion product otoacoustic emissions (DPOAE), and auditory evoked responses such as Auditory Brainstem Responses (ABR), in patients of all ages. It has a touch screen display and can be used with different accessories, such as its Docking Station, Ear Coupler Cable, Ear probe, Insert earphone, and Electrode cable.
QScreen is a battery powered device which is charged by the Docking Station wirelessly and communicates with the Docking Station via Bluetooth. The Docking Station can be connected to a personal computer (PC) via USB cable on which patient and test data can be reviewed and managed with the optional software. Additionally, device and user profile configurations can be conducted with the software. Printing the data is also possible and carried out by a label printer that can be connected to the Docking Station. The QScreen device also contains a camera on the back side to read linear bar codes and QR codes. All materials that come into contact with human skin are selected to be biocompatible.
The operating system on the QScreen is a self-contained firmware. The user is guided through the measurement by the menu on the touch screen. The results are evaluated on the basis of signal statistics. The device offers an automatically created result, which can have the values "PASS" (Clear Response), "REFER" (No Clear Response), or "INCOMPLETE" (Test aborted).
The provided text is a 510(k) summary for the PATH MEDICAL GmbH's QScreen device. It states that "No clinical performance data was collected for the subject device QScreen. Substantial equivalence was shown through bench testing and compliance to international standards..." As such, the document does not contain the information required to answer the prompt regarding acceptance criteria and the study that proves the device meets the acceptance criteria (specifically clinical performance data).
Therefore, I cannot provide:
- A table of acceptance criteria and the reported device performance
- Sample sizes used for the test set and the data provenance
- Number of experts used to establish the ground truth
- Adjudication method
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done
- If a standalone performance study was done
- The type of ground truth used
- The sample size for the training set
- How the ground truth for the training set was established
The document explicitly states that no clinical performance data was collected to demonstrate the device meets acceptance criteria via a clinical study. Instead, substantial equivalence to a predicate device (Sentiero) was shown through:
- Bench testing: This included tests for "frequency, timing and sound level of the stimulus as well as noise resistance and the lowest response signal detectable by the device."
- Compliance to international standards: IEC 60645-6:2009 (OAE) and IEC 60645-7:2009 (ABR).
- Biocompatibility testing: According to ISO 10993-1:2018 (Cytotoxicity, Sensitization, Irritation).
- Electrical safety and electromagnetic compatibility (EMC) testing: According to IEC 60601-1:2005/AMD1:2012, IEC 60601-1-2:2014, IEC 60601-2-40: 2016.
- Software Verification and Validation Testing: According to FDA's guidance and IEC 62304:2015.
- Usability Testing: According to EN 62366:2015.
- Mechanical and Acoustic Testing: Including maximum sound level, push, drop, and mould stress relief tests, and frequency content, timing, sound level, and repetition rate of stimuli.
- Literature Review: Citing publications on ABR algorithm and automated infant screening.
The comparison to the predicate device focuses on technical characteristics, intended use (where QScreen is a subset of Sentiero's functionality), and accessories, stating that these differences "do not raise different questions of safety or effectiveness."
(178 days)
GWJ
The ALGO® 7i Newborn Hearing Screener is a hand-held, portable hearing screener intended to objectively determine the hearing status of a newborn/infant from 34 weeks gestational age to 6 months old. Babies should be well enough for hospital discharge and should be asleep or in a quiet state at the time of screening.
ALGO 7i is an audiometric examination platform which consists of the ALGO 7i device with a touch screen display together with different accessories such as the Multidata Cable, Docking Station, Patient Cable, and ATA Cable. All connectors and transducers have a special mechanically coded plug in order to ensure the correct connection to the device. All plugs of the transducers have a memory chip inside which stores the information about the respective transducer (including type of connector and calibration table). As a result, the ALGO 7i instrument can be connected flexibly to different ATA Cables while enabling the instrument to 'know' the calibration values and status of the connected Cable. This information guides the user (feedback via display) and helps to ensure correct performance of the device. The ALGO 7i is designed as a standalone examination platform and can be connected to a personal computer (PC) via USB using the Multidata Cable or the Docking Station for data review and management. The device is portable and is meant to be used mainly as a mobile device. Materials in contact with humans are selected to be biocompatible.
Additional features are direct printing of patient and measurement data on a label using label printer that can be connected via Multidata Cable or Docking Station.
The ALGO 7i offers hearing screening using AABR technology.
The measurement application is controlled by a self-contained firmware. The measurement flow is menu-guided on a touch screen. Evaluation of test results is based on signal statistics, and result information is displayed as PASS/REFER/INCOMPLETE.
The ALGO 7i is designed to be used by trained personnel in a medical or home environment to examine hearing in infants from 34 weeks (gestational age) that are ready for discharge from the hospital up to 6 months old.
Here's a breakdown of the acceptance criteria and the study proving the ALGO 7i device meets those criteria, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the ALGO 7i are primarily established through its substantial equivalence to the predicate device, ALGO 3i, especially regarding its ABR (Auditory Brainstem Response) technology and performance metrics. The text describes the algorithmic sensitivity and typical specificity observed in studies of the predicate device, which is foundational to the ALGO 7i's acceptance.
Acceptance Criteria (Derived from Predicate ALGO 3i) | Reported Device Performance (ALGO 7i and Predicate ALGO 3i) |
---|---|
Algorithmic Sensitivity for "PASS" Result (each ear) | Set to 99.9% (using binomial statistics). This means the device will issue a "PASS" result if it establishes with >99% statistical confidence that an ABR signal is present and consistent with the template. |
Minimum Sweeps for "PASS" Result | 1000 sweeps for 35 dBnHL screening; 2000 sweeps for 40 dBnHL screening. |
Maximum Sweeps before "REFER" result | Up to 15,000 noise-weighted sweeps. If a "PASS" result is not established after 15,000 sweeps, it issues a "REFER" result. |
Overall Clinical Sensitivity (based on predicate studies) | 98.4% (combined overall sensitivity from independent clinical studies of the ALGO device). |
Clinical Specificity (based on predicate studies) | Ranged from 96% to 98% in independent clinical studies of the ALGO device. |
Equivalence in Result Detection (compared to predicate) | Proven in bench testing for ALGO 7i. The ALGO 7i uses the exact same methods and parameters to evoke, record, process, and detect ABR responses as the ALGO 3i, including the weighted-binary template-matching algorithm and associated ABR template. |
Stimulus Characteristics Equivalence (compared to predicate) | Bench tests included frequency, timing, polarity, and sound level of the stimulus, demonstrating equivalence to the predicate ALGO 3i stimulus. |
Noise Resistance and Measurable Potential Equivalence (compared to predicate) | Bench tests included noise resistance and the lowest potential measurable or detectable by the device, demonstrating equivalence to the predicate ALGO 3i. Similar myogenic and acoustic noise detection and rejection, and impedance detection demonstrated for ALGO 7i to ALGO 3i. |
Safety and Effectiveness (electrical, mechanical, biocompatibility standards) | Complies with IEC 60601-1, IEC 60601-1-2, IEC 60601-2-40, EN 10993-1 (cytotoxicity, sensitization, irritation), IEC 62304, EN 1041, IEC 60645-7, EN 60645-3, EN 62366. Biocompatibility testing indicated no issues. Electrical safety and EMC compliance confirmed by external laboratories. Mechanical testing (tensile strength, flex life, push/drop/mould stress) performed. Maximum sound level remains below threatening levels even under failure. |
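The PASS/REFER thresholds in the table (statistical confidence via binomial statistics, a 15,000-sweep budget) can be sketched as a one-sided binomial test. The per-sweep "template match" abstraction and the chance-agreement rate p0 = 0.5 are assumptions; the actual weighted-binary template-matching parameters are not published in this summary.

```python
from math import comb

def binomial_p_value(n_sweeps, n_matches, p0=0.5):
    # Probability of seeing >= n_matches template agreements in n_sweeps
    # by chance alone (one-sided binomial tail). p0 = 0.5 is an assumed
    # chance-agreement rate, not a published ALGO parameter.
    return sum(comb(n_sweeps, k) * p0**k * (1 - p0)**(n_sweeps - k)
               for k in range(n_matches, n_sweeps + 1))

def screening_result(n_sweeps, n_matches, alpha=0.001, max_sweeps=15000):
    # PASS when the tail probability drops below alpha (i.e., high
    # statistical confidence that an ABR consistent with the template is
    # present); REFER once the sweep budget is exhausted without a PASS.
    if binomial_p_value(n_sweeps, n_matches) < alpha:
        return "PASS"
    return "REFER" if n_sweeps >= max_sweeps else "CONTINUE"
```

For example, `screening_result(1000, 600)` returns "PASS", since 600 agreements out of 1000 sweeps is far beyond what chance agreement at p0 = 0.5 would produce. (A production implementation would use a streaming statistic rather than an exact tail sum over thousands of sweeps.)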
Details of the Study Proving Device Meets Acceptance Criteria:
Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: No new clinical performance data was collected for the ALGO 7i. The performance evaluation relies entirely on the established performance of its predicate device, ALGO 3i, and non-clinical bench testing.
- Data Provenance (Predicate Device): The original ABR template was derived from the responses of 35 neonates to 35 dBnHL click stimuli. This data was collected at Massachusetts Eye and Ear Infirmary during the design and development of the original automated infant hearing screener (ALGO I device). Clinical studies supporting the predicate ALGO devices (e.g., ALGO 3i) are referenced in the literature review. The studies mentioned are:
- Peters, J. G. "An automated infant screener using advanced evoked response technology." The Hearing Journal 39 (1986): 25-30.
- Herrmann, Barbara S., Aaron R. Thornton, and Janet M. Joseph. "Automated infant hearing screening using the ABR: development and validation." American Journal of Audiology 4.2 (1995): 6-14.
- Nature of Data: Retrospective (referencing historical clinical studies and data from predicate devices) and prospective (new non-clinical bench testing for ALGO 7i).
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- For the original ABR template creation (for the predicate ALGO I using 35 neonates), the text does not specify the number of experts. However, it implicitly relies on the clinical expertise and methodologies employed at Massachusetts Eye and Ear Infirmary during the initial development of the ABR technology. No specific number or qualifications of experts are given for establishing the ground truth for the test set of the ALGO 7i, as its "test set" for clinical performance is essentially the historical performance data of the ALGO 3i.
- Adjudication Method for the Test Set:
- Not applicable as this was not a comparative clinical study with human readers adjudicating results for the ALGO 7i. The ALGO device itself provides a PASS/REFER/INCOMPLETE result based on its internal algorithm without human adjudication within the device's operation. The source ground truth (the ABR waveforms from the 35 neonates) would have involved clinical expertise, but the specific adjudication method (e.g., 2+1) is not described.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was done for the ALGO 7i. The device is a standalone automated screener; it is not designed to assist human readers in interpreting results. Its output is a direct PASS/REFER/INCOMPLETE.
- Therefore, there is no effect size reported for human readers improving with AI vs. without AI assistance.
- Standalone Performance:
- Yes, a standalone (algorithm only) performance was done. The ALGO 7i is entirely an algorithm-driven device that provides a direct PASS/REFER/INCOMPLETE result based on its patented signal processing technology and template matching. Its performance is evaluated based on its ability to accurately classify hearing status. The "standalone" performance metrics (sensitivity, specificity, sweeps to result) are those derived from the predicate ALGO 3i, as the ALGO 7i uses the exact same fundamental algorithm.
- Type of Ground Truth Used:
- Expert Consensus / Physiological Data: The core ground truth for the ABR template is derived from the "morphology of normal hearing, near-threshold, infant ABR waveforms" determined by superimposing responses from 35 neonates. This implies a physiological ground truth established through clinical observation and interpretation, consistent with expert understanding of normal infant hearing.
- Outcomes Data: While the original studies for the predicate device would implicitly rely on long-term outcomes to validate the clinical utility of the screening, the immediate ground truth for template development is physiological (ABR waveforms).
- Sample Size for the Training Set:
- The "training set" for the algorithm's template was generated from the ABR waveforms of 35 neonates. This serves as the basis for the fixed "template" which the algorithm matches against.
- How the Ground Truth for the Training Set Was Established:
- The ground truth for the training set (the ABR template) was established by superimposing the responses of 35 neonates to 35 dBnHL click stimuli. These neonates were presumed to have normal hearing, and their physiological responses (ABR waveforms) formed the "template" against which subsequent readings are compared. This data collection occurred at the Massachusetts Eye and Ear Infirmary during the design and development of the original ALGO device. The process involved collecting actual ABR waveforms from a cohort of what medical experts considered "normal-hearing" infants.
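The template-matching step described above, in which an averaged ABR recording is compared against a fixed template derived from the 35-neonate cohort, can be sketched in simplified form. This is an illustrative interpretation only: the sign-agreement scoring, per-sample weights, and PASS threshold below are hypothetical, not the proprietary ALGO weighted-binary algorithm.

```python
import numpy as np

def weighted_binary_template_score(waveform, template, weights):
    """Score polarity (sign) agreement between an averaged waveform and a
    fixed template, weighting each sample by importance. Illustrative only;
    not the proprietary ALGO implementation."""
    agree = np.sign(waveform) == np.sign(template)
    return float(np.sum(weights * agree) / np.sum(weights))

def screen(waveform, template, weights, pass_threshold=0.85):
    """Map the match score to a screening outcome (threshold hypothetical)."""
    score = weighted_binary_template_score(waveform, template, weights)
    return "PASS" if score >= pass_threshold else "REFER"

# Toy example: a stand-in template and a noisy re-recording of it.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 4 * np.pi, 200))
weights = np.abs(template) + 0.1        # weight the peaks more heavily
noisy = template + 0.2 * rng.standard_normal(200)
print(screen(noisy, template, weights))
```

The key design idea is that weighting by expected response amplitude makes the score tolerant of sign flips near zero crossings, where noise dominates.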
(176 days)
GWJ
The Audera Pro is intended to be used for the stimulation, recording and measurement of auditory evoked potentials, vestibular evoked myogenic potentials, auditory steady state responses and otoacoustic emissions. The device is indicated for use in the evaluation, identification, documentation and diagnosis of auditory and vestibular disorders. The device is intended to be used on patients of any age.
The Audera Pro is intended to be used by qualified medical personnel such as an audiologist, physician, hearing healthcare professional, or trained technician. The Audera Pro is intended to be used in a hospital, clinic, or other healthcare facility with a suitable quiet testing environment.
The anatomical sites of contact for auditory evoked potential (AEP) testing are the patient's ear canal (with the contact object being a sound delivery eartip or headphone, or an ear probe and eartip) and the patient's scalp and possibly other body sites (with the contact object being a bone transducer or electrodes that are capable of measuring bio-potentials). The anatomical sites of contact for vestibular evoked myogenic potential (VEMP) testing are the patient's ear canal (with the contact object being a sound delivery eartip or headphone, or an ear probe and eartip) and the patient's head and neck and possibly other body sites (with the contact object being a bone transducer or electrodes that are capable of measuring bio-potentials). The anatomical sites of contact for otoacoustic emission (DPOAE, TEOAE) testing are the patient's ear canal (with the contact object being an ear probe and eartip).
The device is a configurable platform used to aid in the screening and diagnosis of sensorineural and hearing conditions. It is capable of performing the following procedures: Auditory Evoked Potentials (EP), Auditory Steady-State Response (ASSR), Distortion Product Otoacoustic Emissions (DPOAE), Transient Evoked Otoacoustic Emissions (TEOAE), and vestibular evoked myogenic potentials (VEMP). The device system consists of a laptop PC with Windows 10 Pro, placed on top of or beside a specialized hardware implementation interface (i.e. platform) for the procedures. Software in the laptop controls the specialized hardware and collects and analyzes the resulting signals. Transducers and various accessories connect to the specialized hardware via connectors on the back of the hardware package.
The provided FDA 510(k) summary for the GSI Audera Pro describes acceptance criteria and studies primarily through comparison to predicate devices and adherence to international standards.
Here's the information broken down as requested:
1. Table of Acceptance Criteria and Reported Device Performance
The device's acceptance criteria are framed in terms of meeting the requirements of various international standards and demonstrating comparability to predicate devices. The "Reported Device Performance" column reflects the conclusions drawn from the testing against these standards or the comparison to the predicates.
Acceptance Criteria (Objective of Testing/Evaluation) | Reported Device Performance |
---|---|
Electrical Safety (ES): Demonstrate that the basic safety and essential performance requirement of the device are satisfied to ensure safe use. | Satisfied (based on compliance with IEC 60601-1) |
Electromagnetic Compatibility (EMC): Demonstrate that the basic safety and essential performance of the device is maintained in the presence of electromagnetic disturbances. | Maintained (based on compliance with IEC 60601-1-2) |
Electromyographs (EMG): Demonstrate that the basic safety and essential performance for electromyographs (myofeedback equipment, as supported by the device system) is maintained. | Maintained (based on compliance with IEC 60601-2-40) |
Calibration and Test Signal: Demonstrate that the device satisfies general requirements for calibration relative to standard reference threshold levels established by means of psychoacoustic test methods. | Satisfied (based on compliance with IEC 60645-1, IEC 60645-3, ISO 389-2, ISO 389-6, validated for various transducers) |
Otoacoustic Emissions (OAE): Ensure that measurements made under comparable test conditions are consistent, with respect to methods for testing and routine calibration for measurement of otoacoustic emissions. | Ensured (based on compliance with IEC 60645-6, by demonstrating accuracy of required frequencies and amplitudes, harmonic distortion, measurement accuracy, and presentation of results) |
EP (ABR): Ensure that measurements made under comparable test conditions are consistent with respect to characteristics and performance requirements for measurement of auditory evoked potential from the inner ear, auditory nerve, and brainstem, evoked by acoustic stimuli of short duration. | Ensured (based on compliance with IEC 60645-7, by evaluating measuring system, stimulus types, test quality assuring system, frequency accuracy, hearing level control linearity, stimulus pulse, SPL accuracy levels, and maximum transducer output level for various transducers) |
Usability: Demonstrate that process used to analyze, specify, design, verify and validate usability as it relates to basic safety and essential performance of the device is in compliance with the IEC 62366 standard. | Compliant (based on compliance with IEC 60601-1-6) |
Module Comparison: Demonstrate that performance of device in comparison to the primary predicate device (K163326) is comparable. | Comparable (Bench testing using simulator, with evaluation of device output upon activation of each module. Results indicated that end-to-end performance of device system is comparable to predicate despite observed differences in performance, using Bland-Altman analyses and correlation coefficient comparison). |
Software Verification and Validation: As recommended by the Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (effective 5/11/05). | Satisfied (for a Moderate Level of Concern (LOC)) |
Cybersecurity Risk Management: With implementation of modifications to procedures and labeling, as recommended by the Guidance Content of Premarket Submissions for Management of Cybersecurity in Medical Devices (effective 10/2/2014). | Satisfied |
Mechanical Requirements Evaluation: To demonstrate that functional mechanical product design requirements are satisfied. | Satisfied |
2. Sample size used for the test set and the data provenance
The document does not specify a "sample size" in terms of patient data or clinical cases for a test set. The studies described are primarily non-clinical bench testing and comparative technical assessments against predicate devices and international standards. Therefore, there's no mention of data provenance (e.g., country of origin) as clinical data was not used.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. As clinical testing was not performed and the studies were non-clinical bench tests and comparisons to predicate devices, there was no need for experts to establish ground truth in the context of patient diagnoses or outcomes. The "ground truth" for these tests would be the established specifications and performance characteristics defined by the relevant international standards and the predicate devices.
4. Adjudication method for the test set
Not applicable. Given that the studies were non-clinical bench tests and comparisons to predicate devices, there was no independent adjudication of results in the way it would be done for a clinical study with multiple human readers. The evaluation methods included "Bland-Altman analyses and correlation coefficient comparison" for the module comparison, which are statistical methods rather than adjudication by experts.
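The two statistical methods named here are straightforward to compute. The sketch below shows Bland-Altman bias and 95% limits of agreement alongside a Pearson correlation coefficient; the paired measurement values are fabricated for illustration and do not come from the submission.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements from two
    devices: mean difference (bias) and 95% limits of agreement."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired simulator outputs from a new device and a predicate;
# the values are illustrative only.
new_dev   = [1.02, 0.98, 1.10, 0.95, 1.05, 1.00]
predicate = [1.00, 1.00, 1.08, 0.97, 1.03, 1.01]
bias, lo, hi = bland_altman(new_dev, predicate)
r = np.corrcoef(new_dev, predicate)[0, 1]
print(f"bias={bias:.3f}, LoA=({lo:.3f}, {hi:.3f}), r={r:.3f}")
```

In this framework, comparability to a predicate means a bias near zero, narrow limits of agreement, and a high correlation coefficient.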
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was performed. The device is not an AI-assisted diagnostic tool that would typically involve human readers interpreting results. It is an "Evoked Response Auditory Stimulator" used for physiological measurements.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
This concept doesn't directly apply because the device is hardware and software for stimulation, recording, and measurement of biological responses, not an algorithm providing a diagnosis or interpretation in a standalone capacity that would typically interface with human practitioners in an AI context. The performance evaluations described are of the device's ability to accurately generate and measure these physiological signals.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the non-clinical studies described:
- International Standards: The primary "ground truth" for electrical safety, EMC, EMG, calibration, OAE, and ABR measurements are the specifications and performance requirements detailed within the referenced IEC and ISO standards (e.g., IEC 60601-1, IEC 60645 series).
- Predicate Device Performance: For the module comparison, the performance characteristics of the legally marketed predicate devices (K163326 and K061443) served as the benchmark for demonstrating comparability.
- Product Design Requirements: For software, cybersecurity, and mechanical evaluations, the established design requirements and FDA guidance documents served as the "ground truth" or criteria for successful performance.
8. The sample size for the training set
Not applicable. This device is not an AI/ML model that undergoes a training phase with a dataset. It is a medical device system built on established principles of electronics, signal processing, and audiometry.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.
(305 days)
GWJ
SmartEP is an evoked response testing and diagnostic device, that is capable of eliciting, acquiring, and measuring auditory, somatosensory, visual, and vestibular evoked myogenic potential data, as well as providing nerve stimulation and monitoring.
The intended use of the SmartEP device is to objectively record evoked responses from patients of all ages upon the presentation of sensory stimuli. The product is indicated for use as a diagnostic aid and adjunctive tool in sensory related disorders (i.e., auditory, somatosensory, visual, and vestibular) and in surgical procedures for inter-operative nerve monitoring.
The SmartEP system is intended to be used by trained personnel in a hospital, nursery, clinic, audiologist's, EP technologist's, surgeon's, or physician's office, operating room, or other appropriate setting.
The SmartEP device records evoked potentials by delivering auditory, somatosensory, visual, or nerve sensory stimuli and using signal averaging techniques to extract the evoked potential from the uncorrelated electrical activity of the brain (electroencephalography or EEG) and muscles (electromyography or EMG). The device has options for Auditory Evoked Potentials (AEPs), Somatosensory Evoked Potentials (SEPs), Visual Evoked Potentials (VEPs), Vestibular Evoked Myogenic Potentials (VEMPs), and nerve stimulation and monitoring. The SEP, VEP, and nerve stimulation and monitoring functionality, operating principles, and intended uses are the same as on the predicate SmartEP device. On the SmartEP device with VEMP modality, the AEP modality has been modified to facilitate VEMP recording and analysis with optional biofeedback. The VEMP features added are comparable to those found in the ICS Chartr 200 predicate device. The VEMP modality does not provide a diagnosis. Diagnosis is made by a medical professional.
The SmartEP device is a Windows OS personal computer (PC) based system composed of software modules, an external main hardware unit, an optional biofeedback box, and peripheral stimulus delivery and recording components and accessories. The biofeedback box, stimulation, and recording devices are connected to the main hardware unit which is connected to the PC via a Universal Serial Bus cable. Software on the computer is used for the user interface to facilitate test parameter specification and for data display and analysis purposes.
The SmartEP with VEMP device has an optional biofeedback hardware accessory (VEMP feedback box) or uses a computer monitor for indicating EMG levels during VEMP testing. The VEMP feedback box has LEDs that indicate that the measured EMG level is either below the minimum value set by the user (Low - orange LED), or is between the minimum and maximum values set by the user (Satisfactory - green LED), or is above the maximum value as set by the user (High - orange LED). The computer monitor displays a bar graph and pictorial face that indicates that the measured EMG level is either below the minimum value set by the user (Low - small pink bar and sad face), or is between the minimum and maximum values set by the user (Satisfactory - medium green bar and happy face), or is above the maximum value as set by the user (High - large pink bar and sad face). Recording of VEMPs can be set to occur when the EMG level is within the user programmed satisfactory range.
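The three-state feedback logic described above reduces to simple threshold comparisons against the user-set minimum and maximum. A minimal sketch, with hypothetical function names and units:

```python
def emg_feedback(level_uv, min_uv, max_uv):
    """Classify a measured EMG level against user-set bounds, mirroring the
    three-state feedback described above (thresholds are user-configurable)."""
    if level_uv < min_uv:
        return "Low"           # orange LED / small pink bar, sad face
    if level_uv > max_uv:
        return "High"          # orange LED / large pink bar, sad face
    return "Satisfactory"      # green LED / medium green bar, happy face

def should_record(level_uv, min_uv, max_uv):
    """Recording can be gated so sweeps are accepted only while the EMG
    level is in the satisfactory range."""
    return emg_feedback(level_uv, min_uv, max_uv) == "Satisfactory"

print(emg_feedback(30, 50, 150))    # "Low"
print(emg_feedback(90, 50, 150))    # "Satisfactory"
print(should_record(200, 50, 150))  # False
```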
The provided document describes the substantial equivalence determination for the SmartEP device, focusing on its Vestibular Evoked Myogenic Potential (VEMP) testing modality compared to a predicate device. The information primarily addresses performance testing related to VEMP repeatability and reproducibility, rather than a typical AI model's acceptance criteria based on sensitivity/specificity/accuracy or an MRMC study.
Therefore, many of the requested fields cannot be directly answered as they pertain to AI/machine learning evaluation paradigms which are not the subject of this 510(k) summary. However, I will answer all applicable points based on the provided text.
Acceptance Criteria and Device Performance (VEMP Modality Only)
The document does not explicitly state "acceptance criteria" in terms of pre-defined numerical thresholds for performance metrics like sensitivity, specificity, or accuracy, as this is not an AI/diagnostic algorithm in the conventional sense. Instead, the performance is demonstrated through comparisons of repeatability and reproducibility metrics (specifically, correlation values of VEMP waveforms) to a legally marketed predicate device. The general acceptance criterion is demonstrating that the SmartEP device's VEMP performance is "similar or higher" than the predicate device.
Table of Performance for VEMP Waveform Correlation (SmartEP vs. Predicate)
Performance Metric | SmartEP Device (Range of Mean Correlation) | Predicate Device (Mean Correlation) | Acceptance Criterion (Implicit) |
---|---|---|---|
cVEMP Waveform Test-Retest Repeatability (Session 1 Mean Correlation) | 0.911 - 0.915 | Not explicitly stated for this metric in predicate data. | Similar or higher than predicate (if available for comparison) and demonstrating high repeatability. |
cVEMP Waveform Test-Retest Repeatability (Session 2 Mean Correlation) | 0.921 - 0.941 | Not explicitly stated for this metric in predicate data. | Similar or higher than predicate (if available for comparison) and demonstrating high repeatability. |
oVEMP Waveform Test-Retest Repeatability (Session 1 Mean Correlation) | 0.944 - 0.956 | Not explicitly stated for this metric in predicate data. | Similar or higher than predicate (if available for comparison) and demonstrating high repeatability. |
oVEMP Waveform Test-Retest Repeatability (Session 2 Mean Correlation) | 0.951 - 0.955 | Not explicitly stated for this metric in predicate data. | Similar or higher than predicate (if available for comparison) and demonstrating high repeatability. |
cVEMP Waveform Reproducibility (Session 1 vs. Session 2 Mean Correlation) | 0.847 - 0.868 | 0.915 (Right), 0.916 (Left) | Similar or higher than predicate. |
oVEMP Waveform Reproducibility (Session 1 vs. Session 2 Mean Correlation) | 0.902 - 0.940 | 0.926 (Right), 0.93 (Left) | Similar or higher than predicate. |
Reported Device Performance:
The document states: "The mean correlation values for both cVEMP and oVEMP recordings obtained with the SmartEP with VEMP device for both test-retest repeatability and reproducibility are similar or higher than those reported for the ICS Chartr EP 200 with VEMP device." This generally indicates that the SmartEP device met the implicit acceptance criterion of performing comparably or better than the predicate device for VEMP testing.
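The repeatability and reproducibility figures above are mean Pearson correlations between pairs of recorded waveforms. A minimal sketch of that computation on synthetic data (the waveform shape, latencies, and noise level are invented for illustration):

```python
import numpy as np

def waveform_correlation(rec1, rec2):
    """Pearson correlation between two recordings of the same response;
    this is the metric behind the repeatability figures reported above."""
    return float(np.corrcoef(rec1, rec2)[0, 1])

# Synthetic stand-in for a biphasic VEMP waveform recorded twice with
# independent noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 50, 500)                                   # ms
signal = np.exp(-((t - 13) / 3) ** 2) - 0.8 * np.exp(-((t - 23) / 4) ** 2)
rec1 = signal + 0.05 * rng.standard_normal(t.size)
rec2 = signal + 0.05 * rng.standard_normal(t.size)
print(f"test-retest r = {waveform_correlation(rec1, rec2):.3f}")
```

A correlation near 1.0 indicates that the two recordings captured essentially the same response morphology despite independent noise.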
Study Details
- Sample sizes used for the test set and the data provenance:
- Study 1 (P1 latency from cVEMP): 215 normal subjects.
- Study 2 (cVEMP and oVEMP waveforms): 10 adult normal subjects.
- Data Provenance: The document does not explicitly state the country of origin. It describes the studies as "performed with using the SmartEP with VEMP device," suggesting they were conducted by or for the manufacturer. The studies appear to be prospective to evaluate the device's performance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This device is an evoked potential system used to record physiological responses, not an AI diagnostic algorithm that produces an interpretation requiring expert ground truth for classification/detection tasks. The "ground truth" here is the physiological response itself, as measured by the device. Therefore, this question is not directly applicable to the type of device and study described.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable, as the "ground truth" is the recorded physiological signal, not an interpretation requiring human adjudication.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No, an MRMC comparative effectiveness study was not done. This is a medical device for measuring physiological responses, not an AI-assisted diagnostic tool that humans would use to interpret images or signals. The study focuses on the device's measurement consistency.
- If a standalone (i.e., algorithm-only without human-in-the-loop) performance evaluation was done:
This device is a standalone measurement system. The "performance testing data" section describes the device's inherent ability to record VEMP waveforms repeatably and reproducibly, independent of a specific human interpretation loop for a diagnostic task.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The "ground truth" is the physiological VEMP response itself, measured by the device. The studies characterize the device's ability to consistently and accurately capture these responses (repeatability and reproducibility). There isn't a "diagnostic ground truth" in the sense of a disease state confirmed by a gold standard like pathology, as the device is for recording diagnostic data, not making the diagnosis itself. It is stated that "The VEMP modality does not provide a diagnosis. Diagnosis is made by a medical professional."
- The sample size for the training set:
Not applicable. This is not an AI/machine learning device that requires a "training set" to learn from data. It is a measurement instrument.
- How the ground truth for the training set was established:
Not applicable, as there is no "training set."
(244 days)
GWJ
The Eclipse with VEMP (Vestibular Evoked Myogenic Potential) is intended for vestibular evoked myogenic potential testing to assist in the assessment of vestibular function. The target population for Eclipse with VEMP includes patients aged from 8 years and up.
The device is to be used only by qualified medical personnel with prior knowledge of the medical and scientific facts underlying the procedure.
The Eclipse with VEMP is audiometric equipment intended to perform various Otoacoustic Emissions (OAEs) and Auditory Evoked Potential evaluations. The Eclipse is operated solely from PC based software modules. The Eclipse platform performs the physical measurements. The protocols are created in the software modules. The Eclipse consists of a hardware platform, a preamplifier, stimulation transducers and recording electrodes.
VEMP evaluations are tests of the vestibular portion of the inner ear and acoustic nerve, evoked with an auditory stimulation. The evoked response results in a potential recorded from the sternocleidomastoid (neck) muscles or the inferior oblique (eye) muscles. VEMP is not a test of the neck or eye musculature directly; the clinician is interested in the vestibular anatomy which triggers the response. The cervical Vestibular Evoked Myogenic Potential (cVEMP) is an evoked potential measured from the sternocleidomastoid (SCM) muscle and the ocular VEMP (oVEMP) is an evoked potential measured from the inferior oblique muscle. Both tests are used to assess the otolith organs (saccule and utricle) of the vestibular system and their afferent pathways and assist medical practitioners in the diagnosis of various balance disorders.
Summary: VEMP is an auditory evoked potential, like ABR, that can be obtained using any commercially available EP system. The addition of the VEMP module to the Eclipse will make it possible for clinicians to conduct VEMP tests while using EMG (electromyography) monitoring and scaling. The VEMP function of the Eclipse with VEMP does not make a diagnosis. It only assists the medical professional.
Here's a breakdown of the acceptance criteria and study information for the Interacoustics A/S Eclipse with VEMP, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal "acceptance criteria" in terms of predefined numerical thresholds for performance metrics. Instead, it presents the results of internal clinical evaluations (test-retest reliability and reproducibility studies) and compares them to similar reported values in a predicate device. The implicit acceptance criterion is that the device demonstrates comparable or better reliability and reproducibility than the predicate device.
Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Eclipse with VEMP) | Predicate Device (K143670) Performance (for comparison) |
---|---|---|---|
cVEMP Test-Retest Reliability | High correlation between repeated measurements | Mean: 0.9374; Median: 0.9562; IQR: 0.0547 | N/A (no direct predicate comparison for test-retest in this doc) |
oVEMP Test-Retest Reliability | High correlation between repeated measurements | Mean: 0.9095; Median: 0.9280; IQR: 0.0738 | N/A (no direct predicate comparison for test-retest in this doc) |
cVEMP Reproducibility (Different Days, Testers, Electrode Placements) | High correlation between measurements on different days | Mean: 0.8794; Median: 0.9235; IQR: 0.0955 | Mean: 0.9146 (R) / 0.9162 (L) |
oVEMP Reproducibility (Different Days, Testers, Electrode Placements) | High correlation between measurements on different days | Mean: 0.8923; Median: 0.9100; IQR: 0.1112 | Mean: 0.926 (R) / 0.93 (L) |
Conclusion from document: The mean and median values for both test-retest reliability and reproducibility are high, indicating good performance. For reproducibility, the Eclipse with VEMP's mean correlation values are "similar to or higher than" those reported by the predicate device (GN Otometrics K143670).
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 14 normal adults (for both cVEMP and oVEMP).
- Data Provenance: The studies were conducted at two internal sites: one in Denmark and one in the United States. They appear to be prospective internal evaluations specifically for this device and its VEMP module.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not mention the use of experts to establish a "ground truth" for the test set in the traditional sense of diagnostic accuracy. The studies evaluated the reliability and reproducibility of the device's measurements, not its diagnostic accuracy against a separate, definitive truth. The VEMP module "does not make a diagnosis" but "only assists the medical professional." Therefore, there isn't a stated ground truth established by experts for these specific studies.
4. Adjudication Method for the Test Set
Not applicable. As noted above, these studies focused on the intrinsic reliability and reproducibility of the device's waveform measurements, not diagnostic outcomes requiring expert adjudication. The analysis involved correlation of waveforms.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, an MRMC comparative effectiveness study involving human readers with/without AI assistance was not mentioned or performed in the provided document. The device itself is an "Evoked Response Auditory Stimulator," a diagnostic tool that assists medical professionals, rather than an AI-powered diagnostic system that interprets images or data.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The studies performed evaluate the device's ability to consistently acquire and output VEMP waveforms. The device itself is a data acquisition system. While it has software for processing signals (e.g., filtering, averaging), the "performance" described is the consistency of the recorded physiological responses, not an algorithm's standalone diagnostic interpretation. The document explicitly states the VEMP module "does not make a diagnosis."
7. The Type of Ground Truth Used
There is no traditional "ground truth" (like pathology or clinical outcomes) used in these studies. The studies aimed to establish the test-retest reliability (consistency of measurements taken repeatedly within a short timeframe) and reproducibility (consistency of measurements taken on different occasions, potentially by different operators with slightly different setups) of the VEMP waveforms produced by the device. The comparison point is the device's own prior measurement or the measurement from a different session.
8. The Sample Size for the Training Set
The document describes internal clinical evaluations, not the training of a machine learning algorithm. Therefore, there is no "training set" in the context of AI/ML.
9. How the Ground Truth for the Training Set was Established
Not applicable, as there is no training set for an AI/ML algorithm mentioned in the document.
(303 days)
GWJ
The ICS Chartr EP 200 with VEMP is indicated for auditory evoked potential testing as an aid in assessing hearing loss and lesions in the auditory pathway. The Vestibular Evoked Myogenic Potential is indicated for vestibular evoked potential testing as an aid in assessing vestibular function in adult patients. The device is to be used only by qualified medical personnel with prior knowledge of the medical and scientific facts underlying the procedure.
"Vestibular Evoked Myogenic Potentials (VEMPs) are short latency electromyograms (EMGs) evoked by high level acoustic stimuli recorded from surface electrodes over the tonically contracted sternocleidomastoid (SCM) muscle." Akin FW & Murnane OD (2001).
The ICS Chartr EP 200 with VEMP is used to test the auditory and vestibular functions of children and adults in a hospital, in an Ear, Nose and Throat Clinic or in an Audiology office. The ICS Chartr EP 200 system measures evoked potentials from the patient using repeated auditory stimuli and averaging EEG or EMG activity in order to extract the response from the noise, resulting in an analysis of the auditory/vestibular system functions.
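The signal-averaging principle mentioned here (averaging repeated stimulus-locked sweeps so that uncorrelated EEG/EMG noise cancels while the time-locked evoked response remains) can be sketched as follows; the response shape, noise level, and sweep count are invented for illustration.

```python
import numpy as np

def average_sweeps(sweeps):
    """Average stimulus-locked sweeps: the time-locked evoked response is
    preserved while uncorrelated background activity averages toward zero
    (residual noise shrinks roughly as 1/sqrt(N))."""
    return np.mean(np.asarray(sweeps, dtype=float), axis=0)

# Toy demonstration: a small synthetic evoked response buried in much
# larger background noise.
rng = np.random.default_rng(42)
t = np.linspace(0, 10, 300)                    # ms
evoked = 0.5 * np.sin(2 * np.pi * t / 10)      # stand-in response
noise_sd = 5.0                                 # background EEG/EMG level
sweeps = [evoked + noise_sd * rng.standard_normal(t.size) for _ in range(2000)]
avg = average_sweeps(sweeps)
resid = float(np.std(avg - evoked))            # ~ noise_sd / sqrt(2000)
print(f"residual noise after averaging 2000 sweeps: {resid:.3f}")
```

This is why EP systems report the number of sweeps collected: each quadrupling of sweep count halves the residual noise, at the cost of test time.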
The ICS Chartr EP 200 is a PC-based system, which consists of software modules for installation on a PC, an isolation transformer, a hardware platform, pre-amp, a mains adapter, stimulation devices and recording devices. The stimulation and recording devices are connected to the preamp, which is connected to the hardware platform, which is connected to the PC via a USB cable - no hardware installation inside the PC is required. The PC and hardware platform are powered from the isolation transformer which is powered from the mains.
One added item as compared to the standard EP200 is the VEMP monitor. The Chartr EP (USB) VEMP monitor assesses the level of tonic EMG and displays if the level is adequate or inadequate. The monitor light will display the following based on the EMG level:

- Low (blue) - EMG level is below the Min value
- Good (green) - EMG level is between the Min value and the Max value
- High (amber) - EMG level is above the Max value
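The monitor's behavior as described amounts to a simple three-state threshold classification. A minimal sketch, where the function name and the Min/Max values are placeholders rather than device settings:

```python
def emg_monitor_state(emg_level_uv, min_uv=50.0, max_uv=200.0):
    """Classify a tonic EMG level the way the VEMP monitor light is described:
    below Min -> 'low' (blue), within [Min, Max] -> 'good' (green),
    above Max -> 'high' (amber). Thresholds here are illustrative only."""
    if emg_level_uv < min_uv:
        return "low"    # blue light: contraction too weak
    if emg_level_uv > max_uv:
        return "high"   # amber light: contraction too strong
    return "good"       # green light: adequate tonic contraction
```

Keeping the EMG within the Good band matters because VEMP amplitude scales with tonic muscle contraction, so out-of-band EMG makes responses hard to compare.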
Summary: VEMP is an evoked potential (EP) just like ABR obtained using any commercially available EP system. The unique feature with the ICS Chartr EP 200 is the ability to perform EMG monitoring during VEMP data collection.
The VEMP function of the EP 200 does not make a diagnosis. The diagnosis is made by a medical professional.
The provided text describes a 510(k) premarket notification for the ICS Chartr EP 200 with VEMP, an auditory and vestibular evoked potential testing device. It does not contain information about an AI/ML medical device. Therefore, a comprehensive answer to the user's request for acceptance criteria and a study proving an AI/ML device meets them cannot be fully extracted from the given text.
However, I can extract the information relevant to the device's validation and substantial equivalence, and then address the remaining points by stating the information is not present in the provided document.
Here's an attempt to answer the user's request based only on the provided text, highlighting what is available and what is not:
Based on the provided FDA 510(k) summary for the ICS Chartr EP 200 with VEMP:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly define "acceptance criteria" in a quantitative table format for performance metrics relevant to an AI/ML device (e.g., sensitivity, specificity, AUC). Instead, it focuses on demonstrating the reproducibility of VEMP waveforms and comparing the new device's features and safety/effectiveness to a predicate device.
The closest equivalent to "reported device performance" in the context of this device is the correlation values for VEMP waveform reproducibility.
| Performance Metric (Reproducibility of VEMP Waveform) | Acceptance Criteria (Implicit) | Reported Device Performance (Correlation Values) |
|---|---|---|
| Normal Subjects (cVEMP) | Implied to be "good" for clinical utility based on comparison to other EPs | CORR R (entire window): 0.893448276 |
| | | CORR L (entire window): 0.903448276 |
| | | 5-35 ms CORR R: 0.914655172 |
| | | 5-35 ms CORR L: 0.916206897 |
| Patients with Disorders (cVEMP) | Lower correlation expected due to absent/abnormal responses, but still demonstrably present when possible | CORR R (entire window): 0.751964286 |
| | | CORR L (entire window): 0.75637931 |
| | | 5-35 ms CORR R: 0.775172414 |
| | | 5-35 ms CORR L: 0.805 |
| Normal Subjects (oVEMP) | Implied to be "good" for clinical utility | CORR R: 0.897 |
| | | CORR L: 0.8915 |
| | | 4-20 ms R: 0.926 |
| | | 4-20 ms L: 0.93 |
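The correlations above are reported both over the entire recording window and over a restricted latency window (e.g. 5-35 ms for cVEMP, where the P13/N23 response lies). A hedged sketch of how such a windowed test-retest correlation might be computed, assuming Pearson correlation on time-aligned traces (the submission does not specify the exact method):

```python
import numpy as np

def window_correlation(wave1, wave2, fs, t_start, t_end):
    """Pearson correlation between two repeated evoked-potential traces,
    restricted to the latency window [t_start, t_end] in seconds.
    fs is the sampling rate in Hz; traces are assumed time-aligned."""
    i0 = int(round(t_start * fs))
    i1 = int(round(t_end * fs))
    return float(np.corrcoef(wave1[i0:i1], wave2[i0:i1])[0, 1])
```

Restricting the window this way focuses the reproducibility measure on the response region rather than on pre-stimulus baseline noise.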
2. Sample size used for the test set and the data provenance
- Test Set Sample Size:
- 60 normal cVEMP subjects
- 58 pathologic cVEMP subjects
- 20 normal oVEMP subjects
- Data Provenance: Studies were collected at two different facilities, one in the USA and one in Canada. The document states these were "clinical studies," implying they were prospective, but does not explicitly state "retrospective" or "prospective."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not present in the provided text. The device assesses auditory and vestibular function, and the "ground truth" seems to be the VEMP waveform itself and its reproducibility, rather than a clinical diagnosis established by experts. The diagnosis is stated to be made by a medical professional.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not present in the provided text. The study focuses on correlation and reproducibility of waveforms, not on classification or diagnostic accuracy adjudicated by multiple readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, an MRMC study was not done. This device is an evoked potential testing system, not an AI-powered diagnostic algorithm assisting human readers.
- Effect Size of AI assistance: Not applicable, as this is not an AI/ML device.
6. If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done
- Standalone Performance: Not applicable in the context of AI/ML. The device's "performance" is its ability to reliably acquire and display VEMP waveforms. The clinical conclusion and diagnosis are explicitly stated to be made by a medical professional. "The VEMP function of the EP 200 does not make a diagnosis. The diagnosis is made by a medical professional."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The "ground truth" implicitly used for the reproducibility study is the VEMP waveform itself and its consistency across repeated measurements. The study aimed to demonstrate that the device could reliably produce these waveforms. For patients with disorders, the "pathologic" status serves as a descriptor for that cohort, with results indicating lower correlation due to the nature of their conditions (absent or abnormal VEMPs).
8. The sample size for the training set
This information is not present in the provided text. This is not an AI/ML device that requires a "training set" in the machine learning sense. Clinical studies presented were to confirm reproducibility, not to train an algorithm.
9. How the ground truth for the training set was established
This information is not present in the provided text, as it is not an AI/ML device with a training set and corresponding ground truth.
(81 days)
GWJ
The Echo-Screen III hearing screener models are based upon otoacoustic emission (OAE) and auditory brainstem response (ABR) technology.
The device is intended to screen hearing for newborns through adults, including geriatric patients. The device does not measure hearing per se, but helps to determine whether or not a hearing loss may be present.
The Echo-Screen III product family consists of handheld, automated OAE and ABR based hearing systems which are easy to use. The measurement flow is menu guided and the evaluation is based upon signal statistics. The Echo-Screen III devices are intended to be used by trained personnel in a medical or school environment. The Echo-Screen III models are not intended for fitting assistive listening devices such as hearing aids or cochlear implants.
The Echo-Screen III hearing screener is a portable, handheld, battery-operated device that can detect hearing loss using Otoacoustic Emission or Auditory Brainstem Response screening technologies. The Echo-Screen III may be configured to support one or any combination of TEOAE, DPOAE, and AABR technologies.
The device represents the next generation of the Echo-Screen product line with key enhancements over the previously cleared predicate Echo-Screen T, TA, TD, TDA, TC [K013977], hereinafter referred to as the Echo-Screen T series; specifically, use of the Android operating system, programming upgrade to C and Java languages, addition of a color screen and built-in full hardware keyboard plus icons and on-screen touch keyboard, optional barcode scanner, Li-ion rechargeable battery, and inclusion of a docking station for battery charging and data transfer.
The provided document K141446 for the Echo-Screen III does not contain a study that establishes acceptance criteria and then proves the device meets those criteria through clinical or non-clinical testing. Instead, the document states that the device's TEOAE, DPOAE, and ABR screening test performance is equivalent to the performance of the predicate Echo-Screen T series (K013977).
Therefore, the specific information requested in the prompt (acceptance criteria, reported device performance, sample sizes, ground truth establishment, expert qualifications, adjudication methods, MRMC studies, standalone performance, and training set details) is not explicitly provided within this 510(k) summary for the Echo-Screen III as it relies on substantial equivalence to a predicate device.
The document states:
- "Clinical Tests: N/A" – indicating no new clinical studies were conducted for this submission.
- "Nonclinical Tests: Design verification and validation were performed to assure that the Echo-Screen III meets its performance specifications and demonstrates equivalence to the specified predicate device."
This means the acceptance criteria and performance are implicitly tied to the predicate device's established performance. To fully answer your question, one would need to refer to the 510(k) submission for the predicate device, K013977 (Echo-Screen T, TA, TD, TDA, TC), which would likely contain the underlying performance data and acceptance criteria based on which the substantial equivalence claim for the Echo-Screen III is made.
Without that predicate device's submission, I cannot create the table or provide the detailed study information you requested for the Echo-Screen III. The provided document focuses on describing the technological characteristics and enhancements of the Echo-Screen III compared to its predicate, and concludes that it is substantially equivalent.
(160 days)
GWJ
The Type 1077 device is indicated for use in the recording and automated analysis of human physiological data (screening auditory brainstem responses and/or otoacoustic emissions) necessary for the diagnosis of auditory and hearing-related disorders.
Distortion Product Otoacoustic Emissions and Transient Evoked Otoacoustic Emissions: The Type 1077 DPOAE module and TEOAE module can be used for patients of all ages, from children to adults, including infants and geriatric patients. It is especially indicated for use in testing individuals for whom behavioral audiometric results are deemed unreliable, such as infants, young children, and cognitively impaired adults.
Auditory Brainstem Response: The Type 1077 ABR module is especially intended for infants from 34 weeks (gestational age) up to 6 months of age.
When the device is used to screen infants, they should be asleep or in a quiet state at the time of screening. The device is intended for use by audiologists, ENTs, and other healthcare professionals.
The device is identical to our own device described in K122067; only the indications for use have changed, with the age range expanded. Different models of the device are capable of the following list of tests: AccuScreen TE (TEOAE), AccuScreen DP (DPOAE), AccuScreen TE/DP (TEOAE and DPOAE), AccuScreen ABR (ABR), AccuScreen ABR/TE (ABR and TEOAE), AccuScreen ABR/DP (ABR and DPOAE), AccuScreen ABR/TE/DP (ABR, TEOAE and DPOAE). The measurement application is controlled from a self-contained firmware (software) module installed in the handheld device. The firmware module can be configured to allow different OAE measurement types (DPOAE and/or TEOAE) by a license key stored in the device. For automated OAE measurements, the handheld device uses an OAE probe, designed and manufactured by PATH Medical GmbH. The OAE probe is fitted with an ear-tip (constructed of biocompatible material) and inserted in the ear canal of the patient. The AccuScreen device plays stimulus sounds in the ear canal via small speakers in the OAE probe and measures the patient's response to the stimulus sounds via a microphone in the probe. The measured response is processed by the AccuScreen device using statistics to help determine whether or not a hearing loss may be present. When the OAE measurement is a DPOAE measurement, the stimulus signal is composed of two pure tone signals, each presented by a separate speaker in the OAE probe. When the OAE measurement is a TEOAE measurement, the stimulus signal is a series of broadband clicks presented by one speaker in the OAE probe.
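The two-tone DPOAE stimulus described above can be sketched as follows. The f2/f1 ratio of about 1.22 is a common convention in DPOAE testing generally, not a value taken from this submission; the sampling rate and frequencies are likewise illustrative:

```python
import numpy as np

fs = 48_000                        # assumed sampling rate, Hz
f2 = 4000.0                        # higher primary tone, Hz (illustrative)
f1 = f2 / 1.22                     # conventional DPOAE primary ratio f2/f1 ~= 1.22
t = np.arange(0, 0.5, 1 / fs)      # 0.5 s of stimulus

# Each primary tone is intended for its own speaker in the probe,
# so the two tones mix acoustically in the ear canal, not electrically.
tone_speaker_1 = np.sin(2 * np.pi * f1 * t)
tone_speaker_2 = np.sin(2 * np.pi * f2 * t)

# The distortion product commonly measured lies at 2*f1 - f2,
# below both primaries, where the microphone looks for an emission.
dp_freq = 2 * f1 - f2
```

Presenting each primary through a separate speaker avoids intermodulation distortion in the hardware being mistaken for a cochlear emission.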
Here's an analysis of the acceptance criteria and the study details for the MADSEN AccuScreen Type 1077, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The clinical study's primary objective was to demonstrate comparable performance to the predicate device (Echo-Screen T, TA, TD, TDA, TC, K013977) across an expanded age range for OAE measurements. The acceptance criteria essentially translate to reaching a high percentage agreement with the predicate device.
| Test Type | Acceptance Criteria (Implicit from comparator study design) | Reported Device Performance (Agreement with Predicate) |
|---|---|---|
| DPOAE | Assumed high agreement with predicate across age groups | Overall 93.8% agreement |
| TEOAE | Assumed high agreement with predicate across age groups | Overall 91.5% agreement |
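The overall agreement figures above are presumably simple percent agreement between the two screeners' pass/refer calls on the same ears. A minimal sketch of that calculation (the function name and label strings are assumptions for illustration):

```python
def percent_agreement(device_results, predicate_results):
    """Overall percent agreement between two screeners' pass/refer calls.
    Inputs are equal-length sequences of labels such as 'pass'/'refer'."""
    if len(device_results) != len(predicate_results):
        raise ValueError("result lists must be paired per ear")
    matches = sum(d == p for d, p in zip(device_results, predicate_results))
    return 100.0 * matches / len(device_results)
```

Note that percent agreement alone does not distinguish which device is "right" in discrepant cases, which is why the study re-tested disagreements with a diagnostic OAE device.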
2. Sample Size and Data Provenance for Test Set
- Sample Size for Test Set: 130 ears were tested. (This implicitly refers to 65 individuals if testing both ears.)
- Data Provenance: Not explicitly stated, but the study was conducted to compare against a predicate device, which implies a prospective data collection for the comparison. The country of origin is not mentioned.
3. Number of Experts and Qualifications for Ground Truth for Test Set
- The ground truth for the primary comparison was the predicate device's result.
- In cases of discrepancy between the device under review (Type 1077) and the predicate, a "diagnostic OAE device" was used to re-test the subjects. The summary does not specify the number or qualifications of experts involved in operating this diagnostic device or interpreting its results to resolve discrepancies.
4. Adjudication Method for the Test Set
- The primary method was a direct comparison between the Type 1077 and the predicate device.
- For discrepant results (where Type 1077 and the predicate did not agree), a "diagnostic OAE device" was used for re-testing. This acts as an adjudication method, essentially using a third, presumably more accurate or established, method to determine the "correct" result in ambiguous cases. The document does not specify if expert consensus was involved in interpreting the diagnostic OAE device results.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not done. This study focuses on device performance comparison rather than human reader improvement with or without AI assistance. The device is an automated screening tool, not an AI-assisted diagnostic aid for human readers.
6. Standalone Performance Study
- Yes, a standalone performance was done in the sense that the device's output (Pass/Refer for DPOAE and TEOAE) was directly compared to the predicate device's output. The goal was to establish that the Type 1077 provides results comparable to a legally marketed equivalent device without human intervention in the interpretation of the screening result.
7. Type of Ground Truth Used
- The primary "ground truth" for the comparison was the result from the predicate device (Echo-Screen).
- For discrepant cases, the ground truth was derived from re-testing using a "diagnostic OAE device." This implies a more definitive diagnostic result was used to resolve discrepancies between the two screening devices.
8. Sample Size for the Training Set
- The document does not provide information about a training set. This is a medical device clearance for an auditory stimulator and analysis system, not a machine learning algorithm that typically requires a distinct training set. The device likely relies on established signal processing and statistical methods for OAE analysis, which were developed and validated during its initial design (pre-K122067 and K122067).
9. How the Ground Truth for the Training Set Was Established
- Not applicable, as no training set is mentioned in the provided documentation.