Search Results
Found 12 results
510(k) Data Aggregation
(56 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The Natus BrainWatch System, including the Natus BrainWatch Headband, is intended to record and store EEG signals and present these signals visually to assist trained medical staff in making neurological diagnoses in patients aged 2 years and older.
The device does not provide any diagnostic conclusions about the subject's condition and does not provide any automated alerts of an adverse clinical event. The Natus BrainWatch System is intended for use within a professional healthcare facility or clinical research environment. The Natus BrainWatch Headband is intended for single-patient use.
The Natus BrainWatch system is a reliable, mobile, and easy-to-use EEG device intended to record, store, and visually present EEG signals to assist trained medical staff in making neurological diagnoses in patients.
The system includes a touchscreen tablet as its primary interface. The Natus BrainWatch Headband is a single-use disposable headpiece with an integrated array of 10 passive electrodes that are applied to the patient's head to record EEG signals when connected to an amplifier.
The Natus BrainWatch System consists of the following components: Tablet, IV Pole Handle, Amplifier, Headband(s), Gel Pods, and a Mobile Application:
- Touchscreen Tablet with charger
- Single-use disposable elastic fabric headband with 10 electrodes (available in sizes Small, Medium, and Large) containing:
  - Hydroflex patch with 2 built-in electrodes
  - 8 electrodes, labeled L1-L4 and R1-R4, attached to gel pods used to improve impedance levels
- Wireless amplifier that attaches to the headband and connects to the tablet via Bluetooth
- IV pole handle that holds the tablet for a hands-free experience
- Gel pods that attach to the electrodes to improve impedance levels
The Natus BrainWatch System is a portable 10-channel EEG monitoring system. 10 patient electrodes are used to record the 10 channels. Channels 1-5 should be used for the patient's left hemisphere, with channel 1 at the front of the patient's head and channel 5 at the back. Channels 6-10 should be used for the patient's right hemisphere, with channel 6 at the front of the patient's head and channel 10 at the back.
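The left/right, front-to-back channel convention described above can be captured in a small helper. This is an illustrative sketch of the layout exactly as stated (channels 1-5 left hemisphere front to back, channels 6-10 right hemisphere front to back), not vendor software; the function names are hypothetical.

```python
def hemisphere(channel: int) -> str:
    """Channels 1-5 record the left hemisphere; channels 6-10 the right."""
    if not 1 <= channel <= 10:
        raise ValueError("channel must be between 1 and 10")
    return "left" if channel <= 5 else "right"

def front_to_back_position(channel: int) -> int:
    """Position within the hemisphere: 1 = front of head, 5 = back."""
    return channel if channel <= 5 else channel - 5
```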
EEG recording files are transferred wirelessly to a computer from the Natus BrainWatch Tablet using a Wi-Fi connection. The EEG sessions from Natus BrainWatch are stored using a cloud-based solution, which allows the end user to view studies at a later date. Recorded sessions can be reviewed remotely on a computer using the NeuroWorks EEG software.
The device is a portable, 10-channel EEG monitoring system. The device connects to a headband consisting of 10 patient electrodes which are used to form the 10 channels and may be used with any scalp EEG electrodes.
The system acquires the EEG signals of a patient and presents the EEG signals in visual formats in real time. The EEG recordings are displayed on a computer or tablet using an EEG viewer software. The visual signals assist trained medical staff to make neurological diagnoses. It does not provide any diagnostic conclusion about the subject's condition and does not provide any automated alerts of an adverse clinical event.
A Micro-USB cable is used to connect the Natus BrainWatch System to a power adapter for charging. Bluetooth is used to connect the amplifier to the tablet to transfer EEG recording files. When the Natus BrainWatch System is connected to a power adapter or a computer, all EEG acquisition functions are automatically disabled.
The Natus BrainWatch System is a portable 10-channel EEG monitoring system intended to record, store, and visually present EEG signals to assist trained medical staff in making neurological diagnoses in patients aged 2 years and older. It does not provide diagnostic conclusions or automated alerts of adverse clinical events.
Here's a breakdown of the acceptance criteria and the study verifying the device's performance:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text details the performance verification against various standards rather than specific quantitative acceptance criteria for clinical performance. The focus is on demonstrating safety and electrical performance equivalence to predicate devices.
Acceptance Criteria Category | Reported Device Performance (Summary) |
---|---|
Electrical Safety | Verified in accordance with IEC 60601-1-6:2010/AMD2:2020, IEC 60601-1:2005/AMD2:2020, and IEC 80601-2-26:2019. |
Electromagnetic Compatibility | Verified in accordance with IEC 60601-1-2 Ed 4.1. Underwent Wireless Coexistence testing per ANSI C63.27-2021. FCC Part 15 certified. |
Packaging & Handling | Successfully passed verification as per ASTM D4169-22. |
Bench Verification & Validation (Functional/Performance) | Successfully passed performance verification and validation in accordance with internal requirements and specifications. Met defined acceptance criteria for functional and performance characteristics. |
EEG Specific Performance | Met requirements for basic safety and essential performance of electroencephalographs per IEC 80601-2-26. Met Performance Criteria of FDA Guidance "Cutaneous Electrodes for Recording Purposes - Performance Criteria for Safety and Performance Based Pathway". |
Battery Safety | Tested per IEC 62133. |
Biocompatibility | Patient contacting components (including conductive electrolyte gel) verified with Irritation, Sensitization, and Cytotoxicity testing per ISO 10993-5:2009, ISO 10993-23:2021, and ISO 10993-10:2021. |
Shelf-life | Shelf-life testing performed. |
Wireless Technology | Utilizes Bluetooth 5.0 technology (similar to reference device CGX Quick-20m K203331). No interference with other electronic devices; coexists well in a typical medical environment. |
Analogue-to-Digital Conversion | 24-Bit Delta-Sigma (Same as Predicate 1). |
Sampling Rate | 250 Hz (Same as Predicate 1). |
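As a rough illustration of what the 24-bit resolution in the table implies, the sketch below converts a signed 24-bit ADC code to microvolts. Only the 24-bit code range comes from the table; the reference voltage and gain are hypothetical placeholder values (the submission does not state them). With the example values, one count corresponds to roughly 0.012 µV.

```python
def counts_to_microvolts(code: int, vref_volts: float = 2.5, gain: float = 24.0) -> float:
    """Convert a signed 24-bit ADC code (-2**23 .. 2**23 - 1) to microvolts.

    vref_volts and gain are hypothetical example values, not device specifications.
    """
    if not -2**23 <= code < 2**23:
        raise ValueError("code out of 24-bit signed range")
    volts_per_count = vref_volts / gain / 2**23  # full-scale input divided by code range
    return code * volts_per_count * 1e6
```

At the stated 250 Hz sampling rate, each channel produces 250 such samples per second.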
2. Sample size used for the test set and the data provenance
The document does not specify a separate "test set" in the context of clinical data for algorithmic performance. The testing described is primarily bench and engineering verification against international standards. Therefore, information about sample size, country of origin, or retrospective/prospective nature of a clinical test set is not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable, as the provided information does not describe a study involving clinical ground truth establishment by experts for a test set. The device assists trained medical staff in making diagnoses but does not provide diagnostic conclusions itself.
4. Adjudication method for the test set
Not applicable, as there is no described clinical study involving a test set and ground truth adjudication.
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is described. The device is a data acquisition and visualization tool; it does not explicitly feature AI for interpretation or diagnostic assistance.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not applicable. The "Natus BrainWatch System" is an electroencephalograph (EEG) device that records and visually presents signals for trained medical staff to interpret. It does not provide any diagnostic conclusions or automated alerts, meaning it is not a standalone diagnostic algorithm.
7. The type of ground truth used
Not applicable. The testing focuses on engineering performance, safety, and functional compliance rather than diagnostic accuracy against a clinical ground truth (e.g., pathology, outcomes data, or expert consensus).
8. The sample size for the training set
Not applicable, as the document does not describe the development or validation of an AI algorithm with a training set. The device's function is to capture and display raw EEG signals.
9. How the ground truth for the training set was established
Not applicable, as there is no described training set or AI algorithm for which ground truth would need to be established.
(115 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The ALGO Pro Newborn Hearing Screener is a mobile, noninvasive instrument used to screen infants for hearing loss. The screener uses Automated Auditory Brainstem Response (AABR®) for automated analysis of Auditory Brainstem Response (ABR) signals recorded from the patient. The screener is intended for babies between the ages of 34 weeks (gestational age) and 6 months. Babies should be well enough to be ready for discharge from the hospital, and should be asleep or in a quiet state at the time of screening. The screener is simple to operate. It does not require special technical skills or interpretation of results. Basic training with the equipment is sufficient to learn how to screen infants who are in good health and about to be discharged from the hospital. A typical screening process can be completed in 15 minutes or less. Sites appropriate for screening include the well-baby nursery, NICU, mother's bedside, audiology suite, outpatient clinic, or doctor's office.
The ALGO® Pro is a fully automated hearing screening device used to screen infants for hearing loss. It provides consistent, objective pass/refer results. The ALGO Pro device utilizes Auditory Brainstem Response (ABR) as the hearing screening technology, which allows screening of the entire hearing pathway from the outer ear to the brainstem. The ABR signal is evoked by a series of acoustic broadband transient stimuli (clicks) presented to a subject's ears using acoustic transducers and recorded by sensors placed on the skin of the patient.

The ALGO Pro generates each click stimulus and presents it to the patient's ear using acoustic transducers attached to disposable acoustic earphones. The click stimulus elicits a sequence of distinguishable electrophysiological signals produced as a result of signal transmission and neural responses within the auditory nerve and brainstem of the infant. Disposable sensors applied to the infant's skin pick up this evoked response, and the signal is transmitted to the screener via the patient electrode leads.

The device employs advanced signal processing techniques such as amplification, digital filtering, artifact rejection, noise monitoring, and noise-weighted averaging to separate the ABR from background noise and from other brain activity. The ALGO Pro uses a statistical algorithm based on binomial statistics to determine whether there is a response to the stimulus that matches the ABR template of a normal-hearing newborn. If a response is detected that is consistent with the ABR template derived from normal-hearing infants (automated auditory brainstem response technology, AABR), the device provides an automated 'Pass' result. A 'Refer' result is automatically generated if the device cannot detect an ABR response with sufficient statistical confidence or one that is consistent with the template.
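The template-matching decision described above can be illustrated generically. This is a sketch, not Natus's proprietary AABR algorithm: it assumes epochs have already been filtered and artifact-rejected, treats a positive correlation with the template as a "match," and applies a binomial test against the 50% chance rate. The epoch format, the `alpha` threshold, and the helper names are all hypothetical.

```python
import math

def binomial_p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more matches under pure noise."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def screen(epochs, template, alpha=1e-3):
    """'Pass' if significantly more epochs than chance correlate positively with the template."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    matches = sum(1 for e in epochs if dot(e, template) > 0)
    return "Pass" if binomial_p_at_least(matches, len(epochs)) < alpha else "Refer"
```

The quoted 99.9% "algorithmic sensitivity" later in this summary reflects the same idea: the binomial model bounds how often noise alone could mimic a template-consistent response.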
Here's a breakdown of the acceptance criteria and study information for the ALGO Pro Newborn Hearing Screener, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA submission primarily focuses on demonstrating substantial equivalence to a predicate device (ALGO 5) rather than setting specific, numerical acceptance criteria for a new clinical performance study. The "acceptance criteria" here are implied through the comparison with the predicate device's established performance and the demonstration that the ALGO Pro performs comparably.
Acceptance Criterion (Implied) | Reported Device Performance (ALGO Pro / Comparative) |
---|---|
Safety | Complies with: IEC 60601-1 Ed. 3.2, IEC 60601-2-40 Ed. 2.0, IEC 60601-1-6, IEC 62366-1, IEC62304, IEC 62133-2, IEC 60601-1-2 Ed. 4.1, IEC 60601-4-2, FCC Part 15. |
Biocompatibility | Passed Cytotoxicity, Sensitization, and Irritation tests (ISO 10993-1:2018 for limited contact). |
Mechanical Integrity | Passed drop and tumble, cable bend cycle, electrode clip cycle, power button cycle, connector mating cycle, bassinet hook cycle, and docking station latch/pogo pin cycle testing. |
Effectiveness (AABR Algorithm Performance) | Utilizes the exact same AABR algorithm as predicate ALGO 5. |
Algorithmic Sensitivity | 99.9% for each ear (using binomial statistics, inherited from ALGO AABR algorithm). |
Overall Clinical Sensitivity | 98.4% (combined results from independent, peer-reviewed clinical studies using the ALGO AABR algorithm, e.g., Peters (1986), Herrmann et al. (1995)). |
Specificity | 96% to 98% (from independent, peer-reviewed clinical studies using the ALGO AABR algorithm). |
Performance Equivalence to Predicate | Bench testing confirmed equivalence of acoustic stimuli, recording of evoked potentials, and proper implementation of ABR template and algorithm, supporting device effectiveness. |
Software Performance | Software Verification and Validation testing conducted, Basic Documentation Level provided. |
Usability | Formative and summative human factors/usability testing conducted, no concerns regarding safety and effectiveness raised. |
2. Sample Size Used for the Test Set and Data Provenance
No new clinical "test set" was used for the ALGO Pro in the context of a prospective clinical trial. The performance data for the AABR algorithm (sensitivity and specificity) are derived from previously published, peer-reviewed clinical studies that validated the underlying ALGO AABR technology.
- Sample Size for AABR Algorithm Development: The ABR template, which forms the basis of the ALGO Pro's algorithm, was determined by superimposing responses from 35 neonates to 35 dB nHL click stimuli.
- Data Provenance for ABR Template: The data for the ABR template was collected at Massachusetts Eye and Ear Infirmary during the design and development of the original automated infant hearing screener.
- Data Provenance for Clinical Performance (Sensitivity/Specificity): The studies cited (Peters, J. G. (1986) and Herrmann, Barbara S., Aaron R. Thornton, and Janet M. Joseph (1995)) are generally long-standing research from various institutions. The document doesn't specify the exact country of origin for the studies cited beyond the development of the template in the US. These studies would be retrospective relative to the ALGO Pro submission, as they describe the development and validation of the original ALGO technology.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided in the document for the studies that established the ground truth for the ABR template or the clinical performance of the ALGO AABR algorithm. The template was derived from "normal hearing" neonates, implying a clinical assessment of their hearing status, but the specifics of how that ground truth was established (e.g., specific experts, their qualifications, or methods other than the ABR itself) are not detailed within this submission summary.
4. Adjudication Method for the Test Set
Not applicable, as no new clinical "test set" requiring adjudication for the ALGO Pro itself was conducted or reported in this submission. The historical studies developing the AABR algorithm would have defined their own ground truth and validation methods, but these are not detailed here.
5. Whether a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. The ALGO Pro is an automated hearing screener that provides a "Pass" or "Refer" result without requiring human interpretation of the ABR signals themselves. It is not an AI-assisted human reading device, but rather a standalone diagnostic aid.
6. Whether a Standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone performance assessment of the AABR algorithm (which is essentially the "algorithm only" component) was done indirectly through historical studies and directly through bench testing.
- The core AABR algorithm has a 99.9% algorithmic sensitivity (based on binomial statistics).
- Historically, independent clinical studies (cited) showed an overall clinical sensitivity of 98.4% and specificity of 96% to 98% for the ALGO AABR technology when used in clinical settings.
- For the ALGO Pro specifically, bench testing was performed to confirm the equivalence of the acoustical stimuli, recording of evoked potentials, and proper implementation of the ABR template and algorithm between the ALGO Pro and its predicate device (ALGO 5). This bench testing effectively confirmed the standalone performance of the ALGO Pro's algorithm against the established performance of the predicate.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For ABR Template Development: The ABR template was based on the morphology of ABR waveforms from normal hearing neonates. This implies a ground truth established by clinical assessment of "normal hearing" status.
- For Clinical Performance (Sensitivity/Specificity): The clinical studies cited (Peters, Herrmann et al.) would have established ground truth for hearing status through follow-up diagnostic audiologic evaluations, which could include behavioral audiometry, auditory steady-state response (ASSR) testing, or other objective measures (likely expert consensus based on these diagnostic tests). The document does not specify the exact ground truth methodology of these historical studies.
8. The Sample Size for the Training Set
The document states that the ABR template, which underpins the algorithm, was derived by superimposing responses from 35 neonates. This set of 35 neonates effectively served as the "training set" or foundational data for the ABR template.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the "training set" (the 35 neonates used to derive the ABR template) was established based on their status as "normal hearing" infants. This implies a determination of their hearing status through established clinical methods for neonates at the time (e.g., standard audiologic evaluation to confirm normal hearing), though the specific details of these diagnostic methods are not provided in this summary.
(134 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (Xltek)
General ophthalmic imaging including retinal, corneal, and external imaging.
Photodocumentation of pediatric ocular diseases, including retinopathy of prematurity (ROP).
Screening for Type 2 pre-threshold retinopathy of prematurity (ROP) (zone 1, stage 1 or 2, without plus disease, or zone 2, stage 3, without plus disease), or treatment-requiring ROP, defined as Type 1 ROP (zone 1, any stage, with plus disease; zone 1, stage 3 without plus disease; or zone 2, stage 2 or 3, with plus disease), or threshold ROP (at least 5 contiguous or 8 non-contiguous clock hours of stage 3 in zone 1 or 2, with plus disease) * in 35-37 week postmenstrual infants.
The RetCam Envision system is a contact-type wide-field fundus ophthalmic imaging system used for general ophthalmic imaging including retinal, corneal, and external imaging; photodocumentation of pediatric ocular diseases, including retinopathy of prematurity (ROP); and screening for Type 2 pre-threshold retinopathy of prematurity (ROP) (zone 1, stage 1 or 2, without plus disease, or zone 2, stage 3, without plus disease), or treatment-requiring ROP, defined as Type 1 ROP (zone 1, any stage, with plus disease; zone 1, stage 3 without plus disease; or zone 2, stage 2 or 3, with plus disease), or threshold ROP (at least 5 contiguous or 8 non-contiguous clock hours of stage 3 in zone 1 or 2, with plus disease) * in 35-37 week postmenstrual infants. A fundus camera comprised of a handpiece, detachable lens piece, LED light sources, control panel, footswitch, and application software running on a PC is used to acquire still images and video of the eye. The LED light is used to illuminate the retina uniformly, and the image is transferred from the handpiece to the PC for display, storage, review, and transfer. The camera focus and image light intensity, as well as image capture, are controlled by the user via a button panel on the cart or a footswitch. Controls are also available via the keyboard, mouse, and touchscreen.
The provided text describes a 510(k) premarket notification for the RetCam Envision ophthalmic camera, outlining its indications for use, technological characteristics, and comparison to a predicate device. However, it does not include a study specifically designed to demonstrate the device's performance against detailed acceptance criteria for its clinical indications (e.g., screening for ROP using a defined metric like sensitivity/specificity). Instead, the document focuses on demonstrating substantial equivalence to a predicate device through engineering and safety testing.
Therefore, many of the requested details about a study proving the device meets clinical acceptance criteria cannot be extracted from this document. The information primarily relates to electrical safety, electromagnetic compatibility, packaging, and internal performance verification against specifications, rather than clinical performance for its stated indications for use.
Here's an attempt to answer based on the provided text, highlighting what is present and what is missing:
1. A table of acceptance criteria and the reported device performance
The document provides acceptance criteria in terms of compliance with various engineering and safety standards, and internal performance specifications. It does not provide quantitative clinical acceptance criteria (e.g., specific sensitivity, specificity, or image quality metrics for ROP screening) nor a study demonstrating performance against such criteria.
Acceptance Criteria Category | Specific Standard/Requirement | Reported Device Performance |
---|---|---|
Electrical Safety | IEC 60601-1-6: 2010, Am1: 2013; IEC 62366: 2007, Am1: 2014 | Verified for performance in accordance with the standard. |
Electromagnetic Compatibility | IEC 60601-1-2 Edition4.0: 2014-02 | Verified for performance in accordance with the standard. |
Packaging and Handling | ISTA-2B: Packaged Products weighing over 150 lbs (68 kg) | Successfully passed packaging and handling verification. |
Performance Verification & Validation (Bench Testing) | Internal requirements and specifications (functional and performance characteristics), including: Image Comparison Test, Optics Verification and Validation Test, Software Test, Mechanical Design Test, Light Safety Test, EMC and Electrical Safety Test, Biocompatibility Test, Packaging ISTA Test | Successfully passed performance verification and validation; met defined acceptance criteria. |
Clinical Indication Performance | (Not specified in terms of quantitative metrics like sensitivity/specificity for ROP screening) | (Not a clinical study; document states "Verification and Validation were performed to ensure no new questions of safety or effectiveness are raised. The results of these activities demonstrate that the RetCam Envision is as safe, as effective, and performs as well as or better than the predicate device.") |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes engineering bench testing and internal verification/validation. It does not mention a clinical test set sample size, data provenance, or whether it was retrospective or prospective. The "Performance Testing – Bench Verification & Validation" refers to internal testing of the device itself.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Given that no clinical test set for ROP screening performance is described, there is no information provided on experts or ground truth establishment relevant to clinical indications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
As no clinical test set is detailed, no adjudication method for a clinical ground truth is specified.
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document makes no mention of a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss AI assistance or improvement of human readers. The device described is an ophthalmic camera, not an AI-powered diagnostic tool.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is an imaging system; it does not feature an AI algorithm for standalone diagnosis based on the provided text. Therefore, no standalone algorithm performance study was reported.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the technical performance aspects, the "ground truth" was likely established by reference to engineering specifications and standard requirements (e.g., electrical safety standards, packaging standards). For clinical ground truth for indications like ROP screening, this information is not provided because a clinical performance study is not described.
8. The sample size for the training set
Since no AI algorithm is detailed, there is no mention of a training set or its sample size.
9. How the ground truth for the training set was established
As there is no training set described, this information is not applicable.
Summary of what the document does provide regarding "studies":
The document primarily describes a comprehensive set of engineering and safety verification and validation activities to demonstrate that the RetCam Envision meets its predetermined specifications and applicable standards, and that it is substantially equivalent to existing predicate devices. These activities include:
- Electrical Safety testing: Compliance with IEC 60601-1-6 and IEC 62366.
- Electromagnetic Compatibility testing: Compliance with IEC 60601-1-2.
- Packaging and Handling Verification: Compliance with ISTA-2B.
- Bench Testing (Verification & Validation): Internal testing covering Image Comparison, Optics, Software, Mechanical design, Light Safety, EMC, Electrical Safety, Biocompatibility, and Packaging.
The "study that proves the device meets the acceptance criteria" in this context refers to these internal engineering and safety tests, which concluded that "the RetCam Envision system complies with its predetermined specifications and the applicable standards." The document does not present a clinical performance study with defined clinical acceptance criteria for its ROP screening indications.
(46 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
Natus NeuroWorks is EEG software that displays physiological signals. The intended user of this product is a qualified medical practitioner trained in electroencephalography who will exercise professional judgment in using the information. The NeuroWorks EEG software allows acquisition, display, archive, review, and analysis of physiological signals.
- The Seizure Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
- The Spike Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
- aEEG, Burst Suppression, Envelope, Alpha Variability, and Spectral Entropy trending functionalities included in NeuroWorks are intended to assist the user while monitoring the state of the brain. The automated event marking function of NeuroWorks is not applicable to these analysis features.
- NeuroWorks also includes the display of a quantitative EEG plot, Density Spectral Array (DSA), which is intended to help the user monitor and analyze the EEG waveform. The automated event marking function of NeuroWorks is not applicable to DSA.
- This device does not provide any diagnostic conclusion about the patient's condition to the user.
Natus NeuroWorks is electroencephalography (EEG) software that displays physiological signals. The software platform is designed to work with Xltek and other select Natus amplifiers (headboxes). Software add-ons and optional accessories let you customize your system to meet your specific clinical EEG monitoring needs.
The provided document describes the Natus NeuroWorks software, an EEG software with functionalities including seizure and spike detection, and various trending capabilities.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of quantitative acceptance criteria and corresponding reported device performance with specific metrics (e.g., sensitivity, specificity, accuracy thresholds for seizure/spike detection). Instead, the demonstration of equivalence for the new trend features relies on qualitative comparison to a predicate device.
The "Comments" column in Table 1: Substantial Equivalence, Trends and other features implicitly states the performance goal for the newly added or enhanced features: to be "Equivalent" or "Same" as the specified predicate device. For example, for Burst Suppression, Envelope Trend, Spectral Entropy, Spectral Edge, Alpha Variability, and R-R interval trend, the comment is "Equivalent: Feature added to Natus NeuroWorks Subject device. With this implementation the Natus NeuroWorks Subject device is now equivalent to NicoletOne and Moberg Predicate devices."
For the DSA / Spectrogram feature, the comment indicates that the feature was already available but improved in terms of common naming, additional color scales for better contrast, and improved spectral resolution (64Hz vs 30Hz), making it "equivalent to NicoletOne predicate device."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a separate "test set" for performance evaluation of the trend features as typically understood in a clinical study context. Instead, for the newly added or improved trend features (Burst Suppression, Envelope, Spectral Entropy, Spectral Edge, Alpha Variability, DSA), the performance evaluation involved showing that the "resulting graphs are identical" when using the same study data as examples from the NicoletOne predicate device.
The document does not provide information on the country of origin of this "same study data" or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number or qualifications of experts used to establish ground truth for the trend feature comparison. The comparison relies on visual identity of trend plots against a predicate device.
For the Seizure Detection and Spike Detection components, the indications for use state they are "intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures/spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces." This implies that the 'ground truth' for these features is ultimately the interpretation by "qualified clinical practitioners," but the document doesn't detail how this was established for the performance study.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any formal adjudication method for the performance evaluation of the trend features. The comparison relies on direct graphical comparisons.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human-reader performance with AI assistance versus without
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described. The document states: "There were no clinical studies performed for this submission."
The device functions as "computer-assisted tools" for marking electrographic events to "assist qualified clinical practitioners." However, no study measuring improvement in human reader performance with this assistance is presented.
6. If a standalone (i.e. algorithm only, without human-in-the-loop performance) was done
While the document doesn't explicitly refer to "standalone performance" metrics for seizure and spike detection (e.g., sensitivity, specificity of the algorithm alone), the performance testing for the trend features was essentially a standalone comparison: the algorithm's output (trend graph) was compared to the predicate's algorithm output using the same input data. The "identical" nature of the graphs suggests successful replication of the predicate's standalone trending functionality.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the trend features (Burst Suppression, Envelope, Spectral Entropy, Spectral Edge, Alpha Variability, DSA), the "ground truth" implicitly used for comparison was the output of the predicate device's algorithms on the same raw EEG data. The goal was to demonstrate that the Natus NeuroWorks algorithms produced graphically identical or equivalent trends.
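Under this comparison scheme, "ground truth" reduces to numerical agreement between two trend curves computed from the same input. A toy illustration on a synthetic spectrum, comparing two hypothetical formulations of a 95% spectral-edge-frequency point (neither is the actual Natus or NicoletOne algorithm):

```python
import numpy as np

freqs = np.linspace(0.0, 64.0, 129)        # 0.5 Hz bins (assumed grid)
psd = np.exp(-freqs / 10.0)                # synthetic, roughly 1/f-like spectrum
cum = np.cumsum(psd) / np.sum(psd)         # normalized cumulative power

# Two independent formulations of the 95% spectral edge frequency:
sef_a = freqs[np.searchsorted(cum, 0.95)]  # nearest-bin ("subject") version
sef_b = np.interp(0.95, cum, freqs)        # interpolated ("predicate") version

# Agreement to within one frequency bin is the numerical analogue of the
# "identical graphs" comparison described in the document.
print(abs(sef_a - sef_b) <= 0.5)
```

In practice an equivalence argument of this kind would be run over whole recordings, not a single spectrum, but the comparison logic is the same.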
For the Seizure Detection and Spike Detection, the ground truth is ultimately "electrographic seizures/spikes" as interpreted by "qualified clinical practitioners," but the method of establishing this ground truth for validation is not detailed.
8. The sample size for the training set
The document does not provide information on the sample size for the training set for any of the algorithms. It states that the software was "designed and developed according to a robust software development process" and "rigorously verified and validated," but omits details about machine learning model training.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for the training set was established. Given the lack of details on training data and methods, this information is not available in the provided text.
(171 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The Natus Photic Stimulator is indicated for photic activation of the EEG study and in the generation of visual evoked potentials.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Photic Stimulator is used by trained medical staff in a medical environment to apply photic flashes to the patient during neurophysiology studies such as electroencephalographic (EEG) studies, where it is used as an activation procedure to test photosensitivity related to epilepsy.
The Natus Photic Stimulator is typically used by EEG technicians in the hospital environment in fixed, mobile, or portable systems, and with patients of all ages. The lamp head assembly is mounted on an arm allowing easy placement 30 cm from the patient's face. The arm is ergonomically designed so that the lamp can be moved and rotated toward the patient using a handgrip on the lamp.
Trigger pulses applied to the trigger input of the Natus Photic Stimulator generate 1-millisecond photic flashes at specific frequencies, typically in the range of 0.5 Hz to 60 Hz, from a white light-emitting diode (LED) lamp. The basic operating mode generates a single flash per trigger pulse. Flash intensity ranges from 22,000 lux up to 75,000 lux measured at 30 cm from the LED lamp at the position of highest intensity, corresponding to a calculated 3,520 lux to 12,000 lux at 75 cm. The frequency and intensity of the flashes are controlled by the acquisition software.
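As a sanity check, the quoted 75 cm values follow from the 30 cm measurements via the inverse-square law (treating the LED lamp as an approximately point source):

```python
def scale_lux(lux_at_near, near_cm, far_cm):
    # Inverse-square falloff: illuminance scales with (d_near / d_far)^2.
    return lux_at_near * near_cm**2 / far_cm**2

print(scale_lux(22_000, 30, 75))   # 3520.0 lux — the stated 75 cm minimum
print(scale_lux(75_000, 30, 75))   # 12000.0 lux — the stated 75 cm maximum
```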
It can also be used along with evoked potential devices for stimulating visual evoked potentials.
This document describes the Natus Photic Stimulator, a device used in neurophysiology studies. The information provided focuses on the device's technical specifications and compliance with regulatory standards, rather than clinical performance or AI integration.
Therefore, the following information regarding acceptance criteria and study details cannot be fully extracted as it pertains to clinical performance and AI, which are not the primary focus of this submission. The document primarily reports on technical and safety performance testing.
Here's what can be extracted based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not detail specific acceptance criteria for clinical performance (e.g., sensitivity, specificity, accuracy) akin to what would be provided for a diagnostic AI device. Instead, the "acceptance criteria" are compliance with various technical and safety standards.
| Acceptance Criteria (Standards) | Reported Device Performance |
| --- | --- |
| Electrical Safety: IEC 60601-1: 2005, Am1: 2012 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
| Electromagnetic Compatibility: IEC 60601-1-2: 2007 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
| Electromagnetic Compatibility: IEC 60601-2-40: 2016 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
| Bench Performance Testing: Internal requirements | Results indicate that the Natus Photic Stimulator complies with its predetermined specifications and the applicable standards. |
| Bench Performance Testing: IEC 60601-1-6: 2010, Am1: 2013 | Verified for performance in accordance with internal requirements and the applicable clauses of the standard. |
| Bench Performance Testing: IEC 60601-2-40: 2016 | Verified for performance in accordance with internal requirements and the applicable clauses of the standard. |
| Bench Performance Testing: IEC 62366: 2007, Am1: 2014 | Verified for performance in accordance with internal requirements and the applicable clauses of the standard. |
| Bench Performance Testing: IEC 62471: 2006 | Verified for performance in accordance with internal requirements and the applicable clauses of the standard. |
| Bench Performance Testing: ANSI Z80.36-2016 | Verified for performance in accordance with internal requirements and the applicable clauses of the standard. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided. The performance testing described is bench testing against technical and safety standards, not a clinical study on a patient population.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable as the submission describes device safety and performance testing against engineering standards, not a clinical evaluation requiring expert interpretation of ground truth in patient data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable. The performance testing involves engineering verification and validation, not clinical adjudication of patient cases.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human-reader performance with AI assistance versus without
There is no mention of AI assistance or a MRMC study in this document. This submission is for a photic stimulator, which is a hardware device for neurophysiology studies, not an AI software.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This is not applicable. The device is a hardware photic stimulator, and the performance testing focuses on its compliance with safety and technical standards as a standalone physical device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the device's performance is compliance with established international and national technical, electrical, electromagnetic, usability, and photobiological safety standards (e.g., IEC 60601 series, IEC 62366, IEC 62471, ANSI Z80.36). This is verified through objective measurements and tests against those standards, not through clinical ground truth like pathology or expert consensus on patient outcomes.
8. The sample size for the training set
This is not applicable. This device is not an AI algorithm trained on data.
9. How the ground truth for the training set was established
This is not applicable. This device is not an AI algorithm trained on data.
(133 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (Xltek)
The Natus Brain Monitor Amplifier is intended to be used as an electroencephalograph: to acquire, display, store and archive electrophysiological signals. The amplifier should be used in conjunction with Natus NeuroWorks™/ SleepWorks™ software to acquire scalp and intracranial electroencephalographic (EEG) signals as well as polysomnographic (PSG) signals. The Natus Brain Monitor Amplifier is intended to be used by trained medical professionals, and is designed for use in clinical environments such as hospital rooms, epilepsy monitoring units, intensive care units, and operating rooms. It can be used with patients of all ages, but is not designed for fetal use.
The Natus Brain Monitor family of amplifiers are intended to be used as an electroencephalograph: to acquire, display, store and archive electrophysiological signals. The amplifier should be used in conjunction with Natus NeuroWorks™/SleepWorks™ software to acquire electroencephalographic (EEG) signals as well as polysomnographic (PSG) signals. The Natus Brain Monitor family of amplifiers are intended to be used by trained medical professionals, and are designed for use in clinical environments such as hospital rooms, clinics, epilepsy monitoring units, intensive care units, and operating rooms. It can be used with patients of all ages, but is not designed for fetal use.
The Natus Brain Monitor devices (Natus Embla NDx, Natus Embla SDx) comprise a base unit and a breakout box. Each is part of a system made up of a personal computer, software, a photic stimulator, an isolation transformer, video and audio equipment, networking equipment, and mechanical supports. The amplifiers can acquire EEG and other physiological signals from electrodes placed on the head and body, as well as from accessories such as pulse oximeters and respiratory effort and airflow sensors. The amplifiers include sensor inputs for respiratory effort, airflow, and snoring, along with an integrated pressure sensor and pulse oximeter module. These signals are digitized and transmitted to the personal computer running the Natus NeuroWorks/SleepWorks software, where they are displayed and can be recorded to the computer's local storage or to remote networked storage for later review.
Here's a breakdown of the acceptance criteria and study information for the Natus Brain Monitor, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a pass/fail threshold format for each performance metric. Instead, it provides a comparison to predicate devices, implying that performance comparable to or better than the predicates is acceptable. The performance is reported in the "Comparison to Predicate Device" table.
| Specification | Predicate Devices (K143440, K111742, K172711) | Subject Device (Natus Brain Monitor, Embla NDx, Embla SDx) |
| --- | --- | --- |
| Referential Channels | 256 (K143440), 32 (K111742), 32 (K172711) | 40 (programmable up to 64); 16 (Embla SDx model) |
| Bipolar Channels | 16 (K143440), 8 (K111742), 8 (K172711) | 12; 4 (Embla SDx model) |
| DC Inputs | 16 (+/-5 Vdc or +/-2.5 Vdc) | 16 (+/-5 Vdc); 8 (+/-5 Vdc) for Embla SDx model |
| SpO2, Pulse Rate, Plethysmogram | Yes (all predicates, some with PPG) | Yes (with PPG) |
| Body Position | Uses universal sensor via DC input, or integrated proprietary | Integrated proprietary, or uses universal sensor via DC input |
| Resolution | 24 bit, 22 bit, 16 bit | 24 bit (16 bit stored) |
| EEG Channels | 64-256, 40, 32 | 64; 20 (Embla SDx) |
| Reference Channels | Dedicated separate reference and ground; dedicated ground | Dedicated separate reference and ground |
| Input Impedance | >1000 MOhm, >20 MOhm, >=20 MOhm | >1000 MOhm |
| Input Noise | 110 dB @ 60 Hz; >80 dB; >80 dB (signal ref), >100 dB (earth ref) | >106 dB @ 60 Hz |
| Sampling Frequency | 256-16384 Hz, 64-512 Hz, 200-800 Hz | 256, 512, 1024, 2048, 4096 Hz (256, 512 Hz for Embla SDx) |
| Sampling Resolution - EEG channels | 24 bits, 22 bits, 16 bits | 24 bits |
| Sampling Quantization - EEG channels | 305 nV, N/A, 0.06 µV/bit | 305 nV |
| Storage Resolution - EEG channels | 16 bits | 16 bits |
| Impedance Check | | |
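One relationship in the table can be checked arithmetically: a quantization step of 305 nV at 24-bit sampling resolution implies a full-scale input span of roughly 5.12 V. The ±2.56 V span below is an assumed value used only to show the arithmetic; the summary does not state the amplifier's actual input range.

```python
# Assumption: "Sampling Quantization" is the LSB size, i.e. the full-scale
# input span divided by the 2^24 codes available at 24-bit resolution.
full_scale_v = 2 * 2.56             # assumed +/-2.56 V span
lsb_v = full_scale_v / 2**24        # volts per code
print(round(lsb_v * 1e9, 1))        # LSB in nanovolts, approximately 305 nV
```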
(26 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The NeuroWorks is EEG software that displays physiological signals. The intended user of this product is a qualified medical practitioner trained in Electroencephalography. This device is intended to be used by qualified medical practitioners who will exercise professional judgment in using the information.
-The NeuroWorks EEG software allows acquisition, display, archive, review and analysis of physiological signals.
-The Seizure Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
-The Spike Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
-The aEEG functionality included in NeuroWorks is intended to monitor the state of the brain. The automated event marking function of NeuroWorks is not applicable to aEEG.
-NeuroWorks also includes the display of a quantitative EEG plot, Compressed Spectrum Array (CSA), which is intended to help the user to monitor and analyze the EEG waveform. The automated event marking function of NeuroWorks is not applicable to CSA.
This device does not provide any diagnostic conclusion about the patient's condition to the user.
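The aEEG trend mentioned above is, generically, a band-passed, rectified, and smoothed amplitude envelope of the EEG. A textbook-style sketch on synthetic data (this is not the NeuroWorks algorithm; the sampling rate, filter band, and smoothing window are assumed values):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256                                    # assumed sampling rate, Hz
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 30)          # 30 s of synthetic EEG

# Band-pass (2-15 Hz, an assumed band), rectify, then smooth.
b, a = butter(4, [2.0, 15.0], btype="bandpass", fs=fs)
rectified = np.abs(filtfilt(b, a, eeg))
window = np.ones(fs) / fs                   # 1 s moving-average envelope
envelope = np.convolve(rectified, window, mode="same")
print(envelope.shape == eeg.shape)          # one amplitude value per sample
```

A clinical aEEG display would additionally compress the envelope onto a semi-logarithmic amplitude axis; that step is omitted here for brevity.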
Natus NeuroWorks is electroencephalography (EEG) software that displays physiological signals. The software platform is designed to work with Xltek and other select Natus amplifiers (headboxes). Software add-ons and optional accessories let you customize your system to meet your specific clinical EEG monitoring needs.
The provided text describes the Natus NeuroWorks EEG software and its FDA 510(k) clearance application. However, it does not contain the detailed information necessary to answer all parts of your request regarding acceptance criteria and the study proving the device meets them.
Specifically, the document states:
- "The NeuroWorks software was designed and developed according to a robust software development process, and was rigorously verified and validated."
- "Results indicate that the NeuroWorks software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards."
- "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the NeuroWorks software. The results of these activities demonstrate that the NeuroWorks software is as safe, as effective, and performs as well as or better than the predicate device."
While these statements confirm that performance testing was done and the device met its specifications, the actual acceptance criteria (e.g., sensitivity, specificity, F-score targets for seizure/spike detection) and the detailed results of those tests are not explicitly provided in this summary.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated in terms of quantitative metrics (e.g., target true positive rate, false positive rate for seizure/spike detection). The text broadly states compliance with "predetermined specifications" and "applicable standards."
- Reported Device Performance: Not explicitly stated in terms of quantitative metrics against specific acceptance criteria. The text concludes that the device "performs as well as or better than the predicate device."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified for any performance evaluation.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified. The document mentions "qualified clinical practitioners" as the intended users, but doesn't detail their involvement in establishing ground truth for testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human-reader performance with AI assistance versus without
- A MRMC comparative effectiveness study is not explicitly mentioned. The "Seizure Detection" and "Spike Detection" components are described as intended to "assist qualified clinical practitioners," implying a human-in-the-loop scenario. However, the study design and results of such assistance are not detailed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- A standalone performance study focused solely on the algorithm's performance (without human interaction) is not explicitly mentioned as a separate activity with results. The device's indications for use emphasize assisting "qualified clinical practitioners," suggesting an integrated workflow.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not specified. Given the nature of EEG analysis, expert consensus would be the most probable method, but it is not stated in the document.
8. The sample size for the training set
- Not specified.
9. How the ground truth for the training set was established
- Not specified.
In summary: The provided 510(k) summary focuses on the regulatory aspects, intended use, technological comparison to the predicate, and adherence to software development and general performance standards. It lacks the specific clinical performance metrics, study designs, and detailed data provenance typically found in a comprehensive clinical validation study report for an AI/ML medical device. Further details would likely be found in the complete 510(k) submission, which is not fully provided here.
(98 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
This software is intended for use by qualified research and clinical professionals with specialized training in the use of EEG and PSG recording instrumentation for the digital recording, playback, and analysis of physiological signals. It is suitable for digital acquisition, display, comparison, and archiving of EEG potentials and other rapidly changing physiological parameters.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Grass® TWin® (Grass TWin) is a comprehensive software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). TWin is powerful and flexible, yet designed for easy and efficient day-to-day use. Grass TWin is a software product only and does not include any hardware.
This document is a 510(k) summary for the Grass TWin, a software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). The focus of the provided text is on demonstrating the device's substantial equivalence to a predicate device and its compliance with regulatory standards for software and usability.
Based on the provided text, the Grass TWin software does not appear to have detailed acceptance criteria or a specific study proving device performance against those criteria in the typical sense of a clinical performance study with metrics like sensitivity, specificity, or accuracy. Instead, the "performance testing" described focuses on software verification and validation and bench testing for compliance with pre-determined specifications and regulatory standards.
Here's an analysis of the information, addressing your requests based only on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of quantitative "acceptance criteria" and "reported device performance" in terms of clinical outcomes or diagnostic accuracy. Instead, the acceptance criteria are framed as compliance with internal requirements and regulatory standards for software development, usability, and safety.
| Acceptance Criteria Category | Reported Device Performance |
| --- | --- |
| Software Development | "The Grass TWin software was designed and developed according to a robust software development process, and was rigorously verified and validated." "Results indicate that the Grass TWin software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards." (referencing FDA guidance documents and IEC 62304: 2006) |
| Usability | "The Grass TWin was verified for performance in accordance with internal requirements and the applicable clauses of the following standards: IEC 60601-1-6: 2010, Am1: 2013, Medical electrical equipment - Part 1-6: General requirements for basic safety and essential performance - Collateral standard: Usability; IEC 62366: 2007, Am1: 2014, Medical devices - Application of usability engineering to medical devices." "Results indicate that the Grass TWin complies with its predetermined specifications and the applicable standards." |
| Safety & Effectiveness | "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the Grass TWin. The results of these activities demonstrate that the Grass TWin is as safe, as effective, and performs as well as or better than the predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not describe a "test set" in the context of clinical data or patient samples. The performance evaluation focuses on software verification/validation and bench testing. Therefore, information about sample size, data provenance, or whether it was retrospective/prospective is not applicable as described in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided because the performance testing described is not based on a clinical test set requiring expert-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided because the performance testing described is not based on a clinical test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human-reader performance with AI assistance versus without
The document does not mention an MRMC comparative effectiveness study, nor does it refer to AI or assistance for human readers. The device is software for recording, playback, and analysis of physiological signals, not an AI-driven interpretive tool.
6. If a standalone (i.e. algorithm only, without human-in-the-loop performance) was done
While the Grass TWin is "software only" and can be considered a standalone algorithm in that it performs its functions without direct hardware integration, the performance evaluation documented here describes its compliance with specifications and standards, not a specific standalone clinical performance study with metrics like sensitivity or specificity. The "Indications for Use" explicitly state it is "intended for use by qualified research and clinical professionals with specialized training," implying human-in-the-loop operation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The concept of "ground truth" as typically used in clinical performance studies (e.g., against pathology or expert consensus) is not directly applicable to the performance testing described. The "truth" against which the software was evaluated was its predetermined specifications and compliance with regulatory standards (e.g., correct operation of software features, adherence to usability principles).
8. The sample size for the training set
The document does not refer to a "training set" for an algorithm, as it describes a software application that is verified and validated rather than trained using machine learning.
9. How the ground truth for the training set was established
As there is no mention of a training set, this information is not provided.
In summary, the provided document describes a regulatory submission for software (Grass TWin) that demonstrates substantial equivalence by focusing on:
- Technology Comparison: Showing direct equivalence in intended use and technological characteristics with a predicate device, noting minor differences that do not raise new questions of safety or effectiveness (e.g., operating system, additional features like PTT Trend Option, Montage Editor Summation Feature).
- Software Verification and Validation: Adherence to robust software development processes and compliance with general FDA guidance documents and international standards (IEC 62304 for software lifecycle processes, IEC 60601-1-6 and IEC 62366 for usability).
- Bench Performance Testing: Verification against internal requirements and applicable standards, specifically for usability.
The detailed clinical performance metrics typical for diagnostic or AI-assisted devices that you've asked about are not present in this 510(k) summary, as the device's nature (recording, playback, and analysis software for existing physiological signals, rather than a novel diagnostic algorithm) and the context of a substantial equivalence submission likely did not require them.
(56 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The Comet-PLUS® is designed for use in the recording of routine EEG, overnight sleep/EEG (PSG, Polysomnography), and other neurophysiological monitoring applications. This device is intended to be used only by physicians, technicians, or other medical professionals that are trained in either electroencephalography or polysomnography.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Comet-PLUS® (also known as the AS40-PLUS Amplifier System) is designed specifically for the EEG and PSG monitoring lab. The different configurations of the Comet-PLUS feature the AS40-PLUS Amplifier with electroencephalograph (EEG) or polysomnograph (PSG) Comet-PLUS Personality Modules (headboxes) and Natus software. The Comet-PLUS Amplifier is a compact AC Amplifier with up to 57 channels for EEG and PSG recording applications.
The provided document is a 510(k) premarket notification for the Comet-PLUS device. It details various performance tests and compliance with standards rather than specific acceptance criteria for a medical AI device and the results of a study demonstrating those criteria being met. The device described, Comet-PLUS, is an electroencephalograph (EEG) and polysomnograph (PSG) system, a hardware device for recording neurophysiological signals.
Based on the content, the document does not contain the information requested in points 1-9 regarding acceptance criteria for an AI device and a study proving those criteria are met.
Specifically:
- No acceptance criteria or device performance for an AI algorithm: The document focuses on hardware (amplifier, headboxes) and firmware compliance with electrical safety, electromagnetic compatibility, and usability standards for an EEG/PSG system. There are no performance metrics like sensitivity, specificity, or AUC which are typical for AI-based diagnostic or assistive devices.
- No mention of AI or machine learning: The document describes a medical device for signal acquisition and monitoring, not an AI algorithm.
- No details on sample sizes, ground truth establishment, or expert reviews for an AI study: Since no AI study is described, this information is absent.
- No MRMC comparative effectiveness study: The document does not describe any human-in-the-loop studies comparing human readers with and without AI assistance.
- No standalone algorithm performance: The device is a system for recording signals, not a standalone algorithm.
The "Summary of Performance Testing" section (page 5) details the types of tests conducted:
- Software (firmware) validation: Compliance with FDA guidance documents and standards for medical device software.
- Electrical Safety: Compliance with IEC 60601-1:2005, Am1:2012.
- Electromagnetic Compatibility: Compliance with IEC 60601-1-2:2007.
- Bench Performance Testing: Compliance with internal requirements and applicable clauses of IEC 60601-1-6, IEC 60601-2-26, IEC 62366, and ISO 80601-2-61.
The conclusion states that these activities demonstrate the device's safety, effectiveness, and performance are as good as or better than predicate devices, leading to a substantial equivalence determination. However, this is for the hardware and its embedded firmware functionality, not an AI or machine learning algorithm.
(34 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The XLTEK EMU40EX EEG Headbox is an electroencephalograph that works in conjunction with XLTEK NeuroWorks software.
The XLTEK EMU40EX EEG Headbox is used to acquire, digitize, store and transmit physiological signals (such as EEG, pulse and oximetry signals) for EEG studies in research and clinical environments.
The XLTEK EMU40EX EEG Headbox requires competent user input, and its output must be reviewed and interpreted by trained medical professionals who will exercise professional judgment in using this information.
The Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK) EMU40EX System is a powerful and flexible EEG recording system. The EMU40EX System consists of a patient electrode set, EMU40EX EEG Headbox, and the Natus Base Unit.
The EMU40EX System is designed to work with an XLTEK host computer system running the Natus Database and Natus NeuroWorks® data review and analysis software.
The EMU40EX EEG Headbox can be connected to the Natus Base Unit with a cable, or can connect wirelessly through a secure Bluetooth link.
The provided document is a 510(k) premarket notification for a medical device called the XLTEK EMU40EX EEG Headbox. This type of submission focuses on demonstrating substantial equivalence to a predicate device, rather than proving novel effectiveness or performance through typical clinical studies with specific acceptance criteria in the way a new drug or high-risk medical device might.
Here's an analysis based on the information provided, keeping in mind the context of a 510(k) submission:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Implied from Testing) | Reported Device Performance |
|---|---|
| Electromagnetic Compatibility (EMC) | Passed verification in accordance with IEC 60601-1-2:2007, indicating compliance with applicable standards. |
| General Performance (Bench Testing) | Passed verification and validation for product requirements including signal quality, functionality, and user interface. Also complied with applicable clauses of IEC 60601-1-6:2010 (Usability), IEC 60601-2-26:2012 (Electroencephalographs), IEC 62366:2007 (Usability Engineering), and ISO 80601-2-61:2011 (Pulse Oximeter Equipment). |
| Software/Firmware Performance | Passed verification and validation according to a robust software development process, internal firmware requirements, and FDA guidance documents (e.g., "The content of premarket submissions for software contained in medical devices"; IEC 62304:2006). Complies with predetermined firmware specifications. |
| Electrical Safety | Passed verification in accordance with IEC 60601-1:2005, Am1:2012, indicating compliance with applicable standards. |
| Substantial Equivalence (Overall Conclusion) | "The XLTEK EMU40EX EEG Headbox is as safe, as effective, and performs as well as or better than the predicate device." |
Explanation of the Study and Device Meeting Acceptance Criteria:
The "study" described here is a set of engineering and performance verification and validation tests, not a clinical trial in the traditional sense. The device, an EEG headbox, is a Class II medical device, and the submission is a 510(k), which seeks to demonstrate "substantial equivalence" to a legally marketed predicate device (XLTEK EMU40 EEG Headbox, K053386).
The acceptance criteria are therefore largely derived from recognized international standards for medical electrical equipment, software development, and specific performance attributes expected of an EEG headbox. The device is deemed to meet these criteria by successfully passing the various verification and validation tests outlined in the document.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not applicable in the context of this engineering-focused 510(k) submission. There is no "test set" of patient data or clinical cases described for evaluating algorithm performance. The testing involved hardware, software, and system-level performance against engineering standards.
- Data Provenance: Not applicable. The "data" refers to test results from in-house engineering and laboratory testing, rather than patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not applicable. Ground truth, in the sense of clinical interpretations or diagnoses for a dataset, is not part of this type of submission for this device. The "ground truth" for the engineering tests would be the established international standards and their specified parameters (e.g., a certain level of electromagnetic compatibility, signal fidelity characteristics, etc.), which are not established by individual experts for each test. Internal engineering and quality assurance personnel would verify adherence to these standards.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable for the reasons outlined in point 3. Testing involves objective measurements against predefined engineering specifications.
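To make "objective measurements against predefined engineering specifications" concrete, a verification harness can reduce to comparing each measured parameter against its specified limits. The following is a minimal sketch; the parameter names and limit values are hypothetical and not taken from the submission:

```python
# Hypothetical measured values and spec limits: {name: (measured, low, high)}
SPEC_CHECKS = {
    "input_noise_uv_rms": (0.9, 0.0, 2.0),
    "cmrr_db": (110.0, 100.0, float("inf")),
    "sample_rate_hz": (512.0, 511.5, 512.5),
}

def verify(checks: dict[str, tuple[float, float, float]]) -> dict[str, bool]:
    """Return pass/fail for each measured parameter against its limits."""
    return {name: lo <= measured <= hi for name, (measured, lo, hi) in checks.items()}

results = verify(SPEC_CHECKS)  # each parameter is within its limits here
```

Because each check is a deterministic numeric comparison against a written specification, there is nothing to adjudicate: a value is either inside its limits or it is not.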
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Improvement with AI vs. Without AI Assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not done.
- Effect Size of Human Improvement with AI: Not applicable. This device is an EEG headbox, a signal acquisition and digitization component. It is not an AI-powered diagnostic or interpretive algorithm designed to assist human readers. It acquires physiological signals for interpretation by trained medical professionals using other software (XLTEK NeuroWorks software), but the headbox itself does not have AI capabilities that would directly improve human reader performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study Was Done
- Standalone Performance Study: No, this device is not an algorithm that performs standalone diagnoses or interpretations. It is hardware that acquires data. Its performance is evaluated based on its ability to accurately and safely acquire and transmit physiological signals, not on diagnostic accuracy.
7. The Type of Ground Truth Used
- Type of Ground Truth: For this device, the "ground truth" for its performance is adherence to established engineering standards (e.g., IEC 60601 series, ISO 80601 series) and the manufacturer's internal product specifications for signal quality, functionality, usability, and safety. There is no clinical "ground truth" (like pathology, expert consensus on images, or outcomes data) involved for this hardware component.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. This device does not use machine learning or AI algorithms that require a "training set" of data for development. Its firmware design and development follow a robust software development process, but this is traditional software engineering, not machine learning.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there is no training set for this device.