510(k) Data Aggregation
(287 days)
The Trex HD is an electroencephalograph intended to be used to acquire, display, store and archive electroencephalographic signals, for electroencephalographic (EEG) or level 1-2 polysomnographic (PSG) recordings. The Trex HD amplifier is designed to be used with Natus NeuroWorks™ or Natus SleepWorks™ software.
The Trex HD headbox is similar to the cleared Trex headbox (K042150). It contains a complete data acquisition system with built-in amplifiers, A/D converters, digital signal processors, and storage devices. Like the predicate device, the Trex HD is composed of 24 Referential Inputs, 2 DC Inputs, 3 Differential Inputs, an Oximeter/Photic Connection, a Patient Event Switch Connection, a USB Connection, and a Video Interface box. The wireless adapter is part of the Trex HD system. It connects to the camera (camcorder) via the LANC port and is used to synchronize video frames with the EEG study recorded on the Trex HD. It communicates wirelessly with the headbox and can store video synchronization data internally in non-volatile memory when wireless communication is not possible. The Trex HD headbox is designed to work with an XLTEK computer system running NeuroWorks (K090019) or SleepWorks (K090277) software. A camcorder can be used to record video; a (Trex HD) Video Interface Box is needed to synchronize the video recording (on the camcorder) with the EEG recording (on the Trex HD headbox).
This 510(k) summary describes a device, the Trex HD, which is substantially equivalent to a predicate device, the Trex (K042150). The primary change in the Trex HD is the addition of a Video Interface box for wireless video synchronization. The provided document does not contain a traditional performance study comparing the device against specific acceptance criteria with quantifiable metrics. Instead, it relies on demonstrating substantial equivalence to a predicate device and adherence to established medical device standards.
Here's an analysis based on the provided text, addressing your questions as much as possible:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not explicitly list acceptance criteria as quantifiable targets for a clinical performance study. Instead, the "acceptance criteria" are implied by the safety and performance specifications being "Same" as the predicate device (Trex K042150). The performance of the Trex HD is "reported" as Pass for specific verification tests.
| Acceptance Criterion (Implied by Predicate Spec) | Reported Device Performance (Trex HD) |
|---|---|
| Electrical Performance | |
| Referential Inputs: +/- 10mV | Same as predicate (+/- 10mV) |
| Referential Resolution: 16 bit A/D | Same as predicate (16 bit A/D) |
| Differential Inputs: +/- 10mV | Same as predicate (+/- 10mV) |
| Differential Resolution: 16 bit A/D | Same as predicate (16 bit A/D) |
| Common Mode Rejection Ratio: -113 dB @ 60 Hz | Same as predicate (-113 dB @ 60 Hz) |
| DC Removal: Infinite | Same as predicate (Infinite) |
| Common Mode Input Impedance: > 10 MOhms | Same as predicate (> 10 MOhms) |
| Input Noise (peak to peak): 6.4 µV | Same as predicate (6.4 µV) |
| Input Noise (RMS): 1.08 µV | Same as predicate (1.08 µV) |
| Input Bias Current: < 10 pA | Same as predicate (< 10 pA) |
| Channel Crosstalk: 56 dB | Same as predicate (56 dB) |
| Electrode Connections: Safety Touch | Same as predicate (Safety Touch) |
| Non-Isolated DC Inputs: +/- 5 Volts | Same as predicate (+/- 5 Volts) |
| Non-Isolated DC Resolution: 16 bit A/D | Same as predicate (16 bit A/D) |
| Impedance: <2.5, <5, <10, <25 kOhm | Same as predicate (<2.5, <5, <10, <25 kOhm) |
| Channel Test Signal: Software selectable | Same as predicate (Software selectable) |
| Sampling Frequency: 200 Hz, 256 Hz, 512 Hz | Same as predicate (200 Hz, 256 Hz, 512 Hz) |
| Physical/Functional Performance | |
| Oximeter/Photic Stim Connection | Same as predicate (Yes) |
| Patient Event Button | Same as predicate (Yes) |
| Interface Cable: USB 2.0 | Same as predicate (USB 2.0) |
| USB Cable Length | Same as predicate (Standard: 68 inches, Max: 15 feet) |
| Main Unit Weight: 300g | Same as predicate (300g) |
| Main Unit Size: 10 x 15.5 x 2.5 (h x w x d) cm | Same as predicate (10 x 15.5 x 2.5 (h x w x d) cm) |
| Batteries: 2 AA | Same as predicate (2 AA) |
| Safety Performance | |
| Leakage Current: <10 µA with 240 VAC | Same as predicate (<10 µA with 240 VAC) |
| Non-Clinical Tests | Results |
| Trex HD Signal Quality Verification Test | Pass |
| Trex Functionality Verification Test | Pass |
| Video Synchronization Verification Test | Pass |
| Compliance with various safety and EMC standards | Full compliance with listed standards (e.g., IEC 60601-1, IEC 60601-2-26, IEC 60601-1-2, various IEC 61000 series, CISPR 11) |
| Wireless Transceiver | |
| Protocol | Bluetooth V2.0 EDR |
| Operating frequency | 2.402 - 2.480 GHz |
| Transmission power | 8dBm |
| Modulation | GFSK |
| FCCID | R47F2M03GL |
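The electrical specifications above are internally consistent, which a quick calculation (illustrative only, not from the submission) can confirm: a 16-bit converter spanning the ±10 mV referential input range gives a quantization step of about 0.31 µV, comfortably below the stated 1.08 µV RMS input noise, so the amplifier noise rather than the A/D converter sets the effective resolution.

```python
# Sanity check (illustrative, not from the submission): relate the 16-bit
# A/D resolution over the +/-10 mV input range to the stated noise floor.

def lsb_microvolts(full_scale_mv: float = 20.0, bits: int = 16) -> float:
    """Quantization step (least significant bit) in microvolts."""
    return full_scale_mv * 1000.0 / (2 ** bits)

lsb = lsb_microvolts()        # ~0.305 uV per count
rms_noise_uv = 1.08           # from the specification table above
print(f"LSB = {lsb:.3f} uV, noise/LSB = {rms_noise_uv / lsb:.1f}")
```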
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Non-clinical: Testing of the Natus Trex_HD was performed in compliance with Natus Corporation design control process." It then lists three verification tests that "Pass."
- Sample Size: The document does not specify a quantitative sample size for any of the verification tests. It implies that these tests were conducted on a sufficient number of devices or components to demonstrate compliance with design control processes.
- Data Provenance: The tests are non-clinical, implying they were conducted in a laboratory or engineering setting by the manufacturer, Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK) in Oakville, Ontario, Canada. This is retrospective in the sense that the testing was performed during the development and verification phase of the device before submission for regulatory clearance. It is not clear if any data from humans was used for "Signal Quality Verification".
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: This information is not provided.
- Qualifications of Experts: This information is not provided.
Given that this is a non-clinical submission for an EEG amplifier and video synchronization system, and not an AI/diagnostic device that interprets medical images/signals, the concept of "ground truth established by experts" as in a clinical setting is not applicable here. The "ground truth" for these engineering verification tests would likely be established by comparing device output against known input signals or reference measurements using calibrated equipment.
4. Adjudication Method for the Test Set
This information is not provided, and it's generally not applicable for non-clinical engineering verification tests of this type. Adjudication methods like "2+1" or "3+1" are relevant for clinical studies involving human interpretation or subjective assessment where consensus building is required.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- Was an MRMC study done? No, an MRMC comparative effectiveness study was not done. The submission explicitly states: "Clinical: Clinical testing was not required to ensure safety and effectiveness of the modified device."
- Effect Size: Not applicable, as no MRMC study was performed.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done
The device is an Electroencephalograph (EEG) and a video interface, not an AI algorithm. "Standalone performance" in this context would refer to the technical performance of the device hardware and its synchronization capabilities. The document reports "Pass" for "Trex HD Signal Quality Verification Test," "Trex Functionality Verification Test," and "Video Synchronization Verification Test." These are essentially standalone performance tests of the device's technical specifications.
7. The Type of Ground Truth Used
For the non-clinical tests ("Signal Quality Verification," "Functionality Verification," "Video Synchronization Verification"), the "ground truth" would be established by:
- Known input signals: Introducing precisely calibrated electrical signals to the EEG inputs and verifying the accuracy and fidelity of the recorded output signals.
- Reference timing standards: For video synchronization, comparing the timestamps generated by the device against a known, accurate time reference or by directly measuring synchronization accuracy between video frames and EEG datastreams.
- Conformance to engineering specifications: The device's electrical characteristics (e.g., resolution, noise, CMRR) are compared against the established design specifications, which are themselves based on industry standards and the performance of the predicate device.
It is not "expert consensus, pathology, or outcomes data," as these are clinical ground truths.
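A "known input signal" check of the kind described above can be sketched as follows. The function name, tolerance, and test tone are hypothetical illustrations; the actual verification protocol is not disclosed in the document.

```python
import numpy as np

# Hypothetical sketch of a known-input-signal fidelity check: inject a
# calibrated sine wave and compare the recorded trace against it. The
# 1 uV RMS tolerance and the test tone are assumptions for illustration.

def verify_signal_fidelity(recorded, reference, tolerance_uv=1.0):
    """Pass when the recorded trace tracks the calibrated reference signal."""
    residual = np.asarray(recorded) - np.asarray(reference)
    rms_error = float(np.sqrt(np.mean(residual ** 2)))
    return rms_error <= tolerance_uv, rms_error

fs = 512                                        # Hz, one of the listed sampling rates
t = np.arange(fs) / fs                          # one second of samples
reference = 100.0 * np.sin(2 * np.pi * 10 * t)  # calibrated 100 uV, 10 Hz tone
rng = np.random.default_rng(0)
recorded = reference + rng.normal(0.0, 0.3, fs)  # simulated amplifier noise
passed, err = verify_signal_fidelity(recorded, reference)
print(passed, round(err, 3))
```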
8. The Sample Size for the Training Set
This information is not applicable. The Trex HD is a medical device hardware system for acquiring physiological signals, not an AI or machine learning algorithm that requires a "training set."
9. How the Ground Truth for the Training Set Was Established
This information is not applicable, as there is no training set for this type of device.
(219 days)
The Olympic Brainz Monitor (OBM) is a three channel electroencephalograph (EEG) acquisition system intended to be used in a hospital environment to record, collect, display and facilitate manual marking of aEEG recordings.
- The signals acquired from P3-P4, C3-P3 and C4-P4 channels are intended for use only with neonatal patients (defined as from birth to 28 days post-delivery, and corresponding to a postconceptual age of 24 to 46 weeks) to display aEEG for monitoring the state of the brain.
- The signal acquired from the P3-P4 channel is intended to assist in the assessment of Hypoxic-Ischemic Encephalopathy severity and long-term outcome, in full term neonates (postconceptual age of 37-46 weeks) who have suffered a hypoxic-ischemic event.
- The RecogniZe seizure detection algorithm is intended to mark sections of EEG/aEEG that may correspond to electrographic seizures in only the centro-parietal regions of full term neonates (defined as from birth to 28 days post-delivery, and corresponding to a postconceptual age of 37 to 46 weeks). EEG recordings should be obtained from centro-parietal electrodes (located at P3, P4, C3 and C4 according to the 10/20 system). The output of the RecogniZe algorithm is intended to assist in post hoc assessment of EEG/aEEG traces by qualified clinical practitioners, who will exercise professional judgment in using the information.
The Olympic Brainz Monitor does not provide any diagnostic conclusion about the patient's condition.
The Olympic Brainz Monitor is a three-channel electroencephalograph (EEG) system, as per 21 CFR §882.1400: a device used to measure and record the electrical activity of the patient's brain by placing two or more electrodes on the head. The device does not introduce, transfer or deliver any type of energy to the patient. Like any other electroencephalograph, the device passively records the electroencephalographic activity from the patient through the hydrogel electrodes and then processes the signal for display, analysis and archiving.
The Olympic Brainz Monitor system consists of the following:
- Data Acquisition Box (DAB)
- Touchscreen Monitor
- Roll Stand or optional Desktop Stand
- 9 Neonatal Sensor set (K033010)
- Software
These components have equivalent configuration and functions to those described in K093949 for the OBM Monitor. The Neonatal Sensor set (cleared under K033010) is an accessory to the device and the only part that comes into contact with the patient. The sensor acquires the electroencephalographic signal and passively transfers it to the main unit. It is a set of five hydrogel skin electrodes attached to the patient's head at one end and to the Data Acquisition Box at the other, using standard touch-proof connectors.
The device allows practitioners to acquire, store, review and archive EEG activity from 4 centro-parietal locations corresponding to C3, C4, P3 and P4 of the international 10-20 system. The device displays the recorded activity in the form of raw EEG and as amplitude integrated EEG (aEEG).
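The aEEG display mentioned above follows a standard transform: band-pass filtering, full-wave rectification, envelope smoothing, and semi-logarithmic plotting. The sketch below uses a crude FFT band-pass and moving-average envelope under the common 2-15 Hz aEEG convention; the OBM's actual filter design is not given in the document.

```python
import numpy as np

# Illustrative aEEG transform (conventional 2-15 Hz band; not the OBM's
# disclosed implementation): band-pass, rectify, smooth the envelope.

def aeeg_trace(eeg_uv, fs, band=(2.0, 15.0), smooth_s=2.0):
    """Crude aEEG envelope: FFT band-pass, rectify, moving-average smooth."""
    n = len(eeg_uv)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(eeg_uv)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0   # band-pass
    rectified = np.abs(np.fft.irfft(spectrum, n))           # full-wave rectify
    win = max(1, int(smooth_s * fs))
    return np.convolve(rectified, np.ones(win) / win, mode="same")

fs = 256
t = np.arange(10 * fs) / fs
eeg = 50.0 * np.sin(2 * np.pi * 5 * t)       # synthetic 50 uV, 5 Hz activity
env = aeeg_trace(eeg, fs)
print(round(float(env[len(env) // 2]), 1))   # ~31.8 uV (mean |sine| = 2A/pi)
```

The envelope values would then be plotted on the characteristic semi-logarithmic aEEG amplitude axis.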
In addition, the device now includes a seizure detection algorithm (i.e., RecogniZe) to allow automated analysis of the recorded EEG. The RecogniZe Seizure Detection Algorithm identifies sections of the EEG trace where seizure activity is detected. The algorithm comprises filtering of the EEG signal, fragmentation of the EEG signal into waves, wave-feature extraction, and elementary, preliminary and final detection. The main idea behind the algorithm is to detect heightened regularity in EEG wave sequences using wave intervals, amplitudes and shapes, as increased regularity is the major distinguishing feature of seizure discharges.
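The regularity idea described above can be illustrated with a toy score: fragment the trace into waves (here, simple local maxima) and measure how uniform the inter-wave intervals are. This is an illustrative sketch only; RecogniZe's actual wave features and three-stage detection logic are not disclosed in the document.

```python
import numpy as np

# Toy regularity score (not the RecogniZe implementation): a low coefficient
# of variation of inter-peak intervals indicates rhythmic, seizure-like EEG.

def interval_regularity(eeg, fs):
    """Return a 0..1 score; ~1 for highly regular wave sequences."""
    # Strict local maxima as a crude stand-in for wave fragmentation.
    peaks = np.where((eeg[1:-1] > eeg[:-2]) & (eeg[1:-1] > eeg[2:]))[0] + 1
    if len(peaks) < 3:
        return 0.0
    intervals = np.diff(peaks) / fs
    cv = intervals.std() / intervals.mean()   # coefficient of variation
    return 1.0 / (1.0 + cv)

fs = 256
t = np.arange(4 * fs) / fs
rhythmic = np.sin(2 * np.pi * 3 * t)          # regular 3 Hz discharges
rng = np.random.default_rng(0)
irregular = rng.normal(size=4 * fs)           # background-like noise
print(interval_regularity(rhythmic, fs) > interval_regularity(irregular, fs))
```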
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance for RecogniZe Seizure Detection Algorithm
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Predicate Devices - IndenEvent K092039, Persyst Reveal K011397) | Reported Device Performance (RecogniZe K123079) |
|---|---|---|
| Positive Percent Agreement (PPA) | 74% - 79.5% (observed in predicate devices) | 61% (95% CI: 52 - 68) |
| False Detection Rate (FDR) | 0.08 - 0.3 FP/h (observed in predicate devices) | 0.5 FP/h (95% CI: 0.4 - 0.7) |
Note: The document argues that RecogniZe is "substantially equivalent to the performance of medical experts confronted with similar task and amount of data" and therefore substantially equivalent to the predicate devices, despite the numerical differences compared to the predicate devices themselves. The comparison is made against expert inter-rater agreement for the specific limited-channel montage.
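The two metrics in the table can be computed from event overlap between the algorithm's detections and the reference (expert-marked) seizures: PPA is the fraction of reference seizures matched by at least one detection, and FDR is the number of unmatched detections per hour of recording. The interval-overlap matching rule below is an assumption for illustration; the submission does not specify its exact scoring windows.

```python
# Illustrative PPA / FDR computation; the overlap-based matching rule is
# an assumption, not the scoring protocol from the submission.

def overlaps(a, b):
    """True when two (start_s, end_s) intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

def ppa_fdr(reference_events, detections, record_hours):
    """reference_events/detections: lists of (start_s, end_s) intervals."""
    hits = sum(any(overlaps(r, d) for d in detections) for r in reference_events)
    false_pos = sum(not any(overlaps(d, r) for r in reference_events)
                    for d in detections)
    ppa = 100.0 * hits / len(reference_events)
    fdr = false_pos / record_hours
    return ppa, fdr

ref = [(10, 40), (100, 130), (300, 360)]     # expert-marked seizures (seconds)
det = [(12, 35), (305, 340), (500, 510)]     # algorithm detections (seconds)
print(ppa_fdr(ref, det, record_hours=1.0))   # -> PPA ~ 66.7 %, FDR = 1.0 FP/h
```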
2. Sample Size Used for the Test Set and Data Provenance
- Number of Events: 421 seizure events
- Total Number of Patients: 82
- Number of Hours (of EEG recordings): 621 hours
- Data Provenance: Retrospective clinical evaluation from neonatal patients seen for routine clinical evaluation at the Neonatal Intensive Care Unit of St. Louis Children's Hospital, USA.
The study included recordings from full term neonates (post-conceptual age of 37 to 46 weeks, defined as from birth to 28 days post-delivery).
To avoid over-weighting, a maximum of 13 events per limited-channel recording was permitted.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: 3
- Qualifications of Experts: Board certified neurophysiologists.
4. Adjudication Method for the Test Set
The document describes how the ground truth was established by experts:
- Experts independently, blindly, and manually marked seizures (no seizure detection algorithm was allowed) in the same manner they would normally do in clinical practice.
- Initially, experts reviewed the full cohort of standard montage recordings (157 of them) marking seizure onset and topography.
- After a 4-week wash-out period, the same reviewers were provided with the limited-channel (C3-P3, C4-P4, and P3-P4) recordings for marking.
- The ground truth used for comparison with the algorithm was the outcome of the expert review on these limited-channel recordings.
- For the inter-rater agreement of the experts themselves, individual expert markings were compared against each other (e.g., Rater 1 vs Rater 2, Rater 1 vs Rater 3, Rater 2 vs Rater 3). It is not explicitly stated whether a consensus scheme (e.g., 2+1, 3+1) was used to define the final truth for the algorithm comparison; the document simply refers to "the gold standard, defined as seizures detected by a panel of 3 EEG board certified medical professionals". The reported PPA and FDR for the algorithm are compared to the average inter-rater agreement of these experts on the limited-channel montage.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Improvement with AI vs. Without AI Assistance
A classic MRMC comparative effectiveness study, directly comparing human reader performance with AI assistance versus without AI assistance, was not explicitly described for the RecogniZe module.
The study compared the standalone performance of the RecogniZe algorithm against the performance of human experts (who were themselves establishing the ground truth) on the limited-channel montage. It also reported inter-rater agreement among the human experts.
The document states: "RecogniZe is intended as a tool to aid in the assessment of long EEG recordings to help reduced the amount of time devoted to review." However, it does not quantify this reduction or demonstrate increased accuracy of humans when using the AI.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone performance study was done for the RecogniZe algorithm. The algorithm's PPA and FDR were calculated by comparing its output directly against the "gold standard" established by the panel of 3 experts on the limited-channel EEG recordings.
7. The Type of Ground Truth Used
The ground truth used was expert consensus / expert marking. Specifically, it was defined as "seizures detected by a panel of 3 EEG board certified medical professionals" who independently marked seizures on de-identified and randomized EEG recordings.
8. The Sample Size for the Training Set
The document does not report the sample size used for the training set for the RecogniZe algorithm. It only details the "Testing Dataset."
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established, as it does not provide details on the training set itself. The information provided pertains solely to the clinical validation (testing) dataset.
(154 days)
The ICTA software is intended as a review tool to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings (surface or intracranial) that may correspond to electrographic seizures, in order to assist qualified clinical practitioners, who will exercise professional judgment in using the information, in the assessment of EEG traces.
- Surface recordings must be obtained with full montage according to the standard 10/20 system.
- Intracranial recordings must be obtained with depth electrodes (strips and/or grids).
This device does not provide any diagnostic conclusion about the patient's condition to the user.
ICTA is a software-only product. It runs on a personal computer and requires no specialized hardware. It identifies electroencephalographic activity that might correspond to seizures (referred to as "events"). These events are then reviewed, accepted, modified and/or deleted by the qualified medical practitioner. The software does not make any final decisions that result in any automatic diagnosis or treatment. The EEG input is read from a file on the personal computer (or available across the network).
ICTA employs Bayesian formulation to provide a detection variable based on the probabilities that a given section of EEG contains a seizure-like activity. The a priori probabilities that a certain set of features represent seizure or non-seizure data were computed from the training data set. These probabilities are used by the detection method for all seizure detections.
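The Bayesian formulation described above can be sketched as a naive-Bayes posterior: per-feature likelihoods learned from the training set are combined with prior probabilities to yield a "detection variable" for each EEG section. The Gaussian likelihood model, the feature name, and all numbers below are illustrative assumptions; the document does not disclose ICTA's actual features or distributions.

```python
import math

# Illustrative naive-Bayes detection variable; models and numbers are
# hypothetical, not ICTA's actual (undisclosed) formulation.

def gaussian(x, mean, std):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def seizure_posterior(features, seizure_model, background_model,
                      prior_seizure=0.05):
    """P(seizure | features) for one EEG section, assuming independence."""
    p_s, p_b = prior_seizure, 1.0 - prior_seizure
    for name, value in features.items():
        p_s *= gaussian(value, *seizure_model[name])    # a priori from training
        p_b *= gaussian(value, *background_model[name])
    return p_s / (p_s + p_b)

# Hypothetical single-feature models as (mean, std) of a rhythmicity score.
seizure_model = {"rhythmicity": (0.8, 0.1)}
background_model = {"rhythmicity": (0.3, 0.2)}
print(seizure_posterior({"rhythmicity": 0.85}, seizure_model, background_model))
```

A section would be flagged as an event when this posterior exceeds a user- or design-determined threshold.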
The software has two components: ICTA-S for analysis of surface EEG recordings and ICTA-D for analysis of intracranial recordings. Whether a particular module is active is determined by the user. The user also determines parameters that are needed for the algorithm to perform its intended task. None of the components is responsible for data acquisition, review or any other function different from analysis.
Here's a breakdown of the acceptance criteria and study details for the ICTA device, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for ICTA were established through a comparison with a predicate device (NeuroWorks Seizure Detector, K090019) and the "gold standard" of expert neurophysiologists. The key performance metrics are Positive Percent Agreement (PPA) and False Detection Rate (FDR).
| Performance Metric | Acceptance Criteria (Predicate) | ICTA-Surface Reported Performance | ICTA-Depth Reported Performance |
|---|---|---|---|
| PPA (%) | 76% | 75% | 75% |
| FDR (FP/h) | 0.6 FP/h | 2.0 FP/h | 1.8 FP/h |
Note: The document states "Equivalent" for both metrics when comparing to the predicate, even though the FDRs are numerically different. This suggests the FDA considers these values acceptable within the context of seizure detection assistance tools.
2. Sample Sizes Used for the Test Set and Data Provenance
- ICTA-S (Surface EEG):
- Number of Seizures: 615
- Total Number of Patients: 102
- Total Number of Hours: 395
- Data Provenance: Retrospective, patients with medically refractory seizures admitted to an Epilepsy Monitoring Unit. The specific country of origin is not explicitly stated, but Natus Medical Incorporated DBA Excel-Tech Ltd. is based in Oakville, Ontario, Canada.
- ICTA-D (Intracranial EEG):
- Number of Seizures: 429
- Total Number of Patients: 93 (57 Male, 36 Female)
- Total Number of Hours: 619 hours
- Data Provenance: Retrospective, adult patients seen for routine clinical evaluation at Epilepsy Monitoring Units of Toronto Western General Hospital (Canada) and NewYork-Presbyterian Hospital (USA).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three independent, blinded EEG experts were used for both ICTA-S and ICTA-D studies.
- Qualifications: All experts were board-certified Neurophysiologists (or neurologists/epileptologists). The document does not specify their years of experience.
4. Adjudication Method for the Test Set
- Adjudication Method: A "majority rule (at least 2 out of 3)" was applied. This means that for a seizure to be considered a "true" electrographic seizure (ground truth), at least two of the three independent experts had to agree on its presence.
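The "2 out of 3" rule described above can be sketched as a vote over overlapping expert markings: a candidate event enters the ground truth when at least two of the three annotation sets contain an overlapping mark. The interval-overlap criterion is an assumption for illustration; the exact matching rule is not given in the summary.

```python
# Illustrative 2-of-3 majority adjudication over expert seizure markings.

def overlaps(a, b):
    """True when two (start_s, end_s) intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

def majority_ground_truth(candidate_events, expert_markings, quorum=2):
    """Keep candidates marked by at least `quorum` of the experts."""
    truth = []
    for event in candidate_events:
        votes = sum(any(overlaps(event, m) for m in markings)
                    for markings in expert_markings)
        if votes >= quorum:
            truth.append(event)
    return truth

experts = [
    [(10, 30), (100, 120)],    # expert 1 markings (start_s, end_s)
    [(12, 28)],                # expert 2
    [(101, 118), (200, 210)],  # expert 3
]
candidates = [(10, 30), (100, 120), (200, 210)]
print(majority_ground_truth(candidates, experts))   # -> [(10, 30), (100, 120)]
```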
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The study focuses on evaluating the standalone performance of the ICTA algorithm against a human-established ground truth and comparing it to a predicate device's reported performance.
6. Standalone Performance Study
- Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The entire clinical testing section describes the evaluation of the ICTA-S and ICTA-D algorithms' performance (PPA and FDR) independently against the ground truth established by the expert panel. The results presented in the tables (e.g., PPA 75% / FDR 2.0 FP/h for ICTA-S) are for the algorithm in standalone mode.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. Specifically, electrographic seizures identified by a panel of three board-certified Neurophysiologists, with a majority rule for final determination.
8. Sample Size for the Training Set
- The document states that Bayesian formulation was used, and "The a priori probabilities that a certain set of features represent seizure or non-seizure data were computed from the training data set."
- However, the specific sample size for the training set is not provided in the summary.
9. How the Ground Truth for the Training Set Was Established
- The document states that probabilities were computed from the "training data set." It does not explicitly detail the method for establishing ground truth for this training set. However, given the nature of the device and the methods described for the test set, it is highly probable that the ground truth for the training set was also established through expert review and annotation of EEG recordings, likely by qualified medical practitioners. The summary implies that this training data was used to establish the "a priori probabilities" for the Bayesian model.
(804 days)
The Sleepworks software works in conjunction with Connex, Trex or Netlink amplifiers intended for polysomnography studies. The software allows recording, displaying, analysis, printing and storage of physiological signals to assist in the diagnosis of various sleep disorders and sleep related respiratory disorders.
The Sleepworks allows:
- Automated analysis of physiological signals that is intended for use only in adults.
- An optional audio/visual alert for user-defined thresholds on calibrated DC inputs. These alerts are not intended for use as life support such as vital signs monitoring or continuous medical surveillance in intensive care units.
- Sleep report templates, which summarize recorded and scored sleep data using simple measures including count, average, maximum and minimum values as well as data ranges for trended values.

SleepWorks software does not provide any diagnostic conclusion about the patient's condition and is intended to be used only by qualified and trained medical practitioners in research and clinical environments.
Not Found
The provided document is a 510(k) summary from the FDA for the Natus SleepWorks device. It states the device's indications for use and classification but does not contain any information about acceptance criteria or a study proving the device meets them.
Therefore, I cannot extract the requested information from this document.
To answer your request, I would need a document that includes:
- Details of specific performance metrics (e.g., sensitivity, specificity, accuracy for event detection)
- Defined acceptance criteria for those metrics.
- A description of the clinical or performance study conducted to evaluate the device against those criteria, including details on sample size, data provenance, ground truth establishment, expert qualifications, and adjudication methods.
(318 days)
The GridView software is indicated for use by qualified and trained medical personnel for the visualization and reporting of the electrical activity of the brain in adult patients with intracerebral electrodes. This reporting is obtained by user annotation of images of the patient's brain (MRI) on which images of the electrodes are superimposed.
I'm sorry, but the provided text does not contain the detailed information necessary to answer your request about the acceptance criteria and the study that proves the device meets them. The document is an FDA 510(k) clearance letter for the "Stellate Gridview" (later referred to as "GridView Software") and it primarily focuses on the substantial equivalence determination.
Specifically, the text doesn't include:
- A table of acceptance criteria and reported device performance.
- Details about sample sizes for test sets, data provenance, or training sets.
- Information about the number or qualifications of experts, or how ground truth was established for either test or training sets.
- Adjudication methods.
- Results of any Multi-Reader Multi-Case (MRMC) comparative effectiveness study, or details about standalone algorithm performance studies.
- The type of ground truth used (e.g., pathology, outcomes data).
The document states the indications for use of the GridView software, which is "for the visualization and reporting of the electrical activity of the brain in adult patients with intracerebral electrodes. This reporting is obtained by user annotation of images of the patient's brain (MRI) on which images of the electrodes are superimposed." However, it does not provide study details demonstrating how well the device performs these functions against specific criteria.
(134 days)
The NeuroLink IP system is intended to be used as an electroencephalograph: to acquire, store, display and archive electroencephalographic signals.
Not Found
This document is a 510(k) clearance letter from the FDA for a device called "NeuroLink IP, Model PK1113," an electroencephalograph. The letter confirms that the device is substantially equivalent to a legally marketed predicate device.
Unfortunately, this document does not contain any information about acceptance criteria, device performance studies, sample sizes, expert qualifications, ground truth establishment, or comparative effectiveness studies (MRMC or standalone).
The document primarily focuses on:
- The FDA's decision of substantial equivalence.
- Regulatory classification (Class II).
- General controls and applicable regulations.
- Contact information for various FDA divisions.
- The intended use of the device (to acquire, store, display, and archive electroencephalographic signals).
Therefore, I cannot provide the requested table and detailed information based solely on the provided text. To answer your questions, the original 510(k) submission document would be required, as it typically contains the technical details, performance data, and study methodologies supporting the device's clearance.