Search Results
Found 11 results
510(k) Data Aggregation
(220 days)
OLU
The NeuroField Analysis Suite is to be used by qualified clinical professionals for the display and storage of electrical activity of a patient's brain, including the post-hoc statistical evaluation of the human electroencephalogram (EEG) and event-related potentials (ERP).
The NeuroField Analysis Suite is a Normalizing Quantitative Electroencephalograph (QEEG) Software that can (1) execute EEG analysis and (2) conduct ERP tests and ERP analysis. The NeuroField Analysis Suite is Software as a Medical Device (SaMD) and consists of two modules, the NF EEG Analysis Module and the NF ERP Module. The NF EEG Analysis Module is a separate analysis module that integrates with the Q21 EEG system by adding "Analysis", "Report", and "Tools" menu items and toolbars. It performs real-time and offline analysis functions and displays analysis results in separate windows in the UI, accessible via the "Analysis" and "Reports" menu items. The NF ERP Module is a separate, stand-alone event-related potential (ERP) application that can control and acquire data from the Q21 EEG system and performs typical ERP functions such as stimulus presentation, EEG epoching, epoch averaging, reaction-time measurement, and ERP display.
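The ERP pipeline named above — stimulus-locked epoching, baseline correction, and epoch averaging — is standard. A minimal sketch of those steps (function and parameter names are hypothetical illustrations, not NeuroField's API):

```python
import numpy as np

def epoch_and_average(eeg, stim_samples, pre, post):
    """Cut stimulus-locked epochs from continuous EEG, baseline-correct,
    and average them into an ERP.

    eeg          : (n_channels, n_samples) continuous recording
    stim_samples : sample indices of stimulus onsets
    pre, post    : samples kept before / after each onset
    """
    epochs = []
    for s in stim_samples:
        if s - pre >= 0 and s + post <= eeg.shape[1]:
            epochs.append(eeg[:, s - pre:s + post])
    epochs = np.stack(epochs)                      # (n_epochs, n_channels, pre + post)
    # Subtract each epoch's mean over the pre-stimulus window (baseline correction)
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)        # averaged ERP, (n_channels, pre + post)
```

Component latency and magnitude are then measured on the averaged waveform.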
The NeuroField Analysis Suite, comprising the NF EEG Analysis Module and the NF ERP Module, was evaluated for performance. Here's a breakdown of the acceptance criteria and the study details:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA document doesn't explicitly state acceptance criteria in terms of numerical performance metrics (e.g., specific accuracy, sensitivity, specificity values). Instead, substantial equivalence is claimed based on comparable functionality and performance to predicate devices. The "performance" reported reflects this comparative approach.
Feature Category | Acceptance Criteria (Implied from Predicate Comparison) | Reported Device Performance |
---|---|---|
NF EEG Analysis Module (vs. NeuroGuide Analysis System) | ||
EEG Analysis Functionality | Comparable re-montaging, filtering, event markers, and data analysis modes (marker-based or whole data). | Performs re-montaging, filtering, adding event markers, and analysis based upon markers or whole data. |
Z-Score Generation | Comparable generation of Z-Scores for absolute and relative power of EEG bands/frequencies. | Generates Z-Scores for absolute power and relative power of EEG bands or individual frequencies. |
Visualization | Comparable generation of head maps and FFT spectra with equivalent frequency resolution. | Generates head maps and FFT spectra with equivalent frequency resolution (0.5 Hz).
Reporting | Comparable tabular export and automated report generation. | Provides for tabular export and automated report generation. |
Inverse Solution (LORETA) | Comparable calculation of inverse solution (Key Institute 2394 LORETA model) and generation of current densities/powers of ROIs using Brodmann Atlas. | Can calculate the inverse solution (using the standard Key Institute 2394 LORETA model) and generate the current densities and powers of the regions of interest (ROIs) over the cortex using the Brodmann Atlas.
Supported File Formats | Comparable ability to read standard EDF and EDF+ files. | Reads standard EDF and EDF+ files, additionally supports XDF. |
Signal/Spectrum Comparison | Comparable signals on EEG Plot and comparable spectrum/sidelobes for real and simulated data. | Shows comparable signals on the EEG Plot for the same EDF file as Neuroguide. Spectrum and sidelobes for real and simulated data are comparable. |
Z-Score Comparison | Comparable Z-Scores for real and simulated data. | Shows comparable Z-Scores. |
Headmap Topography | Comparable topographies over standard EEG bands. | Visualization of headmaps shows comparable topographies over the standard EEG bands. |
NF ERP Module (vs. eVox System) | ||
Averaged ERP Graphs | Comparable generation of averaged ERP graphs, allowing user to measure latency and magnitude. | Generates averaged ERP graphs for EEG channels, allowing user to measure latency and magnitude. |
Session Length | Comparable customizable session length. | Allows for customizable session length. |
Oddball Paradigm | Comparable application of auditory and visual oddball paradigms. | Applies auditory and visual oddball paradigm variations. |
Parameter Settings | Comparable ability to set parameters for visual and auditory oddball paradigm. | Allows the user to set parameters for the visual and auditory oddball paradigm. More user-definable parameters than the predicate (e.g., visual filtering, oddball period, Go probability, customizable audio/visual stims). |
Mathematical Correctness | Mathematically correct generated averaged ERPs. | Demonstrated mathematically correct generated averaged ERPs through mathematical validation. |
ERP Component Elicitation | Elicitation of correct ERP components. | Analysis of real data shows it can elicit correct ERP components. |
Stimulus Reliability | Reliable generated stimulus. | Confirmed via analysis of real data. |
Test Reliability | Confirmed test reliability (split-half reliability). | Split-half reliability measure confirms test reliability. |
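Split-half reliability, cited in the table above as the test-reliability measure, is conventionally computed by correlating the averages of two half-splits of the trials and applying the Spearman-Brown correction. A minimal sketch — the summary does not specify NeuroField's exact procedure; odd/even splitting is one common convention:

```python
import numpy as np

def split_half_reliability(epochs):
    """Correlate the averages of odd- and even-numbered epochs, then apply
    the Spearman-Brown correction to estimate full-length reliability.

    epochs : (n_epochs, n_samples) array of single-trial responses
    """
    odd = epochs[0::2].mean(axis=0).ravel()
    even = epochs[1::2].mean(axis=0).ravel()
    r = np.corrcoef(odd, even)[0, 1]
    return (2 * r) / (1 + r)    # Spearman-Brown prophecy formula
```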
2. Sample Size Used for the Test Set and Data Provenance
- NF EEG Analysis Module: The document states that "Comparative performance evaluations showed that the same EDF file, loaded to NF-EEG and Neuroguide shows comparable signals on the EEG Plot." It also mentions "real and simulated data" for spectrum and sidelobe calculations. No specific sample size (number of EDF files or simulated data instances) is provided.
- Data Provenance: Not explicitly stated, but the use of "real and simulated data" suggests a mix. "Same EDF file" implies existing data, which could be retrospective.
- NF ERP Module:
- "Mathematical validation of NF-ERP was performed." - No specific sample size mentioned, as this is a theoretical assessment.
- "Analysis of real data shows that the NF-ERP can elicit correct ERP components, the generated stimulus is reliable, and the split-half reliability measure confirms test reliability." - No specific sample size (number of cases/patients) is provided for this "real data" analysis.
- Data Provenance: "Real data" is used, but country of origin or whether it's retrospective/prospective is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not mention the use of experts to establish a "ground truth" for the comparative performance evaluations of either the EEG or ERP modules. The comparison is primarily against the output of the predicate devices.
4. Adjudication Method for the Test Set
Not applicable, as no external "ground truth" established by experts or adjudication process is described for the test set. The comparison is directly between the outputs of the NeuroField Analysis Suite and the predicate devices (Neuroguide or eVox).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers' improvement with AI vs. without AI assistance was not conducted or described in this document. The study focuses on the standalone performance and comparison to predicate devices, not human-in-the-loop performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done
Yes, a standalone performance evaluation was done for both modules.
- NF EEG Analysis Module: Directly compared its outputs (EEG plots, spectra, sidelobes, Z-scores, headmaps) against those of the Neuroguide Analysis System.
- NF ERP Module: Underwent mathematical validation to confirm the correctness of averaged ERPs and analysis of real data to demonstrate correct ERP component elicitation, stimulus reliability, and test reliability using split-half reliability.
7. The Type of Ground Truth Used
The "ground truth" for the performance evaluations is implicitly the output or established functionality of the predicate devices (Neuroguide for EEG and eVox for ERP). For the NF ERP module, mathematical correctness and consistency with known physiological responses (correct ERP components, reliable stimulus) served as the "ground truth" for its internal validation. There's no mention of pathology, outcomes data, or external expert consensus as a ground truth.
8. The Sample Size for the Training Set
The document does not provide any information regarding a training set size. This suggests that the NeuroField Analysis Suite, as a QEEG and ERP analysis software, likely relies on established algorithms and mathematical models rather than machine learning models that require extensive training data.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set or machine learning model is described. The device appears to be based on deterministic algorithms for signal processing and statistical evaluation of EEG/ERP data.
(177 days)
OLU
The iSyncBrain-C is to be used by qualified medical or qualified clinical professionals for the statistical evaluation of the human electroencephalogram (EEG) in patients aged 4.5 to 81 years.
iSyncBrain-C is a software program for the post-hoc statistical analysis of the human electroencephalogram (EEG). EEG signals can be measured by various EEG equipment, and the measured EEG data is saved in EDF files. iSyncBrain-C can upload and analyze these EDF files, and personal information and results are automatically stored in AWS (Amazon Web Services). The analysis consists of the Fast-Fourier Transformation (FFT) of the data to extract the spectral power for each of the designated frequency bands (e.g., Delta, Theta, Alpha1, Alpha2, Beta1, Beta2, Beta3, Gamma) and frequency information from the EEG. These analysis results are displayed in statistical tables and topographical brain maps of absolute and relative power, power ratio, ICA components, power spectrum, occipital alpha peak, and source ROI power (sLORETA) & connectivity (iCoh). Each EEG device has its own frequency characteristics, which must be accounted for in any comparison of data coming from different devices. iSyncBrain-C therefore includes an EEG amplifier matching module in which frequency spectra are adjusted with a calibration table between the database amplifier and the recording amplifier. In all, over 33,000 measures are derived for comparison against a carefully constructed and statistically controlled, age-regressed normative database in which the variables have been transformed and validated for their Gaussian distribution. Each variable extracted by the analysis is compared to the database using parametric statistical procedures that express the differences between the subject and an appropriate sex/age-matched reference group in the form of z-scores.
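The core computation described — FFT-based band power followed by z-scoring against normative tables — can be sketched in a few lines. The band edges and the normative mean/SD tables here are illustrative placeholders, not iSyncBrain-C's actual values:

```python
import numpy as np

# Illustrative band edges (Hz); not iSyncBrain-C's actual definitions.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Absolute spectral power per band via an FFT of the signal."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2 / eeg.shape[-1]
    return {name: psd[..., (freqs >= lo) & (freqs < hi)].sum(axis=-1)
            for name, (lo, hi) in BANDS.items()}

def z_scores(subject_powers, norm_mean, norm_sd):
    """Express each measure as a z-score against (hypothetical) age-matched
    normative mean/SD tables."""
    return {k: (subject_powers[k] - norm_mean[k]) / norm_sd[k]
            for k in subject_powers}
```

In practice the normative tables would be looked up by age and sex; here they are passed in directly.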
Here's an analysis of the provided text regarding the iSyncBrain-C device, focusing on acceptance criteria and the supporting study:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes the iSyncBrain-C as a software program for post-hoc statistical analysis of human EEG. The performance data section primarily discusses software validation in accordance with FDA guidance, rather than specific diagnostic performance metrics like sensitivity or specificity. The substantial equivalence argument is based on functional and technical similarity to a predicate device (qEEG-Pro), not on meeting specific quantitative clinical performance thresholds.
Therefore, the "acceptance criteria" appear to be focused on software functionality and safety, and substantial equivalence to a predicate device in terms of features and intended use. Specific quantitative performance metrics for disease detection or classification are not explicitly stated as acceptance criteria in this document.
Acceptance Criteria (Implied from the document) | Reported Device Performance |
---|---|
Software functionality (e.g., data upload, analysis, storage, display) | "The software was tested according to Software Design Specifications (SDS) as intended. The testing results support that all the software specifications have met each module's acceptance criteria and interaction of processes. The iSyncBrain-C passed all testing..." |
Safety of operation | "iSyncBrain-C passed all testing and supported the claims of substantial equivalence and safe operation." |
Substantial Equivalence to Predicate Device (qEEG-Pro) | "The information provided in this submission supports that iSyncBrain-C is the substantial equivalence to qEEG-Pro(K171414) and that the system is safe and effective for the users/operators." |
Age Range for Statistical Evaluation | Statistical evaluation for patients aged 4.5 to 81 years. The normative database covers 4 to 82 years, aligning with the indication. |
Frequency Bands for Analysis | 8 specified frequency bands (Delta, Theta, Alpha1, Alpha2, Beta1, Beta2, Beta3, Gamma). |
Indicators Provided | Absolute power, Relative power, Power ratio, ICA component, Power spectrum, Occipital alpha peak, Source ROI power (sLORETA) & Connectivity (iCoh). |
Compatibility with EEG equipment | Can upload and analyze EEG data in EDF files. Includes an EEG amplifier matching module to adjust frequency spectra. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a test set specifically for evaluating the performance of the iSyncBrain-C algorithm against a clinical ground truth. The "Performance Data" section primarily refers to "software validation" against Software Design Specifications.
However, the document mentions statistics regarding the normative database used by the device for comparison:
- Sample size for Normative Database (used for comparison during analysis):
  - Eyes closed: 1289 samples
  - Eyes open: 1288 samples
- Data Provenance: Not explicitly stated (e.g., country of origin). The document mentions a "carefully constructed and statistically controlled age-regressed, normative database," but details about its collection (retrospective/prospective) and origin are absent.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The document refers to software validation and substantial equivalence claims, not a clinical study where experts establish ground truth for a test set. The normative database used for comparison is mentioned, but how its "ground truth" (i.e., "normal" characteristics) was established, or by whom, is not detailed.
4. Adjudication Method for the Test Set
This information is not provided as there is no mention of a clinical test set requiring expert adjudication in the context of performance evaluation for the iSyncBrain-C algorithm itself.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study is not mentioned in the provided document. The device is described as a "software program for the post-hoc statistical analysis of the human electroencephalogram (EEG)" and its primary evaluation was software validation and substantial equivalence. There is no information about human readers' performance with and without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
A standalone performance study of the algorithm's diagnostic accuracy (e.g., identifying specific EEG abnormalities or conditions) is not explicitly described in the provided text. The "Performance Data" section focuses on software validation against design specifications and claims of substantial equivalence based on functionality. While it performs analyses independently, the document does not present data from a study measuring its standalone clinical diagnostic performance against a ground truth.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
For the software validation, the ground truth appears to be the Software Design Specifications (SDS). The device was tested to ensure it met these specifications.
For the normative database against which individual patient EEGs are compared, the "ground truth" is implied to be a statistically controlled dataset representing "normal" age-regressed EEG patterns. However, the specific method used to establish this "normalcy" (e.g., expert review of all samples, lack of clinical symptoms/diagnoses) is not detailed. It's not a ground truth for a diagnostic task for the iSyncBrain-C itself, but rather a reference.
8. The Sample Size for the Training Set
The document does not explicitly mention a "training set" for the iSyncBrain-C algorithm. This suggests that if machine learning is involved in the analytical processes (beyond statistical comparisons to a normative database), the training data details are not provided. The reference to the "normative database" with 1289 (eyes closed) and 1288 (eyes open) samples is a reference database for comparison, not necessarily a training set for the algorithm's core functions.
9. How the Ground Truth for the Training Set Was Established
Since a "training set" is not explicitly mentioned or detailed, the method for establishing its ground truth is not provided.
(522 days)
OLU
The BrainView QEEG Software Package is to be used by qualified medical or clinical professionals for the statistical evaluation of the human electroencephalogram (EEG).
BrainView QEEG Software Package is a software program for the post-hoc statistical analysis of the human electroencephalogram (EEG). EEG recorded on a separate device (i.e., the host system) is transferred to the BrainView QEEG software package for display and user review. The device herein described consists of a set of tables that represent the reference means and standard deviations for representative samples. These tables are implemented as computer files that provide access to the exact tabular data resource for use by software that uses the tables as an information resource. The system requires that the user select reliable samples of artifact-free, eyes-closed or eyes-open, resting digital EEG for purposes of analysis. Analysis consists of the Fast-Fourier Transformation (FFT) of the data to extract the spectral power for each of the designated frequency bands (e.g., delta, theta, alpha, and beta), and frequency information from the EEG. The results of this analysis are then displayed in statistical tables and topographical brain maps of absolute and relative power, asymmetry, and coherence for 19 monopolar and 171 selected bipolar derivations of the EEG. In all, over 4,000 measures are derived for comparison against a carefully constructed and statistically controlled, age-regressed normative database in which the variables have been transformed and validated for their Gaussian distribution. Each variable extracted by the analysis is compared to the database using parametric statistical procedures that express the differences between the patient and an appropriate age-matched reference group in the form of z-scores.
Acceptance Criteria and Study for BrainView QEEG Software Package
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
BrainView QEEG software produces results sufficiently in agreement with the predicate device. | Pre-defined acceptance criteria were met. Z-scores for absolute power were calculated for each subject's EEG sample and compared with the predicate device's output, and found to be similar. |
The R-squared factor shall be 0.8 or better. | Specific R-squared values for the comparison are not explicitly stated in the provided text, but the overall conclusion is that criteria were met. |
The observed range of results obtained from the predicate device shall be used to verify that the BrainView QEEG produces results in agreement with the results obtained from the predicate device. | The BrainView QEEG produced results in agreement with the observed range of results from the predicate device. |
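The R-squared criterion in the table above can be checked with a few lines of code. The summary does not state how R² was defined; this sketch assumes the squared Pearson correlation between the two devices' z-scores:

```python
import numpy as np

def r_squared(candidate, predicate):
    """Squared Pearson correlation between candidate- and predicate-device
    z-scores (one plausible definition of the R-squared agreement factor)."""
    r = np.corrcoef(candidate, predicate)[0, 1]
    return r ** 2

def passes_agreement(candidate, predicate, threshold=0.8):
    """Apply the 'R-squared shall be 0.8 or better' criterion."""
    return r_squared(candidate, predicate) >= threshold
```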
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: "small" sample size of "selected subjects." The text specifies that 10-minute EEG recordings for eyes closed and eyes open were used. Although the initial sample size was small, it was used to derive 23 age-grouped sets of Z-scores for each subject, covering ages 4 to 85.
- Data Provenance: The data used for clinical testing consisted of "clinically acquired EEG waveforms." The document does not explicitly state the country of origin but implies it was collected in a clinical setting. It is retrospective as the data was "clinically acquired" and then used to validate the device's performance against a predicate.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly mention the number or qualifications of experts used to establish a separate "ground truth" for the clinical test set. The validation was a direct comparison to the predicate device's output, implying the predicate itself served as a de facto reference. The "qualified medical or clinical professionals" are mentioned in the context of the device's intended use and not specifically to establish ground truth for this validation.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method for the test set. The comparison was primarily a quantitative analysis of Z-scores against the predicate device's output.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study focuses on the agreement between the subject device and a predicate device, not on human reader performance with and without AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study was Done
Yes, the clinical testing described is a standalone (algorithm only) performance study. The BrainView QEEG software's output (Z-scores for absolute power) was directly compared to the output of the predicate device. The text explicitly states that the "BrainView QEEG software package is used as a standalone diagnostic system in the absence of other clinical data" (though this is also mentioned as a potential misuse scenario, it confirms its standalone operational capability).
7. The Type of Ground Truth Used
The ground truth for the clinical test set was effectively the output of the legally marketed predicate device (NeuroGuide Analysis System K041263). The study's acceptance criteria were based on agreement with this predicate device's results.
8. The Sample Size for the Training Set
The training set for the BrainView QEEG software's normative database consisted of:
- 2303 subjects for eyes closed EEG data.
- 1965 subjects for eyes open EEG data.
9. How the Ground Truth for the Training Set was Established
The ground truth for the training set (normative database) was established through "carefully constructed and statistically controlled age-regressed, normative database in which the variables have been transformed and validated for their Gaussian distribution." This implies the data was collected from a large, healthy population across a wide age range (4-85 years) and processed to establish statistical norms ("reference means and standard deviations"). While not explicitly detailing every step of "ground truth" establishment, this points to a robust statistical methodology based on data from a large population.
(90 days)
OLU
The BNA™ Platform is to be used by qualified medical professionals for the post-hoc statistical analysis of the human electroencephalogram ("EEG"), including event-related potentials ("ERPs").
This device is indicated for use in individuals 12 to 85 years of age.
The BNA™ Platform is to be used with the Auditory Oddball, Visual Go No-Go (age range of 25 to 85 years), and Eyes-Closed tasks.
The BNA™ Platform is intended for the post-hoc statistical analysis of the human electroencephalogram ("EEG"), utilizing both resting-state EEG and Event-Related Potentials ("ERP") in a patient's response to outside stimuli during various states of alertness, disease, diagnostic testing, treatment, surgery, or drug related dysfunction. An Event-Related Potential (or "evoked response") is an electrical potential recorded from the nervous system following the presentation of a stimulus (e.g., as part of a cognitive task). An ERP signal consists of typical ERP components - positive or negative voltage spatiotemporal peaks within the ERP waveform that are measured within one second post-stimulus presentation. The BNA™ Platform is intended to analyze EEG data recorded at rest and during the performance of two conventionally used ERP tasks, the Auditory Oddball (AOB) and the Visual Go No-Go (VGNG).
The EEG is recorded continuously while the patient is at rest with eyes-closed (hereby Eyes-Closed) or performs one of the ERP tasks (hereby ERP tasks). The acquisition site is asked to provide reliable samples of artifact-free digital EEG for purposes of analysis. After the recording, the artifact-free EEG data is imported into the BNA™ Platform and is automatically analyzed by the algorithm and the results of the processed data are compiled into individualized Reports:
- ERP Report
- Behavioral Report
- Summary Report
- Resting-State EEG Report
Scores are presented as Z-Scores based on comparing the patient to an age-matched relevant reference group based on elminda's normative database. This presentation expresses the differences between the patient and the reference group.
The BNA™ Reports are intended to be used by clinicians to enable the evaluation of the patient's brain activity during a specific task compared to an age-range matched reference group.
The system consists of the following components: a computer environment; EEG data input software algorithms for BNA™ calculations; a report generator and a functionality for data transfer and storage.
The device processes and analyzes data received from a dedicated, commercially available, and FDA cleared EEG system, which complies with the BNA™ Platform specifications.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Success Rate) | Reported Device Performance | Discussion |
---|---|---|---|
Normality of EEG Scores | Pass at least 2 of 4 normality tests (p-value > 0.05) | All Resting-EEG and ERP scores pass "two out of four" method tests with a success rate > 97% for Resting-EEG and > 98% for ERP. | The study aimed to demonstrate that the expanded age ranges and tasks maintain the statistical properties of the original predicate device. A success rate above 97% and 98% for ERP scores indicates high compliance with the normal distribution assumption, allowing for reliable Z-score interpretation. This is explicitly stated to be "in accordance with the success rates presented in the predicate device statistical performance and, from a clinical perspective, allow for an accurate clinical interpretation of z-scores." |
Gaussian Leave-One-Out Sensitivity Test (for Resting-EEG) | Success rate > 97.5% | Resting-EEG scores pass with a success rate > 97.5%. | This additional test for Resting-EEG further validates the robustness of the normality assumption, with the reported success rate meeting the criterion. The document notes this is an "acceptable percentage of failures, given the large number of scores tested." |
Poolability of Reference Group Data | Significant age-effect (p |
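A "two out of four" normality check of the kind described in the table above can be sketched as below. The summary does not name the four tests used; Shapiro-Wilk, D'Agostino-Pearson, Jarque-Bera, and Kolmogorov-Smirnov are common stand-ins:

```python
import numpy as np
from scipy import stats

def passes_two_of_four(x, alpha=0.05):
    """Return True if at least two of four normality tests fail to reject
    normality (p > alpha) for the score sample x."""
    z = (x - x.mean()) / x.std(ddof=1)
    pvals = [
        stats.shapiro(x)[1],         # Shapiro-Wilk
        stats.normaltest(x)[1],      # D'Agostino-Pearson
        stats.jarque_bera(x)[1],     # Jarque-Bera
        stats.kstest(z, "norm")[1],  # Kolmogorov-Smirnov vs. standard normal
    ]
    return sum(p > alpha for p in pvals) >= 2
```

The per-score pass/fail decisions would then be aggregated into the success rates (> 97% / > 98%) quoted above.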
(240 days)
OLU
The system is intended to statistically evaluate brain activity reflected in a broad band of high-gamma frequencies in the human electroencephalogram (EEG). These measures should always be interpreted in conjunction with review of the original EEG waveform.
cortiQ PRO is intended for the evaluation of intracranial EEG recorded with the g.HIamp.
cortiQ PRO is a system that uses the g.HIamp to map high-gamma broad-band brain activity while running an experimental paradigm. The software helps to identify electrode positions coding differences in brain activity by means of an experimental paradigm. cortiQ PRO performs the signal analysis in real time and compares the high-gamma broad-band activity during specific tasks. It then performs a statistical analysis and visualizes the electrodes coding statistically significant information. It is abstracted from technical details of data acquisition, channel order, and signal processing, assuring robust and efficient measurements.
cortiQ PRO reads the digital data from the g.HIamp amplification system (1200 Hz sampling frequency, up to 256 channels) via USB into the processing computer. The data is acquired without bandpass or notch filtering and without bipolar derivation. The software allows the user to select the channels to be acquired and stores the raw data together with header information for later off-line analysis.
Raw data is visualized on a raw data scope to inspect the data quality. The scope allows scaling of the data in amplitude and time. Furthermore the software allows scaling of all the channels to the same amplitude to make the interpretation easier. In the scope, it is possible to select a new ground and reference channel and to exclude a channel from the processing (if the data quality is bad). The raw data scope filters the data with a high-pass filter to remove DC-offsets for optimal visualization.
cortiQ PRO allows the operator to select an experimental paradigm that instructs the patient to perform certain tasks. The instructions are presented on a patient computer screen or are given via a speaker. The user can select, start and terminate the experimental paradigm. Additionally, the number of repetitions can be selected. A dedicated paradigm editor creates new paradigm files or modifies existing paradigms.
The rapid cortical mapping functions perform a common average reference (CAR) of all the active channels to remove common-mode signals such as power-line interference. The module then calculates the high-gamma activity in certain frequency ranges for the different tasks and compares the high-gamma activity across tasks according to the selected paradigm. A statistical analysis is then performed, and significant activation is plotted as a bubble at the defined electrode position in order to identify important regions. When the mapping has ended, cortiQ PRO automatically generates a mapping report containing the montage definition, the paradigm definition, and the mapping results. This report is stored as a PDF and can be printed. The mapping result is also stored for later analysis.
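The mapping pipeline just described — common average reference, high-gamma band power, per-channel statistical comparison of task vs. rest — can be sketched as follows. Filter order, band edges, and the Welch t-test are illustrative choices, not g.tec's documented implementation:

```python
import numpy as np
from scipy import signal, stats

def map_high_gamma(eeg, fs, task_mask, rest_mask, band=(70, 150), alpha=0.05):
    """Flag channels whose high-gamma power is significantly higher during
    task samples than during rest samples.

    eeg       : (n_channels, n_samples) raw recording
    task_mask : boolean mask over samples belonging to the task condition
    rest_mask : boolean mask over samples belonging to the rest condition
    """
    car = eeg - eeg.mean(axis=0, keepdims=True)        # common average reference
    b, a = signal.butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], "bandpass")
    hg = signal.filtfilt(b, a, car, axis=1) ** 2       # instantaneous high-gamma power
    significant = []
    for ch in range(hg.shape[0]):
        _, p = stats.ttest_ind(hg[ch, task_mask], hg[ch, rest_mask], equal_var=False)
        significant.append(p < alpha and
                           hg[ch, task_mask].mean() > hg[ch, rest_mask].mean())
    return np.array(significant)
```

Each flagged channel would then be drawn as a bubble at its electrode position on the montage.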
cortiQ PRO allows the operator to define montage definition files in the montage creator by loading predefined electrode grids from different manufacturers. Each grid has a certain number of channels. The montage creator allows the operator to assign a patient's name and date of birth, and a montage name, to each file, and to assign a grid name to each electrode grid. The grids can be placed on different background images to make location interpretation easier. Electrodes from a grid can be disabled, or used as reference or ground electrodes. The grids can be resized or rotated. The montage creator also assigns the electrode grids automatically to the amplifier input channels and creates a report with the channel definition. The results can be stored to be modified later. The report is stored as a PDF and can be printed.
cortiQ PRO comes with an installer that installs the software under Windows. A hardlock is required to start the mapping software.
The mapping system comes with Instructions for use and a training program.
The provided 510(k) summary for the cortiQ PRO device does not contain a detailed study with specific acceptance criteria and reported device performance metrics in a tabular format as typically found in comprehensive clinical or performance studies. Instead, it offers a general statement about performance testing.
Here's an attempt to extract and present the information based on the available text:
Acceptance Criteria and Device Performance
The documentation states: "The testing showed that the difference in gamma activity can be correctly mapped to correct electrode channels." This is the primary functional performance claim related to the core function of the device in evaluating high-gamma brain activity. However, specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy, precision, or minimum R2 values) are not explicitly provided in the given text, nor are specific reported device performance metrics in a tabular format.
Table 1: Acceptance Criteria and Reported Device Performance (as inferred)
Acceptance Criteria (Inferred from Claims) | Reported Device Performance |
---|---|
Ability to correctly map differences in high-gamma brain activity to correct electrode channels (based on task-related differences for real ECoG data and amplitude differences for artificial test data during "pause" vs. "action" intervals). | "The testing showed that the difference in gamma activity can be correctly mapped to correct electrode channels." |
The system cortiQ PRO works like the predicate devices (implied equivalence in safety and effectiveness regarding signal acquisition and processing capabilities, particularly compared to NeuroGuide Analysis System for statistical evaluation and g.Hlamp for signal acquisition, as detailed in Tables II and III). | "The testing showed that the system cortiQ PRO works like the predicate devices." |
Usability requirements are met at an acceptable risk level by the intended user group. | "The results of the usability testing demonstrate that the cortiQ PRO system (including the control software) meets all specified usability requirements at an acceptable risk level." |
Compliance with safety standards (IEC60601-1, IEC60601-1-2, IEC60601-2-26, ISO 14971, IEC 62304, IEC 62366) and medical safety features (medical grade power supply, isolated inputs/outputs, isolated applied parts). | "In cortiQ PRO the medical safety is realized by using the g.Hlamp which is powered by a medical grade power supply unit and provides isolated input and outputs for communication as well as appropriate isolated applied parts for the treatment." |
Study Details
The provided text describes performance testing, but not a formal clinical study with detailed methodology typically associated with "acceptance criteria" and "device performance" in a quantitative sense.
- Sample size used for the test set and the data provenance:
- Test Set: "several real electrocorticographic (ECoG) and artificial test signals." No specific number is given for "several."
- Data Provenance: Not specified, but "real electrocorticographic (ECoG) data" would likely be from human subjects, potentially clinical. "Artificial test signals" are generated. Whether these ECoG data were prospective or retrospective is not stated. Country of origin is not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The ground truth for artificial signals is inherent in their design. For real ECoG data, how "task-related differences" were confirmed as ground truth is not detailed, nor is the involvement of experts in establishing this ground truth.
- Adjudication method for the test set:
- This information is not provided. Given the nature of the testing described (mapping gamma activity), it's unclear if an adjudication method between multiple readers/interpreters was necessary or employed.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- A multi-reader multi-case (MRMC) comparative effectiveness study is not mentioned in the document. The cortiQ PRO is described as a system for statistical evaluation and visualization of brain activity, intended to be interpreted in conjunction with review of the original EEG waveform and dependent upon the judgement of the clinician. This implies human-in-the-loop, but without a formal MRMC study evaluating improvement with AI assistance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- The description of the testing ("The testing showed that the difference in gamma activity can be correctly mapped...") suggests performance validation of the algorithm's ability to identify and map differences in gamma activity. This can be interpreted as a form of standalone performance evaluation for its core function. However, the device's indications for use emphasize that measures "should always be interpreted in conjunction with review of the original EEG waveform," implying it is not intended for fully standalone diagnostic interpretation.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For artificial test signals: The ground truth is the inherent design of the signals, where "lower amplitude in the task baseline interval" and "higher amplitude in the action interval" on specific channels define the expected difference.
- For real electrocorticographic (ECoG) data: The ground truth relies on "task-related differences in the high-gamma frequency band." How these differences are definitively established as "ground truth" (e.g., through other modalities, expert consensus, surgical outcomes) is not explicitly stated.
- The sample size for the training set:
- This information is not provided. The document describes performance testing but does not detail the development or training of the algorithm that identifies "high-gamma broad band brain activity."
- How the ground truth for the training set was established:
- This information is not provided as details about a training set are absent.
Ask a specific question about this device
(412 days)
OLU
The qEEG-Pro System is to be used by qualified clinical professionals for the statistical evaluation of the human electroencephalogram (EEG).
qEEG Pro Database (QPD) is a software program for the post-hoc statistical analysis of the human electroencephalogram (EEG). EEG recorded on a separate device (i.e., the host system) is transferred to the QPD for display and user-review. The device herein described consists of a set of tables that represent the reference means and standard deviations for representative samples. These tables are implemented as computer files that provide access to the exact tabular data resource for use by software that uses the tables as an information resource. The system requires that the user select reliable samples of artifact-free, eyes-closed or eyes open, resting digital EEG for purposes of analysis.
Analysis consists of the Fast-Fourier Transformation (FFT) of the data to extract the spectral power for each of the designated frequency bands (e.g. delta, theta, alpha, and beta), and frequency information from the EEG. The results of this analysis are then displayed in statistical tables and topographical brain maps of absolute and relative power, power asymmetry, and coherence for 19 monopolar and 171 selected bipolar derivations of the EEG. In all over 5,000 measures are derived for comparison against carefully constructed and statistically controlled age-regressed, normative database in which the variables have been transformed and validated for their Gaussian distribution.
Each variable extracted by the analysis is compared to the database using parametric statistical procedures that express the differences between the patient and an appropriate age-matched reference group in the form of z-scores.
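As a rough sketch of that pipeline (FFT band power followed by z-scoring against normative means and standard deviations): the band edges and normative values below are assumed for illustration and are not taken from the qEEG-Pro database.

```python
import numpy as np

# Classical band edges in Hz (assumed; exact edges vary by vendor).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute spectral power per classical band for one EEG channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def z_scores(powers, norm_mean, norm_sd):
    """Express each measure as a z-score against age-matched reference values."""
    return {b: (powers[b] - norm_mean[b]) / norm_sd[b] for b in powers}

fs = 256
t = np.arange(fs * 4) / fs
eeg = np.sin(2 * np.pi * 10 * t)   # strong 10 Hz (alpha) rhythm
p = band_powers(eeg, fs)

# Hypothetical age-matched normative means/SDs (placeholders only):
norm_mean = {b: 1.0 for b in BANDS}
norm_sd = {b: 0.5 for b in BANDS}
z = z_scores(p, norm_mean, norm_sd)
```

In the real system the normative mean and standard deviation for each of the 5,000+ measures would be looked up from the age-regressed database rather than fixed constants.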
Here's an analysis of the acceptance criteria and study detailed in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
qEEG-Pro produces results sufficiently in agreement with the predicate devices. | The study concludes that the subject device's output was "similar" to the predicate device's output. |
The R-squared factor shall be 0.8 or better when comparing the subject device to the predicate device. | Explicit R-squared values are not provided in the text; however, the study concludes that the pre-defined acceptance criteria were met. This implies the R-squared of 0.8 or better was achieved. |
The observed range of results obtained from the predicate devices shall be used to verify that the qEEG-Pro produces results in agreement with the results obtained from the predicate device. | The study states that computing values for a range of discrete ages and comparing them to the predicate device's output showed them to be "similar," thus verifying agreement with the predicate's observed range of results. |
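The R-squared agreement criterion from the table can be checked with a small helper. The z-score values below are hypothetical, and the formula is the standard coefficient of determination, not necessarily the exact statistic used in qEEG-Pro's validation:

```python
import numpy as np

def r_squared(predicate, subject):
    """Coefficient of determination of subject-device outputs against the
    predicate's outputs, used here as an agreement criterion
    (acceptance threshold: R-squared >= 0.8)."""
    predicate = np.asarray(predicate, dtype=float)
    subject = np.asarray(subject, dtype=float)
    ss_res = np.sum((predicate - subject) ** 2)
    ss_tot = np.sum((predicate - predicate.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical absolute-power z-scores for the same EEG from both devices:
predicate_z = [0.1, -1.2, 2.3, 0.8, -0.4]
subject_z = [0.2, -1.1, 2.2, 0.9, -0.5]
ok = r_squared(predicate_z, subject_z) >= 0.8
```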
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 3 subjects (1 pediatric, 2 adult).
- Data Provenance: The text does not specify the country of origin of the data. It appears to be a prospective study as subjects' EEG recordings were specifically obtained for this validation.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
The text does not explicitly mention any experts being used to establish the ground truth for the test set in the clinical study described. The validation appears to be a direct comparison of the qEEG-Pro's output against the predicate device, K041263, which itself contains a normative database. The "ground truth" in this context is essentially the established output and normative data of the predicate device.
4. Adjudication Method for the Test Set
The text does not describe any adjudication method like 2+1, 3+1, or similar. The validation focuses on comparing the numerical outputs (z-scores for absolute power) of the subject device (qEEG-Pro) against the predicate device (K041263).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a technical validation comparing the output of the qEEG-Pro software to a predicate device, not a study involving human readers' performance with or without AI assistance.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation was Done
Yes, a standalone study was done. The clinical testing involved comparing the algorithm's output (qEEG-Pro) directly against the predicate device's algorithm output. The "Potential adverse effects" section also highlights the risks if qEEG-Pro is used "as a standalone diagnostic system in the absence of other clinical data from more traditional means of patient evaluation," indicating its standalone capability.
7. The Type of Ground Truth Used
The ground truth used in the clinical testing was the output of a legally marketed predicate device (NeuroGuide Analysis System (NAS), K041263). The comparison was based on z-scores for absolute power calculated for the same EEG data by both the subject and predicate devices.
8. The Sample Size for the Training Set
- qEEG-Pro Normative Database: 1482 samples (eyes-closed); 1231 subjects (eyes-open)
- Predicate Device (NAS) Normative Database: 625 samples
The text specifies these as the sample sizes for the normative databases used by the devices, which serve as their internal "training" or reference data for comparison.
9. How the Ground Truth for the Training Set was Established
The ground truth for the training sets (normative databases) for both the qEEG-Pro and the predicate device was established via "carefully constructed and statistically controlled age-regressed, normative database". These databases contain reference means and standard deviations for representative samples of EEG data. The variables in these databases have been "transformed and validated for their Gaussian distribution." This implies a process of collecting EEG data from a large, healthy population across various age ranges, processing it, and statistically characterizing it to form a reference against which individual patient EEGs are compared to generate z-scores.
Ask a specific question about this device
(834 days)
OLU
The BNA™ Analysis System is to be used by qualified medical professionals for the posthoc statistical analysis of the human electroencephalogram ("EEG"), utilizing evoked response potentials ("ERP"). This device is indicated for use in individuals 14 to 24 years of age. The BNA™ Analysis System is to be used with the auditory oddball task only.
The BNA Analysis System is an accessory to EEG. The BNA Analysis System is a software device that is used to analyze EEG-ERP data with regards to conventional, well established characteristics of amplitude and latency. Statistical analysis is performed to express the differences between the patient (individual) and a task-matched reference group in the indicated age group in the form of Z-scores. The BNA Analysis System report displays the test results in the following format; (1) Test and Patient Information; (2) ERP waveforms; (3) Summarized patient results - Z-Score Tables, Z-Score Maps and BNA Composite Scores. BNA Composite Scores are a calculation of the global comparison of the individual to the normative group (RBNM) for the following well-established EEG-ERP components: amplitude and absolute time. These calculations are a measure of the overall similarity of the single subject EEG-ERP components to the EEG-ERP components of the normative group (RBNM) based on Z-scores. The BNA scores should not be used as stand-alone information: rather, such scores should complement all of the information included in the report, as well as the clinical evaluation.
Here's a breakdown of the acceptance criteria and study details for the ElMindA Ltd.'s BNA™ Analysis System, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance:
The document focuses on the repeatability of BNA scores as a key performance metric. The acceptance criteria aren't explicitly stated as numerical targets in the same way clinical accuracy metrics (like sensitivity/specificity) often are. Instead, the study aims to demonstrate that the device produces consistent results across identical test sessions. The "acceptance criteria" here implicitly refer to demonstrating acceptable agreement between repeated measurements, as evaluated by Bland-Altman analysis.
Acceptance Criteria (Implied) | Reported Device Performance (Bland-Altman 95% Limits of Agreement) |
---|---|
Demonstrate acceptable repeatability of BNA scores between two identical test sessions within 7 (±3) days. | Frequent Stimulus: Amplitude -45.60 to 53.79, Absolute Time -43.16 to 49.20. Novel Stimulus: Amplitude -44.20 to 45.13, Absolute Time -47.85 to 58.74. Target Stimulus: Amplitude -48.71 to 37.41, Absolute Time -53.91 to 40.36 |
Note on Interpretation: Bland-Altman limits of agreement quantify the range within which 95% of the differences between two measurements are expected to lie. Smaller limits indicate better agreement (repeatability). The document doesn't provide a specific numerical threshold for "acceptable" limits, relying on the overall demonstration of repeatability.
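Bland-Altman 95% limits of agreement, as used in the table above, are computed as the mean difference ± 1.96 standard deviations of the paired differences. A minimal sketch with hypothetical session scores (not data from the BNA study):

```python
import numpy as np

def bland_altman_limits(session1, session2):
    """95% limits of agreement between two repeated measurements:
    mean difference +/- 1.96 * sample SD of the differences."""
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width

# Hypothetical BNA amplitude scores from two sessions of the same subjects:
s1 = np.array([62.0, 55.0, 71.0, 48.0, 66.0])
s2 = np.array([60.0, 58.0, 69.0, 50.0, 64.0])
lo, hi = bland_altman_limits(s1, s2)  # narrower limits = better repeatability
```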
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: Not explicitly stated. The document mentions "clinical performance testing was conducted to assess the repeatability of the BNA scores," but doesn't give the number of subjects.
- Data Provenance: Not specified in terms of country of origin. The study was conducted as part of the regulatory submission for ElMindA Ltd., an Israeli company. The type of study was prospective, as it involved conducting two identical test sessions within a specific timeframe to assess repeatability.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
This information is not applicable and not provided in the document. The study was a repeatability study comparing the device's own scores across two sessions, not a study evaluating the device's diagnostic accuracy against an expert-established ground truth for a specific condition. The "ground truth" in this context is the subsequent measurement by the same device.
4. Adjudication Method for the Test Set:
This information is not applicable and not provided. As explained above, this was a repeatability study, not one requiring adjudication against a clinical ground truth.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
This information is not applicable and not provided. This document pertains to a standalone software device that performs post-hoc statistical analysis of EEG/ERP data. It is not an AI-assisted diagnostic tool designed to be used by human readers for interpretation, nor does it conduct comparative effectiveness studies with human readers. The BNA scores are intended to "complement all of the information included in the report, as well as the clinical evaluation," but the study is not designed to measure human reader improvement with the device.
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
Yes, a standalone performance assessment was done. The entire "Performance Data" section describes the assessment of the BNA Analysis System's repeatability, which is an intrinsic characteristic of the algorithm's output (BNA scores) when analyzing EEG-ERP data. The device itself performs the statistical analysis and generates the scores.
7. The Type of Ground Truth Used:
The "ground truth" in this repeatability study is the device's own measurement from the first test session. The study aims to see how repeatable the device's output (BNA scores) is when the same subject undergoes an identical test shortly after. It's a self-assessment of consistency, not a comparison to an external clinical or pathological ground truth.
8. The Sample Size for the Training Set:
This information is not provided in the document. The BNA Analysis System performs statistical analysis relative to a "task-matched reference group in the indicated age group." While this implies a normative dataset was used, the sample size for this reference group (which could be considered analogous to a training or normative dataset) is not detailed.
9. How the Ground Truth for the Training Set Was Established:
This information is not provided in the document. It mentions that BNA Composite Scores compare the individual to a "normative group (RBNM)." However, it does not explain how this normative group was established, including the criteria for inclusion, data acquisition methods, or any expert review process for defining the "ground truth" or normal range for these parameters.
Ask a specific question about this device
(232 days)
OLU
The HBIdb product is to be used by qualified medical professionals for the post-hoc statistical evaluation of the human electroencephalogram (EEG), utilizing evoked response potentials (ERP).
The Human Brain Index Software (HBIdb) product is a software program for the post-hoc statistical analysis and comparison with reference data (the Human Brain Index Reference Database (HBIRD)) of the human electroencephalogram (EEG), including spontaneous oscillations of the human brain potentials and event-related potentials (ERPs). The EEG is recorded on a separate device under the standardized HBIdb conditions and is transferred to the HBIdb in EDF+ format for:
- display,
- spectral analysis,
- analysis of event-related potentials,
- comparison of the gathered parameters against the Human Brain Index Reference Database (HBIRD), and
- compilation of a report.
Results of comparison of individual EEG/ERP parameters with the database are intended for use as an aid to diagnosis. No medication or treatment is applied based on this comparison alone. The results have to be considered only in conjunction with other clinical findings.
Here's an analysis of the provided text regarding the acceptance criteria and study for the HBIdb software:
Important Note: The provided document is a 510(k) summary for a medical device. This type of submission focuses on demonstrating substantial equivalence to existing, legally marketed predicate devices, rather than proving efficacy or meeting specific performance acceptance criteria through clinical trials. Therefore, the information typically found in a clinical study report regarding detailed acceptance criteria, effect sizes, and deep dives into ground truth establishment is largely absent. The "acceptance criteria" here implicitly refer to demonstrating equivalence in intended use, technology, and performance with predicate devices.
Acceptance Criteria and Reported Device Performance
Since this is a 510(k) summary, formal "acceptance criteria" in the sense of specific numerical thresholds for performance metrics are not explicitly stated. The comparison is made against predicate devices, focusing on equivalency of features and intended use. The "device performance" is therefore implicitly that it functions similarly to the predicate devices.
Feature / Criterion (Implicitly "Acceptance Criteria") | HBIdb Software Performance (Reported) |
---|---|
Intended Use | Used by qualified medical professionals for post-hoc statistical evaluation of human EEG, utilizing evoked response potentials (ERP). Intended for use on children and adults from age 7 to 80 years. |
EEG Data Comparison against Normative Database | Yes (Functions as intended to compare EEG data against a normative database). |
ERP/EP Data Comparison against Normative Database | Yes (Functions as intended to compare ERP/EP data against a normative database). |
Population | 7 to 80 years (Comparable to predicate devices: 6-90 years, Birth-82 years, Unknown, 6-90 years). |
Product Code | GWQ (Matches predicate devices). |
Classification | 882.1400 (Matches predicate devices). |
ICA based Artifact Correction | Yes (This is a distinguishing feature from predicates, indicating advanced capability, not a "performance" metric in this context). |
Decomposition of ERPs into Independent Components | Yes (This is a distinguishing feature from predicates, indicating advanced capability, not a "performance" metric in this context). |
Software Development and Validation Standards | Developed and validated according to IEC 62304:2007. Software documentation and testing provided in accordance with FDA guidance for "Content of Premarket Submissions for Software Contained in Medical Devices." All software functions tested using specific EEG test files. Verification and validation performed on Windows XP 32 bit, Vista 32 bit, Windows 7 32 and 64 bit. Security dongle validated with a special utility. |
Study Information:
- Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: The document does not specify a numerical sample size for a "test set" in terms of subject data. It states, "For verification and validation of the product, all software functions were tested by means of specific EEG test files." This implies a set of pre-defined, synthetic, or existing EEG data files were used to test the software's functionality, not necessarily a clinical dataset for performance evaluation.
- Data Provenance: Not specified for these "specific EEG test files." Given the manufacturer is German, the test environment and data could be internal or from Germany, but this is not directly stated. The data used for the Human Brain Index Reference Database (HBIRD) is not described here, only that the device compares against it.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not specified. The testing described focuses on software functionality (verification and validation against IEC and FDA guidance), not on diagnostic accuracy of the device's output against a clinical "ground truth" established by experts. The HBIdb product aids diagnosis but doesn't provide a definitive diagnosis on its own.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable/Not specified. Since the testing was for software functionality rather than clinical diagnostic performance adjudicated by experts, no such method is mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. An MRMC comparative effectiveness study was not performed or described in this 510(k) summary. This type of study demonstrates clinical utility with human readers, which is beyond the scope of this type of premarket submission which is focused on substantial equivalence.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, in a sense. The testing performed was "standalone" in that it validated the software's functions (e.g., display, spectral analysis, ERP analysis, comparison against the HBIRD, report compilation) using "specific EEG test files." This focused on the algorithm's correct execution and output formation, not on diagnostic accuracy by itself. The device is an "aid to diagnosis" and meant to be used by "qualified medical professionals," indicating that human interpretation is always in the loop for diagnosis.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the software's functional testing, the "ground truth" would be the expected output or behavior of the software for the "specific EEG test files" based on its design specifications and algorithms. This is a technical ground truth, not a clinical diagnostic ground truth. The HBIdb compares data against a Human Brain Index Reference Database (HBIRD), which is a form of ground truth (normative data), but the method of establishing that database's ground truth is not detailed in this document.
- The sample size for the training set:
- Not specified. The document does not mention a "training set" for the HBIdb software itself. This suggests the software is likely rule-based or statistical in nature, comparing inputs against a pre-established "reference database" (HBIRD), rather than a machine learning model that requires a distinct training phase with labeled data. The HBIRD itself would have been built from a large sample, but its size is not in this document.
- How the ground truth for the training set was established:
- Not applicable, as a distinct "training set" for a machine learning model is not described. The HBIdb compares data against the Human Brain Index Reference Database (HBIRD). How the "ground truth" for this normative database (HBIRD) was established (e.g., participant selection, EEG acquisition protocols, data processing, statistical methods for defining "normative ranges") is beyond the scope of this 510(k) summary and not provided.
Ask a specific question about this device
(266 days)
OLU
The BRC software product is to be used by qualified medical professionals for the posthoc statistical evaluation of the human electroencephalogram (EEG), utilizing evoked response potentials (ERP).
The BRC software product is composed of the following major components: the BRC Neuromarker Data Acquisition Software and the BRC Neuromarker Analysis Software. The Neuromarker Data Acquisition Software is used to collect the data gathered at BRC network laboratories; standardized data acquisition protocols and equipment are utilized at each network laboratory to ensure uniformity of collected data. The data is then transmitted to the BRC Central Analysis Facility, where the Neuromarker Analysis Software is used to process the gathered data against the Brain Resource International Database (BRID). The database currently contains data from approximately 2,400 normative (i.e., without ... alcohol abuse, or serious medical condition) participants. The results of the processed data are compiled into an individualized report called the NeuroMarker Report.
The BRC Cognition Acquisition Software is one component of the BRC NeuroMarker product. The Cognition Acquisition Software is loaded on a computerized touchscreen system and used to gather cognitive patient performance information. This data is transmitted to the BRC Central Analysis Facility for processing and formatting into report form (the IntegNeuro Report).
Here's an analysis of the provided text regarding acceptance criteria and the study performed, structured according to your request:
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, no specific acceptance criteria or quantitative performance metrics are mentioned for the BRC Software Product. The document primarily focuses on establishing substantial equivalence to predicate devices, outlining the device description, intended use, and technological characteristics.
Therefore, the table would look like this:
Acceptance Criteria | Reported Device Performance |
---|---|
Not specified | Not specified |
2. Sample Size Used for the Test Set and Data Provenance
The document mentions that the BRC Neuromarker Analysis Software processes data against the Brain Resource International Database (BRID), which "currently contains data from approximately 2,400 normative (i.e., without ... alcohol abuse, or serious medical condition) participants."
- Sample Size for Test Set: Although the document refers to this database as a "normative" one against which processed data is compared, it is not explicitly stated as a "test set" in the context of validating the device's performance against specific acceptance criteria. The 2,400 participants are described as providing a normative database for comparison, not as a dedicated test set for performance evaluation in the typical sense of a validation study.
- Data Provenance: The document does not explicitly state the country of origin of the data. However, the manufacturer is BRC Operations Pty. Ltd, located in Sydney, NSW, Australia, which might suggest data could originate from Australia or be internationally sourced for the "Brain Resource International Database." The data is described as being "collected at each network laboratory."
- Retrospective or Prospective: The document does not specify whether the data in the BRID is retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not provide any information about experts being used to establish a "ground truth" for a test set. The 2,400 participants are described as "normative" individuals without specific conditions, suggesting their data serves as a baseline for comparison rather than requiring expert labeling for a specific outcome.
4. Adjudication Method for the Test Set
No information is provided regarding an adjudication method for a test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss human reader improvement with or without AI assistance. The device is a "software product" for statistical evaluation of EEG/ERP, and the submission is focused on establishing substantial equivalence to predicate devices that are also software products for similar analysis.
6. Standalone (Algorithm Only) Performance Study
The document describes the BRC software product as performing "post-hoc statistical evaluation" of EEG/ERP data by comparing it against a normative database. This implicitly describes the algorithm's standalone function (processing data and generating a report). However, the document does not report specific performance metrics from a standalone study, such as sensitivity, specificity, accuracy, or AUC. The focus is on functional equivalence and comparison against normative data rather than a detailed performance validation against an adjudicated ground truth in a standalone study.
7. Type of Ground Truth Used
The "ground truth" for the comparison performed by the device is essentially the normative data from the Brain Resource International Database (BRID). This database contains data from "approximately 2,400 normative (i.e., without ... alcohol abuse, or serious medical condition) participants." The device compares incoming patient data against this established normative range. This is akin to using a healthy control group's data as a baseline for statistical comparison. It is not expert consensus, pathology, or outcomes data in the sense of labeling specific conditions.
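This kind of normative comparison can be sketched as a simple Z-score computation. The numbers below are hypothetical illustrations; the actual BRID normative statistics are not published in the document.

```python
def normative_z_score(patient_value, norm_mean, norm_sd):
    """Express a patient's QEEG feature as a Z-score against a
    normative (healthy-control) reference distribution."""
    return (patient_value - norm_mean) / norm_sd

# Hypothetical example: alpha-band absolute power at one electrode,
# with an age-matched normative mean of 12.0 uV^2 and SD of 3.0 uV^2.
z = normative_z_score(patient_value=18.0, norm_mean=12.0, norm_sd=3.0)
print(z)  # 2.0 -> two standard deviations above the normative mean
```

In this framing, the "ground truth" is simply the distribution of the healthy reference group, not an expert-adjudicated label per case.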
8. Sample Size for the Training Set
The document mentions the Brain Resource International Database (BRID) is used for comparison and contains data from approximately 2,400 normative participants. While this database serves as a reference for the software's analysis, the document does not explicitly describe a separate "training set" in the context of machine learning model development. For statistical comparison against normative data, the entire normative database might be considered the "reference" or "training" data in a broader sense, but not a distinct training set for a classified model.
9. How the Ground Truth for the Training Set Was Established
For the normative database (BRID) of 2,400 participants, the "ground truth" or defining characteristic is that these participants are "without ... alcohol abuse, or serious medical condition." This suggests that participants were screened and confirmed to be "normative" based on their medical history and absence of specific conditions. The specific methods for establishing this "normative" status (e.g., clinical examination, diagnostic tests, self-reporting) are not detailed in the provided text.
(84 days)
OLU
For clinical use the NeuroGuide Analysis system is to be used by qualified medical and qualified clinical professionals for the post-hoc statistical evaluation of the human electroencephalogram (EEG).
The NeuroGuide Analysis System (NAS) is a software program for the post-hoc statistical analysis of the human electroencephalogram (EEG). EEG recorded on a separate device (i.e., the host system) is transferred to the NAS for display and user-review. The system requires that the user select reliable samples of artifact-free, eyes-closed or eyes-open, resting digital EEG for purposes of analysis. Analysis consists of the Fast-Fourier Transformation (FFT) of the data to extract the spectral power for each of the four primary frequency bands (delta, theta, alpha, and beta), and frequency information from the EEG. The results of this analysis are then subjected to univariate, bivariate, and multivariate statistical analyses and displayed in statistical tables and topographical brain maps of absolute and relative power, power asymmetry, and coherence for 19 monopolar and 171 selected bipolar derivations of the EEG. In all, over 1,200 measures are derived for comparison against a carefully constructed and statistically controlled age-regressed, normative database in which the variables have been transformed and confirmed for their Gaussian distribution. Each variable extracted by the analysis is compared to the database using parametric statistical procedures that express the differences between the patient and an appropriate age-matched reference group in the form of Z-scores. Multivariate features are compared to the normative database using Gaussian univariate and multivariate distance statistics. The Gaussian multivariate distance statistic controls for the interrelationship of the measures of brain cortical function in the feature set, and provides an accurate estimate of their difference from normal. The multivariate measures permit an evaluation of regional indices of brain function that reflect the perfusion fields of the brain.
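The FFT-based extraction of band power described above can be sketched as follows; the sampling rate, band boundaries, and synthetic test signal are illustrative assumptions, not values taken from the NAS itself.

```python
import numpy as np

FS = 256  # sampling rate in Hz (an assumption; not specified in the document)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Absolute spectral power per frequency band for one artifact-free
    epoch of a single channel, via an FFT periodogram estimate."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 2-second epoch dominated by a 10 Hz (alpha-band) rhythm.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1.0 / FS)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
powers = band_powers(epoch)
print(max(powers, key=powers.get))  # "alpha"
```

The real system derives over 1,200 such measures across monopolar and bipolar derivations and then converts each to a Z-score against the age-regressed normative database.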
Extracted feature sets are further analyzed to determine if the pattern of 'hits' (statistically significant feature score values identified for the patient) is consistent with patterns of 'hits' identified in prior NeuroGuide evaluations of clinical patients with known disorders. A step-wise discriminant analysis program classifies the patient in terms of their similarity to known NeuroGuide-defined patterns of abnormality, providing a probability estimate comparing the patient's profile with the average profile of groups of individuals constituting the normative and clinical database. The discriminant classification program is restricted by confining potential outcomes to specific patient symptoms derived from the patient history profile. Established discriminant functions were evaluated through the use of Receiver Operating Characteristic (ROC) curves for their sensitivity and specificity. The outcome of the statistical analysis is presented in report form that includes (a) patient demographic and history information, (b) selected EEG epochs, (c) statistical tables of monopolar, bipolar, and multivariate extracted feature values, and (d) topographical brain maps. This information is to be read and interpreted within the context of the current clinical assessment of the patient by the attending physician/clinician. The decision to accept or reject the results of the NeuroGuide analysis, and incorporate these results into their clinical appraisal of the patient, is dependent upon the judgment of the attending physician or clinician.
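The discriminant classification and ROC evaluation described above can be illustrated with a minimal Fisher linear discriminant and a rank-based AUC computation; the feature vectors and group labels below are synthetic stand-ins, not data from the ANL normative or clinical databases.

```python
import numpy as np

def fisher_direction(X, y):
    """Two-class Fisher linear discriminant: the projection direction that
    maximizes between-class separation over within-class scatter."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T)  # pooled within-class scatter
    return np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

def roc_auc(scores, y):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n1 = int(y.sum())
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Synthetic stand-in data: "normative" vs. "clinical" feature vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(1.0, 1.0, (50, 4))])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])
scores = X @ fisher_direction(X, y)
print(round(roc_auc(scores, y), 2))  # well above 0.5 for separated groups
```

An AUC near 1.0 would indicate a discriminant function that cleanly separates the groups; the document reports that ROC curves were used for exactly this kind of sensitivity/specificity evaluation, but does not publish the resulting values.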
The NeuroGuide Analysis System (NAS) is a software program designed for the post-hoc statistical analysis of the human electroencephalogram (EEG). It extracts spectral power, frequency information, and performs statistical analyses, comparing results against a normative database. The system is intended to be used by qualified medical and clinical professionals as an adjunct to traditional visually-appraised EEG.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria (Stated Goal of Study) | Reported Device Performance |
---|---|
Non-Clinical Testing: Algorithms and statistical methods used for data analysis must be accurate. Conformity between host digital EEG system and NAS for frequency and power analysis. Reproduce sampling frequency of host system and visualize/evaluate EEG waveform accuracy between host system and NAS translation. Consistency and accuracy of NAS analysis of stored subject data with prior analyses (using same methods). | Control signals (generated waveforms) were analyzed for frequency and power, confirming accuracy. EEG signals were analyzed for conformity between the host digital EEG system and the NAS. The NAS includes a feature to reproduce the host system's sampling frequency and visualize/evaluate EEG waveform accuracy. Data from previous NeuroGuide analysis implementations were evaluated for consistency and accuracy, and the NAS's analysis of stored subject data conformed to these prior analyses, indicating reproducibility. |
Clinical Testing: Accurate translation and presentation of EEGs from clinical patients. Agreement between NAS analysis results (statistical tables and topographical brain maps) and results from the host system used at the Applied Neuroscience Laboratory (ANL). Discriminant analysis outcome on NAS must be consistent with the host system, without errors of misclassification. Reproducibility of results within acceptable variation, consistent with reliability estimates from normative studies, when analyzing eyes-closed resting, artifact-free EEG. | Non-clinical testing confirmed the ability of the NAS to accurately translate and present EEGs. The results of the analysis (statistical tables and topographical brain maps) were in agreement with the results of the analysis conducted on the host system used for processing patient information at ANL. The outcome of the discriminant analysis was consistent, not resulting in errors of misclassification, aligning with the host system's performance. These tests confirmed that when eyes-closed resting, and artifact-free EEG was selected for analysis, the results were reproducible within an acceptable degree of variation consistent with reliability estimates identified in the normative studies. |
Overall Conclusion (Safety & Effectiveness): Safe and effective for the quantitative analysis of eyes-closed resting EEG in alert human subjects, and to help determine if EEG is normal or abnormal. If abnormal, to statistically characterize the distribution of selected derived features by their probability of being similarly distributed in specified groups of clinical patients. Provides information that complements and supplements traditional EEG analysis, when used as a safe and effective adjunctive aid to diagnosis, treatment planning, and follow-up. | The non-clinical and clinical testing conducted over 25 years demonstrates that the NAS is both safe and effective for the quantitative analysis of eyes-closed resting EEG in alert human subjects. It can help determine if the EEG is normal or abnormal, and if abnormal, statistically characterize feature distribution. This information complements and supplements traditional EEG analysis, and when used properly with other clinical tests, serves as a safe and effective adjunctive aid for diagnosis, treatment planning, and follow-up of neurologic and psychiatric patients. |
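The host-system/NAS conformity checks summarized in the table can be sketched as a numerical agreement test; the tolerance and example values are assumptions, since the document does not state the acceptable degree of variation numerically.

```python
import numpy as np

def analyses_agree(host_results, nas_results, rel_tol=1e-6):
    """Conformity check between two implementations of the same analysis:
    every derived measure (band power, coherence, Z-score, ...) from the
    translated system must match the host system's value to within a small
    numerical tolerance. The tolerance used here is an assumption."""
    return bool(np.allclose(host_results, nas_results, rtol=rel_tol))

host = np.array([12.4, 3.1, 0.87])   # e.g. a few derived QEEG measures
nas = host * (1 + 1e-9)              # tiny floating-point translation drift
print(analyses_agree(host, nas))     # True
```

A check of this shape would flag translation or misclassification errors as disagreements exceeding the tolerance, which is the failure mode the clinical testing was designed to rule out.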
2. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: The document states that subjects ranged in age from "2 months to 82 years". It does not provide a specific number for the test set sample size. It refers to "stored subject data" and "clinical patients".
- Data Provenance: The data used for both the normative and clinical databases were developed over a "25-year effort" at the Applied Neuroscience Laboratory (ANL) at the University of Maryland. These included "numerous government and privately funded normative and clinical database projects." Subjects were either volunteers or clinical patients referred to ANL by the Department of Psychiatry University of Maryland School of Medicine, and/or Shock Trauma and the Applied Neuroscience Institute at the University of Maryland Eastern Shore. This indicates a retrospective and prospective collection, likely primarily from the United States (University of Maryland).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- The document does not explicitly state the number of experts used to establish the ground truth for the test set.
- It mentions that "the results of the analysis were conveyed to the referring physician or Ph.D. clinician who was asked to use the information as an adjunct to their clinical interpretation of the patient's traditional EEG." This implies that the referring physicians/Ph.D. clinicians were involved in evaluating the NAS's output in comparison to their traditional EEG assessment. The qualifications of these individuals are stated as "referring physician or Ph.D. clinician."
4. Adjudication Method for the Test Set:
- The document describes the evaluation as a comparison where the results of the NAS analysis (statistical tables and topographical brain maps) "had to be in agreement with the results of the analysis conducted on the host system used in the processing of patient information at the Applied Neuroscience Laboratory." Additionally, the discriminant analysis outcome "had to be consistent, not resulting in errors of misclassification" with the host system.
- This suggests a comparison against a previously established "host system" output and clinical interpretation by referring physicians/Ph.D. clinicians. It does not explicitly mention a formal expert adjudication method like 2+1 or 3+1. The "consistency" with the host system and the clinical interpretation by referring physicians/Ph.D. clinicians served as the implicit "adjudication."
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No, an MRMC comparative effectiveness study was not explicitly mentioned. The study focuses on verifying the NAS's consistency with a prior "host system" and its adjunct utility for clinicians, rather than directly measuring human reader improvement with AI assistance. The device is explicitly stated to be an "adjunct" and not a standalone diagnostic system.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- No, a standalone performance study was not performed or intended. The device is explicitly developed and tested as a tool for "post-hoc statistical evaluation" to be used by "qualified medical and qualified clinical professionals" as an "adjunct to the traditional visually-appraised EEG." The application contraindicates using the NeuroGuide Analysis System as a stand-alone diagnostic system.
7. The Type of Ground Truth Used:
- The ground truth for the device's validation appears to be a combination of:
- Expert Consensus/Clinical Agreement: The "referring physician or Ph.D. clinician" determined the relevance of the NAS information to their "clinical evaluation and diagnosis or treatment." The NAS results had to be consistent with the "host system" analysis, which itself would have been based on established methods and clinical understanding.
- Normative and Clinical Database: The extensive 25-year effort at ANL built a "viable normative and clinical database." This database, based on a combination of healthy volunteers and patients with known disorders, served as the statistical ground truth against which individual patient data is compared to identify deviations.
- Internal Consistency: Non-clinical testing verified algorithmic accuracy and consistency with prior analyses conducted using the same methods on the "host system."
8. The Sample Size for the Training Set:
- The document states that the NAS's design was based on a "25-year effort to construct a viable normative and clinical database" at the Applied Neuroscience Laboratory. This database serves as the foundation for the algorithms and statistical comparisons within NAS. While specific numbers are not given for the training set per se, this "normative and clinical database" effectively acts as the large-scale training/reference data. It involved subjects ranging from "2 months to 82 years" and included "numerous government and privately funded normative and clinical database projects."
9. How the Ground Truth for the Training Set Was Established:
- The ground truth for the "normative and clinical database" (which serves as the basis for the device's analysis and "training") was established through:
- Extensive Data Collection: A 25-year effort involving data from volunteers and clinical patients.
- Clinical Diagnosis/Categorization: Clinical patients were "referred for NeuroGuide evaluation to the Applied Neuroscience Laboratory by the Department of Psychiatry University of Maryland School of Medicine, and/or Shock Trauma and the Applied Neuroscience Institute." This implies that the clinical categories of these patients (e.g., specific disorders) were established through traditional medical diagnostic processes by these departments.
- Careful Construction and Statistical Control: The database was "carefully constructed and statistically controlled" with age-regressed variables confirmed for Gaussian distribution. This indicates a robust statistical methodology in building the reference data for what constitutes "normal" and "abnormal" patterns based on known clinical groups.
- Prior NeuroGuide Evaluations: The "patterns of 'hits' identified in prior NeuroGuide evaluations of clinical patients with known disorders" were used to develop discriminant analysis capabilities, effectively using previously established clinical-EEG correlations as a form of ground truth for classification.