510(k) Data Aggregation
(166 days)
BrainScope TBI is a multi-modal, multi-parameter assessment indicated for use as an adjunct to standard clinical practice to aid in the evaluation of patients who have sustained a closed head injury, and have a Glasgow Coma Scale (GCS) score of 13-15 (including patients with concussion/mild traumatic brain injury (mTBI)).
BrainScope TBI provides a multi-parameter measure, the Concussion Index (CI), to aid in the evaluation of concussion in patients between the ages of 13-25 years who present with a GCS score of 15 following a head injury within the past 72 hours (3 days), in conjunction with a standard neurological assessment of concussion. The CI is computed from a multivariate algorithm based on the patient's electroencephalogram (EEG), augmented by neurocognitive measures and selected clinical symptoms.
The BrainScope TBI Structural Injury Classification ("SIC") uses brain electrical activity (EEG) to determine the likelihood of structural brain injury visible on head CT for patients between the ages of 18-85 years who have a GCS score of 13-15, have sustained a closed head injury within the past 72 hours (3 days), and are being considered for a head CT. BrainScope TBI should not be used as a substitute for a CT scan. A Negative result likely corresponds to those with no structural brain injury visible on head CT. A Positive result likely corresponds to those with a structural brain injury visible on head CT. An Equivocal result may correspond to structural brain injury visible on head CT or may indicate the need for further observation or evaluation.
BrainScope TBI provides a measure, the Brain Function Index (BFI), for the statistical evaluation of the human electroencephalogram (EEG), aiding in the evaluation of head injury as part of a multi-modal, multi-parameter assessment, in patients 18-85 years of age who have a GCS score of 13-15 and have sustained a closed head injury within the past 72 hours (3 days).
The BrainScope TBI device is intended to record, measure, analyze, and display brain electrical activity utilizing the calculation of standard quantitative EEG (QEEG) parameters from frontal locations on a patient's forehead. The BrainScope TBI calculates and displays raw measures for the following standard QEEG measures: Absolute and Relative Power, Asymmetry, Coherence and Fractal Dimension. These raw measures are intended to be used for post hoc analysis of EEG signals for interpretation by a qualified user.
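The document does not describe how these QEEG measures are computed, and BrainScope's actual signal-processing pipeline is proprietary. As a rough illustration of what absolute and relative power mean for a single frontal EEG channel, the sketch below computes band powers from a Welch periodogram. The band edges, sampling rate, and function names are assumptions for illustration only, not the device's parameters.

```python
# Illustrative only: absolute and relative EEG band power via a Welch
# periodogram. Band definitions and sampling rate are assumptions, not
# BrainScope's actual (proprietary) parameters.
import numpy as np
from scipy.signal import welch

FS = 100  # Hz, assumed sampling rate
BANDS = {"delta": (1.5, 3.5), "theta": (3.5, 7.5),
         "alpha": (7.5, 12.5), "beta": (12.5, 25.0)}  # assumed band edges

def band_powers(eeg: np.ndarray, fs: int = FS) -> dict:
    """Return absolute (uV^2) and relative band power for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second segments
    total_mask = (freqs >= 1.5) & (freqs <= 25.0)
    total = np.trapz(psd[total_mask], freqs[total_mask])
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        absolute = np.trapz(psd[mask], freqs[mask])
        out[name] = {"absolute": absolute, "relative": absolute / total}
    return out

# Example on 60 s of synthetic single-channel data
rng = np.random.default_rng(0)
print(band_powers(rng.standard_normal(60 * FS)))
```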
BrainScope TBI also provides clinicians with quantitative measures of cognitive performance in patients 13-85 years of age to aid in the assessment of an individual's level of cognitive function. These measures interact with the CI and can also be used standalone.
BrainScope TBI also stores and displays electronic versions of standardized clinical assessment tools that should be used in accordance with the assessment tools' general instructions. These tools do not interact with any other device measures and are standalone.
BrainScope TBI (model: Ahead 500) is a portable, non-invasive, non-radiation emitting, point of care device intended to provide results and measures to support clinical assessments and aid in the diagnosis of concussion / mild traumatic brain injury (mTBI). The BrainScope TBI includes a new multivariate classification algorithm that analyzes a patient's electroencephalogram (EEG), augmented by neurocognitive performance and selected clinical symptoms to compute a multi-modal index called the Concussion Index (CI). BrainScope TBI provides the healthcare provider with a multi-parameter measure to aid in the evaluation of concussion following a head injury within the past 72 hours (3 days). The BrainScope TBI (Ahead 500) retains all the capabilities of the predicate (BrainScope TBI, model: Ahead 400) including the Structural Injury Classification (SIC) and the Brain Function Index (BFI). It also contains configurable, selectable computerized cognitive performance tests and digitized standard clinical assessment tools intended to provide a multi-modal panel of measures to support the clinical assessment of concussion / mTBI.
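The Concussion Index is described only at this high level; the actual multivariate classifier is not disclosed. The sketch below shows one generic way such a multi-modal index could be formed, as a logistic combination of standardized EEG, neurocognitive, and symptom features scaled to 0-100. Every feature name, weight, and scaling choice here is hypothetical and is not BrainScope's algorithm.

```python
# Hypothetical illustration of a multi-modal index: a logistic combination of
# standardized EEG, cognitive, and symptom features. Features, weights, and
# the 0-100 scaling are invented; this is NOT the actual Concussion Index.
import math

def concussion_index(features: dict, weights: dict, bias: float = 0.0) -> float:
    """Map weighted multi-modal features through a logistic function to 0-100."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 100.0 / (1.0 + math.exp(-z))

# Example with made-up, pre-standardized feature values and weights.
features = {
    "eeg_theta_relative_power": 1.2,   # z-scored EEG feature
    "eeg_frontal_coherence": -0.4,
    "reaction_time_z": 0.9,            # neurocognitive measure
    "symptom_severity_z": 1.5,         # selected clinical symptoms
}
weights = {
    "eeg_theta_relative_power": 0.8,
    "eeg_frontal_coherence": -0.5,
    "reaction_time_z": 0.6,
    "symptom_severity_z": 0.7,
}
print(round(concussion_index(features, weights, bias=-1.0), 1))
```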
Here's a breakdown of the acceptance criteria and the study details for the BrainScope TBI (model: Ahead 500) device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Performance Goal) | Reported Device Performance (95% CI) |
|---|---|---|
| Sensitivity | 0.69 | 0.8599 (0.8050, 0.9041) |
| Specificity | 0.565 | 0.7078 (0.6588, 0.7535) |
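The document does not state the interval method or the formal decision rule, but the typical acceptance check for endpoints like these is that the lower bound of the two-sided 95% CI exceeds the performance goal, which holds for both rows above (0.8050 > 0.69 and 0.6588 > 0.565). A minimal sketch of that check, assuming a Wilson score interval and hypothetical counts chosen only to roughly match the reported estimates:

```python
# Sketch of the usual acceptance check for a diagnostic endpoint: the lower
# bound of the 95% CI for sensitivity/specificity must exceed the performance
# goal. The interval method (Wilson score) and the counts below are
# assumptions; the document reports only the estimates and their CIs.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def meets_goal(successes: int, n: int, goal: float) -> bool:
    """Acceptance check: lower 95% confidence bound above the performance goal."""
    lower, _ = wilson_ci(successes, n)
    return lower > goal

# Hypothetical counts chosen only to approximate the reported point estimates.
print(meets_goal(178, 207, goal=0.69))   # sensitivity check -> True
print(meets_goal(264, 373, goal=0.565))  # specificity check -> True
```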
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 580 subjects
- 229 matched controls
- 144 healthy volunteers
- 207 subjects who sustained closed head injury and were removed from play
- Data Provenance: The study was conducted across 10 US clinical sites, including High Schools, Colleges, and Concussion Clinics. The study design appears to be prospective, given it involved testing subjects at different time points and with specific inclusion/exclusion criteria.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used or their specific qualifications (e.g., "Radiologist with 10 years of experience") for establishing the ground truth.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method used for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, the document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to assess how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the Concussion Index (CI) algorithm.
6. Standalone (Algorithm Only) Performance Study
Yes, a standalone performance study was done for the Concussion Index (CI). The reported sensitivity and specificity values are for the algorithm's performance in classifying concussions.
7. Type of Ground Truth Used
The clinical reference standard (ground truth) incorporated elements from guidelines published in the International Conference on Concussion in Sport (McCrory 2017; 2013) as well as the National Collegiate Athletic Association (NCAA) concussion policy. This suggests a clinical diagnosis/consensus-based ground truth, likely established by clinicians based on established guidelines and possibly direct observations or outcomes related to concussion (e.g., "removed from play"). It's not explicitly stated to be solely pathology or patient outcomes data, but rather a combination of clinical criteria.
8. Sample Size for the Training Set
The document states that the "cutoff (threshold) CI [was] derived from an algorithm development study that was independent of the validation study," but it does not provide the sample size for this algorithm development (training) study.
9. How the Ground Truth for the Training Set Was Established
The document implies that the ground truth for the "algorithm development study" (training set) would have been established using similar clinical criteria as the validation study, i.e., "consistent with similar changes seen in subjects with concussion," incorporating elements from the International Conference on Concussion in Sport guidelines and NCAA concussion policy. However, it does not explicitly detail the process for establishing ground truth for the training set.
(144 days)
• The Ahead® 200, consisting of two models, i.e., the Ahead® M-200 and the Ahead® CV-200, is indicated for use as an adjunct to standard clinical practice to aid in the evaluation of patients who are being considered for a head CT, but should not be used as a substitute for a CT scan. This device is to be used for this purpose in patients who sustained a closed head injury within 24 hours, clinically present as a mild traumatic brain injury with a Glasgow Coma Scale score (GCS) of 13-15, and are between the ages of 18-80 years.
· A negative BrainScope® Classification may correspond to brain electrical activity consistent with no structural brain injury visible on head CT in patients presenting as a mild traumatic brain injury, within 24 hours of injury.
· A positive BrainScope® Classification corresponds to brain electrical activity that may be present in both patients with or without a structural brain injury visible on head CT. A positive BrainScope® Classification does not establish the presence of a structural brain injury visible on head CT.
· The Ahead® 200 device is intended to record, measure, and display brain electrical activity utilizing the calculation of standard quantitative EEG (qEEG) parameters from frontal locations on a patient's forehead. The Ahead® 200 calculates and displays raw measures for the following standard qEEG measures: Absolute and Relative Power, Asymmetry, Coherence and Fractal Dimension. These raw measures are intended to be used for post hoc analysis of EEG signals for interpretation by a qualified user.
• The Ahead® M-200 model additionally stores and displays an electronic version of the Military Acute Concussion Evaluation (MACE) cognitive assessment and user-entered responses to the MACE questions. There is no interaction between EEG-related functionality, including analyzing and displaying brain electrical activity, and the function of storing and displaying MACE information.
· The Ahead® 200 is intended for use by physicians, or under the direction of a physician, who have been trained in the use of the device.
• The Ahead® 200 is a prescription use device.
The BrainScope® Ahead® 200 is a Brain Injury Adjunctive Interpretive Electroencephalograph Assessment Aid, a Class II device. It is a portable, non-sterile, non-invasive, non-radiation emitting, point of care, electroencephalogram (EEG) device and is intended to provide an objective assessment of brain electrical activity associated with traumatic brain injury (TBI). This brain injury adjunctive interpretive EEG assessment aid is for use as an adjunct to standard clinical practice only as an assessment aid for a medical condition for which there exists other valid methods of diagnosis.
The BrainScope® Ahead® 200 is comprised of two hardware components: the Handheld Device and the disposable Electrode Headset with patient application supplies. The main software components of the Ahead 200 are the application software and the BrainScope Algorithm Library (BSAL). The Ahead® 200 has two models: (1) the Ahead® M-200, which is intended for use by the military, and (2) the Ahead® CV-200, which is intended for civilian use. The Ahead® M-200 and the Ahead® CV-200 have the same indications for use and intended uses with the exception that the Ahead® M-200 model additionally stores and displays an electronic version of the Military Acute Concussion Evaluation (MACE) and user-entered responses to the MACE questions. There is no interaction between EEG-related functionality, including analyzing and displaying brain electrical activity, and the function of storing and displaying MACE information. The Ahead® M-200 and the Ahead® CV-200 have identical technological characteristics except that the Ahead® M-200 contains the additional MACE feature, i.e., an electronic version of the paper and pencil based MACE.
This document is a 510(k) premarket notification for the BrainScope Ahead 200 device, asserting its substantial equivalence to a predicate device, the BrainScope Ahead 100. It focuses on device specifications, safety, and performance comparisons rather than detailed clinical acceptance criteria and a definitive study demonstrating them.
Here's an analysis based on the provided text, outlining the limitations due to the nature of the document:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" in terms of clinical performance metrics (e.g., sensitivity, specificity, AUC) for the Ahead 200 device. Instead, it focuses on verifying that the Ahead 200 functions as intended and is comparable to its predicate device. The performance data presented are primarily engineering and safety tests:
| Design Verification Test | Result |
|---|---|
| Software (including User Interface) Verification Testing | Pass |
| Hardware Verification | Pass |
| System Performance and Functionality | Pass |
| Algorithm Performance | Pass |
| Packaging Testing | Pass |
| Basic Safety and Essential Performance (IEC 60601-1, 3rd ed.) including PEMS clause 14 for Software | Pass |
| Electromagnetic Compatibility | Pass |
| Biocompatibility | Pass |
| Reliability | Pass |
The document asserts the "Algorithm Performance" passed, and that the "Harmony" classification algorithm used is the same as the predicate device, implying its performance is at least equivalent. However, specific performance metrics (sensitivity, specificity) of this algorithm are not provided in this document.
Similarly, electrical hardware specifications are compared, suggesting functional acceptance criteria for components:
| Electrical Hardware Feature | Ahead® 200 Performance | Comparable Predicate Ahead® 100 Performance | Note (Indicating improvement or equivalence) |
|---|---|---|---|
| Common Mode Rejection Ratio (CMRR) | < -100 dB | < - 85 dB | Ahead® 200 better CMRR |
| Low pass filtering prior to signal processing | 0.3 Hz to 43Hz | 0.3 Hz to 43Hz | Same |
| System Noise Floor | < 0.4 µV in 0.3 Hz to 43Hz bandwidth | < 0.4 µV in 0.3 Hz to 43Hz bandwidth | Same |
| ADC Resolution | 45 nV/bit | 31.2 nV/bit | Both devices have better resolution than their noise floor |
| ADC Sampling Rate | 1000 Hz, down sampled to 100 Hz for algorithm processing | 1000 Hz, down sampled to 100 Hz for algorithm processing | Processing bandwidth used by algorithm is same |
| Data Channel | 7 | 7 | Same |
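The resolution-versus-noise-floor note rests on simple unit arithmetic: the stated noise floor of 0.4 µV is 400 nV, well above either device's per-bit resolution, so quantization is not the limiting factor. A trivial check using only the figures from the table:

```python
# Sanity check of the table's note that ADC resolution is finer than the
# noise floor (all figures taken from the comparison above).
NOISE_FLOOR_NV = 0.4 * 1000  # 0.4 uV expressed in nV
for device, nv_per_bit in {"Ahead 200": 45.0, "Ahead 100": 31.2}.items():
    print(device, nv_per_bit < NOISE_FLOOR_NV)  # True for both
```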
The clinical "acceptance criteria" for the device, in terms of diagnostic accuracy, would typically have been established during the clearance of the predicate device (Ahead 100), as this submission for Ahead 200 claims "substantial equivalence" based on using the "Same Harmony algorithm" and "Same fundamental technology." This document does not present a new clinical study to establish these criteria for the Ahead 200 itself.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document for the clinical performance of the classification algorithm. The document states that "All tests were conducted on the new model to establish substantial equivalence to the predicate (Ahead® 100, models M-100 and CV-100, DEN 140025)." These are primarily engineering and functionality tests. Any clinical performance data regarding the "Harmony" algorithm would likely be referenced in the predicate device's 510(k) (DEN 140025), which is not part of this input.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document as it focuses on engineering and functional testing for equivalence. For the clinical performance of the classification algorithm, ground truth (e.g., presence/absence of structural brain injury on head CT) would have been established for the studies that supported the predicate device's clearance.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
There is no mention of an MRMC comparative effectiveness study in this document. The device is a "Brain Injury Adjunctive Interpretive Electroencephalograph Assessment Aid," meaning it's intended to aid in evaluation, not directly replace or be compared in an MRMC setting with human readers. It's an algorithm generating a "classification" based on EEG, which is then used by a physician.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The document mentions "Algorithm Performance" as a test that passed. The "BrainScope® Classification" (positive/negative) output by the device is described as "brain electrical activity consistent with no structural brain injury visible on head CT" or "brain electrical activity that may be present in both patients with or without a structural brain injury visible on head CT." This implies a standalone classification from the algorithm. However, quantitative performance metrics (e.g., sensitivity, specificity, PPV, NPV) of this standalone classification are not explicitly provided in this document. Any such data would likely be found in the original 510(k) for the predicate device, as this device uses the "Same Harmony algorithm."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The "Indications for Use" mention "structural brain injury visible on head CT." This strongly implies that the ground truth for assessing the algorithm's performance (implicitly for the predicate device, as the algorithm is the same) was based on head CT scans.
8. The sample size for the training set
The document does not provide the sample size for the training set for the "Harmony" algorithm. This information would typically be part of the development and validation of the algorithm, likely detailed in the predicate device's 510(k) submission.
9. How the ground truth for the training set was established
The document does not provide this information. Given the ground truth type mentioned (structural brain injury visible on head CT), it would logically have involved interpretation of head CT scans.
(314 days)
The ZOOM-100DC is used to measure and record the electrical activity of a patient's brain. The ZOOM-100DC is intended to monitor the state of the brain by acquisition and display of electroencephalogram (EEG) signals and by the calculation of standard quantitative EEG (qEEG) parameters.
The ZOOM-100DC records, measures and displays Electroencephalographic (EEG) waveforms which are digitized and processed.
The provided text describes a 510(k) premarket notification for BrainScope's ZOOM-100DC, an Electroencephalograph (EEG). The document focuses on demonstrating substantial equivalence to predicate devices, rather than an independent study proving the device meets specific acceptance criteria based on diagnostic accuracy or clinical outcomes. Therefore, much of the requested information regarding study design, sample sizes, expert adjudication, and ground truth establishment is not present in the provided text.
Here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance
The document doesn't explicitly state "acceptance criteria" in the sense of predefined performance targets for diagnostic accuracy or clinical utility. Instead, it relies on demonstrating substantial equivalence to predicate devices by comparing technical characteristics and intended use. The "performance" assessment is based on this comparison.
| Characteristic | Predicate Devices (e.g., Nicolet Bravo, Crystal-EEG, BRM3, I-2000) (Range/Example) | ZOOM-100DC | Assessment/Comparison |
|---|---|---|---|
| Intended Use | Measure and record electrical activity of patient's brain, monitor state by acquisition/display of EEG signals and calculation of qEEG parameters (specifics vary slightly by predicate, e.g., BRM3 for neonates) | Measure and record electrical activity of patient's brain, monitor state by acquisition/display of EEG signals and calculation of standard qEEG parameters. | Equivalent: The intended use is directly comparable and within the scope of the predicates. |
| Modalities | EEG (some predicates also include EP, EMG, CSA) | EEG | Equivalent: ZOOM-100DC offers EEG, consistent with the core modality of the predicates. |
| Channels | 8, 16, 3, 2 channel arrays; 10/20 array capable | 10/20 Array (8 single-ended channels, 5 differential concurrently) | Equivalent: Supports standard EEG channel configurations. |
| Real Time EEG Display | Yes | Yes | Equivalent |
| Real Time EEG Bandwidth | 0.5 - 500 Hz (example from Bravo) | 0.5 - 4000 Hz available | Equivalent or Superior: Offers a broader available range, exceeding some predicates. |
| Processed EEG Bandwidth | 0.5 - 500 Hz (example from Bravo) | 0.5 - 45 Hz, 50 Hz | Comparable: Falls within the typical range for processed EEG. |
| Amplifier CMRR | ≥ 110 dB (Nicolet Bravo) | ≥ 100 dB | Comparable: Meets a high standard, though slightly lower than one predicate. |
| Amplifier Input Impedance | > 100 Meg Ohms (Nicolet Bravo) | ≥ 10 Meg Ohms | Comparable: Meets a standard for EEG amplifiers, though lower than one predicate. |
| Electrode Impedance Test | Yes | Yes | Equivalent |
| EEG Derived Measures | Yes - Derived from FFT* (Nicolet Bravo) | Yes - Derived from FFT* | Equivalent: Offers similar qEEG parameters. |
The conclusion states that "Performance data demonstrate that the device performs equivalently to the predicate devices." This equivalence, rather than a specific numerical acceptance criterion, is the basis for substantial equivalence.
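For context on the CMRR rows above: CMRR is the ratio of an amplifier's differential gain to its common-mode gain, expressed in decibels, so a ≥ 100 dB figure means common-mode interference is attenuated by a factor of at least 100,000 relative to the EEG signal. A minimal sketch of the conversion and an acceptance-style check, using invented gain values:

```python
# Illustration of the CMRR figures in the table: ratio of differential gain to
# common-mode gain, expressed in dB. Example gain values are hypothetical.
import math

def cmrr_db(differential_gain: float, common_mode_gain: float) -> float:
    """Common-mode rejection ratio in decibels."""
    return 20 * math.log10(differential_gain / common_mode_gain)

# e.g. a differential gain of 1000 and a common-mode gain of 0.005
value = cmrr_db(1000.0, 0.005)
print(f"{value:.1f} dB, meets >= 100 dB spec: {value >= 100}")
```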
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide information on a test set, sample size, or data provenance from a clinical study for the ZOOM-100DC. The submission relies on a comparison of technical specifications and intended use against existing predicate devices, not on new clinical performance data from a specific study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided as there is no mention of a clinical study involving a test set that required expert-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided as there is no mention of a clinical study involving a test set that required adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
This information is not provided. The ZOOM-100DC is an electroencephalograph, not an AI-assisted diagnostic tool that would typically involve a multi-reader, multi-case study for comparative effectiveness with human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This information is not provided. The device is an EEG recording and display system, not an algorithm that operates standalone for diagnostic purposes.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This information is not provided as there is no mention of a clinical study that required establishing 'ground truth' for diagnostic outputs. The device measures and records EEG signals and calculates qEEG parameters; its "performance" is assessed by its ability to perform these functions reliably and comparably to predicate devices.
8. The sample size for the training set
This information is not provided. As the submission is for an EEG device and not a machine learning algorithm requiring a "training set," this concept is not applicable in the context of this document.
9. How the ground truth for the training set was established
This information is not provided. See point 8.