K Number: K241589
Manufacturer:
Date Cleared: 2025-04-09 (310 days)
Product Code:
Regulation Number: 882.1400
Panel: NE
Reference & Predicate Devices:
Intended Use

The Ceribell Seizure Detection Software is intended to mark previously acquired sections of EEG recordings in patients greater than or equal to 1 year of age that may correspond to electrographic seizures in order to assist qualified clinical practitioners in the assessment of EEG traces. The Seizure Detection Software also provides notifications to the user when detected seizure prevalence is "Frequent", "Abundant", or "Continuous", per the definitions of the American Clinical Neurophysiology Society Guideline 14. Delays of up to several minutes can occur between the beginning of a seizure and when the Seizure Detection notifications will be shown to a user.

The Ceribell Seizure Detection Software does not provide any diagnostic conclusion about the subject's condition, and Seizure Detection notifications cannot be used as a substitute for real-time monitoring of the underlying EEG by a trained expert.
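
The three notification categories correspond to the seizure-burden thresholds used in the performance study below (≥10%, ≥50%, ≥90% of the record). A minimal sketch of such a mapping, assuming burden is expressed as a fraction of the analyzed window; the exact range definitions should be taken from ACNS Guideline 14 itself:

```python
def prevalence_category(burden_fraction):
    """Map a seizure-burden fraction (0.0-1.0) to an ACNS-style
    prevalence label. Thresholds mirror those in the performance
    table; the guideline's precise definitions govern in practice."""
    if not 0.0 <= burden_fraction <= 1.0:
        raise ValueError("burden_fraction must be between 0 and 1")
    if burden_fraction >= 0.90:
        return "Continuous"
    if burden_fraction >= 0.50:
        return "Abundant"
    if burden_fraction >= 0.10:
        return "Frequent"
    return "Below notification threshold"
```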

Device Description

The Ceribell Seizure Detection Software is a software-only device that is intended to mark previously acquired sections of EEG recordings that may correspond to electrographic seizures in order to assist qualified clinical practitioners in the assessment of EEG traces.

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter:


1. Table of Acceptance Criteria & Reported Device Performance

| Metric / Category | Acceptance Criteria (95% CI) | Reported Device Performance (95% Confidence Interval) | Pass / Fail |
|---|---|---|---|
| Positive Percent Agreement (PPA) | Lower bound ≥ 70% PPA for each threshold | | |
| Seizure Burden ≥10% (Frequent) | | | |
| Ages 1-11 | Lower bound ≥ 70% | 96.12% [88.35, 99.28] | Pass |
| Ages 12-17 | Lower bound ≥ 70% | 87.01% [73.16, 93.55] | Pass |
| Ages 18+ | Lower bound ≥ 70% | 95.71% [91.30, 98.43] | Pass |
| Overall | Lower bound ≥ 70% | 93.93% [90.03, 96.52] | Pass |
| Seizure Burden ≥50% (Abundant) | | | |
| Ages 1-11 | Lower bound ≥ 70% | 96.67% [87.50, 100.00] | Pass |
| Ages 12-17 | Lower bound ≥ 70% | 95.45% [73.33, 100.00] | Pass |
| Ages 18+ | Lower bound ≥ 70% | 96.72% [88.37, 100.00] | Pass |
| Overall | Lower bound ≥ 70% | 96.50% [92.12, 98.77] | Pass |
| Seizure Burden ≥90% (Continuous) | | | |
| Ages 1-11 | Lower bound ≥ 70% | 92.59% [76.00, 100.00] | Pass |
| Ages 12-17 | Lower bound ≥ 70% | 100.00% [100.00, 100.00] | Pass |
| Ages 18+ | Lower bound ≥ 70% | 93.55% [78.26, 100.00] | Pass |
| Overall | Lower bound ≥ 70% | 94.12% [85.45, 98.48] | Pass |
| False Positive rate per hour (FP/hr) | Upper bound ≤ 0.446 FP/hr for each threshold | | |
| Seizure Burden ≥10% (Frequent) | | | |
| Ages 1-11 | Upper bound ≤ 0.446 | 0.2700 [0.2445, 0.2986] | Pass |
| Ages 12-17 | Upper bound ≤ 0.446 | 0.2141 [0.1920, 0.2394] | Pass |
| Ages 18+ | Upper bound ≤ 0.446 | 0.1343 [0.1250, 0.1445] | Pass |
| Overall | Upper bound ≤ 0.446 | 0.1763 [0.1670, 0.1859] | Pass |
| Seizure Burden ≥50% (Abundant) | | | |
| Ages 1-11 | Upper bound ≤ 0.446 | 0.1561 [0.1369, 0.1772] | Pass |
| Ages 12-17 | Upper bound ≤ 0.446 | 0.0921 [0.0776, 0.1082] | Pass |
| Ages 18+ | Upper bound ≤ 0.446 | 0.0547 [0.0480, 0.0615] | Pass |
| Overall | Upper bound ≤ 0.446 | 0.08180 [0.0754, 0.0885] | Pass |
| Seizure Burden ≥90% (Continuous) | | | |
| Ages 1-11 | Upper bound ≤ 0.446 | 0.0843 [0.0697, 0.1006] | Pass |
| Ages 12-17 | Upper bound ≤ 0.446 | 0.0399 [0.0301, 0.0511] | Pass |
| Ages 18+ | Upper bound ≤ 0.446 | 0.0249 [0.0204, 0.0299] | Pass |
| Overall | Upper bound ≤ 0.446 | 0.03951 [0.0351, 0.0443] | Pass |
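
Note that each acceptance criterion compares a confidence-interval bound, not the point estimate, against the threshold. The letter does not state which interval method was used; the sketch below uses the Wilson score interval as one common choice for a binomial proportion such as PPA:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Two-sided Wilson score interval for a binomial proportion
    (z = 1.96 for a 95% CI). One plausible method; the submission's
    actual CI method is not stated in the clearance letter."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def passes_ppa_criterion(detected, total, lower_bound=0.70):
    """PPA criterion: the CI lower bound must reach the threshold."""
    lo, _ = wilson_interval(detected, total)
    return lo >= lower_bound
```

For example, 96 detected seizures out of 100 yields a Wilson lower bound near 90%, which clears the 70% criterion even though the point estimate alone is not what is tested.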

2. Sample Size and Data Provenance for the Test Set

  • Sample Size for Test Set:
    • Total Number of Patients: 1701
      • Ages 1-11: 450 patients
      • Ages 12-17: 392 patients
      • Ages 18+: 859 patients
  • Data Provenance: The EEG recordings used for performance validation were gathered from real-world clinical use of the Ceribell Pocket EEG Device. The country of origin is not explicitly stated, though the data are implied to come from acute care hospital settings where that device is used. The data are retrospective, having been previously acquired. No patient inclusion or exclusion criteria were applied, indicating a sample representative of the intended patient population.

3. Number of Experts and Qualifications for Ground Truth

  • Number of Experts: At least three expert neurologists. The document specifies "3 expert reviewers" for the seizure burden distribution, and the two-thirds majority rule is consistent with a panel of three (agreement of at least 2 of 3 reviewers).
  • Qualifications of Experts: Fellowship trained in epilepsy or neurophysiology. No specific years of experience are mentioned.

4. Adjudication Method for the Test Set

  • Adjudication Method: A two-thirds majority agreement among the expert neurologists was required to establish the seizure ground truth, i.e., a consensus-based adjudication. The experts were fully blinded to the outputs of the Seizure Detection Software.
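
A minimal sketch of the two-thirds majority rule, assuming each blinded reviewer casts a binary seizure/non-seizure vote per EEG segment (the per-segment voting granularity is an assumption, not stated in the letter):

```python
def consensus_label(votes):
    """Return True (seizure) only if at least two-thirds of the
    blinded reviewers marked the segment as a seizure.
    `votes` is a list of booleans, one per reviewer."""
    if not votes:
        raise ValueError("at least one reviewer vote is required")
    # Integer comparison avoids floating-point edge cases:
    # sum/len >= 2/3  <=>  3*sum >= 2*len
    return 3 * sum(votes) >= 2 * len(votes)
```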

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Was an MRMC study done? No. The documentation describes a standalone algorithm performance study, not a comparative effectiveness study involving human readers with and without AI assistance.
  • Effect Size: Not applicable, as no MRMC study was performed.

6. Standalone Algorithm Performance

  • Was a standalone study done? Yes. The study directly evaluates the "performance of the Seizure Detection algorithm" by comparing its output (algorithm marks/notifications) against the expert-established ground truth. The algorithm's PPA and FP/hr metrics are presented, which are standard for standalone AI performance.
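
Standalone event-level scoring of this kind can be sketched as interval matching between expert-marked seizures and algorithm marks. The matching rules actually used in the submission (minimum overlap, merge windows) are not described in the letter, so the simple overlap test here is illustrative only:

```python
def overlaps(a, b):
    """True if half-open time intervals (start, end), in seconds, overlap."""
    return a[0] < b[1] and b[0] < a[1]

def standalone_metrics(reference_events, algorithm_marks, record_hours):
    """Event-level PPA and false positives per hour for one recording.
    A reference seizure counts as detected if any algorithm mark
    overlaps it; a mark overlapping no reference event is a false
    positive. A simplification of whatever matching rules were used."""
    tp = sum(any(overlaps(ref, mark) for mark in algorithm_marks)
             for ref in reference_events)
    fp = sum(not any(overlaps(mark, ref) for ref in reference_events)
             for mark in algorithm_marks)
    ppa = tp / len(reference_events) if reference_events else None
    return ppa, fp / record_hours
```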

7. Type of Ground Truth Used

  • Type of Ground Truth: Expert consensus (specifically, a two-thirds majority agreement among fellowship-trained neurologists reviewing EEG recordings). This is clinical expert ground truth based on visual review of EEG.

8. Sample Size for the Training Set

  • Sample Size for Training Set: Not explicitly stated. The document only mentions that "none of the data in the validation dataset were used for training of the Seizure Detection algorithm; the validation dataset is completely independent." This ensures the integrity of the test set but does not provide information about the training set size.
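
Independence between training and validation data is typically enforced at the patient level. A hypothetical check along those lines (the identifier scheme is illustrative; the letter does not describe how independence was verified):

```python
def assert_disjoint_patients(train_ids, validation_ids):
    """Raise if any patient identifier appears in both splits,
    enforcing the stated independence of the validation dataset
    from the training data."""
    overlap = set(train_ids) & set(validation_ids)
    if overlap:
        raise ValueError(f"patient overlap between splits: {sorted(overlap)}")
```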

9. How Ground Truth for the Training Set Was Established

  • Ground Truth Establishment for Training Set: Not explicitly stated. The document focuses exclusively on the validation dataset's ground truth methodology. However, given the nature of deep learning models, it's highly probable that the training data also underwent a rigorous ground truth labeling process, likely similar to (or potentially identical in methodology to) the validation set, though this is not detailed here.

§ 882.1400 Electroencephalograph.

(a) Identification. An electroencephalograph is a device used to measure and record the electrical activity of the patient's brain obtained by placing two or more electrodes on the head.

(b) Classification. Class II (performance standards).