510(k) Data Aggregation (268 days)
The Dreem 3S is intended for prescription use to measure, record, display, transmit, and analyze the electrical activity of the brain to assess sleep and wake in the home or healthcare environment.
The Dreem 3S can also output a hypnogram of sleep scoring by 30-second epoch and summary of sleep metrics derived from this hypnogram.
The Dreem 3S is used for the assessment of sleep in adult individuals (22 to 65 years old). The Dreem 3S allows for the generation of user-defined/predefined reports based on the subject's data.
The Dreem 3S headband contains microelectronics, within a flexible case made of plastic, foam, and fabric. It includes 6 EEG electrodes and a 3D accelerometer sensor.
The EEG signal is measured by two electrodes in the frontal position and two at the back of the head (occipital position), along with one reference electrode and one ground electrode.
The 3D accelerometer is embedded in the top of the headband to ensure accurate measurements of the wearer's head movement during the night. The raw EEG and accelerometer data are transferred to Dreem's servers for further analysis after the night is over.
The device includes a bone-conduction speaker with volume control to provide notifications to the wearer, and a power button circled by a multicolor LED light.
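For orientation, the sensor layout described above can be written down as a simple configuration structure. This is purely illustrative; the channel labels below are assumptions, since the summary does not name the actual Dreem 3S channels.

```python
# Hypothetical sensor layout for the headband described above.
# Channel names are illustrative only; the 510(k) summary does not list them.
DREEM3S_SENSORS = {
    "eeg_frontal": ["F7", "F8"],        # two frontal electrodes (assumed labels)
    "eeg_occipital": ["O1", "O2"],      # two occipital electrodes (assumed labels)
    "eeg_reference": "REF",             # reference electrode
    "eeg_ground": "GND",                # ground electrode
    "accelerometer": ["acc_x", "acc_y", "acc_z"],  # 3D accelerometer in the top of the band
}
EPOCH_SECONDS = 30  # AASM scoring epoch length used throughout the summary
```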
The device generates a sleep report that includes sleep staging for each 30-second epoch during the night. This output is produced by an algorithm that analyzes data from the headband's EEG and accelerometer sensors. A raw data file is also available in EDF format.
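Since the raw recording is exported in EDF format and scoring operates on 30-second epochs, here is a minimal sketch of how such a file could be read and segmented, assuming the open-source MNE-Python library and a hypothetical file name; this is not part of the cleared device's workflow.

```python
# Sketch: load an exported EDF recording and cut it into 30-second epochs.
# Assumes MNE-Python is installed; "dreem_night.edf" is a hypothetical file name.
import mne

raw = mne.io.read_raw_edf("dreem_night.edf", preload=True)        # EEG (+ other) channels
epochs = mne.make_fixed_length_epochs(raw, duration=30.0, preload=True)
print(epochs.get_data().shape)   # (n_epochs, n_channels, 30 s * sampling rate)
```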
The algorithm uses raw EEG data and accelerometer data to provide automatic sleep staging according to the AASM classification. It is implemented as an artificial neural network: frequency spectra are computed from the raw data and passed through several neural network layers, including recurrent layers and attention layers. Every 30 seconds, the algorithm outputs predictions for several 30-second epochs at once, and the multiple predictions for a single epoch are combined to provide robust sleep scoring.
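The summary describes the network only at this high level. As a rough illustration of the pattern (spectral features per epoch, a recurrent layer, an attention layer, and per-epoch stage outputs), here is a toy PyTorch sketch; the layer sizes, stage set, and overall structure are assumptions and not the actual Dreem 3S architecture.

```python
import torch
import torch.nn as nn

class SleepStagerSketch(nn.Module):
    """Toy sequence model: per-epoch frequency spectra -> recurrent layer ->
    self-attention over the night -> per-epoch stage logits (W, N1, N2, N3, REM)."""
    def __init__(self, n_freq_bins=129, hidden=64, n_stages=5):
        super().__init__()
        self.embed = nn.Linear(n_freq_bins, hidden)                       # embed each epoch's spectrum
        self.gru = nn.GRU(hidden, hidden, batch_first=True)               # temporal context across epochs
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, n_stages)                           # one stage prediction per epoch

    def forward(self, spectra):          # spectra: (batch, n_epochs, n_freq_bins)
        x = torch.relu(self.embed(spectra))
        x, _ = self.gru(x)
        x, _ = self.attn(x, x, x)        # attention over the sequence of epochs
        return self.head(x)              # (batch, n_epochs, n_stages) logits

model = SleepStagerSketch()
night = torch.randn(1, 960, 129)         # e.g. 8 hours = 960 thirty-second epochs
print(model(night).shape)                # torch.Size([1, 960, 5])
```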
Here's a breakdown of the acceptance criteria and study details for the Dreem 3S device based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria (Implicit from Study Results) | Reported Device Performance (Dreem 3S vs. Expert-scored PSG) |
|---|---|
| Sleep Stage Classification Accuracy | |
| Wake Classification Performance | Positive Agreement (PA): 88.5% (CI: 85.1%, 91.3%) |
| N1 Classification Performance | Positive Agreement (PA): 58.0% (CI: 52.7%, 63.0%) |
| N2 Classification Performance | Positive Agreement (PA): 83.4% (CI: 80.7%, 85.7%) |
| N3 Classification Performance | Positive Agreement (PA): 98.2% (CI: 96.73%, 99.3%) |
| REM Classification Performance | Positive Agreement (PA): 91.57% (CI: 86.63%, 95.72%) |
| EEG Data Quality for Manual Scoring | 96.6% of epochs per night of recording were acceptable for manual scoring and sleep staging by at least two out of three reviewers. |
| Minimum Scoreable Data | All reviewed data recordings had ≥4 hours of data considered scoreable by at least two out of three reviewers. |
| Usability in Home Setting | The device could be successfully used and was tolerated by study subjects. |
Note: The document primarily presents performance results rather than explicitly stating pre-defined acceptance criteria with numerical thresholds the device needed to meet. The "acceptance criteria" listed above are inferred from the demonstrated performance that supported substantial equivalence.
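The table reports per-stage positive agreement (PA). The document does not give the formula; assuming the usual definition (the fraction of reference epochs of a given stage that the device also labels as that stage), the computation looks roughly like this:

```python
import numpy as np

def positive_agreement(device_stages, reference_stages, stage):
    """Fraction of reference epochs of `stage` that the device also scored as `stage`."""
    device = np.asarray(device_stages)
    reference = np.asarray(reference_stages)
    mask = reference == stage
    if mask.sum() == 0:
        return float("nan")
    return float((device[mask] == stage).mean())

# Tiny made-up example (not study data):
ref = ["W", "N1", "N2", "N2", "N3", "REM", "W"]
dev = ["W", "N2", "N2", "N2", "N3", "REM", "N1"]
for s in ["W", "N1", "N2", "N3", "REM"]:
    print(s, positive_agreement(dev, ref, s))
```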
Study Details
- Sample size used for the test set and the data provenance:
  - Sample Size: 38 subjects
  - Data Provenance: The study was a "clinical investigation... completed... in a sleep lab setting." Subjects ranged from 23 to 66 years old, were split evenly between male and female, and self-identified as White, Black/African American, Asian, Hispanic, or not identified. This suggests a prospective study with diverse participants, likely conducted in a single country, though the country of origin is not explicitly stated. The test set comprised 36,447 epochs, corresponding to about 303 hours and 43 minutes of sleep (see the quick check below).
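A quick check of the epoch-to-duration arithmetic quoted above:

```python
# 36,447 thirty-second epochs expressed as hours and minutes.
total_seconds = 36447 * 30
hours, remainder = divmod(total_seconds, 3600)
print(hours, remainder // 60)   # 303 43  -> about 303 hours and 43 minutes
```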
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
  - The ground truth for sleep staging was established by "expert-scored sleep stages from a cleared device." EEG data quality was assessed by "at least two out of three reviewers qualified to read EEG and/or PSG data." The number of experts used for the primary sleep staging ground truth is not explicitly stated (e.g., whether it was a single expert or a consensus of several), and their qualifications (e.g., years of experience, board certification in sleep medicine) are not detailed beyond "expert-scored" and "qualified to read EEG and/or PSG data."
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
  - For the primary sleep staging (Table 2), the ground truth is referred to as "Consensus from manual staging" or "expert-scored PSG"; the specific adjudication method (e.g., 2+1, 3+1) is not detailed.
  - For EEG data quality, an epoch was deemed acceptable if "at least two out of three reviewers qualified to read EEG and/or PSG data" agreed, i.e., a 2-out-of-3 consensus (sketched below).
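A trivial sketch of that 2-out-of-3 rule (hypothetical helper, not code from the submission):

```python
def epoch_scoreable(reviewer_votes):
    """An epoch counts as scoreable if at least 2 of the 3 reviewers accept it."""
    return sum(bool(v) for v in reviewer_votes) >= 2

print(epoch_scoreable([True, True, False]))   # True
print(epoch_scoreable([True, False, False]))  # False
```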
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
  - No, a multi-reader multi-case (MRMC) comparative effectiveness study focusing on human reader improvement with AI assistance was not done. This study solely evaluated the standalone performance of the Dreem 3S algorithm against expert-scored PSG.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
  - Yes, a standalone study was done. The clinical performance evaluation directly compares the "Dreem 3S (Automated analysis)" output to "Consensus from manual staging" (expert-scored PSG), indicating the performance of the algorithm without human intervention in the loop.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - Expert consensus of manual staging from a 510(k)-cleared PSG system.
- The sample size for the training set:
  - Not specified in the provided document. The description of the algorithm ("an artificial neural network" with recurrent and attention layers) implies a training process, but no details on the size of the training data are given.
- How the ground truth for the training set was established:
  - Not specified in the provided document.