Sleepware G3 is a software application used for the analysis (automatic and manual scoring), display, retrieval, summarization, report generation, and networking of data received from monitoring devices used to categorize sleep-related events, to aid in the diagnosis of sleep-related disorders. It is indicated for use with adult (18 and older) and infant (one year old or less) patients in a clinical environment by or on the order of a physician.
The optional Somnolyzer scoring algorithms are for use with adults (18 and older) to generate an output that is ready for review and interpretation by a physician. Cardio-Respiratory Sleep Staging (CReSS) is an additional feature of Somnolyzer that uses standard Home Sleep Apnea Test (HSAT) signals, in the absence of EEG signals, to infer sleep stage.
Sleepware G3 software is a polysomnography scoring application, used by trained clinical professionals, for managing data from sleep diagnostic devices using a personal computer. Sleepware G3 is able to configure sleep diagnostic device parameters, transfer data stored in sleep diagnostic device memory to the personal host computer, process and auto-score data to display graphical and statistical analyses, provide aid to clinical professionals for evaluating the physiological data waveforms relevant to sleep monitoring, and create unique patient reports.
Sleepware G3 includes an optional Somnolyzer plug-in. The auto-scoring algorithms of the Somnolyzer Inside software can be used in addition to, or in the place of, the auto-scoring algorithms that are included in Sleepware G3.
Sleepware G3 remains unchanged in function and fundamental scientific technology from the version of Sleepware G3 cleared under K142988.
Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Somnolyzer scoring algorithms are based on demonstrating non-inferiority to manual expert scoring. The reported device performance indicates that all primary and secondary endpoints were met.
| Acceptance Criterion (Non-Inferiority to Manual Expert Scoring) | Reported Device Performance |
|---|---|
| **Full PSG Acquisition:** | |
| Sleep stages according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Arousals during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Apneas and hypopneas during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Periodic limb movements during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| **HST Acquisition:** | |
| Apneas and hypopneas according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| **Cardio-Respiratory Sleep Staging (CReSS):** | |
| REI based on cardio-respiratory feature-based sleep time is superior to REI based on monitoring time (for HST acquisition) | Evidence provided that REI calculated using CReSS is a more accurate estimate of AHI than REI calculated using total recording time. Accuracy further improved with additional signals: mean difference between REI and AHI reduced from -6.6 events/hour (95% CI -7.51 to -5.71) to -1.76 events/hour (95% CI -2.27 to -1.24). |
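The CReSS rationale in the last table row comes down to denominator choice: REI divides respiratory events by monitoring (recording) time, while AHI divides by actual sleep time, so wake time in the denominator biases REI low. A minimal sketch of this arithmetic, using hypothetical numbers that are not taken from the study:

```python
# Illustrative sketch (hypothetical numbers, not from the study): why an REI
# computed over total monitoring time underestimates the AHI, and how using
# an estimated sleep time (as CReSS does) narrows the gap.

def events_per_hour(n_events: int, hours: float) -> float:
    """Respiratory events normalized to an hourly index."""
    return n_events / hours

# Hypothetical HSAT recording: 8 h of monitoring, of which 6 h were sleep.
n_events = 48
monitoring_hours = 8.0
true_sleep_hours = 6.0
cress_sleep_hours = 6.5  # hypothetical CReSS estimate of sleep time

ahi = events_per_hour(n_events, true_sleep_hours)             # reference: 8.0 events/h
rei_monitoring = events_per_hour(n_events, monitoring_hours)  # 6.0 events/h
rei_cress = events_per_hour(n_events, cress_sleep_hours)      # ~7.38 events/h

# The monitoring-time REI is biased low because wake time inflates the
# denominator; the CReSS-based REI sits closer to the AHI.
print(rei_monitoring - ahi)       # -2.0
print(round(rei_cress - ahi, 2))  # -0.62
```

The reported shrinkage of the REI-AHI mean difference (from -6.6 to -1.76 events/hour) reflects exactly this denominator correction applied across the study sample.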
Detailed Study Information:
**Sample sizes used for the test set and the data provenance:**
- Test Set Sample Size: A total of 1,204 polysomnography (PSG) and home sleep apnea test (HSAT) files were used in the five clinical studies.
- Data Provenance: The document does not explicitly state the country of origin. The studies are described as using a "large, diverse sample... collected via a number of different platforms," suggesting diverse sources but not specifying geographical location. The studies were likely retrospective, as they involved validating algorithms against existing manual scoring, but this is not explicitly stated.
**Number of experts used to establish the ground truth for the test set and the qualifications of those experts:**
- Number of Experts: Not explicitly stated how many individual experts were used across all studies. However, the non-inferiority margin for comparisons was "set at the lower-margin of the agreement observed across expert technologists." This implies multiple experts were involved in defining the range of agreement for ground truth.
- Qualifications of Experts: The experts are referred to as Registered Polysomnographic Technologists (RPSGT). This indicates their professional qualification in sleep study scoring.
**Adjudication method for the test set:**
- The document implies a form of consensus or agreement among experts was utilized to set the non-inferiority margin, but it does not explicitly describe a specific adjudication method like 2+1 or 3+1 for individual cases within the test set. The focus is on comparing the algorithm's performance against the established range of agreement among experts.
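The margin-setting idea described above can be sketched as follows. The numbers and the percent-agreement metric are illustrative assumptions, not values from the document; the point is only that the non-inferiority threshold is anchored to the lower end of agreement observed among the expert scorers themselves:

```python
# Hypothetical sketch: derive a non-inferiority margin from inter-expert
# agreement, then check the algorithm against it. All values are illustrative.

# Pairwise agreement observed among expert technologists (hypothetical)
expert_pairwise_agreement = [0.86, 0.82, 0.88, 0.84]

# Margin set at the lower end of the inter-expert agreement range
margin = min(expert_pairwise_agreement)  # 0.82

# Algorithm-vs-expert agreement (hypothetical)
algo_agreement = 0.85

# Non-inferior if the algorithm agrees with experts at least as well as
# the least-agreeing expert pair
non_inferior = algo_agreement >= margin
print(non_inferior)  # True
```

This framing explains why no per-case adjudication (e.g., 2+1) is needed: the comparator is the distribution of expert agreement, not a single adjudicated label per epoch.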
**Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:**
- No, an MRMC comparative effectiveness study was not the primary focus described. The study design primarily involved a standalone evaluation of the AI algorithm (Somnolyzer) against human expert scoring, demonstrating its non-inferiority.
- The document states that Somnolyzer's output is "ready for review and interpretation by a physician," implying it assists human readers by providing a pre-scored output. However, it does not quantify the improvement in human reader performance with AI assistance versus without AI assistance.
**Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:**
- Yes, a standalone evaluation was performed. The clinical studies "validated the Somnolyzer and CReSS algorithms against manual scoring." The non-inferiority claims ("Somnolyzer scoring... is non-inferior to manual expert scoring") directly refer to the algorithm's performance without human intervention in the scoring process.
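As a toy illustration of what a standalone, epoch-by-epoch comparison against expert scoring involves, one common agreement summary for sleep staging is Cohen's kappa. Everything below (the labels, the metric choice, the implementation) is a hypothetical sketch, not the study's actual methodology or data:

```python
# Minimal sketch of a standalone evaluation: epoch-by-epoch agreement between
# algorithm sleep-stage labels and expert labels, summarized with Cohen's
# kappa. The label sequences are hypothetical.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    # Observed proportion of epochs where the two scorers agree
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement by chance, from each scorer's label frequencies
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 30-second epochs scored W / N1 / N2 / N3 / R
algo   = ["W", "N1", "N2", "N2", "N3", "N3", "R", "R", "N2", "W"]
expert = ["W", "N2", "N2", "N2", "N3", "N3", "R", "N2", "N2", "W"]

print(round(cohens_kappa(algo, expert), 3))  # 0.733
```

In a full study, agreement statistics like this (or event-level sensitivity/specificity for apneas, arousals, and limb movements) would be computed across all records and compared against the pre-specified non-inferiority margin.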
**The type of ground truth used:**
- The ground truth was expert consensus scoring. The document states that the algorithms were validated "against manual scoring" by "expert technologists" (RPSGTs). The non-inferiority margin was based on "the agreement observed across expert technologists."
**The sample size for the training set:**
- The document does not provide information on the training set sample size. The provided text focuses solely on the clinical performance testing for validation.
**How the ground truth for the training set was established:**
- As the training set information is not provided, the method for establishing its ground truth is also not described in the document.
§ 882.1400 Electroencephalograph.
(a) Identification. An electroencephalograph is a device used to measure and record the electrical activity of the patient's brain obtained by placing two or more electrodes on the head.
(b) Classification. Class II (performance standards).