510(k) Data Aggregation
(90 days)
Sleepware G3
Sleepware G3 is a software application used for analysis (automatic and manual scoring), display, retrieval, summarization, report generation, and networking of data received from monitoring devices used to categorize sleep related events that help aid in the diagnosis of sleep-related disorders. It is indicated for use with adults (18 and older) and infant patients (one year old or less) in a clinical environment by or on the order of a physician.
The optional Somnolyzer scoring algorithms are for use with adults (18 and older) to generate an output that is ready for review and interpretation by a physician. Cardio-Respiratory Sleep Staging (CReSS) is an additional Somnolyzer capability that uses standard Home Sleep Apnea Test (HSAT) signals (in the absence of EEG signals) to infer sleep stage.
Sleepware G3 software is a polysomnography scoring application, used by trained clinical professionals, for managing data from sleep diagnostic devices using a personal computer. Sleepware G3 is able to configure sleep diagnostic device parameters, transfer data stored in sleep diagnostic device memory to the personal host computer, process and auto-score data to display graphical and statistical analyses, provide aid to clinical professionals for evaluating the physiological data waveforms relevant to sleep monitoring, and create unique patient reports.
Sleepware G3 includes an optional Somnolyzer plug-in. The auto-scoring algorithms of the Somnolyzer Inside software can be used in addition to, or in the place of, the auto-scoring algorithms that are included in Sleepware G3.
Sleepware G3 remains unchanged in function and fundamental scientific technology from the Sleepware G3 that was cleared under K142988.
Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Somnolyzer scoring algorithms are based on demonstrating non-inferiority to manual expert scoring. The reported device performance indicates that all primary and secondary endpoints were met.
| Acceptance Criterion (Non-Inferiority to Manual Expert Scoring) | Reported Device Performance |
|---|---|
| Full PSG Acquisition: | |
| Sleep stages according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Arousals during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Apneas and hypopneas during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Periodic limb movements during sleep according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| HST Acquisition: | |
| Apneas and hypopneas according to AASM criteria | Non-inferior (all primary and secondary endpoints met) |
| Cardio-Respiratory Sleep Staging (CReSS): | |
| REI based on cardio-respiratory feature-based sleep time is superior to REI based on monitoring time (for HST acquisition) | Evidence provided that REI calculated using CReSS is a more accurate estimate of AHI than REI calculated using total recording time. Accuracy further improved with additional signals: the mean difference between REI and AHI was reduced from -6.6 events/hour (95% CI -7.51 to -5.71) to -1.76 events/hour (95% CI -2.27 to -1.24). |
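The CReSS claim above hinges on the denominator used for the event index. A minimal sketch, using hypothetical event counts and durations (not figures from the submission), of why an REI computed over total monitoring time underestimates the AHI, and why substituting an estimated sleep time shrinks the bias:

```python
def event_index(num_events, hours):
    """Respiratory events per hour over a chosen denominator."""
    return num_events / hours

# Hypothetical HSAT night: 8 h recorded, 6 h actually asleep,
# 48 scored apneas + hypopneas (illustrative numbers only).
events = 48
ahi = event_index(events, 6.0)             # AHI uses total sleep time
rei_monitoring = event_index(events, 8.0)  # REI over monitoring time
rei_cress = event_index(events, 6.5)       # REI over estimated sleep time

# Wake time inflates the denominator, so REI over monitoring time
# underestimates AHI; a cardio-respiratory sleep-time estimate
# (the role CReSS plays) moves REI closer to AHI.
assert rei_monitoring < rei_cress < ahi
```

The -6.6 to -1.76 events/hour improvement reported above is exactly this kind of denominator correction: the closer the inferred sleep time is to true sleep time, the smaller the REI-AHI gap.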
Detailed Study Information:
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: A total of 1,204 polysomnography (PSG) and home sleep apnea test (HSAT) files were used in the five clinical studies.
- Data Provenance: The document does not explicitly state the country of origin. The studies are described as using a "large, diverse sample... collected via a number of different platforms," suggesting diverse sources but not specifying geographical location. The studies were likely retrospective, as they involved validating algorithms against existing manual scoring, but this is not explicitly stated.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated how many individual experts were used across all studies. However, the non-inferiority margin for comparisons was "set at the lower-margin of the agreement observed across expert technologists." This implies multiple experts were involved in defining the range of agreement for ground truth.
- Qualifications of Experts: The experts are referred to as Registered Polysomnographic Technologists (RPSGT). This indicates their professional qualification in sleep study scoring.
Adjudication method for the test set:
- The document implies a form of consensus or agreement among experts was utilized to set the non-inferiority margin, but it does not explicitly describe a specific adjudication method like 2+1 or 3+1 for individual cases within the test set. The focus is on comparing the algorithm's performance against the established range of agreement among experts.
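The document does not describe the statistical machinery behind the non-inferiority comparison. Purely as an illustration of the general shape such a test can take, here is a sketch in which the margin is set from observed expert-vs-expert agreement; the function names, labels, and numbers are all hypothetical, not taken from the submission:

```python
import math

def agreement(a, b):
    """Fraction of epochs two scorers label identically."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def non_inferior(algo_agreement, n_epochs, margin, z=1.96):
    """Declare non-inferiority if the lower bound of a normal-approximation
    confidence interval on the algorithm's agreement stays at or above the
    margin (e.g., the lower end of expert-vs-expert agreement)."""
    se = math.sqrt(algo_agreement * (1 - algo_agreement) / n_epochs)
    return algo_agreement - z * se >= margin

# Hypothetical epoch-by-epoch sleep-stage labels.
expert = ["W", "N1", "N2", "N2", "N3", "R", "N2", "W"]
algo   = ["W", "N2", "N2", "N2", "N3", "R", "N2", "W"]
assert agreement(expert, algo) == 0.875

# With 1,000 epochs, 90% agreement clears an 85% expert-agreement margin.
assert non_inferior(0.90, 1000, margin=0.85)
```

The key idea mirrored from the text is that the bar is not perfect agreement but the lower edge of what experts achieve against each other.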
Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI assistance versus without:
- No, an MRMC comparative effectiveness study was not the primary focus described. The study design primarily involved a standalone evaluation of the AI algorithm (Somnolyzer) against human expert scoring, demonstrating its non-inferiority.
- The document states that Somnolyzer's output is "ready for review and interpretation by a physician," implying it assists human readers by providing a pre-scored output. However, it does not quantify the improvement in human reader performance with AI assistance versus without AI assistance.
Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, a standalone evaluation was performed. The clinical studies "validated the Somnolyzer and CReSS algorithms against manual scoring." The non-inferiority claims ("Somnolyzer scoring... is non-inferior to manual expert scoring") directly refer to the algorithm's performance without human intervention in the scoring process.
The type of ground truth used:
- The ground truth was expert consensus scoring. The document states that the algorithms were validated "against manual scoring" by "expert technologists" (RPSGTs). The non-inferiority margin was based on "the agreement observed across expert technologists."
The sample size for the training set:
- The document does not provide information on the training set sample size. The provided text focuses solely on the clinical performance testing for validation.
How the ground truth for the training set was established:
- As the training set information is not provided, the method for establishing its ground truth is also not described in the document.
(151 days)
Sleepware G3
Sleepware G3 is a software application used for analysis (automatic and manual scoring), display, retrieval, summarization, report generation and networking of data received from monitoring devices used to categorize sleep related events that help aid in the diagnosis of sleep related disorders. It is indicated for use with infant or adult patients in a clinical environment by or on the order of a physician.
The optional Somnolyzer Inside scoring package has the same intended use as Sleepware G3, but is indicated for use with adult patients only.
The Sleepware G3 software is a diagnostic tool, used by trained clinical professionals, for managing Respironics polysomnography recorders using a personal computer. The Sleepware G3 software uses data that is stored in the recorder device memory, and then processes this data to display a graphical and statistical analysis for use by the trained clinical professional. It also allows the trained clinical professional to evaluate (score) the recorded respiratory, sleep staging and/or oximetry waveforms to ensure that the patient being monitored is properly diagnosed. The clinician can create unique patient reports for the patient being evaluated, and configure device parameters in the software. Sleepware G3 was formerly known as the Alice Sleepware Software cleared in K040595.
Somnolyzer 24x7 is a similar standalone, polysomnography scoring application that provides automated analysis of respiratory, sleep staging and/or oximetry waveforms recorded during sleep studies. Like Sleepware G3, it processes information recorded during sleep with electrodes and sensors attached to the body according to worldwide accepted scoring standards. It then generates results that include quantitative sleep, breathing, and motion parameters, used to evaluate sleep and respiratory-related disorders. Somnolyzer is located on a remote server and is considered to be a software service, unlike Sleepware G3 which is located on a personal computer.
Sleepware G3 is being updated to add the scoring algorithms from the Somnolyzer 24x7 software service as an optional software package. The scoring algorithms from the cleared Somnolyzer 24x7 software were modified to operate in real time, as a sleep study is occurring, and be displayed on the Sleepware G3 user interface.
The provided document does not contain detailed acceptance criteria or a specific study proving the device meets these criteria in the format requested. It primarily focuses on the regulatory submission for the Sleepware G3 device, comparing it to predicate devices.
However, based on the information provided, particularly in Section VII "Performance Data: Software Verification and Validation Testing" and "Non-Clinical Tests," we can infer some aspects of the device's performance assessment.
Here's an attempt to extract and frame the information, while noting what is not explicitly stated in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of acceptance criteria with numerical targets. Instead, it states that the device's performance was assessed against its predicate device. The key performance aspect mentioned is the equivalence of auto-scoring algorithms.
| Acceptance Criteria (Inferred from text) | Reported Device Performance (Inferred from text) |
|---|---|
| Functional equivalence of Somnolyzer auto-scoring algorithms to predicate Somnolyzer 24x7 (K131994) (for adult patients) | Testing confirmed that the Somnolyzer auto-scoring algorithms perform equivalently to the predicate algorithms in the Somnolyzer 24x7 software (K131994). |
| Overall integration and meeting of product requirements for Sleepware G3 with Somnolyzer Inside package | Overall testing of Sleepware G3 with the Somnolyzer Inside package showed that the plug-in was successfully integrated and all product requirements were met. |
| Functional equivalence of Sleepware G3 to predicate Alice Sleepware Software (K040595) (for infant or adult patients) | Testing confirmed that Sleepware G3 performs equivalently to the predicate Alice Sleepware Software (K040595). All tests had passing results. |
| Safety and effectiveness (not explicitly quantified, but a general regulatory requirement) | The verification and validation testing demonstrated that safety and effectiveness were not inadvertently affected by modifications to the system. Non-clinical tests were deemed adequate to assess safety and effectiveness. |
| Software level of concern (internal classification) | The software for this device was considered to have a "moderate" level of concern, since a failure or latent flaw in the software could result in minor harm to the patient. (This is not a performance metric itself but indicates the criticality of the software and, indirectly, the rigor of testing expected.) |
Note: The document does not provide specific metrics like sensitivity, specificity, accuracy, or agreement rates that would typically be associated with performance criteria for medical devices, particularly those involving algorithmic analysis. Instead, it uses qualitative statements of "equivalence" and "passing results."
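For context on the kind of quantitative agreement metric such submissions sometimes report, here is a sketch of Cohen's kappa, a standard chance-corrected agreement measure for epoch-by-epoch sleep-stage scoring. Nothing in this document states that kappa (or any specific metric) was used; the labels below are hypothetical:

```python
from collections import Counter

def cohens_kappa(ref, test):
    """Chance-corrected agreement between two scorers over the same epochs."""
    n = len(ref)
    observed = sum(r == t for r, t in zip(ref, test)) / n
    ref_counts, test_counts = Counter(ref), Counter(test)
    # Expected agreement under independent scoring with each scorer's marginals.
    expected = sum(ref_counts[k] * test_counts[k] for k in ref_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical reference (expert) vs. algorithm sleep-stage labels.
ref  = ["N2", "N2", "W", "R", "N2", "N3", "W", "N2"]
test = ["N2", "N2", "W", "R", "N1", "N3", "W", "N2"]
print(round(cohens_kappa(ref, test), 3))  # 0.826
```

A metric of this form would give the numerical teeth that the qualitative "equivalence" statements in this submission lack.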
2. Sample Size and Data Provenance for the Test Set
- Sample Size: The document does not specify the sample size (number of polysomnography recordings or events) used for the non-clinical tests that confirmed the algorithmic equivalence.
- Data Provenance: The document does not specify the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts and Qualifications for Ground Truth Establishment
- The document does not specify the number of experts used to establish ground truth or their qualifications. The reference to "trained clinical professionals" evaluating and scoring waveforms for diagnosis implies expert involvement in the intended use, but not necessarily in the validation test set creation.
4. Adjudication Method for the Test Set
- The document does not specify any adjudication method (e.g., 2+1, 3+1).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- An MRMC comparative effectiveness study was not mentioned in the document. The study was focused on algorithmic equivalence to predicate devices, not on human reader improvement with AI assistance.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone (algorithm only) performance assessment was done. The key statement is "Testing confirmed that the Somnolyzer auto-scoring algorithms perform equivalently to the device predicate algorithms in the Somnolyzer 24x7 software (K131994)." This directly refers to the performance of the automated scoring algorithms without human-in-the-loop assistance in the context of the validation tests.
7. Type of Ground Truth Used
- The type of ground truth used is not explicitly stated in terms of "expert consensus," "pathology," or "outcomes data." However, given the context of polysomnography scoring, it is highly probable that the ground truth for evaluating the auto-scoring algorithms would have been established by manual expert scoring of reference data. The document mentions "clinician can evaluate (score) the recorded respiratory, sleep staging and/or oximetry waveforms to ensure that the patient being monitored is properly diagnosed," implying expert scoring as the gold standard for clinical use. It also states the algorithms were "designed to fulfill the same scoring rules as the predicate Somnolyzer 24x7 version 2.5."
8. Sample Size for the Training Set
- The document does not specify the sample size used for training the Somnolyzer scoring algorithms (or if they were "trained" in a machine learning sense, or simply rule-based). It mentions "modification #1" was "Added Somnolyzer 24x7 scoring algorithms to Sleepware G3 as an optional software package" and they were "modified to operate in real time," but doesn't detail how the original algorithms were developed or trained.
9. How the Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for any training set was established. As mentioned above, the origin and development steps of the Somnolyzer algorithms (which were integrated and adapted) are not detailed in this regulatory submission summary.