510(k) Data Aggregation
(172 days)
The Positional Sleep Assessment System (PoSAS) Software is a software-only device intended for use by healthcare professionals to access previously recorded data for review and interpretation of sleep length, sleep disruptions, snoring, and positional sleep disordered breathing events in adults. The PoSAS Software is used in healthcare facilities to generate sleep study reports from data obtained with the Night Shift and pulse oximeter devices; reports include sleep/wake, position, snoring, SpO2, pulse rate, pulse event (6 bpm change), and/or desaturation event information. The report requires clinician interpretation of the results; it does not suggest a course of treatment or generate a diagnosis.
The Positional Sleep Assessment System (PoSAS) Software is a standalone desktop software application that provides the capability to generate reports from the data acquired with the Advanced Brain Monitoring (ABM) Night Shift (K140190) and/or a pulse oximeter. The PoSAS Software is currently compatible with the Nonin WristOx₂ (model 3150) cleared in K102350. The Night Shift is a small neck-worn device with software indicated for use in reporting position, movement, and sound so that positional changes in sleep quality and snoring can be assessed for adult patients only. The Nonin WristOx₂ is a small wrist-worn device indicated for use in measuring, displaying, and storing functional oxygen saturation of arterial hemoglobin (SpO₂) and pulse rate of adults and pediatric patients. The PoSAS Software is intended to be used in healthcare facilities (e.g., a clinician's office) to transfer and analyze data obtained with the Night Shift and WristOx₂ by adult patients in the home or sleep lab. The Night Shift uses a USB 2.0 data cable to transfer data as a standard USB 2.0 flash drive, and the WristOx₂ uses a proprietary USB 2.0 data cable to transfer data via a USB 2.0 virtual COM port. Once the devices are recognized, the PoSAS Software graphical user interface (GUI) enables the user to synchronize the devices by writing the same computer date/time (down to the second) to each of the devices. The PoSAS Software also allows the data to be erased from both the Night Shift and WristOx₂ devices, and allows the Night Shift device settings (i.e., vibration feedback on, vibration feedback off, or trial mode) to be updated prior to recording. Once the clock times of the two devices are synchronized and the user has set the desired settings of the Night Shift device (i.e., identical to the features available in the Night Shift software), the Night Shift and WristOx₂ are worn by a patient to record sleep study data. The PoSAS Software reads the study files, recognizes the respective clock times within each file, and aligns the data for generation of a PoSAS report. The PoSAS Software report presents the Night Shift data and the analyzed pulse oximeter data, and combines the positional data from the Night Shift with the analyzed results from the pulse oximetry data. A PoSAS Software report can be generated either from the data saved on the connected Night Shift and pulse oximeter devices or from data saved to the hard disk of the personal computer. Data from the Night Shift is displayed on the PoSAS Software report without modification. The PoSAS Software calculates oximetry data based on the once-per-second SpO₂ and pulse data acquired by the pulse oximeter; 3% or 4% desaturations and pulse events (6 bpm change) are also calculated. The PoSAS Software combines Night Shift position information with calculated oximetry data to present metrics based on the patient's position (supine or non-supine).
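The 3%/4% desaturation and 6 bpm pulse-event calculations are described only at this level of detail in the summary. As a hedged illustration of the kind of arithmetic involved, the sketch below counts events in a once-per-second SpO₂/pulse series; the baseline-and-recovery rule, thresholds, and function names are assumptions for illustration, not the cleared PoSAS algorithm.

```python
from typing import List

def count_desaturation_events(spo2: List[float], drop_pct: float = 3.0) -> int:
    """Count desaturation events in a 1 Hz SpO2 series.

    Hypothetical rule: an event starts when SpO2 falls at least `drop_pct`
    points below the running pre-event baseline and ends when it recovers
    to within 1 point of that baseline. Illustration only, not the cleared
    PoSAS algorithm.
    """
    events = 0
    baseline = spo2[0]
    in_event = False
    for value in spo2[1:]:
        if not in_event:
            if baseline - value >= drop_pct:
                in_event = True
                events += 1
            else:
                # Track a slowly rising baseline outside of events.
                baseline = max(baseline, value)
        elif value >= baseline - 1.0:
            in_event = False
            baseline = value
    return events

def count_pulse_events(pulse: List[float], delta_bpm: float = 6.0) -> int:
    """Count pulse events, defined here as any second-to-second change
    of at least `delta_bpm` beats per minute (illustrative definition)."""
    return sum(1 for prev, cur in zip(pulse, pulse[1:]) if abs(cur - prev) >= delta_bpm)

if __name__ == "__main__":
    spo2 = [96, 96, 95, 92, 91, 93, 96, 96, 95, 92, 96]   # two 3%+ dips
    pulse = [62, 63, 70, 71, 70, 64, 64, 65, 72, 71, 70]  # three 6+ bpm jumps
    print(count_desaturation_events(spo2, 3.0))  # -> 2
    print(count_pulse_events(pulse, 6.0))        # -> 3
```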
Here's a breakdown of the acceptance criteria and the study details for the Positional Sleep Assessment System (PoSAS) Software, based on the provided 510(k) summary:
Acceptance Criteria and Device Performance
The provided document primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting a formal table of specific acceptance criteria with corresponding performance metrics for all features. However, it does highlight key performance aspects and claims of equivalence.
From the text, the core acceptance criteria are implied to be:
- Accuracy and Sensitivity of Pulse Oximeter Event Recognition: The software must accurately recognize pulse events and 3%/4% desaturation events.
- Equivalence of SpO2 and Pulse Metrics: The PoSAS software's calculation and reporting of SpO2 and pulse rate metrics must be equivalent to the reference Sleep Profiler software.
- Equivalence of Position-Related Metrics: The display of position-related metrics, particularly when combining Night Shift data with oximetry, must be equivalent to what the predicate device or comparable reference software provides.
Reported Device Performance:
| Acceptance Criteria / Performance Aspect | Reported Device Performance |
|---|---|
| Accuracy and Sensitivity of Pulse Oximeter Event Recognition | Performance testing used a Pulse Oximeter Tester to generate patterns of pulse rate and SpO2 changes, confirming the software is "accurate and sensitive enough to recognize pulse events and 3%/4% desaturation events." |
| Equivalence of SpO2 and Pulse Metrics | "All primary endpoints for agreement between SpO2 and pulse metrics were achieved demonstrating the equivalence of the PoSAS and reference Sleep Profiler software SpO2 and pulse rate metrics." The algorithms for processing pulse oximeter data (SpO2 and pulse rate) are "identical" to those used in the Sleep Profiler (K153412). |
| Equivalence of Position-Related Metrics and Overall Data Presentation | "The PoSAS Software displays some data without analysis (i.e. all data provided by the Night Shift), it analyzes fewer signals (i.e. only Pulse rate and SpO2), and it does not present any of the ARES signals that can be manually edited." "The PoSAS Software reports are similar in content to the ARES predicate device and identical to that of the reference Night Shift software for display of data calculated by Night Shift firmware." |
| SpO2 and Pulse Rate Analysis Algorithms | "Analysis of SpO₂ and pulse rate is equivalent to the ARES, as the PoSAS is based on the similar algorithms as the ARES, and is identical to the algorithms used for the Sleep Profiler (reference device)." |
| Combination of Positional Data with Oximetry for Positional-Related Metrics | "PoSAS Software combines positional data obtained from the accessory Night Shift device with the analyzed SpO₂ and Pulse data to determine positional-related and SpO₂/Pulse statistics." |
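The last table row describes combining Night Shift position data with the analyzed oximetry results into position-based statistics. A minimal sketch of that kind of stratification is shown below, assuming 30-second position epochs and a list of event onset times; the epoch length, labels, and helper names are illustrative assumptions rather than the PoSAS design.

```python
from typing import List, Tuple

def positional_event_indices(
    position_epochs: List[str],   # "supine" / "non-supine", one label per 30 s epoch
    event_times_s: List[int],     # desaturation event onsets, seconds from study start
    epoch_len_s: int = 30,
) -> Tuple[float, float]:
    """Return (supine events/hour, non-supine events/hour).

    Illustrative only: each event is attributed to the position epoch that
    contains its onset, and rates are normalized by time spent in each position.
    """
    supine_events = non_supine_events = 0
    for t in event_times_s:
        idx = min(t // epoch_len_s, len(position_epochs) - 1)
        if position_epochs[idx] == "supine":
            supine_events += 1
        else:
            non_supine_events += 1

    supine_hours = position_epochs.count("supine") * epoch_len_s / 3600.0
    non_supine_hours = (len(position_epochs) - position_epochs.count("supine")) * epoch_len_s / 3600.0
    supine_rate = supine_events / supine_hours if supine_hours else 0.0
    non_supine_rate = non_supine_events / non_supine_hours if non_supine_hours else 0.0
    return supine_rate, non_supine_rate

if __name__ == "__main__":
    epochs = ["supine"] * 240 + ["non-supine"] * 720  # 2 h supine, 6 h non-supine
    events = [100, 900, 3500, 20000, 26000]           # onsets in seconds
    print(positional_event_indices(epochs, events))   # -> (1.5, 0.333...)
```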
Study Used to Prove Device Meets Acceptance Criteria:
The document describes a non-clinical study that involved several components:
- Software Verification and Validation Testing: Conducted in compliance with FDA guidance for software in medical devices, ISO 14971:2007 (risk management), "General Principles of Software Validation," and "Guidance for Content of Premarket Submissions for Management of Cybersecurity in Medical Devices."
- Performance Testing with Pulse Oximeter Tester: This test specifically assessed the software's ability to recognize pulse and desaturation events.
- Retrospective Comparative Analysis: Data from sleep studies acquired with a different device (Advanced Brain Monitoring X8/PSG2) were analyzed independently by both the PoSAS software and the reference Sleep Profiler software (K153412). The goal was to demonstrate agreement and equivalence of SpO2 and pulse metrics between the two software applications.
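The summary states that the agreement endpoints between the two software applications were met but does not give the statistical method. One generic way to summarize agreement between paired per-study metrics is a mean-difference (Bland-Altman-style) calculation; the sketch below is illustrative only, with hypothetical values, and is not the analysis ABM performed.

```python
import statistics
from typing import List, Tuple

def agreement_summary(metric_a: List[float], metric_b: List[float]) -> Tuple[float, float, float]:
    """Return (mean difference, lower 95% limit, upper 95% limit) for paired
    per-study metrics produced by two software applications.

    Bland-Altman-style limits: mean difference +/- 1.96 * SD of the differences.
    """
    diffs = [a - b for a, b in zip(metric_a, metric_b)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

if __name__ == "__main__":
    # Hypothetical mean SpO2 per study from the two applications.
    posas = [94.1, 92.8, 95.0, 93.4, 96.2]
    profiler = [94.0, 92.9, 95.1, 93.3, 96.0]
    print(agreement_summary(posas, profiler))
```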
Detailed Study Information:
- Sample Size used for the Test Set and the Data Provenance:
- Sample Size: Not explicitly stated. The document mentions "Retrospective data from sleep studies acquired with the Advanced Brain Monitoring X8/PSG2 device." It does not provide the number of patients or studies included in this retrospective test set.
- Data Provenance: The data was retrospective and acquired using the Advanced Brain Monitoring X8/PSG2 device (K152040). The country of origin for the data is not specified.
- Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:
- The document does not explicitly state that human experts established the ground truth for the test set in the comparative analysis between PoSAS and Sleep Profiler. Instead, the Sleep Profiler software itself (a cleared reference device) served as the "ground truth" or reference for comparison against PoSAS for SpO2 and pulse metrics.
- For the performance testing using the Pulse Oximeter Tester, the "ground truth" would be the known, programmed patterns of SpO2 and pulse changes generated by the tester.
- Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set:
- No adjudication method involving human experts is described for the test set used in the comparative analysis between PoSAS and Sleep Profiler. The comparison was directly between the outputs of the two software programs.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The PoSAS software is a standalone desktop application for report generation and analysis of recorded data. It does not describe a human-in-the-loop scenario or evaluate the improvement of human readers with AI assistance.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Yes, a standalone performance assessment was conducted. The entire performance testing and retrospective comparative analysis described (comparing PoSAS to a Pulse Oximeter Tester and to Sleep Profiler software) evaluates the algorithms and software without explicit human intervention in the interpretation process of those specific tests. The intended use states the report "requires clinician interpretation of the results; it does not suggest a course of treatment or generate a diagnosis," indicating that while the software is standalone in its analysis, clinical interpretation remains a human task.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For SpO2 and pulse metrics: The "ground truth" for demonstrating equivalence was the algorithms and output of the legally marketed reference device software, Sleep Profiler (K153412).
- For recognition of pulse events and 3%/4% desaturation events: The "ground truth" was the known, generated patterns from a Pulse Oximeter Tester.
- For Night Shift data (e.g., position, snoring metrics): The "ground truth" for demonstrating equivalence was the Night Shift software (K140190), which provides "identical" processing and reporting for these metrics.
- The sample size for the training set:
- The document does not provide information on a training set or its sample size. This is common for software systems that replicate or combine existing, cleared algorithms rather than developing new predictive models requiring extensive supervised learning. The PoSAS software primarily harmonizes and displays data, and uses algorithms "identical" or "similar" to previously cleared devices.
- How the ground truth for the training set was established:
- As no training set is mentioned, no information on establishing its ground truth is provided.
(110 days)
Sleep Profiler is intended for use for the diagnostic evaluation by a physician to assess sleep quality and score sleep disordered breathing events in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results; including the staging of sleep, detection of arousals, snoring and sleep disordered breathing events (obstructive apneas, hypopneas and respiratory event related arousals). Central and mixed apneas can be manually marked within the records.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files acquired with the Advanced Brain Monitoring X4 System and the X8 System models SP40 and SP29. Automated algorithms are applied to the raw signals in order to derive additional signals and interpret the raw and derived signal information. The software automates recognition of: sleep stages (Rapid Eye Movement (REM), nREM (N1, N2, N3), and wake); heart/pulse rate; snoring loudness; sleep/wake; head movement and position; snoring, sympathetic, behavioral, and cortical arousals; ECG, EOG, and EMG waveforms; SpO2; airflow; respiratory effort; apneas and hypopneas; and oxygen desaturations. The software identifies and rejects periods with poor electroencephalography signal quality. The full disclosure recording of derived signals and automated analyses can be visually inspected and edited prior to the results being integrated into a sleep study report. Medical and history information can be input from a questionnaire. Responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA). The automated analyses of physiological data are integrated with the questionnaire data and medical and history information to provide a comprehensive report. Several report formats are available depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report, or wishes to provide patient reports. The Sleep Profiler software can be used as a stand-alone application on Microsoft Windows 7 and 8 operating system platforms (desktop model). Alternatively, the user interface (i.e., entry or editing of information) can be delivered via a web portal (portal model). The capability to enter or edit patient information, call the application to generate a study report, and/or download a report is provided using either the desktop PC application or the web-portal application. The same analysis and report generation software is used for both the desktop and web-portal applications.
Here's a breakdown of the acceptance criteria and study information for the Sleep Profiler device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Endpoint | Acceptance Criteria (Equivalent to Predicate Device) | Reported Device Performance (Sleep Profiler) |
|---|---|---|
| AHI for OSA Diagnosis | Minimum targeted positive likelihood ratios for AHI > 5 and AHI > 15 are 3.5 and 5.0 respectively (equivalent to predicate ARES). | Overall AHI: - AHI ≥ 5: Positive Likelihood Ratio = 6.67 - AHI ≥ 15: Positive Likelihood Ratio = 33.0 REM AHI: - AHI ≥ 5: Positive Likelihood Ratio = 8.84 - AHI ≥ 15: Positive Likelihood Ratio = 18.33 (These values exceed the minimum targeted likelihood ratios indicating equivalence) |
| Sleep Staging (Agreement with Expert Consensus) | Comparison of auto-detected staging to PSG results obtained by expert raters, showing equivalent agreement to the predicate Sleep Profiler (K130007). No specific numeric thresholds are explicitly stated as acceptance criteria, but generally high concordance is expected for substantial equivalence. | Overall (n=43 subjects, 3 raters): - Wake: Positive Agreement 0.73, Negative Agreement 0.94 - N1: Positive Agreement 0.25, Negative Agreement 0.93 - N2: Positive Agreement 0.77, Negative Agreement 0.84 - N3: Positive Agreement 0.76, Negative Agreement 0.94 - REM: Positive Agreement 0.74, Negative Agreement 0.97 (The document states "met" for this endpoint by comparison to the predicate.) |
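The likelihood ratios and positive/negative agreement values above follow standard definitions: LR+ = sensitivity / (1 − specificity), positive agreement = TP / (TP + FN), and negative agreement = TN / (TN + FP), computed against the expert-scored reference. The snippet below simply restates those formulas; the 2×2 counts in the example are hypothetical, not taken from the submission.

```python
def positive_likelihood_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """LR+ = sensitivity / (1 - specificity) for a 2x2 table of
    device-positive/negative vs. reference-positive/negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1.0 - specificity)

def positive_negative_agreement(tp: int, fp: int, fn: int, tn: int) -> tuple:
    """Positive agreement = TP / (TP + FN); negative agreement = TN / (TN + FP),
    computed per sleep stage against the raters' consensus staging."""
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    # Hypothetical counts for AHI >= 15 (device vs. PSG) and for one sleep stage.
    print(positive_likelihood_ratio(tp=22, fp=1, fn=4, tn=33))        # ~28.8
    print(positive_negative_agreement(tp=310, fp=60, fn=95, tn=900))  # (~0.77, ~0.94)
```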
2. Sample Sizes and Data Provenance for Test Set
- Sample Size (for AHI/OSA detection): 60 subjects for overall AHI, 40 subjects for REM AHI.
- Sample Size (for Sleep Staging): A subset of 43 subjects from the AHI study, with at least 4 hours of raw X8 diagnostic recording time.
- Data Provenance: The document states "signals acquired with the X8 System concurrent to polysomnography (PSG)." This implies the data was prospectively collected for this evaluation, likely from a clinical setting, but the country of origin is not specified.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Number of Experts:
- For AHI/OSA detection: "one rater per study" for PSG results.
- For Sleep Staging: "weighted majority agreement of three raters" for the 43-subject subset.
- Qualifications of Experts: Not explicitly stated beyond being "rater(s)" for PSG and "expert scoring" for sleep staging. Standard practice for such studies would imply board-certified sleep technologists or physicians experienced in PSG scoring.
4. Adjudication Method for the Test Set
- For AHI/OSA detection: "one rater per study." This implies no formal adjudication/consensus process among multiple independent reviewers, as only a single rater's PSG results were used as ground truth for each case.
- For Sleep Staging: "weighted majority agreement of three raters." This indicates an adjudication method where the consensus of three experts established the ground truth.
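The summary does not define "weighted majority agreement" further. One plausible reading is an epoch-by-epoch vote in which each rater's stage label carries a weight and the highest-weighted stage becomes the reference; the sketch below implements that reading with equal weights as an assumption, purely to illustrate how such a consensus could be formed.

```python
from collections import defaultdict
from typing import Dict, List

def weighted_majority_stage(epoch_labels: Dict[str, str], weights: Dict[str, float]) -> str:
    """Pick the sleep stage with the highest summed rater weight for one epoch.

    `epoch_labels` maps rater name -> stage label ("W", "N1", "N2", "N3", "R").
    Ties are broken by the order stages were first seen (illustrative choice).
    """
    totals: Dict[str, float] = defaultdict(float)
    for rater, stage in epoch_labels.items():
        totals[stage] += weights.get(rater, 1.0)
    return max(totals, key=totals.get)

def consensus_hypnogram(per_rater: Dict[str, List[str]], weights: Dict[str, float]) -> List[str]:
    """Build an epoch-by-epoch consensus hypnogram from several raters' scorings."""
    n_epochs = len(next(iter(per_rater.values())))
    return [
        weighted_majority_stage({r: scores[i] for r, scores in per_rater.items()}, weights)
        for i in range(n_epochs)
    ]

if __name__ == "__main__":
    raters = {
        "rater1": ["W", "N1", "N2", "N2", "R"],
        "rater2": ["W", "N2", "N2", "N3", "R"],
        "rater3": ["W", "N1", "N2", "N3", "N2"],
    }
    print(consensus_hypnogram(raters, weights={"rater1": 1.0, "rater2": 1.0, "rater3": 1.0}))
    # -> ['W', 'N1', 'N2', 'N3', 'R']
```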
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study is explicitly described where human readers' performance with and without AI assistance is compared. The study primarily focuses on the standalone performance of the Sleep Profiler software against expert-scored PSG.
6. Standalone Performance Study
- Yes, a standalone study was performed. The "Sleep Profiler software accuracy was clinically validated with signals acquired with the X8 System concurrent to polysomnography (PSG)." The results presented for AHI and sleep staging are for the algorithm's performance without human intervention, compared to human-scored PSG.
7. Type of Ground Truth Used
- Expert Consensus / Human Scoring: The primary ground truth for both AHI detection and sleep staging was derived from Polysomnography (PSG) results scored by human experts/raters. For sleep staging, it was specifically based on the "weighted majority agreement of three raters."
8. Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set of the Sleep Profiler software. It describes the clinical validation study (test set) but not the development data.
9. How Ground Truth for the Training Set Was Established
- The document does not provide details on how the ground truth for the training set was established. Typically, for such devices, training data would also be derived from expert-scored PSG, similar to the test set, but this information is not included in the 510(k) summary.
(100 days)
The X8 System is intended for prescription use in the home, healthcare facility, or clinical research environment to acquire, record, transmit, and display physiological signals from adult patients. All X8 models (SP40, SP29, and XS29) acquire, record, transmit, and/or display electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG) signals, with optional accelerometer, acoustical, and photoplethysmographic signals. Model SP29 additionally includes a nasal pressure transducer and cannula (for airflow), thoracic and abdomen respiratory effort, and pulse rate and oxyhemoglobin saturation from the finger. The X8 system only acquires and displays physiological signals; no claims are being made for analysis of the acquired signals with respect to the accuracy, precision, and reliability.
The X8 System is indicated for acquiring, recording/storing, transmitting, and displaying physiological data in patients. It can be used by patients in the home, healthcare facility, or clinical research environment. Patients can move within their home or healthcare environment without having to remove the device (e.g. walk to the restroom).
The X8 System is comprised of the X8 device (which is worn on the patient's head and body), accessories, the Device Manager software, and the X-Series Basic-Utility Software. The study records are saved to the PC in EDF format and are available for analysis by third-party software applications, such as Persyst Reveal (K011397).
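Because the study records are saved in EDF, they can be inspected with any EDF-capable tool before being passed to cleared analysis software. As an illustration only (the submission does not describe a programming interface), the snippet below lists the channels, sampling rate, and duration of a recording using the open-source MNE-Python package; the file name is hypothetical.

```python
# Illustrative only: inspecting an X8 study exported as EDF with the
# open-source MNE-Python package (not part of the cleared device).
import mne

def summarize_edf(path: str) -> None:
    """Print the channel count, sampling rate, duration, and channel names."""
    raw = mne.io.read_raw_edf(path, preload=False, verbose="error")
    sfreq = raw.info["sfreq"]
    duration_s = raw.n_times / sfreq
    print(f"{len(raw.ch_names)} channels at {sfreq:g} Hz, {duration_s / 3600:.1f} h")
    for name in raw.ch_names:
        print(" ", name)

if __name__ == "__main__":
    summarize_edf("x8_study_night1.edf")  # hypothetical file name
```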
The X8 System combines hardware, firmware, and software to acquire physiological signals. It acquires physiological data through a battery powered headset worn by the patient and provides a flexible platform for applying sensors and acquiring signals from multiple locations on the head or body, transmitting and recording the signals, and providing visual and auditory indications to ensure high quality data are obtained.
Model SP40 Sleep Profiler is applied by the patient to acquire and record electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG) signals, with optional accelerometer, acoustical, and photoplethysmographic signals during sleep. This model utilizes the X8 Sleep Profiler Strip.
Model SP29 Sleep Profiler PSG2 is applied by the patient to acquire and record electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG) signals, with optional accelerometer, acoustical, and photoplethysmographic signals during sleep. This model additionally includes a nasal pressure transducer and cannula (for airflow), thoracic and abdomen respiratory effort, and pulse rate and oxyhemoglobin saturation from the finger. This model utilizes the X8 Sleep Profiler Strip.
Model XS29 X8 Stat is applied by a technician to acquire, record, transmit, and/or display electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG) signals, with optional accelerometer, acoustical, and photoplethysmographic signals acquired during non-sleep conditions. This model utilizes either the X8 Midline or the X8 Referential strip.
The Device Manager software application provides a means to communicate with the X8 Device, transfer study data, and format a device. The software transfers data saved in the memory of the X8 device using either a PC application or a web-based data entry application operating in a cloud server environment.
The X-Series Basic-Utility Software acquires, presents, and stores physiological signals from the X8 Device. The software has a modular architecture that allows the users to interact using either the Graphical User Interface (GUI) provided with the installation, or programmatically via a Software Development Kit. Additional functionalities provided by X-Series Basic-Utility Software include impedance measurements, custom markers, and interface with the Bluetooth Receiving Dongle.
Here's a breakdown of the acceptance criteria and study details based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
Device: X8 System (Sleep Profiler (SP40), Sleep Profiler PSG2 (SP29), Stat X8 (XS29))
Study Type: Prospective Comparison and Self-Application Study of X8 System Model SP29
| Acceptance Criterion (Primary Endpoint) | Reported Device Performance and Outcome |
|---|---|
| A) Comparison of Signals (X8 System airflow and respiratory effort vs. predicate device Compumedics Somte (K072201)) | |
| No more than 10% of the breathing events recorded with the X8-PSG2 airflow signal were inferior to predicate signal. | Achieved: No events recorded with the X8-PSG2 airflow signal (0%) were found inferior to the predicate signal. |
| No more than 20% of the breathing events recorded with the X8-PSG2 respiratory effort will be inferior to the predicate signal. | Achieved: The X8 thorax and abdomen belts were inferior to the predicate in only 4.0% (9/225) and 1.3% (3/232) of the events, respectively (both well below 20%). |
| B) Demonstrate that high quality signals can be obtained when the X8 System is self-applied with the user instructions. | |
| At least 80% of subjects will be able to acquire at least one night of data (i.e., the entire period they were in bed). | Achieved: 91% (10 of 11 subjects) were able to acquire at least one night of data for the entire night. |
| At least 70% of each night of recording time will be valid across the oximetry, nasal pressure, and effort belt signals. (While not identified as a primary endpoint for cardio-respiratory signal quality, high EEG quality was also assessed.) | Achieved: The percentage of good data obtained for oximetry, nasal pressure (airflow), and respiratory effort (thorax and abdomen) exceeded 90% on each night. High EEG quality was also obtained in over 90% of the recording time on each night. |
| At least 70% of subjects did not report X8-PSG2 audio alerts (for signal quality) substantially affected (i.e., strongly agreed) their ability to stay asleep. | Achieved: 90% (9 of 10 subjects) did not "strongly agree" that the PSG2 made it difficult for them to stay asleep. (Note: One subject was excluded from this calculation, resulting in 10 subjects for this specific endpoint, even though 11 were initially reported for data acquisition ability). |
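The signal-validity endpoints above reduce to the fraction of recording time flagged as good per channel. The sketch below shows that arithmetic, assuming a per-second boolean quality mask for each signal; how such a mask is derived is not described in the document and is left as an input here.

```python
from typing import Dict, List

def percent_valid(quality_masks: Dict[str, List[bool]]) -> Dict[str, float]:
    """Percent of recording time flagged valid for each channel.

    `quality_masks` maps channel name -> one True/False flag per second of
    recording; how those flags are derived is outside the scope of this sketch.
    """
    return {
        channel: 100.0 * sum(mask) / len(mask)
        for channel, mask in quality_masks.items()
        if mask
    }

if __name__ == "__main__":
    # One 8-hour night (28,800 seconds) with hypothetical validity counts.
    masks = {
        "SpO2": [True] * 27360 + [False] * 1440,            # 95% valid
        "nasal_pressure": [True] * 26496 + [False] * 2304,  # 92% valid
    }
    print(percent_valid(masks))  # -> {'SpO2': 95.0, 'nasal_pressure': 92.0}
```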
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- For comparison of airflow and respiratory effort signals (part A of the study): The specific number of breathing events analyzed is provided (225 for thorax, 232 for abdomen), but the number of subjects contributing to these events is not explicitly stated.
- For self-application and signal quality (part B of the study): 11 subjects.
- For audio alerts acceptance criterion: 10 subjects (one subject was excluded from this analysis).
- Data Provenance: The study was described as "prospective," indicating that the data was collected specifically for this study under controlled conditions defined prior to data collection. The document does not specify the country of origin for the data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts or their qualifications for establishing the ground truth for this comparative study. However, the ground truth for part A (comparison of signals) was implicitly the readings from the predicate device, Compumedics Somte (K072201), an FDA-cleared device. For part B (self-application and signal quality), the ground truth for signal quality likely involved assessment against predefined criteria or by qualified personnel, but this is not detailed.
4. Adjudication Method for the Test Set
The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). For the comparative study (part A), the comparison was against an "FDA cleared device," implying the predicate device's output served as the reference standard. For self-application (part B), signal quality was assessed, presumably against accepted norms for such physiological signals, but no multi-expert adjudication process is described.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not conducted or described in this document. The X8 System "acquires and displays physiological signals; no claims are being made for analysis of the acquired signals with respect to the accuracy, precision, and reliability." Therefore, this device does not involve AI analysis or human reader interpretation for diagnostic claims, but rather accurate signal acquisition.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The device's intended use is to "acquire, record, transmit, and display physiological signals." It does not include automated analysis or diagnostic algorithms. Therefore, a standalone algorithm performance study was not applicable and not done. The focus of the performance study was on the quality and equivalence of the acquired physiological signals.
7. The Type of Ground Truth Used
- For comparison of airflow and respiratory effort signals (Part A): The ground truth was based on the signals obtained from an FDA-cleared predicate device (Compumedics Somte (K072201)).
- For self-application and signal quality (Part B): The ground truth for "valid" recording time and "high EEG quality" would have been established by assessing the acquired physiological signals against accepted physiological standards and criteria for signal integrity, likely by trained professionals. The specific method of establishing this ground truth is not detailed beyond the term "good data."
8. The Sample Size for the Training Set
The document describes "prospective studies" for verification and validation. Given that the device's function is signal acquisition and display, and no claims for analysis accuracy or reliability are being made, there isn't a "training set" in the context of machine learning. The studies described are performance evaluations of the hardware and software's ability to accurately capture and transmit signals.
9. How the Ground Truth for the Training Set Was Established
As noted above, no "training set" in the machine learning sense is described for this device. The ground truth for performance evaluation was established through comparison with a predicate FDA-cleared device and assessment against physiological signal quality standards.
(125 days)
The Night Shift is indicated for prescription use for the treatment of adult patients with positional obstructive sleep apnea with a non-supine apnea-hypopnea index < 20, and to reduce or alleviate snoring. It records position, movement, and sound so that positional changes in sleep quality and snoring can be assessed.
The Night Shift is worn around the neck to reduce the amount of time the user sleeps in the supine position as a treatment for positional obstructive sleep apnea. Night Shift combines hardware and firmware to detect when the user attempts to sleep in the supine position and can initiate vibro-tactile feedback with increasing intensity until the user shifts to a non-supine position. The delay between when the Night Shift is turned on and when positional feedback is initiated is programmable, to allow the user to fall asleep (if they must) on their back. Each night the Night Shift is worn, it monitors sleep position (% time supine), behavioral sleep efficiency, and snoring levels (% time snoring > 40 and 50 dB), as well as the frequency, duration, and intensity of the feedback (when applied). These data can optionally be transferred via the USB port to the Night Shift Web Portal, where the user can generate a report to assess how well the positional feedback is working. A "trial" protocol can include one night with no feedback to establish a baseline and two nights with feedback to assess compliance/efficacy. Utilization information is saved on the device so that reports can be generated comparing daily use by month and monthly averages for one year. The portal also allows the device to be reformatted (to eliminate all previously recorded data) for a new user, the feedback settings to be adjusted to a new user's personal preferences, and/or the firmware to be upgraded. For large healthcare organizations that limit internet access, desktop software is provided as an alternative to the portal.
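The description above amounts to a small control loop: detect sustained supine posture, wait a programmable delay, then escalate vibro-tactile feedback until the wearer turns, while logging nightly summary metrics. The sketch below is a hypothetical rendering of that loop and of the %-time-supine summary; the delay, escalation step, and intensity scale are assumptions, not the cleared firmware's parameters.

```python
from typing import Iterable, List

def feedback_intensities(
    positions: Iterable[str],   # one posture label per second: "supine" or other
    delay_s: int = 300,         # programmable delay before feedback begins (assumed)
    max_intensity: int = 5,
) -> List[int]:
    """Return an illustrative feedback intensity (0..max) for each second.

    Feedback starts after the wearer has been supine continuously for
    `delay_s` seconds and steps up once per 30 s until the posture changes.
    """
    out: List[int] = []
    supine_run = 0
    for pos in positions:
        supine_run = supine_run + 1 if pos == "supine" else 0
        if supine_run > delay_s:
            out.append(min(1 + (supine_run - delay_s) // 30, max_intensity))
        else:
            out.append(0)
    return out

def percent_time_supine(positions: List[str]) -> float:
    """Nightly %-time-supine summary, as reported in the device's usage data."""
    return 100.0 * positions.count("supine") / len(positions)

if __name__ == "__main__":
    night = ["left"] * 600 + ["supine"] * 400 + ["right"] * 2600
    print(max(feedback_intensities(night)))           # escalates after the 300 s delay
    print(f"{percent_time_supine(night):.1f}% time supine")
```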
Here's a breakdown of the acceptance criteria and the study details for the Advanced Brain Monitoring, Inc. Night Shift device, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Endpoint | Acceptance Criteria | Reported Device Performance | Conclusion |
|---|---|---|---|
| Effectiveness of Night Shift therapy (Primary) | 65% of PT compliant participants with baseline overall AHI > 10 and non-supine AHI < 20 will demonstrate a clinically important reduction in sleep apnea severity based on a minimum 50% reduction. | 85.2% (23 out of 27) participants with pre-treatment positional obstructive sleep apnea with a non-supine AHI < 20 achieved a >50% decrease in AHI. | Met (85.2% vs. 65% target) |
| Safety | 80% of participants will complete the study with no adverse events resulting in voluntary dropping. | 100% of compliant subjects successfully completed the study. No adverse events were reported. | Met (100% vs. 80% target) |
| Accuracy of Supine Position Measurement | Computation of percent time supine by Night Shift is within +/- 5% of the percent time supine by video recordings plus chest sensor in 73% of subjects. | Night Shift was within 5% of chest/video supine time in 92% of the studies. | Met (92% vs. 73% target) |
| Treatment Compliance | At least 80% of participants will be compliant (use Night Shift for a minimum of 5.5 hours/night or length of their time in bed, five nights/week). | 100% of the participants wore the Night Shift for a minimum of 20 nights across the 28 nights of intended use. | Met (100% vs. 80% target) |
| Reduction in Supine Time | At least 70% of participants will average less than 15% time supine across the four weeks of home use. | 97% of the participants averaged less than 15% of time in bed in the supine position when therapy was delivered. | Met (97% vs. 70% target) |
| Improved Epworth Sleepiness Score (ESS) | 50% of PT compliant participants will show an improved ESS of ≥ 2. | 50% of participants exhibited an improvement of 2 or more, and 50% showed no change. None of the ESS scores worsened by 2 or more. | Met (50% vs. 50% target, with no worsening) |
| Improved Functional Outcomes of Sleep Questionnaire (FOSQ) total | FOSQ total will improve by ≥ 2 points in at least 50% of subjects. | 57% exhibited an improvement of 2 or more, 23% showed no change, and 20% showed a worsening of 2 or more. | Met (57% vs. 50% target) |
| Mean Sensitivity (sleep) and Specificity (wake) for Night Shift | The mean sensitivity (sleep) and specificity (wake) for Night Shift will be 0.85 and 0.50, respectively. | The endpoint was met based on the sensitivity and specificity of 90% and 58% across 65 studies. | Met (90% sensitivity vs. 0.85, 58% specificity vs. 0.50) |
| Night Shift Total Sleep Time (TST) within predicate range | 73% of subjects will be within the range of the predicate when subtracting PSG Total Sleep Time (TST) from Night Shift TST (i.e., range 151 and -129 minutes). | 99% of the studies had TST derived from Night Shift within the maximum error (based on two standard deviations of the TST error for the predicate device) vs. PSG TST. | Met (99% vs. 73% target) |
| Night Shift Sleep Efficiency (SE) within predicate range | 73% of subjects will be within the range of the predicate when subtracting PSG Sleep Efficiency (SE) from Night Shift SE (i.e., range 19.1 and -17.2%). | 92% of studies had SE values derived from Night Shift within the maximum error (based on two standard deviations of the SE error for the predicate device) vs. PSG SE. 80% of subjects had sleep onset values <15-minutes. 82% of subjects had wake after sleep onset (WASO) values <45 minutes. | Met (92% vs. 73% target) |
| No consistent patterns of increased N1 and cortical arousals or decreased N3 and REM. | There are no consistent patterns of increased N1 and cortical arousals or decreased N3 and REM. | 87% showed a decrease in N1, 80% a decrease in cortical arousals, 17% an increase in N3, and 33% an increase in REM sleep. Only 3% of subjects showed increase in N1, 7% an increase in cortical arousals, 13% a decrease in N3, and 17% a decrease in REM sleep. | Met |
| Snoring > 50 dB to identify AHI ≥ 10 (sensitivity > 0.80, specificity > 0.65) | The percent time snoring > 50 dB can be used to identify patients with an AHI ≥ 10 with a sensitivity > 0.80 and a specificity > 0.65. | When the percentage of time snoring above 50 dB exceeds 10% of sleep time, the sensitivity was 0.85 and the specificity exceeded 0.58. | Not Met (Specificity of 0.58 did not meet >0.65 target) |
| Identification of treatment success/failure based on AHI, ESS, PHQ9, ISI, GAD7, FOSQ | Those successfully or unsuccessfully treated with Night Shift can be identified via a combination of changes in the AHI, daytime drowsiness (ESS), depression (PHQ9), Insomnia (ISI), anxiety (GAD7) and quality of life (FOSQ). | Evaluating trends across these measures, 50% of subjects showed a substantial improvement as a result of Night Shift therapy and an additional 10% showed improvement, and 33% showed no change. None showed a worsening and two cases (7%) showed substantial overall worsening of subjective measures. | Met (with caveat) - "numbers of failures were too few to characterize" |
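Several endpoints in the table reduce to epoch-by-epoch comparisons against PSG: sleep/wake sensitivity and specificity, and the total sleep time (TST) error bounds carried over from the predicate. The sketch below shows those calculations on hypothetical 30-second epoch labels; it is illustrative arithmetic, not the study's analysis code.

```python
from typing import List, Tuple

def sleep_wake_sens_spec(device: List[str], psg: List[str]) -> Tuple[float, float]:
    """Sensitivity for sleep and specificity for wake, epoch by epoch vs. PSG."""
    sleep_epochs = [d for d, p in zip(device, psg) if p == "sleep"]
    wake_epochs = [d for d, p in zip(device, psg) if p == "wake"]
    sensitivity = sleep_epochs.count("sleep") / len(sleep_epochs)
    specificity = wake_epochs.count("wake") / len(wake_epochs)
    return sensitivity, specificity

def tst_error_minutes(device: List[str], psg: List[str], epoch_len_s: int = 30) -> float:
    """Device TST minus PSG TST, in minutes (compared against the predicate's
    error range of -129 to +151 minutes cited in the table above)."""
    scale = epoch_len_s / 60.0
    return (device.count("sleep") - psg.count("sleep")) * scale

if __name__ == "__main__":
    # Hypothetical hypnograms: 30 s epochs labeled "sleep" or "wake".
    psg = ["wake"] * 40 + ["sleep"] * 800 + ["wake"] * 20
    device = ["wake"] * 30 + ["sleep"] * 815 + ["wake"] * 15
    print(sleep_wake_sens_spec(device, psg))
    print(tst_error_minutes(device, psg), "minutes")
```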
Study Details for Clinical Tests:
- Sample size used for the test set and the data provenance:
- Test Set Sample Size for Primary Effectiveness Endpoint: 27 patients with pre-treatment positional obstructive sleep apnea with a non-supine AHI < 20 were included in the analysis for the primary effectiveness endpoint.
- Test Set Sample Size for other Endpoints: 30 subjects (the 27 patients plus an additional 3 subjects who had a pre-study non-supine AHI >20).
- Data Provenance: Not explicitly stated, but the study was a clinical study conducted to evaluate safety and efficacy, implying prospective data collection. The location (country of origin) is not mentioned.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the given text. Ground truth for sleep studies typically involves highly trained sleep technologists and physicians interpreting polysomnography (PSG) data. However, the document does not specify the number or qualifications for this particular study.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not provided in the given text.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The study evaluated the device's performance in treating sleep apnea and recording sleep parameters, not how human readers improve with or without AI assistance in interpreting diagnostic data from the device. The Night Shift is a therapeutic and monitoring device, not an AI diagnostic interpretation tool.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Yes, the study primarily assessed the standalone performance of the Night Shift device. While it is a prescription device, its effectiveness was measured by its ability to reduce supine sleep and associated AHI, as well as its accuracy in measuring sleep parameters (position, TST, SE) independently. Human interaction is primarily for setup, compliance, and physician review of the generated reports, but the core therapeutic and monitoring function is standalone.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for sleep parameters (AHI, supine time, TST, SE) appears to be Polysomnography (PSG), a gold standard for sleep disorder diagnosis. For the "Accuracy of Supine Position Measurement" endpoint, the ground truth was "video recordings plus chest sensor." For subjective measures (ESS, FOSQ), the ground truth was the participant's self-reported scores.
- The sample size for the training set:
- This information is not provided in the given text. The document describes a clinical validation study, not the development or training phase of an algorithm.
- How the ground truth for the training set was established:
- This information is not provided as the training set details are not mentioned.
(197 days)
The X-Series System is intended for prescription use in the home, healthcare facility, or clinical research environment to acquire, transmit, display and store physiological signals from patients ages 6 and older. The X-Series system requires operation by a trained technician. The X-Series System acquires, transmits, displays and stores electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG), and accelerometer signals. The X-Series System only acquires and displays physiological signals, no claims are being made for analysis of the acquired signals with respect to the accuracy, precision and reliability.
The X-Series System is indicated for acquiring, transmitting, displaying and storing physiological data in patients. It can be used with ambulatory patients in the home, health care facility, or clinical research environment. The X-Series system requires operation by a trained technician. The X-Series System is comprised of the X10 and X24 Headsets and accessories, Synapse Cream, the X-Series Basic Software, and the BT receiving unit. The X-Series Basic Software is also compatible with Models X4-E and X4-M (K130013) when used in wireless mode. The X-Series System combines hardware, firmware and software to acquire physiological signals. It acquires physiological data through a battery powered headset worn by the patient and provides a flexible platform for applying sensors using synapse cream and acquiring signals from multiple locations on the head or body, transmitting and recording the signals and providing visual indications to ensure high quality data are obtained. Model X24 provides for acquisition of twenty channels of electroencephalography (EEG) and four optional channels connected to two sensors via a dual-lead connector. Model X10 provides for acquisition of nine channels of electroencephalography (EEG) and an optional channel connected to two sensors via a dual-lead connector. Both models measure movement and position via a 3-D accelerometer. The device is designed so it can be affixed by a technician and displays the signals via a wireless connection during acquisition. The X-Series Basic software monitors signal quality to ensure that the sensors are properly applied and that high quality signals are being acquired. The X-Series Basic software provides a means to: a) initiate a study and track patient information, b) acquire and wirelessly transmit signals from the device, c) visually inspect the signal quality. The acquired signals are saved in a universal data format (European Data Format Plus, EDF+) that is intended to be analyzed by a Physician using FDA-cleared third-party software, i.e., Persyst Software (K011397).
This document describes the X-Series System, an electroencephalograph device for acquiring, transmitting, displaying, and storing physiological data. The submission focuses on demonstrating substantial equivalence to a predicate device (K130013 X4 System) through non-clinical and limited clinical testing.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document leverages comparison to a predicate device and adherence to recognized standards as its primary acceptance criteria. Performance metrics are largely presented as being equivalent to the predicate.
| Acceptance Criteria Category | Specific Criteria (from document) | Reported Device Performance (X-Series System) |
|---|---|---|
| Functional Equivalence | Equivalence to predicate device (X4 System) for electrophysiological (EEG, EOG, ECG, EMG), wireless acquisition, and actigraphy. Technologies used in the same manner. | EEG: Sampling Rate: 256 s/s, 0.1 Hz High Pass (hardware), 100 Hz Low Pass (hardware), Dynamic range: +/- 1000μV, Resolution 0.03 µV, Peak to peak noise: 3.7 µV (typical), 110dB Common Mode Rejection Ratio (typically), Input Impedance: 100GOhm (Equivalent to X4) Aux - EOG/EMG: Sampling rate 256 Hz, 0.1 Hz High Pass (hardware), 100 Hz Low Pass (hardware). EMG/EOG: Dynamic range: +/- 1000μV, Resolution 0.03 μV, Peak to peak noise: 3.7 μV (typical) (Equivalent to X4) Aux - ECG: Bandwidth: 0.1 to 100Hz -3dB, Notch Filters: 50, 60Hz (software), Common Mode Rejection Ratio: 110dB, Input Impedance: 100GOhm, Input Range: +/-4mV, Accuracy: 16 bits, Noise: <5μVpp (typically 4.2μVpp), Max Electrode DC Offset: +/-125mV, Dynamic range: +/- 2000μV, Resolution: 0.06 μV (Equivalent to X4) Accelerometer: Sampled 100Hz, downsampled to 10Hz; X x Y x Z @ 10 s/s, Resolution nominal 10 bit at 2g, Actual output range for -90 to 90 degrees is 12 bit. Position accuracy typically +/- 3.0 degrees, maximum +/- 5.0 degrees in the +/- 60 degrees range. Sensitivity: 10 bit / 2g, Non-linearity: +/-0.5 %FS, Cross-Axis: +/-1 %, Zero-Level: +/-0.35-0.4 mg, Sensitivity change due temperature: +/-0.01 LSB/°C (Equivalent to X4) Bench tests confirmed EEG and optional dual-lead (for ECG) analog signals were equivalent to predicate. Actigraph measures were equivalent. |
| Electrical Safety | Compliance with IEC 60601-2-26:2002 (Electroencephalographs) and IEC 60601-1-11: 2010 (Home healthcare environment) | All tests "passing demonstrating the compliance of the X-Series system to FDA recognized standards for electro-medical equipment." |
| Biocompatibility | Cytotoxicity, Sensitization, Irritation per ISO 10993 standards (for patient-contacting components). | Cytotoxicity (ISO 10993-5): No reactivity (grade 0) in all cultures. Concluded "Non-cytotoxic." Sensitization (ISO 10993-10): Irritation absent from all animals, animals gained weight, no overt toxicity, no skin reaction scores. Concluded "Non-sensitizer." Irritation (ISO 10993-10): No signs of erythema or edema at observation points. Test Primary Irritation Index (PII) was 0.0. Concluded "Non-irritant." |
| Cleaning & Disinfection | Acceptance criteria: ≥ log 3.0 reduction in bioburden after cleaning according to instructions, per AAMI TIR 12-94 and AAMI TIR 30: 2003. | "All tests passed and demonstrate that the cleaning methods are appropriate for ensuring the X-Series acquisition device is clean between uses." |
| Software Integrity | Verification and validation of specifications, function, and intended use, per FDA's Guidance for Premarket Submissions for Software Contained in Medical Devices (May 2005). | "The results of the verification and validation activities that have been performed demonstrate that the software meets requirements for safety, function, and intended use." |
| EEG Electrode Performance | Electrical performance acceptable, using applicable methods from AAMI/ANSI EC12:2000 (R)2010. | "The results demonstrate electrical performance of the EEG electrodes used with the X-Series System is acceptable." |
| Clinical Performance (Ages 6+) | Ability to acquire appropriate EEG recordings in children aged 6-8, with equivalent data quality to adults. | 100% of EEG sessions contained appropriate EEG recordings. Data quality from children was equivalent to an equivalent adult population. Technician setup was quick, and no discomfort or agitation was reported by subjects. |
2. Sample Size Used for the Test Set and the Data Provenance:
- Non-clinical bench tests (EEG, ECG, Actigraph comparison): The sample size is not explicitly stated as 'n=X' but implies a sufficient number of measurements were taken from the X-Series and predicate devices to compare their performance in a jig and with an oscilloscope/signal generator.
- Biocompatibility: The specific number of items/animals per test is mentioned in the results (e.g., "all the animals," "all test animals").
- Cytotoxicity: Mammalian cell cultures (mouse fibroblast L929).
- Sensitization: Albino guinea pigs.
- Irritation: New Zealand White rabbits.
- Clinical tests (for ages 6+):
- Sample size: Seven children aged 6-8 and seven adults.
- Data provenance: "A subset of data from a research study was analyzed." This suggests the data was retrospective and likely collected in a clinical research environment (Country of origin not specified, but likely USA given the FDA submission).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:
- Non-clinical tests: The document implies standard laboratory testing procedures and comparisons to the predicate, overseen by qualified personnel. No specific "experts" for ground truth establishment are mentioned as the tests rely on objective measurements and established standards.
- Clinical tests (for ages 6+): The document states that the acquired EEG recordings "could be further examined in third-party software" by a Physician. While a physician would interpret the EEG, their role in establishing "ground truth" for the device's acquisition capabilities (the focus of this study) is not explicitly detailed. The acceptance criteria here are about the ability to acquire appropriate EEG recordings and data quality equivalence, which are evaluated directly from the device's output. The "ground truth" for this aspect is the objective quality of the signals themselves, compared between subject groups.
4. Adjudication Method for the Test Set:
- No formal adjudication method (like 2+1 or 3+1) is mentioned for any of the studies.
- The non-clinical tests rely on objective measurement against defined specifications or predicate device performance.
- The clinical study for children's EEG acquisition focused on the presence of "appropriate EEG recordings" and data quality equivalence, which would likely be assessed by comparing signal characteristics.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- No MRMC comparative effectiveness study was done.
- The X-Series System is a data acquisition device; it explicitly states, "The X-Series System only acquires and displays physiological signals, no claims are being made for analysis of the acquired signals with respect to the accuracy, precision and reliability." It is not an AI-assisted diagnostic tool.
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was Done:
- Not applicable. This device is purely for data acquisition and display, to be interpreted by a physician using separate (third-party) software. There is no algorithm providing diagnostic or analytical output within the X-Series System itself.
7. The Type of Ground Truth Used:
- Non-clinical/Bench tests: Objective measurements (e.g., signal generator output, oscilloscope readings, defined angles for actigraph, standard test methods for biocompatibility and cleaning). The comparison to the predicate device serves as a de facto "ground truth" for performance equivalence where applicable.
- Clinical tests (for ages 6+): The "ground truth" for this specific study was the presence of "appropriate EEG recordings" and subjectively assessed "equivalent data quality" for the acquired signals between age groups. This relies on the qualitative assessment of electrical signal characteristics typically reviewed by trained professionals.
8. The Sample Size for the Training Set:
- Not explicitly stated, nor is a dedicated "training set" relevant for this device. The X-Series System is a data acquisition device, not a machine learning model that requires a training set. The clinical study for ages 6+ was a validation of its acquisition capabilities, not for training an algorithm.
9. How the Ground Truth for the Training Set was Established:
- Not applicable, as no training set was used for a machine learning algorithm.
(105 days)
Sleep Profiler is intended for the diagnostic evaluation by a physician to assess sleep quality in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results, including the staging of sleep, detection of arousals and snoring.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files meeting defined specifications, including signals acquired with the Advanced Brain Monitoring X4 System. Automated algorithms are applied to the raw signals in order to derive additional signals and interpret the raw and derived signal information. The software automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, d) cortical (EEG), sympathetic (pulse) and behavioral (actigraphy and snoring) arousals. A single channel of electrocardiography, electrooculography, electromyography, or electroencephalography can be optionally presented for visual inspection and interpretation. The software identifies and rejects periods with poor electroencephalography signal quality. The full disclosure recording of derived signals and automated analyses can be visually inspected and edited prior to the results being integrated into a sleep study report. Medical and history information can be input from a questionnaire. Responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA) (a condition that cannot be diagnosed with Sleep Profiler) so an appropriate referral to a sleep physician is made. The automated analyses of physiological data are integrated with the questionnaire data, medical and history information to provide a comprehensive report. Several report formats are available depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report or provide patient reports. The capability to enter or edit patient information, call the application to generate a study report, and/or download a report is provided using either the desktop PC application or a web-based module which emulates the desktop functionality. The same analysis and report generation software is used for both the desktop and web-portal applications.
The provided text describes a 510(k) submission for the Advanced Brain Monitoring, Inc. Sleep Profiler (K130007). This submission is for modifications to an already cleared device (K120450) to introduce a web-based module for patient information entry and report generation/download. The core analysis and report generation software remains unchanged from the predicate device.
Therefore, the acceptance criteria and study information provided in this document focus exclusively on the non-clinical testing performed to demonstrate that the new web-based module functions equivalently to the desktop application. There is no clinical study described here that would involve human readers, ground truth establishment through expert consensus or pathology, or outcome data for performance metrics like sensitivity, specificity, etc.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Key Metric for Software Verification) | Reported Device Performance |
|---|---|
| Confirmation of identical performance using either the desktop or portal for key functions: | The results of the verification and validation activities demonstrate that the software meets requirements for safety, function, and intended use, including: |
| - Enter questionnaire responses | - Performance is identical for entering questionnaire responses via desktop or portal. |
| - Edit study data | - Performance is identical for editing study data via desktop or portal. |
| - Initiate generation of a study report | - Performance is identical for initiating generation of a study report via desktop or portal. |
| - Download a study report | - Performance is identical for downloading study reports via desktop or portal. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The verification and validation activities tested the functionality of the web-based module against the desktop application, indicating a comparative test of functionalities. It does not refer to a "test set" in the context of clinical data.
- Data Provenance: Not applicable in the context of clinical data, as this was a non-clinical software verification study. The tests would have involved functional verification of the software.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Not applicable. This submission focuses on software functionality verification, not clinical performance requiring expert ground truth.
4. Adjudication Method for the Test Set
- Not applicable. This was a non-clinical software verification, not a clinical study requiring adjudication of expert interpretations.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. An MRMC study was not conducted or described in this document. The submission explicitly states that "The modifications ... did not require clinical studies to support substantial equivalence."
6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) Performance Evaluation Was Done
- Yes, indirectly. The Sleep Profiler is described as a "software-only device" that "automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, d) cortical (EEG), sympathetic (pulse) and behavioral (actigraphy and snoring) arousals." The verification discussed here specifically confirms that the web-based module performs these functions identically to the pre-existing desktop version, which does perform these analyses automatically without continuous human intervention during the analysis phase. However, it's intended to be "used under the supervision of a clinician" and the reporting can be "visually inspected and edited prior to the results being integrated into a sleep study report," implying a human-in-the-loop for final interpretation. The performance of the automated algorithm itself was established in the predicate device (K120450) and is not being re-evaluated for K130007.
7. The Type of Ground Truth Used
- Not applicable for clinical performance. For the software verification described, the "ground truth" was the expected functional output and behavior of the established desktop application (K120450). The web-based module was verified to produce identical results.
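As an illustration only (the submission does not describe the actual test protocol), verifying that the web-based module "produces identical results" to the desktop application could be automated by processing the same study through both interfaces and comparing the resulting report files. The paths and helper below are hypothetical, and report fields such as embedded timestamps would typically need to be normalized before comparison.

```python
# Illustrative sketch of a desktop-vs-portal equivalence check: the same input
# study is processed through each interface and the resulting report files are
# compared by cryptographic hash. Paths are hypothetical; fields that legitimately
# differ (e.g., generation timestamps) would need to be normalized beforehand.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def assert_reports_identical(desktop_report: Path, portal_report: Path) -> None:
    """Fail loudly if the two report files differ in any byte."""
    d, p = sha256_of(desktop_report), sha256_of(portal_report)
    if d != p:
        raise AssertionError(f"Reports differ: desktop={d[:12]} portal={p[:12]}")

# Example usage with hypothetical output paths:
# assert_reports_identical(Path("out/desktop/report.pdf"), Path("out/portal/report.pdf"))
```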
8. The Sample Size for the Training Set
- Not provided/not applicable. This submission does not discuss algorithmic development or training sets. It is a modification to an already existing software application. The predicate device (K120450) would have had this information.
9. How the Ground Truth for the Training Set Was Established
- Not provided/not applicable. As above, this information relates to the original algorithmic development, not the current submission for a web-based module.
(29 days)
The X4 System is intended for prescription use in the home, healthcare facility, or clinical research environment to acquire, record, transmit and display physiological signals from adult patients. The X4 System acquires, records, transmits and displays electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG), accelerometer, acoustical and photoplethysmographic signals. The X4 system only acquires and displays physiological signals; no claims are being made for analysis of the acquired signals with respect to the accuracy, precision and reliability.
The X4 system is used for configurable acquisition of physiological signals. Model X4-E provides for acquisition of three channels of electroencephalography (EEG) and one photoplethysmographic (PPG) signal from a head strip, with an optional channel connected to two sensors via a dual-lead connector with twice the gain. Model X4-M provides four channels of EEG, with the dual-lead connector providing the input for reference sensors. Both models measure sound via an acoustic microphone, and movement and position via a 3-D accelerometer. The device is designed so that the patient can affix it and record data. Alternatively, a technician can affix the device and display the signals via a wireless connection during acquisition. The X4 system firmware monitors signal quality to ensure that the sensors are properly applied and that high-quality signals are being acquired.
The X4 software provides a means to: a) initiate a study and track patient information, b) acquire and save signals to the memory of the device, c) acquire and wirelessly transmit signals from the device, d) upload data saved in the memory of the device to a PC, and e) visually inspect the signal quality.
The acquired signals are saved in a universal data format (European Data Format - EDF). The study record, once saved on the PC, is available for analysis by Advanced Brain Monitoring's Sleep Profiler software application. The X4's downloaded study will reside on either a local PC or a cloud server, which can be a physical or virtual server. Software on the cloud server is accessed via a web portal.
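The firmware's signal-quality monitoring is not detailed in the summary; as a rough, hypothetical sketch of the general idea, a host-side check on an uploaded channel might flag one-second windows that are flat (suggesting a detached sensor) or pinned at the amplifier limits. The thresholds below are illustrative only, not the X4's actual criteria.

```python
# Rough illustration of a signal-quality screen: flag one-second windows that
# are flat (near-zero variance, e.g., a detached sensor) or railed (pinned at
# the amplifier limits). Thresholds are illustrative only.
import numpy as np

def quality_flags(signal: np.ndarray, sfreq: float,
                  flat_std: float = 1e-7, rail_frac: float = 0.95) -> np.ndarray:
    """Return a boolean array, one entry per 1-second window, True = suspect."""
    win = int(sfreq)
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    full_scale = np.max(np.abs(signal)) or 1.0
    flat = windows.std(axis=1) < flat_std
    railed = (np.abs(windows) > rail_frac * full_scale).mean(axis=1) > 0.5
    return flat | railed

# Example (hypothetical channel): flags = quality_flags(eeg_channel, sfreq=256.0)
```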
The Advanced Brain Monitoring, Inc. X4 System (K130013) is a device for configurable acquisition of physiological signals, including EEG, EOG, ECG, EMG, accelerometer, acoustical, and photoplethysmographic signals. It is intended for prescription use in various environments to acquire, record, transmit, and display these signals from adult patients.
The documentation indicates that no clinical studies were performed to establish substantial equivalence for this particular submission (K130013). Instead, the determination of substantial equivalence was based on non-clinical tests, specifically focusing on risk management and software testing. The X4 System (K130013) is described as identical to a previously cleared X4 System (K120447), with the only change being the development of a Device Manager Module accessible via a web portal, which duplicates functions already present in the PC software.
Therefore, the following information is based on the provided text, and it's important to note that the acceptance criteria and performance are related to the software functionality as evaluated for this specific 510(k) submission, rather than the accuracy of physiological signal acquisition, for which no new claims are being made.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Software Functionality) | Reported Device Performance |
|---|---|
| Format the device for a new patient | Confirmed identical performance using either the desktop or web portal for this function. |
| Enter and upload study identification information to the device | Confirmed identical performance using either the desktop or web portal for this function. |
| Download study information to data storage | Confirmed identical performance using either the desktop or web portal for this function. |
| Upload new firmware | Confirmed identical performance using either the desktop or web portal for this function. |
| Ensure software meets requirements for safety, function, and intended use | The results of the verification and validation activities demonstrate that the software meets these requirements. This was confirmed by thorough testing through verification of specifications and validation, including software validation. The key metric was identical performance between the desktop and web portal for the specified key functions of the Device Manager software. |
2. Sample size used for the test set and the data provenance
- Sample Size: The document does not specify a numerical sample size for the test set. It states that the software was "thoroughly tested through verification of specifications and validation, including software validation," which implies systematic testing of software functions rather than a patient-based test set.
- Data Provenance: Not applicable in the context of this software-focused non-clinical evaluation. The tests were performed on the software itself.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The ground truth for this software evaluation was the expected functional behavior of the software, as defined by its specifications. The evaluation confirmed whether the software performed these functions identically between two interfaces (desktop vs. web portal). This does not involve expert interpretation of data.
4. Adjudication method for the test set
Not applicable. As the testing focused on software functionality and identical performance, adjudication methods typically used in clinical studies (e.g., 2+1, 3+1) were not employed. The performance was assessed by direct comparison of software behavior.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
No. An MRMC comparative effectiveness study was not done. The device does not involve AI for interpretation or human-in-the-loop assistance. The device is for signal acquisition and display only, and "no claims are being made for analysis of the acquired signals with respect to the accuracy, precision and reliability."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Yes, in a sense, but not for diagnostic interpretation. A standalone software functionality verification and validation was done for the Device Manager module. This evaluated the module's ability to perform its specified tasks (formatting device, uploading study info, downloading data, uploading firmware) independently and identically across different interfaces (desktop vs. web portal). This is an algorithm/software-only evaluation for its operational functions, not for clinical diagnostic performance.
7. The type of ground truth used
The ground truth used was the functional specifications and expected behavior of the Device Manager software. The verification and validation activities confirmed that the software's performance matched these predefined specifications, particularly that its functions were identical between the desktop and web portal versions.
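Purely as a sketch (the actual verification procedure is not described), interface-equivalence checks of this kind are often organized as a test matrix that exercises each Device Manager function through both front ends and requires the observed outcomes to match. The function names and the `run_via` driver below are hypothetical stand-ins, not the manufacturer's test harness.

```python
# Hypothetical test matrix: each Device Manager function is exercised through
# both the desktop application and the web portal, and the observed outcomes
# are required to match. run_via() is a stand-in for driving either interface.
import pytest

FUNCTIONS = [
    "format_device_for_new_patient",
    "upload_study_identification",
    "download_study_to_storage",
    "upload_new_firmware",
]

def run_via(interface: str, function: str) -> dict:
    """Hypothetical driver: perform `function` through `interface` and
    return a dictionary describing the resulting device/file state."""
    raise NotImplementedError("stand-in for the real test harness")

@pytest.mark.parametrize("function", FUNCTIONS)
def test_desktop_and_portal_are_equivalent(function):
    desktop_result = run_via("desktop", function)
    portal_result = run_via("web_portal", function)
    assert desktop_result == portal_result
```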
8. The sample size for the training set
Not applicable. As this submission focused on software functionality and its equivalence to a predicate, there was no machine learning component, and thus, no "training set."
9. How the ground truth for the training set was established
Not applicable, as there was no training set.
(218 days)
Sleep Profiler is intended for the diagnostic evaluation by a physician to assess sleep quality in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results, including the staging of sleep, detection of arousals and snoring.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files meeting defined specifications, including signals acquired with the Advanced Brain Monitoring X4 System, which is the subject of a separate 510(k). Automated algorithms are applied to the raw signals in order to derive additional signals and interpret the raw and derived signal information. The software automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, d) cortical (EEG), sympathetic (pulse) and behavioral (actigraphy and snoring) arousals. A single channel of electrocardiography, electrooculography, electromyography, or electroencephalography can be optionally presented for visual inspection and interpretation. The software identifies and rejects periods with poor electroencephalography signal quality. The full-disclosure recording of derived signals and automated analyses can be visually inspected and edited prior to the results being integrated into a sleep study report. Medical history information can be input from a questionnaire. Responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA) (a condition that cannot be diagnosed with Sleep Profiler) so that an appropriate referral to a sleep physician can be made. The automated analyses of physiological data are integrated with the questionnaire responses and medical history information to provide a comprehensive report. Several report formats are available depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report, or wishes to provide patient reports.
Here's a breakdown of the acceptance criteria and the study details for the Sleep Profiler device, based on the provided 510(k) summary:
Acceptance Criteria and Device Performance
The acceptance criteria are implied by the comparison to a predicate device, MICHELE (K112102). The goal is to demonstrate "similar" performance. The specific metrics are overall percent agreement and agreement for each sleep stage.
Table 1: Sleep Profiler Performance vs. Predicate Device Performance (Sleep Staging)
| Metric | Sleep Profiler Performance Data (from 44 subjects) | Predicate Device (MICHELE, K112102) Performance Data (from its study) |
|---|---|---|
| Overall % Agreement | Not explicitly stated as an overall value, but individual positive and negative agreements are provided. | 82.6% |
| Agreement by Sleep Stage (Positive Agreement / Sensitivity) | | |
| Wake | 79% | 89.9% |
| N1 | 40% | 50.4% |
| N2 | 80% | 82.9% |
| N3 | 76% | 82.9% |
| REM | 72% | 89.8% |
| Agreement by Sleep Stage (Negative Agreement / Specificity) | | |
| Wake | 95% | 96.4% |
| N1 | 91% | 94.7% |
| N2 | 83% | 89.6% |
| N3 | 97% | 97.5% |
| REM | 97% | 98.5% |
The summary states, "The positive and negative percent agreement obtained during clinical validation of the Sleep Profiler are similar to that obtained by the predicate device, MICHELE (K112102), which was validated using a different data set."
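For reference, per-stage positive and negative percent agreement of this kind can be computed from an epoch-by-epoch confusion matrix between the algorithm and the expert consensus scoring. The sketch below uses made-up counts, not the study data.

```python
# Illustrative computation of per-stage positive and negative percent agreement
# from an epoch-level confusion matrix (rows = expert consensus scoring,
# columns = algorithm output). The counts below are made up, not the study data.
import numpy as np

stages = ["Wake", "N1", "N2", "N3", "REM"]
# confusion[i, j] = number of epochs scored stages[i] by the experts
# and stages[j] by the algorithm
confusion = np.array([
    [800,  40,  30,   5,  25],
    [ 60, 120,  90,   5,  25],
    [ 40,  60, 900,  80,  20],
    [  5,   5,  90, 300,   0],
    [ 30,  20,  30,   0, 320],
])

total = confusion.sum()
for i, stage in enumerate(stages):
    tp = confusion[i, i]
    fn = confusion[i, :].sum() - tp          # expert said stage, algorithm disagreed
    fp = confusion[:, i].sum() - tp          # algorithm said stage, expert disagreed
    tn = total - tp - fn - fp
    ppa = tp / (tp + fn)                     # positive percent agreement (sensitivity-like)
    npa = tn / (tn + fp)                     # negative percent agreement (specificity-like)
    print(f"{stage}: PPA={ppa:.2f}, NPA={npa:.2f}")
```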
Study Details
- Sample Size used for the test set and the data provenance:
- Test Set Sample Size: 44 subjects.
- Data Provenance: Not explicitly stated; the study is described as a "clinical validation" against manual scoring. The document does not state whether it was retrospective or prospective, or the country of origin.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three raters.
- Qualifications: "either sleep technicians or physicians."
- Adjudication method for the test set:
- The table includes a "No-Consensus" row for the expert scoring, implying that adjudication by consensus was used. The "Epochs assigned by Expert Scoring" includes a "No-Consensus" category (653 of 39,191 epochs), suggesting that epochs on which the three raters did not agree were excluded from the per-stage agreement calculations. The main performance metrics therefore appear to be based on epochs with rater consensus; the document does not specify whether consensus required full agreement (3 of 3) or a majority (2 of 3).
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The study's purpose was to validate the "sleep staging algorithms by comparison to sleep staging made by manual observation by three raters." This is a standalone algorithm performance study compared to human experts as ground truth, not a study evaluating human performance with or without AI assistance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, a standalone algorithm performance study was done. The results presented in the table ("Epochs assigned by Sleep Profiler" vs. "Epochs assigned by Expert Scoring") directly report the algorithm's performance without human interaction or modification. The description also states the software "automates recognition" and that the "full disclosure recording of derived signals and automated analyses can be visually inspected and edited prior to the results being integrated into a sleep study report," but the presented validation data is for the automated algorithm's output.
- The type of ground truth used:
- Expert Consensus. The ground truth for the test set was established by the "manual observation by three raters who were either sleep technicians or physicians."
- The sample size for the training set:
- Not specified. The document does not provide details about the training set size or how it was established. It only discusses the clinical validation (test set) of the software.
- How the ground truth for the training set was established:
- Not specified. Since details about the training set are not provided, how its ground truth was established is also not mentioned.
(127 days)
The X4 System is intended for prescription use in the home, healthcare facility, or clinical research environment to acquire, record, transmit and display physiological signals from adult patients. The X4 System acquires, records, transmits and displays electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), and/or electromyogram (EMG), accelerometer, acoustical and photoplethysmographic signals. The X4 system only acquires and displays physiological signals; no claims are being made for analysis of the acquired signals with respect to the accuracy, precision and reliability.
The X4 system is used for configurable acquisition of physiological signals. Model X4-E provides for acquisition of three channels of electroencephalography (EEG) and one photoplethysmographic (PPG) signal from a head strip, with an optional channel connected to two sensors via a dual-lead connector with twice the gain. Model X4-M provides four channels of EEG, with the dual-lead connector providing the input for reference sensors. Both models measure sound via an acoustic microphone, and movement and position via a 3-D accelerometer. The device is designed so that the patient can affix it and record data. Alternatively, a technician can affix the device and display the signals via a wireless connection during acquisition. The X4 system firmware monitors signal quality to ensure that the sensors are properly applied and that high-quality signals are being acquired.
The X4 software provides a means to: a) initiate a study and track patient information, b) acquire and save signals to the memory of the device, c) acquire and wirelessly transmit signals from the device, d) upload data saved in the memory of the device to a PC, and e) visually inspect the signal quality.
The acquired signals are saved in a universal data format (European Data Format – EDF) that is intended to be analyzed with third-party software, including Advanced Brain Monitoring's Sleep Profiler software (K120450).
The provided text describes the X4 System, a device for acquiring physiological signals. The 510(k) summary focuses on demonstrating substantial equivalence to predicate devices through non-clinical testing and a limited clinical study.
Here's an analysis of the acceptance criteria and study information provided:
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for the X4 System, as articulated in the clinical tests, is based on the interpretability of the acquired signals and the ease of device application.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Non-Clinical: | |
| Compliance with system-level requirements | All features of the X4 System were compliant with the system level requirements. |
| Electrical, biological safety, performance, and software tests | Confirmation of conformity to FDA recognized consensus standards and voluntary standards (IEC 60601-1-1, IEC 60601-1-2, ISO 10993-1, IEC 60601-1-11, IEC 60601-2-26). |
| Signals provide similar information to predicate device for physician interpretation | Signals acquired with the X4 System provide similar information as compared to the predicate device that would allow a physician to interpret the signals. (Specific metrics or comparison data are not provided in this summary.) |
| Clinical: | |
| Over 90% of overnight studies recorded with the X4 are interpretable and behave as expected | Result: Over 90% of studies recorded overnight with the X4 are interpretable and behave as expected. |
| Device instructions can be applied without difficulty by patients in the home | Result: The X4 instructions were applied without difficulty, allowing all subjects to properly apply the device so that it remained in the proper position and allowing any problems that triggered an audio alarm to be properly resolved. (No specific quantitative metric for "difficulty" or "proper application" is given beyond the qualitative statement.) |
Study Details for Acceptance Criteria
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test set sample size: A specific sample size for the clinical test is not explicitly stated; the summary refers to "all subjects" and "over 90% of studies recorded overnight," suggesting a prospective clinical study involving multiple subjects.
- Data provenance: Not explicitly stated, but the context of a 510(k) submission to the FDA for a US company suggests it would likely involve data from the US, potentially a prospective clinical trial.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- The summary states that "signals acquired with the X4 System provide similar information as compared to the predicate device that would allow a physician to interpret the signals." and that "over 90% of studies recorded overnight with the X4 are interpretable."
- This implies that physicians are the experts who would interpret the signals to determine interpretability. However, the number of experts and their specific qualifications (e.g., board-certified sleep physicians, neurologists, years of experience) are not specified in the provided text.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- The text does not specify an adjudication method. It mentions that "physicians" would interpret the signals, but it does not detail how potential disagreements among interpreters would be resolved or if multiple interpreters were used for each case.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done.
- The X4 System is described as a device for acquiring, recording, transmitting, and displaying physiological signals, with "no claims being made for analysis of the acquired signals with respect to the accuracy, precision and reliability." This indicates that the device itself does not perform AI-assisted analysis and therefore, a study on human reader improvement with AI assistance would not be applicable to this submission.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- No, a standalone (algorithm only) performance study was not done.
- The device's intended use is to acquire, record, transmit, and display physiological signals. It explicitly states "no claims are being made for analysis of the acquired signals." Thus, there is no standalone algorithm being evaluated for diagnostic or analytical performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth for the clinical study appears to be expert interpretation (physician interpretability) of the acquired signals. The study aimed to demonstrate that the signals are "interpretable" and "behave as expected," which would rely on a physician's assessment.
8. The sample size for the training set
- Not applicable / Not provided. The X4 System is a signal acquisition and display device, not an analytical or AI-driven system. Therefore, it does not involve a "training set" in the context of machine learning algorithms.
9. How the ground truth for the training set was established
- Not applicable. As a signal acquisition device without analytical claims, there is no training set or associated ground truth establishment process for algorithm training.
(132 days)
The Apnea Risk Evaluation System (ARES™), Model 610 is indicated for use in the diagnostic evaluation by a physician of adult patients with possible sleep apnea. The ARES™ can record and score respiratory events during sleep (e.g., apneas, hypopneas, mixed apneas and flow limiting events). The device is designed for prescription use in home diagnosis of adults with possible sleep-related breathing disorders.
The Apnea Risk Evaluation System (ARES™) includes a device called a Unicorder which records oxygen saturation, pulse rate, snoring level, head movement and head position, and airflow. Additionally, a physiological signal from the forehead used to stage sleep, or a respiratory effort signal obtained from an optional piezo respiratory effort belt, can be acquired. The battery-powered Unicorder provides sufficient capacity to record two nights of data. The device monitors signal quality during acquisition and notifies the user via voice messages when adjustments are required. A standard USB cable connects the Unicorder to a USB port on a host computer when patient data is to be uploaded or downloaded. The USB cable provides power to the Unicorder during recharging from the host computer or from a USB wall charger. The Unicorder cannot record, nor can it be worn by the patient, when connected to the host computer or the wall charger. Software, residing on a local PC or a physical or virtual server, controls the uploading and downloading of data to the Unicorder, processes the sleep study data, and generates a sleep study report. The ARES™ can auto-detect positional and non-positional obstructive and mixed apneas and hypopneas similarly to polysomnography. It can detect sleep/wake and REM and non-REM sleep. After the sleep study has been completed, data is transferred off the Unicorder and the Unicorder is prepared for the next study. The downloaded sleep study record is then processed with the ARES™ Insight software to transform the raw signals and derive and assess changes in oxygen saturation (SpO2), pulse rate, head movement, head position, snoring sounds, airflow, and EEG or respiratory effort. The red and IR signals are used to calculate the SpO2 and pulse rate. The actigraphy signals are transformed to obtain head movement and head position. A clinician can convert an auto-detected obstructive apnea to a central apnea based on visual inspection of the waveforms. The ARES™ Screener can predict the pre-test probability of obstructive sleep apnea (OSA). The ARES™ can also assist the physician in identifying patients who will likely have a successful OSA treatment outcome, including CPAP and oral appliance therapies. ARES™ can help identify patients who would benefit from a laboratory PAP titration.
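The event-detection algorithms themselves are not published in the summary. Purely as a hedged illustration of the kind of SpO2 processing described (assessing changes in oxygen saturation), the sketch below counts candidate desaturation events as drops of at least 4 percentage points below a recent baseline in an SpO2 series assumed to be sampled at 1 Hz; the threshold, baseline window, and recovery rule are illustrative, not ARES™ parameters.

```python
# Hedged illustration of desaturation-event detection on an SpO2 series
# (assumed 1 Hz for simplicity). An event is counted when SpO2 drops at least
# `drop` percentage points below the preceding baseline and then recovers.
# Parameters are illustrative, not the ARES algorithm.
import numpy as np

def count_desaturations(spo2: np.ndarray, drop: float = 4.0,
                        baseline_win: int = 120) -> int:
    """Count drops of >= `drop` points below a rolling baseline (the maximum of
    the previous `baseline_win` samples), requiring recovery before the next event."""
    events, in_event = 0, False
    for i in range(baseline_win, len(spo2)):
        baseline = spo2[i - baseline_win:i].max()
        if not in_event and spo2[i] <= baseline - drop:
            events += 1
            in_event = True
        elif in_event and spo2[i] >= baseline - 1.0:   # crude recovery criterion
            in_event = False
    return events

# Example (hypothetical data): odi = count_desaturations(spo2_series) / hours_of_recording
```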
The provided FDA 510(k) summary for the Apnea Risk Evaluation System (ARES™), Model 610 (K112514) primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed study proving the device meets specific acceptance criteria in a standalone performance evaluation. The changes in the modified device are related to software platform, improved filtering of the SpO2 signal, and new claims in the report messages.
Here's an attempt to extract and organize the requested information, with the significant caveat that much of the requested detail about the performance of the modified device is not explicitly provided in the document:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are primarily for the SpO2 accuracy after the filtering changes. The document explicitly states that the accuracy in all ranges is less than 3.5% as recommended by draft FDA guidance.
| Acceptance Criteria (Modified Device) | Reported Device Performance (Modified Device) |
|---|---|
| SpO2 Accuracy (Arms) for various ranges: | |
| 60-100% | < 3.0% |
| 90-100% | < 3.0% |
| 80-90% | < 3.0% |
| 70-80% | < 3.0% |
| 60-70% | < 3.0% |
| Up to 32% of readings may fall outside the listed range (all ranges) | Up to 32% of readings may fall outside the listed range |
| Accuracy in all ranges is less than 3.5% (as per FDA Draft Guidance) | Confirmed to be < 3.5% |
| Equivalence to original ARES™ accuracy for SpO2 signal | Confirmed to be equivalent |
| Software operates properly from cloud server | Confirmed |
Note: The document states "The Arms has changed due to the filtering changes but labeling will reflect specification of < 3.0%. Accuracy in all ranges is less than 3.5% as recommended in Draft Guidance..." This implies < 3.0% is the target specification for labeling, and the performance meets the broader FDA guidance of < 3.5%.
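For context, Arms (accuracy root mean square) is the standard pulse-oximetry accuracy metric: the square root of the mean squared difference between device readings and reference SpO2 values, usually reported per saturation range. A minimal sketch, using made-up paired readings rather than the study data:

```python
# Minimal sketch of the Arms calculation: root-mean-square difference between
# device SpO2 and reference SpO2, reported per saturation range. The paired
# readings are made up; the study data are not public in this summary.
import numpy as np

def arms(device: np.ndarray, reference: np.ndarray) -> float:
    """Accuracy root mean square: sqrt(mean((device - reference)^2))."""
    return float(np.sqrt(np.mean((device - reference) ** 2)))

def arms_by_range(device, reference, ranges=((60, 70), (70, 80), (80, 90), (90, 100))):
    device, reference = np.asarray(device, float), np.asarray(reference, float)
    out = {}
    for lo, hi in ranges:
        mask = (reference >= lo) & (reference <= hi)
        out[f"{lo}-{hi}%"] = arms(device[mask], reference[mask]) if mask.any() else None
    return out

# Example with made-up values:
# print(arms_by_range([91, 88, 76, 95], [92, 90, 78, 96]))
```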
2. Sample Size Used for the Test Set and Data Provenance
The document mentions validation of the SpO2 filtering changes using "clinical data previously acquired in two clinical studies" but does not specify the sample size for these studies. The data provenance is described as "clinical data previously acquired," suggesting it was retrospective analysis of existing data. The country of origin is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
This information is not provided in the document. For the new claims regarding treatment messages and PAP titration identification, it states these are "supported by published literature," but it doesn't detail the ground truth establishment process for this specific device's testing.
4. Adjudication Method for the Test Set
This information is not provided in the document.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
A Multi Reader Multi Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not mentioned in the document. The study described focuses on the device's SpO2 accuracy and the functionality of the new software platform and report messages.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
A standalone performance evaluation of the algorithm's core diagnostic capabilities (e.g., apnea/hypopnea detection accuracy compared to PSG) is not detailed as a new clinical study for this 510(k) submission. The clinical tests mentioned specifically relate to the filtering changes to the SpO2 signal. The document states: "The ARES™ can auto-detect positional and non-positional obstructive and mixed apneas and hypopneas similarly to polysomnography," which implies standalone performance, but the K112514 document primarily refers back to the predicate device (K111194) for such claims and only describes validation of the SpO2 filtering for the current submission.
7. The Type of Ground Truth Used
For the SpO2 accuracy validation:
- Original breathe-down data: This typically involves controlled desaturation events where arterial blood gas measurements or a highly accurate reference pulse oximeter serve as ground truth for SpO2.
- Breath-hold data: This involves evaluating performance during dynamic SpO2 changes, again implying a reference standard for SpO2 measurements.
- For the new claims (treatment planning messages, PAP titration): The ground truth is cited as published literature.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It mentions "previously acquired clinical data" being used for validation, but not for training.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how ground truth was established for any training set for the modified device, as it primarily focuses on validating changes to an already cleared device.