Search Results
Found 3 results
510(k) Data Aggregation
(110 days)
Sleep Profiler is intended for use by a physician for the diagnostic evaluation of sleep quality and the scoring of sleep-disordered breathing events in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results, including the staging of sleep and the detection of arousals, snoring, and sleep-disordered breathing events (obstructive apneas, hypopneas, and respiratory effort-related arousals). Central and mixed apneas can be manually marked within the records.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files acquired with the Advanced Brain Monitoring X4 System and the X8 System models SP40 and SP29. Automated algorithms are applied to the raw signals to derive additional signals and to interpret the raw and derived signal information. The software automates recognition of:
- sleep stages: Rapid Eye Movement (REM), nREM (N1, N2, N3), and wake
- heart/pulse rate
- snoring loudness
- sleep/wake
- head movement and position
- arousals (snoring, sympathetic, behavioral, and cortical)
- ECG, EOG, and EMG waveforms
- SpO2, airflow, and respiratory effort
- apneas and hypopneas
- oxygen desaturations

The software identifies and rejects periods with poor electroencephalography signal quality. The full-disclosure recording of derived signals and automated analyses can be visually inspected and edited before the results are integrated into a sleep study report. Medical and history information can be input from a questionnaire; responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA). The automated analyses of physiological data are integrated with the questionnaire responses and medical and history information to provide a comprehensive report. Several report formats are available, depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report, or wishes to provide patient reports. The Sleep Profiler software can be used as a stand-alone application on Microsoft Windows 7 and 8 operating system platforms (desktop model). Alternatively, the user interface (i.e., entry or editing of information) can be delivered via a web portal (portal model). The capability to enter or edit patient information, call the application to generate a study report, and/or download a report is provided using either the desktop PC application or the web-portal application. The same analysis and report generation software is used for both the desktop and web-portal applications.
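For orientation, here is a minimal sketch of loading a previously recorded EDF file for automated analysis, using the open-source MNE-Python library. This is not Advanced Brain Monitoring's implementation, and the file name is hypothetical; it only illustrates the kind of signal access such a pipeline starts from.

```python
# Minimal sketch: load an EDF sleep recording for downstream automated scoring.
# Requires the open-source MNE-Python package (pip install mne).
import mne

# Hypothetical file name; any EDF meeting the expected specifications would do.
raw = mne.io.read_raw_edf("sleep_study.edf", preload=True)

print(raw.ch_names)        # channel labels stored in the EDF header
print(raw.info["sfreq"])   # sampling rate in Hz

# Raw signal matrix (n_channels x n_samples) handed to the scoring algorithms.
signals = raw.get_data()
```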
Here's a breakdown of the acceptance criteria and study information for the Sleep Profiler device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Endpoint | Acceptance Criteria (Equivalent to Predicate Device) | Reported Device Performance (Sleep Profiler) |
|---|---|---|
| AHI for OSA Diagnosis | Minimum targeted positive likelihood ratios for AHI ≥ 5 and AHI ≥ 15 of 3.5 and 5.0, respectively (equivalent to the predicate ARES). | Overall AHI: AHI ≥ 5, positive likelihood ratio = 6.67; AHI ≥ 15, positive likelihood ratio = 33.0. REM AHI: AHI ≥ 5, positive likelihood ratio = 8.84; AHI ≥ 15, positive likelihood ratio = 18.33. (These values exceed the minimum targeted likelihood ratios, indicating equivalence.) |
| Sleep Staging (Agreement with Expert Consensus) | Comparison of auto-detected staging to PSG results scored by expert raters, showing agreement equivalent to the predicate Sleep Profiler (K130007). No specific numeric thresholds are explicitly stated as acceptance criteria, but generally high concordance is expected for substantial equivalence. | Overall (n=43 subjects, 3 raters): Wake: positive agreement 0.73, negative agreement 0.94; N1: positive agreement 0.25, negative agreement 0.93; N2: positive agreement 0.77, negative agreement 0.84; N3: positive agreement 0.76, negative agreement 0.94; REM: positive agreement 0.74, negative agreement 0.97. (The document states this endpoint was "met" by comparison to the predicate.) |
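For reference, a positive likelihood ratio of this kind is conventionally computed from sensitivity and specificity against the PSG reference, LR+ = sensitivity / (1 - specificity). The sketch below illustrates the arithmetic only; the summary reports the resulting ratios, not the underlying 2x2 counts, so the counts here are hypothetical.

```python
def positive_likelihood_ratio(tp: int, fn: int, fp: int, tn: int) -> float:
    """LR+ = sensitivity / (1 - specificity) for a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)  # true positive rate vs. the PSG reference
    specificity = tn / (tn + fp)  # true negative rate vs. the PSG reference
    return sensitivity / (1.0 - specificity)

# Hypothetical counts for device AHI >= 5 vs. PSG AHI >= 5 (not from the summary).
print(positive_likelihood_ratio(tp=30, fn=5, fp=3, tn=22))
```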
2. Sample Sizes and Data Provenance for Test Set
- Sample Size (for AHI/OSA detection): 60 subjects for overall AHI, 40 subjects for REM AHI.
- Sample Size (for Sleep Staging): A subset of 43 subjects from the AHI study, with at least 4 hours of raw X8 diagnostic recording time.
- Data Provenance: The document states "signals acquired with the X8 System concurrent to polysomnography (PSG)." This implies the data was prospectively collected for this evaluation, likely from a clinical setting, but the country of origin is not specified.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Number of Experts:
- For AHI/OSA detection: "one rater per study" for PSG results.
- For Sleep Staging: "weighted majority agreement of three raters" for the 43-subject subset.
- Qualifications of Experts: Not explicitly stated beyond being "rater(s)" for PSG and "expert scoring" for sleep staging. Standard practice for such studies would imply board-certified sleep technologists or physicians experienced in PSG scoring.
4. Adjudication Method for the Test Set
- For AHI/OSA detection: "one rater per study." This implies no formal adjudication/consensus process among multiple independent reviewers, as only a single rater's PSG results were used as ground truth for each case.
- For Sleep Staging: "weighted majority agreement of three raters." This indicates an adjudication method where the consensus of three experts established the ground truth.
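The summary does not define the weighting scheme. Purely as an illustration of majority-based adjudication, the sketch below takes a per-epoch vote over three raters' stage labels, with optional per-rater weights, and flags epochs without a strict winner as no-consensus; all names and weights are hypothetical.

```python
from collections import defaultdict

def adjudicate_epoch(labels, weights=(1.0, 1.0, 1.0)):
    """Weighted majority vote over one epoch's stage labels from three raters.

    Returns the winning stage, or None when no single label strictly outweighs
    the rest (a no-consensus epoch). The weights here are illustrative only.
    """
    tally = defaultdict(float)
    for label, weight in zip(labels, weights):
        tally[label] += weight
    best = max(tally, key=tally.get)
    if sum(1 for v in tally.values() if v == tally[best]) > 1:
        return None  # tie: no consensus for this epoch
    return best

print(adjudicate_epoch(["N2", "N2", "N1"]))   # -> N2 (two of three raters agree)
print(adjudicate_epoch(["N1", "N2", "REM"]))  # -> None (no consensus)
```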
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study is explicitly described where human readers' performance with and without AI assistance is compared. The study primarily focuses on the standalone performance of the Sleep Profiler software against expert-scored PSG.
6. Standalone Performance Study
- Yes, a standalone study was performed. The "Sleep Profiler software accuracy was clinically validated with signals acquired with the X8 System concurrent to polysomnography (PSG)." The results presented for AHI and sleep staging are for the algorithm's performance without human intervention, compared to human-scored PSG.
7. Type of Ground Truth Used
- Expert Consensus / Human Scoring: The primary ground truth for both AHI detection and sleep staging was derived from Polysomnography (PSG) results scored by human experts/raters. For sleep staging, it was specifically based on the "weighted majority agreement of three raters."
8. Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set of the Sleep Profiler software. It describes the clinical validation study (test set) but not the development data.
9. How Ground Truth for the Training Set Was Established
- The document does not provide details on how the ground truth for the training set was established. Typically, for such devices, training data would also be derived from expert-scored PSG, similar to the test set, but this information is not included in the 510(k) summary.
(105 days)
Sleep Profiler is intended for the diagnostic evaluation by a physician to assess sleep quality in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results, including the staging of sleep, detection of arousals and snoring.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files meeting defined specifications, including signals acquired with the Advanced Brain Monitoring X4 System. Automated algorithms are applied to the raw signals to derive additional signals and to interpret the raw and derived signal information. The software automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, and d) cortical (EEG), sympathetic (pulse), and behavioral (actigraphy and snoring) arousals. A single channel of electrocardiography, electrooculography, electromyography, or electroencephalography can optionally be presented for visual inspection and interpretation. The software identifies and rejects periods with poor electroencephalography signal quality. The full-disclosure recording of derived signals and automated analyses can be visually inspected and edited before the results are integrated into a sleep study report. Medical and history information can be input from a questionnaire. Responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA) (a condition that cannot be diagnosed with Sleep Profiler) so that an appropriate referral to a sleep physician is made. The automated analyses of physiological data are integrated with the questionnaire responses and medical and history information to provide a comprehensive report. Several report formats are available, depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report, or wishes to provide patient reports. The capability to enter or edit patient information, call the application to generate a study report, and/or download a report is provided using either the desktop PC application or a web-based module that emulates the desktop functionality. The same analysis and report generation software is used for both the desktop and web-portal applications.
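The summary does not disclose how questionnaire responses are mapped to a pre-test probability of OSA. As a generic illustration of the technique only (not the Sleep Profiler's model), here is a toy additive risk score passed through a logistic function; every item, weight, and coefficient below is hypothetical.

```python
import math

# Hypothetical yes/no questionnaire items with illustrative weights.
ITEM_WEIGHTS = {
    "loud_snoring": 1.0,
    "observed_apneas": 1.5,
    "daytime_sleepiness": 0.8,
    "hypertension": 0.7,
}

def pretest_probability(responses: dict) -> float:
    """Toy pre-test probability of OSA via logistic scoring (illustrative only)."""
    score = sum(w for item, w in ITEM_WEIGHTS.items() if responses.get(item))
    intercept = -2.0  # hypothetical baseline log-odds
    return 1.0 / (1.0 + math.exp(-(intercept + score)))

print(pretest_probability({"loud_snoring": True, "observed_apneas": True}))
```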
The provided text describes a 510(k) submission for the Advanced Brain Monitoring, Inc. Sleep Profiler (K130007). This submission is for modifications to an already cleared device (K120450) to introduce a web-based module for patient information entry and report generation/download. The core analysis and report generation software remains unchanged from the predicate device.
Therefore, the acceptance criteria and study information provided in this document focus exclusively on the non-clinical testing performed to demonstrate that the new web-based module functions equivalently to the desktop application. There is no clinical study described here that would involve human readers, ground truth establishment through expert consensus or pathology, or outcome data for performance metrics like sensitivity, specificity, etc.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Key Metric for Software Verification) | Reported Device Performance |
|---|---|
| Confirmation of identical performance using either the desktop or portal for key functions: | The results of the verification and validation activities demonstrate that the software meets requirements for safety, function, and intended use, including: |
| - Enter questionnaire responses | - Performance is identical for entering questionnaire responses via desktop or portal. |
| - Edit study data | - Performance is identical for editing study data via desktop or portal. |
| - Initiate generation of a study report | - Performance is identical for initiating generation of a study report via desktop or portal. |
| - Download a study report | - Performance is identical for downloading study reports via desktop or portal. |
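The summary does not describe the verification harness. One simple way to confirm identical desktop/portal behavior for report generation is to compare the bytes of reports produced from the same study through each interface; below is a minimal sketch under that assumption, with hypothetical file paths.

```python
import hashlib
from pathlib import Path

def report_digest(path: Path) -> str:
    """SHA-256 of a generated report; byte-identical outputs hash the same."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical reports generated from the same study via each interface.
desktop = Path("reports/desktop/study_001.pdf")
portal = Path("reports/portal/study_001.pdf")

assert report_digest(desktop) == report_digest(portal), "desktop/portal outputs differ"
```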
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The verification and validation activities tested the functionality of the web-based module against the desktop application, indicating a comparative test of functionality; the document does not refer to a "test set" in the context of clinical data.
- Data Provenance: Not applicable in the context of clinical data, as this was a non-clinical software verification study. The tests would have involved functional verification of the software.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Not applicable. This submission focuses on software functionality verification, not clinical performance requiring expert ground truth.
4. Adjudication Method for the Test Set
- Not applicable. This was a non-clinical software verification, not a clinical study requiring adjudication of expert interpretations.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance
- No. An MRMC study was not conducted or described in this document. The submission explicitly states that "The modifications ... did not require clinical studies to support substantial equivalence."
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done
- Yes, indirectly. The Sleep Profiler is described as a "software-only device" that "automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, d) cortical (EEG), sympathetic (pulse) and behavioral (actigraphy and snoring) arousals." The verification discussed here specifically confirms that the web-based module performs these functions identically to the pre-existing desktop version, which performs these analyses automatically, without continuous human intervention during the analysis phase. However, the device is intended to be "used under the supervision of a clinician," and the recording can be "visually inspected and edited prior to the results being integrated into a sleep study report," implying a human-in-the-loop for final interpretation. The performance of the automated algorithm itself was established in the predicate device (K120450) and is not re-evaluated in K130007.
7. The Type of Ground Truth Used
- Not applicable for clinical performance. For the software verification described, the "ground truth" was the expected functional output and behavior of the established desktop application (K120450). The web-based module was verified to produce identical results.
8. The Sample Size for the Training Set
- Not provided/not applicable. This submission does not discuss algorithmic development or training sets. It is a modification to an already existing software application. The predicate device (K120450) would have had this information.
9. How the Ground Truth for the Training Set Was Established
- Not provided/not applicable. As above, this information relates to the original algorithmic development, not the current submission for a web-based module.
(218 days)
Sleep Profiler is intended for the diagnostic evaluation by a physician to assess sleep quality in adults only. The Sleep Profiler is a software-only device to be used under the supervision of a clinician to analyze physiological signals and automatically score sleep study results, including the staging of sleep, detection of arousals and snoring.
The Sleep Profiler is a software application that analyzes previously recorded physiological signals obtained during sleep. The Sleep Profiler software can analyze any EDF files meeting defined specifications, including signals acquired with the Advanced Brain Monitoring X4 System, which is the subject of a separate 510(k). Automated algorithms are applied to the raw signals to derive additional signals and to interpret the raw and derived signal information. The software automates recognition of: a) sleep stage, b) snoring frequency and severity, c) pulse rate, and d) cortical (EEG), sympathetic (pulse), and behavioral (actigraphy and snoring) arousals. A single channel of electrocardiography, electrooculography, electromyography, or electroencephalography can optionally be presented for visual inspection and interpretation. The software identifies and rejects periods with poor electroencephalography signal quality. The full-disclosure recording of derived signals and automated analyses can be visually inspected and edited before the results are integrated into a sleep study report. Medical and history information can be input from a questionnaire. Responses are analyzed to provide a pre-test probability of Obstructive Sleep Apnea (OSA) (a condition that cannot be diagnosed with Sleep Profiler) so that an appropriate referral to a sleep physician is made. The automated analyses of physiological data are integrated with the questionnaire responses and medical and history information to provide a comprehensive report. Several report formats are available, depending on whether the user has acquired more than one night of data, wishes to obtain a narrative summary report, or wishes to provide patient reports.
Here's a breakdown of the acceptance criteria and the study details for the Sleep Profiler device, based on the provided 510(k) summary:
Acceptance Criteria and Device Performance
The acceptance criteria are implied by the comparison to a predicate device, MICHELE (K112102). The goal is to demonstrate "similar" performance. The specific metrics are overall percent agreement and agreement for each sleep stage.
Table 1: Sleep Profiler Performance vs. Predicate Device Performance (Sleep Staging)
| Metric | Sleep Profiler Performance Data (proportions, from 44 subjects) | Predicate Device (MICHELE, K112102) Performance Data (percentages, from its study) |
|---|---|---|
| Overall % Agreement | Not explicitly stated as an overall value, but individual positive and negative agreements are provided. | 82.6% |
| Agreement by Sleep Stage (Positive Agreement / Sensitivity) | | |
| Wake | 0.79 | 89.9% |
| N1 | 0.40 | 50.4% |
| N2 | 0.80 | 82.9% |
| N3 | 0.76 | 82.9% |
| REM | 0.72 | 89.8% |
| Agreement by Sleep Stage (Negative Agreement / Specificity) | | |
| Wake | 0.95 | 96.4% |
| N1 | 0.91 | 94.7% |
| N2 | 0.83 | 89.6% |
| N3 | 0.97 | 97.5% |
| REM | 0.97 | 98.5% |
The summary states, "The positive and negative percent agreement obtained during clinical validation of the Sleep Profiler are similar to that obtained by the predicate device, MICHELE (K112102), which was validated using a different data set."
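For reference, per-stage positive and negative agreement of this kind is conventionally computed one-vs-rest over epoch labels, with positive agreement corresponding to sensitivity and negative agreement to specificity (matching the table's row headers). A minimal sketch with hypothetical label sequences:

```python
def stage_agreement(auto, ref, stage):
    """One-vs-rest positive/negative agreement for a single sleep stage.

    auto and ref are equal-length per-epoch label sequences
    (e.g., "Wake", "N1", "N2", "N3", "REM").
    """
    tp = sum(a == stage and r == stage for a, r in zip(auto, ref))
    fn = sum(a != stage and r == stage for a, r in zip(auto, ref))
    fp = sum(a == stage and r != stage for a, r in zip(auto, ref))
    tn = sum(a != stage and r != stage for a, r in zip(auto, ref))
    return tp / (tp + fn), tn / (tn + fp)  # (positive, negative) agreement

# Hypothetical six-epoch example: automated output vs. expert consensus.
auto = ["Wake", "N2", "N2", "REM", "N3", "N2"]
ref = ["Wake", "N2", "N1", "REM", "N3", "N2"]
print(stage_agreement(auto, ref, "N2"))  # -> (1.0, 0.75)
```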
Study Details
1. Sample Size used for the test set and the data provenance:
- Test Set Sample Size: 44 subjects.
- Data Provenance: Not explicitly stated, but it is described as a "clinical validation" comparing the algorithm's output to manual observation. The document does not state whether the data were retrospective or prospective, or the country of origin.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three raters.
- Qualifications: "either sleep technicians or physicians."
3. Adjudication method for the test set:
- The table of expert scoring includes a "No-Consensus" category (653 of 39,191 epochs, about 1.7%), indicating that ground truth was established by consensus among the three raters and that epochs without consensus were excluded from the per-stage agreement calculations. Whether "consensus" required full agreement (3 of 3) or a majority (2 of 3) is not stated in the summary.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The study's purpose was to validate the "sleep staging algorithms by comparison to sleep staging made by manual observation by three raters." This is a standalone algorithm performance study compared to human experts as ground truth, not a study evaluating human performance with or without AI assistance.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone algorithm performance study was done. The results presented in the table ("Epochs assigned by Sleep Profiler" vs. "Epochs assigned by Expert Scoring") directly report the algorithm's performance without human interaction or modification. The description also states the software "automates recognition" and that the "full disclosure recording of derived signals and automated analyses can be visually inspected and edited prior to the results being integrated into a sleep study report," but the presented validation data is for the automated algorithm's output.
6. The type of ground truth used:
- Expert Consensus. The ground truth for the test set was established by the "manual observation by three raters who were either sleep technicians or physicians."
7. The sample size for the training set:
- Not specified. The document does not provide details about the training set size. It only discusses the clinical validation (test set) of the software.
8. How the ground truth for the training set was established:
- Not specified. Since details about the training set are not provided, how its ground truth was established is also not mentioned.