510(k) Data Aggregation
(324 days)
The Happy Health Home Sleep Test is a Software as a Medical Device that uses data from wearable devices to record, analyze, display, export, and store biophysical parameters to aid in evaluating sleep‐related breathing disorders of adult patients suspected of sleep apnea. The device is intended for use on individuals who are 22 years of age or older in clinical and home settings under the direction of a trained healthcare provider.
The device is intended to process input data streams received from an external hardware device (i.e., a smart ring, K240236) and uses these signals to determine various sleep parameters that may be used and interpreted by a clinician in diagnosing sleep disorders such as sleep apnea.
The input physiologic signals from the external device are:
- Acceleration / Movement
- Photoplethysmography (PPG)
The external hardware device (K240236) includes a PPG sensor and accelerometer embedded within a housing to capture the above physiological signals. The K240236 device is worn on the finger and is indicated for continuous data collection of the above signals. Data from the external hardware device is transmitted over a secure API to the subject device for analysis.
The device then uses a set of algorithms to compute the following outputs:
- Happy Health Apnea Hypopnea Index (hAHI)
- Total Sleep Time
The outputs are available for a clinician to review as a report, accessible through a web-based viewer application.
The FDA 510(k) clearance letter and summary for the Happy Health Home Sleep Test describe the device's performance testing. Here is a structured breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly list "acceptance criteria" as a separate section with specific thresholds that were agreed upon before the study. Instead, it presents performance metrics of the Happy Health Home Sleep Test and compares them to the predicate and reference devices, implying these metrics are the basis for demonstrating substantial equivalence. For clarity, I'm inferring the acceptance criteria from the "Equivalent" column in the comparison tables and the detailed performance results.
| Metric (Inferred Acceptance Criteria) | Happy Health Home Sleep Test Reported Performance | Justification for Acceptance (from document) |
|---|---|---|
| hAHI Regression Slope (Regression: PSG_AHI = Slope * hAHI + Intercept, Slope between 0.9 and 1.1) | 0.98 [0.91, 1.06] | "Equivalent - both subject and predicate devices demonstrate strong correlation with manually scored AHI, each with a regression slope between 0.9 and 1.1 and intercept between -5 and 5." |
| hAHI Regression Intercept (Intercept between -5 and 5) | 0.81 [-0.35, 1.91] | "Equivalent - both subject and predicate devices demonstrate strong correlation with manually scored AHI, each with a regression slope between 0.9 and 1.1 and intercept between -5 and 5." |
| hAHI Bland-Altman Mean Bias (Not explicitly quantified as a criterion, but a low bias is desired) | 0.5 [-0.1, 1.1] events/hr | Demonstrates low systematic difference from PSG AHI. |
| hAHI Bland-Altman Limits of Agreement (LOA) (Comparable to predicate/reference, generally aiming for tighter LOA) | Lower LOA: -9.8 [-10.6, -9] events/hr; Upper LOA: 10.7 [-9.9, 11.5] events/hr | "Equivalent - both subject and predicate devices demonstrate strong correlation with manually scored AHI..." (Implied that these LOA are acceptable/comparable to predicate when predicate's full data is considered). |
| Total Sleep Time (TST) Mean Absolute Difference (Comparable to predicate/reference, around 30 minutes or less) | 24.9 minutes (SD 32.6 minutes) | "Equivalent - both subject and reference devices demonstrate strong correlation with manually scored AHI, each with a mean absolute difference of around 30 minutes or less." |
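The regression and Bland-Altman statistics reported above are standard paired-agreement measures that can be recomputed for any device-vs-PSG AHI dataset. A minimal sketch, assuming ordinary least squares and 1.96-SD limits of agreement (illustrative only, not the manufacturer's analysis code; inputs are hypothetical):

```python
import numpy as np

def agreement_stats(device_ahi, psg_ahi):
    """Regression and Bland-Altman agreement statistics for paired
    device vs. reference (PSG) AHI values."""
    device_ahi = np.asarray(device_ahi, dtype=float)
    psg_ahi = np.asarray(psg_ahi, dtype=float)

    # Ordinary least-squares line: PSG_AHI = slope * device_AHI + intercept
    slope, intercept = np.polyfit(device_ahi, psg_ahi, 1)

    # Bland-Altman: mean bias and 95% limits of agreement (bias +/- 1.96 SD)
    diff = device_ahi - psg_ahi
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return {
        "slope": float(slope),
        "intercept": float(intercept),
        "bias": bias,
        "loa": (bias - 1.96 * sd, bias + 1.96 * sd),
    }
```

A slope near 1, intercept near 0, low bias, and narrow LOA together indicate close agreement with the manually scored reference, which is the pattern the table above reports.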
2. Sample Size and Data Provenance
- Test Set Sample Size: 90 subjects.
- Data Provenance:
- Country of Origin: Not explicitly stated, but the study was conducted at "two sleep labs", implying a clinical setting within the country of submission (likely USA, given FDA submission).
- Retrospective or Prospective: The wording "Data from a total of 90 subjects referred to the sleep clinic by a physician was manually scored" suggests the data was collected prospectively for the purpose of the study. The phrasing "A clinical study was performed to evaluate the performance..." also indicates a planned, prospective study.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated how many experts were involved in the manual scoring. The text only mentions "manually scored in accordance with the American Academy of Sleep Medicine (AASM) guidelines."
- Qualifications of Experts: Not explicitly stated, but implied to be qualified sleep technicians/physicians capable of AASM-compliant scoring.
4. Adjudication Method for the Test Set
The adjudication method for reconciling discrepant manual scores (if multiple scorers were used) is not specified in the provided text; it simply states "manually scored." If only one scorer was used per patient, no adjudication would have been needed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no indication that a multi-reader multi-case (MRMC) comparative effectiveness study was done to evaluate how human readers improve with AI vs. without AI assistance. The study focuses solely on the device's standalone performance compared to manual PSG scoring.
6. Standalone (Algorithm Only) Performance
Yes, a standalone performance study was done. The entire clinical testing section details the performance of the "Happy Health Home Sleep Test" algorithm (hAHI and TST) compared to manually scored Polysomnography (PSG) data, without human-in-the-loop assistance.
7. Type of Ground Truth Used
The primary ground truth used was expert consensus / manual scoring of Polysomnography (PSG) data in accordance with American Academy of Sleep Medicine (AASM) guidelines. This is the gold standard for sleep studies.
8. Sample Size for the Training Set
The sample size for the training set is not provided in this document. The clinical study details describe the test set used for validation.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established. It only discusses the ground truth for the clinical validation (test) set.
(111 days)
The TipTraQ is a wearable device intended for aiding in sleep apnea evaluation/diagnosis for adult patients suspected of sleep apnea in both home-based and clinical-use environments.
TipTraQ is a prescription-only medical device aiding in sleep apnea evaluation/diagnosis, comprising a fingertip wearable device, a companion mobile app, a cloud-based AI analysis system, and an information display system. It collects essential physiological waveform information, including Photoplethysmogram (PPG) and accelerometer data. The TipTraQ Algorithm System determines Total Sleep Time (TST), Total REM Time (TREMT), Oxygen Desaturation Index (ODI), and Apnea-Hypopnea Index (AHI) from the recorded waveform signals.
TipTraQ Sensor includes a PPG sensor, which uses light sources (green, red, and infrared) and two photodetectors to measure the blood volume changes in the microvascular bed of tissue. Additionally, the TipTraQ Sensor utilizes an Inertial Measurement Unit (IMU) with three-axis accelerometers (device-function) and three-axis gyroscopes (non-device function), monitoring movement and actigraphy to assess sleep stages. The TipTraQ Expert Panel can display recorded physiological waveforms, while the TipTraQ API System generates output for essential physiological parameters analyzed by the TipTraQ Algorithm System.
The TipTraQ Sensor is rechargeable and designed to provide at least 12 hours of use after a full charge. The TipTraQ Charging Case, which is also rechargeable, has enough battery capacity for eight full charges of the TipTraQ Sensor. The Charging Case itself can be powered and charged via an external USB-C connection.
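The hourly indices the Algorithm System reports (AHI, ODI) share the same normalization: a count of detected events divided by total sleep time in hours. A minimal sketch of that normalization only (the event-detection logic itself is proprietary and not described in the document; the function name is illustrative):

```python
def events_per_hour(event_count, total_sleep_minutes):
    """Normalize an event count (e.g. apneas/hypopneas for AHI, or
    oxygen desaturations for ODI) to an hourly index."""
    if total_sleep_minutes <= 0:
        raise ValueError("total sleep time must be positive")
    return event_count / (total_sleep_minutes / 60.0)

# e.g. 30 scored events over 6 hours (360 min) of sleep -> 5.0 events/hr
```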
Here's a breakdown of the acceptance criteria and the study details for the TipTraQ (TTQ001) device, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document presents the performance results directly without explicitly stating the "acceptance criteria" as a separate, quantified threshold. However, for the SpO2 validation, it states "met the predefined acceptance criteria <3.5". For the sleep validation study, it mentions "met all the predetermined acceptance criteria (set by prior studies and reference literatures)," implying these are the results shown in the table.
| Measure | Metric | Acceptance Criteria (Implied / Stated) | Reported Device Performance |
|---|---|---|---|
| SpO2 Accuracy (overall) | Arms (accuracy root mean square) | < 3.5% (stated) | 1.41% |
| SpO2 Accuracy, range 70-100% | Arms | N/A (implied by meeting the overall Arms criterion) | 1.41 |
| SpO2 Accuracy, range 60-70% | Arms | N/A | 1.55 |
| SpO2 Accuracy, range 70-80% | Arms | N/A | 1.7 |
| SpO2 Accuracy, range 80-90% | Arms | N/A | 1.35 |
| SpO2 Accuracy, range 90-100% | Arms | N/A | 1.11 |
| Pulse Rate (PR) Accuracy | Arms | N/A | 1.64 |
| Pulse Rate (PR) Range | N/A | N/A | 56-118 bpm |
| Total Sleep Time | Pearson Correlation Coefficient (PCC) | N/A | 0.786 |
| Epoch-wise Sleep Stage (Wake) | Sensitivity | N/A | 0.655 |
| Epoch-wise Sleep Stage (Wake) | Specificity | N/A | 0.901 |
| Epoch-wise Sleep Stage (Wake) | Accuracy | N/A | 0.837 |
| Epoch-wise Sleep Stage (REM) | Sensitivity | N/A | 0.713 |
| Epoch-wise Sleep Stage (REM) | Specificity | N/A | 0.930 |
| Epoch-wise Sleep Stage (REM) | Accuracy | N/A | 0.905 |
| Epoch-wise Sleep Stage (NREM) | Sensitivity | N/A | 0.824 |
| Epoch-wise Sleep Stage (NREM) | Specificity | N/A | 0.738 |
| AHI (1a rule) | Overall accuracy | N/A | 0.791 |
| AHI (1a rule) | Cutoff 5 sensitivity | N/A | 0.868 |
| AHI (1a rule) | Cutoff 5 specificity | N/A | 0.741 |
| AHI (1a rule) | Cutoff 15 sensitivity | N/A | 0.876 |
| AHI (1a rule) | Cutoff 15 specificity | N/A | 0.755 |
| AHI (1a rule) | Cutoff 30 sensitivity | N/A | 0.806 |
| AHI (1a rule) | Cutoff 30 specificity | N/A | 0.905 |
| AHI (1b rule) | Cutoff 5 sensitivity | N/A | 0.924 |
| AHI (1b rule) | Cutoff 5 specificity | N/A | 0.801 |
| AHI (1b rule) | Cutoff 15 sensitivity | N/A | 0.909 |
| AHI (1b rule) | Cutoff 15 specificity | N/A | 0.908 |
| AHI (1b rule) | Cutoff 30 sensitivity | N/A | 1.000 |
| AHI (1b rule) | Cutoff 30 specificity | N/A | 0.933 |
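The Arms and cutoff sensitivity/specificity figures above follow standard definitions: Arms is the root mean square of paired device-minus-reference errors, and each AHI cutoff dichotomizes both the device and reference AHI into positive/negative. A sketch under those definitions (not code from the submission; names and inputs are illustrative):

```python
import math

def arms(device_vals, reference_vals):
    """Accuracy root mean square (Arms): RMS of paired
    device-minus-reference errors (e.g. SpO2 vs. arterial blood gas)."""
    errors = [d - r for d, r in zip(device_vals, reference_vals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def sens_spec_at_cutoff(device_ahi, reference_ahi, cutoff):
    """Sensitivity/specificity when device and reference AHI are both
    dichotomized at the same clinical cutoff (e.g. 5, 15, or 30)."""
    tp = fn = tn = fp = 0
    for d, r in zip(device_ahi, reference_ahi):
        if r >= cutoff:          # reference-positive subject
            if d >= cutoff:
                tp += 1
            else:
                fn += 1
        else:                    # reference-negative subject
            if d >= cutoff:
                fp += 1
            else:
                tn += 1
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting sensitivity and specificity at the 5, 15, and 30 events/hr cutoffs, as the table does, reflects the clinical severity thresholds commonly used for mild, moderate, and severe sleep apnea.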
2. Sample Size and Data Provenance for the Test Set
- SpO2 Validation Study:
- Sample Size: 12 subjects
- Data Provenance: University of California San Francisco (UCSF) - implies prospective study data from the US.
- Sleep Validation Study:
- Sample Size: 147 qualified samples
- Data Provenance: Duke University Hospital - implies prospective study data from the US.
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number or specific qualifications of experts used to establish the ground truth for either the SpO2 validation study or the sleep validation study.
- SpO2 Validation: The ground truth was established by "reference arterial blood gas sampling," which is a clinical standard, not an expert panel.
- Sleep Validation: The ground truth was established by "the gold standard in-lab polysomnography (PSG)." PSG scoring typically involves trained sleep technologists and often review by a board-certified sleep physician, but the number and qualifications of these individuals are not specified in the document.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method (e.g., 2+1, 3+1) for either the SpO2 validation study or the sleep validation study.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study or any effect size for human readers improving with AI assistance is present in the document. The studies described are standalone performance validations against a gold standard.
6. Standalone Performance Study (Algorithm Only)
Yes, standalone performance studies were done. Both the SpO2 validation study and the sleep validation study describe the performance of the TipTraQ device (including its "TipTraQ Algorithm System") against a gold standard or reference method, without human-in-the-loop assessment in the primary performance metrics. The statement that the "TipTraQ Algorithm System determines Total Sleep Time (TST), Total REM Time (TREMT), Oxygen Desaturation Index (ODI), and Apnea-Hypopnea Index (AHI) from the recorded waveform signals" indicates an algorithm-only evaluation of these parameters.
7. Type of Ground Truth Used
- SpO2 Validation Study: "Reference arterial blood gas sampling." (Clinical standard measurement)
- Sleep Validation Study: "Gold standard in-lab polysomnography (PSG)." (Clinical standard physiological monitoring and expert scoring)
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size for the training set used for the TipTraQ Algorithm System.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information regarding how the ground truth for the training set was established. It only describes the validation studies.