Search Results

Found 4 results

510(k) Data Aggregation

    K Number: K250874
    Device Name: Sunrise
    Manufacturer:
    Date Cleared: 2025-08-29 (158 days)
    Product Code: QRS
    Regulation Number: 868.2376
    Reference & Predicate Devices:
    Predicate For: N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    "... 5101 Belgium ... Re: K250874 ... Trade/Device Name: Sunrise Air ... Regulation Number: 21 CFR 868.2376 ... on mandibular movement ... Regulatory Class: II ... Product Code: QRS ... Regulation: 21 CFR 868.2376"

    Intended Use

    The Sunrise Air is a non-invasive home care aid in the evaluation of obstructive sleep apnea (OSA) in patients 18 years and older with suspicions of sleep breathing disorders.

    Device Description

    The Sunrise Air consists of the Sunrise software (v1.28.00), which analyzes data from one of three compatible sensors (Sunrise sensor 1, Sunrise sensor 2, or Sunrise Air) placed on the patient's chin. Sunrise sensor 1 was authorized through De Novo DEN210015, while Sunrise sensor 2 was cleared through K222262. The current version of the Sunrise device introduces a new sensor, Sunrise Air. The Sunrise device is intended to detect respiratory events, identify sleep stages and position, and generate key sleep parameters, such as the apnea-hypopnea index ("Sunrise AHI") and positional state classifications. The collected data is compiled into a report for further interpretation by a healthcare provider.
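
    For reference, the apnea-hypopnea index named here ("Sunrise AHI") conventionally denotes apneas plus hypopneas per hour of sleep. The following is a minimal illustrative calculation of that conventional definition, not the proprietary Sunrise algorithm:

        # Illustrative only: the conventional AHI definition (events per hour of sleep).
        # The actual Sunrise algorithm is proprietary and not described in the summary.
        def apnea_hypopnea_index(n_apneas: int, n_hypopneas: int, total_sleep_time_min: float) -> float:
            """Return AHI = (apneas + hypopneas) per hour of sleep."""
            total_sleep_time_h = total_sleep_time_min / 60.0
            if total_sleep_time_h <= 0:
                raise ValueError("Total sleep time must be positive")
            return (n_apneas + n_hypopneas) / total_sleep_time_h

        # Example: 12 apneas and 30 hypopneas over 420 minutes of sleep -> AHI = 6.0 events/h
        print(apnea_hypopnea_index(12, 30, 420))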

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the Sunrise Air device primarily focuses on demonstrating substantial equivalence to a predicate device, rather than detailing a comprehensive clinical study to prove the device meets specific acceptance criteria for its claimed indications.

    The document highlights bench testing for technical equivalence, but lacks the detailed clinical study information typically provided for direct performance claims against established ground truth. Specifically, it states that "No modifications have been made to the Sunrise algorithm used to generate sleep parameters," and that a "validation study of SpO₂ and pulse rate accuracy for the subject device was conducted using raw PPG data acquired during the clinical validation for the Sunrise sensor 2 (K222262)." This suggests reliance on prior clearances for core algorithm performance and a specific re-validation for only the PPG data processing change.

    Therefore, many of the requested details about acceptance criteria, clinical study design, and ground truth establishment for the overall device performance (e.g., AHI calculation, OSA evaluation) are not explicitly present in this summary.

    Given the information in the provided document, here's what can be extracted and inferred:

    1. A table of acceptance criteria and the reported device performance:

    Based on the information provided, the "acceptance criteria" are implied by the comparisons to the predicate and reference devices, and some specific performance metrics are given for SpO2 and pulse rate. The primary acceptance criterion for the device's main function (evaluation of OSA via AHI) is that "No modifications have been made to the Sunrise algorithm used to generate sleep parameters," implying continued equivalence to the predicate's performance.

    Performance Metric | Acceptance Criteria (Implied/Direct) | Reported Device Performance (Sunrise Air)
    Overall Device Performance (OSA Evaluation) | Implied substantial equivalence to the predicate device (Sunrise, K222262) in the evaluation of OSA, as no changes were made to the core AHI algorithm. | "No modifications have been made to the Sunrise algorithm used to generate sleep parameters." The device generates "key sleep parameters—such as the apnea-hypopnea index ('Sunrise AHI')."
    SpO₂ Accuracy | Not explicitly stated but inferred from the previous predicate's clearance (K222262). Common standards are often <3.0% RMS. | 1.91% RMS over the range of 70-100%
    Pulse Rate Accuracy | Not explicitly stated but inferred from the previous predicate's clearance (K222262). Common standards are often within 5 bpm or <5% RMS. | 2.73 beats per minute (bpm) RMS for a claimed measurement range of 51 to 104 bpm
    Accelerometer and Gyroscope Signals | Technical equivalence to the predicate device. | Signals measured by the subject and predicate devices found to be equivalent.
    Thermistor Signal (Breathing Patterns) | Equivalent performance to the oronasal thermal airflow sensor of the reference device. | Equivalent performance in capturing breathing patterns demonstrated.
    Microphone Signal (Snoring) | Comparable performance to the microphone of the reference device (Somno HD). | Comparable performance observed; sound patterns visually similar, synchronized transitions, comparable noise variations.
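
    The SpO₂ and pulse-rate figures in the table above are root-mean-square (RMS) error values. Below is a minimal sketch of how such an accuracy RMS (ARMS) figure is conventionally computed from paired device and reference readings, using hypothetical numbers rather than the submitted data:

        import math

        # Illustrative ARMS calculation: root-mean-square difference between device
        # readings and paired reference readings (e.g., CO-oximetry for SpO2).
        def accuracy_rms(device_values, reference_values):
            if len(device_values) != len(reference_values) or not device_values:
                raise ValueError("Need equal-length, non-empty paired samples")
            squared_errors = [(d - r) ** 2 for d, r in zip(device_values, reference_values)]
            return math.sqrt(sum(squared_errors) / len(squared_errors))

        # Hypothetical paired SpO2 samples (%); a real validation uses many subjects
        # spanning the claimed 70-100% range.
        device = [97.0, 93.5, 88.0, 81.0, 74.0]
        reference = [96.0, 95.0, 89.5, 79.0, 75.5]
        print(f"ARMS = {accuracy_rms(device, reference):.2f} %")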

    Study that Proves the Device Meets Acceptance Criteria:

    The document describes a combination of bench testing and reliance on prior clinical validation for specific components. There isn't a single, new "study" designed to prove the overall device meets a set of clinical acceptance criteria for OSA evaluation, but rather, individual tests to establish equivalence of components or re-validate specific algorithm changes.

    2. Sample size used for the test set and the data provenance:

    • Overall Device (for AHI/OSA evaluation): Not explicitly stated for a new study. The document states that "No modifications have been made to the Sunrise algorithm used to generate sleep parameters." This implies reliance on the clinical validation data from the predicate device (Sunrise K222262). The original K222262 submission would contain this information.
    • SpO₂ and Pulse Rate Accuracy:
      • Sample Size: Not explicitly stated. The study was conducted using "raw PPG data acquired during the clinical validation for the Sunrise sensor 2 (K222262)." The sample size for that original clinical validation would be the relevant number.
      • Data Provenance: Retrospective, as it used data from a previous clinical validation study (for Sunrise sensor 2, cleared under K222262). The country/region of origin of this data is not specified in this document.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Overall Device (for AHI/OSA evaluation): Not specified in this document, as the core algorithm relies on prior validation. For the original K222262 clearance, ground truth would typically be established by a consensus of sleep experts (e.g., board-certified sleep physicians or registered polysomnographic technologists (RPSGTs)).
    • SpO₂ and Pulse Rate: Ground truth for these parameters is typically established through a co-oximeter or arterial blood gas analysis, not necessarily by "experts" in the human sense, but by a gold-standard measurement device.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not specified within this 510(k) summary for any new studies. For the original clinical validation of the AHI algorithm, an adjudication method (such as independent scoring by multiple qualified technologists/physicians with consensus or a tie-breaker) would typically be employed for the polysomnography (PSG) ground truth.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study involving human readers and AI assistance is not described in this document. The device is for "aiding in the evaluation" and generates parameters; it is not presented as an AI-assisted diagnostic tool for human interpretation improvement in this summary.

    6. Whether a standalone evaluation (i.e., algorithm only, without human-in-the-loop performance) was done:

    • Yes, the device's capability to "detect respiratory events, identify sleep stages and position, and generate key sleep parameters" and a "Sunrise AHI" implies a standalone algorithmic performance in generating these outputs from the sensor data. The statement "No modifications have been made to the Sunrise algorithm used to generate sleep parameters" means that the standalone performance of the algorithm itself is considered validated based on its prior clearance. The SpO₂ and pulse rate accuracy also represent standalone algorithm performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Overall Device (for AHI/OSA evaluation): Not explicitly stated, but for sleep apnea diagnostic devices, the ground truth is overwhelmingly polysomnography (PSG) scored by qualified experts (e.g., according to AASM guidelines). This would have been the ground truth for the predicate device's (K222262) clearance.
    • SpO₂ and Pulse Rate: The ground truth for SpO₂ accuracy is typically established using a reference pulse oximeter or co-oximeter (invasive arterial blood gas analysis may be used for a subset of the data if required for the specific accuracy claims and range). For pulse rate, the ground truth is typically a simultaneous ECG or the reference oximeter's heart rate measurement.

    8. The sample size for the training set:

    • Not specified in this document. As the core algorithm is unchanged from the predicate, its training data would have been part of the K222262 submission.
    • The document mentions "cloud-based algorithm (Sunrise PPG algorithm)" as a change for PPG data processing, but it does not specify the training set size for this particular component, only that its validation was done on existing test data.

    9. How the ground truth for the training set was established:

    • Not specified in this document, as the core algorithm leverages prior clearance. For the predicate device, ground truth for training data would have broadly been established in the same manner as the test set: expert-scored polysomnography (PSG) data. However, the specific details (e.g., single expert vs. consensus) are not provided here.

    K Number: K222262
    Device Name: Sunrise
    Manufacturer:
    Date Cleared: 2022-12-22 (147 days)
    Product Code: QRS
    Regulation Number: 868.2376
    Reference & Predicate Devices:
    Predicate For:
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    "... Marche 598/02 Namur, 5101 Belgium ... Re: K222262 ... Trade/Device Name: Sunrise ... Regulation Number: 21 CFR 868.2376 ... apnea testing based on mandibular movement ... Regulatory Class: II ... Product Code: QRS ... Regulation: 21 CFR 868.2376"

    Intended Use

    The Sunrise device is a non-invasive home care aid in the evaluation of obstructive sleep apnea (OSA) in patients 18 years and older with suspicions of sleep breathing disorders.

    Device Description

    The Sunrise device is a cloud-based software device that analyzes data from a sensor (Sunrise sensor 1 or Sunrise sensor 2) placed on the patient's chin. The device detects respiratory events, identifies sleep stages and position. The device generates sleep parameters, e.g. apnea hypopnea index "Sunrise AHI", and position discrete states. Data collected by the device is integrated in a report for further interpretation by the healthcare provider.

    AI/ML Overview

    The provided text details the performance data for the Sunrise device to support its substantial equivalence determination. However, it does not explicitly state "acceptance criteria" in a tabular format as requested. Instead, it describes performance metrics (e.g., median measurement bias and LOA, sensitivity, specificity, global accuracy, and RMS values) for various parameters against pre-determined thresholds of clinical acceptability or against a gold standard (PSG).

    Based on the provided information, I will infer the acceptance criteria from the reported performance, as these are the values the device did achieve and were deemed sufficient for substantial equivalence.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    As explicit acceptance criteria thresholds are not stated, the "Acceptance Criteria" column will reflect the reported performance that was deemed acceptable for substantial equivalence. The "Reported Device Performance" will reiterate these values.

    Parameter | Acceptance Criteria (Inferred from Reported Performance) | Reported Device Performance
    Study 1 (Belgium, n=289):
    TST Median Bias & LOA | Median bias within -4.50 min and LOA of -41.74 to +35.67 | -4.50 min (-41.74 to +35.67)
    AHI Median Bias & LOA | Median bias within -0.46 event/h and LOA of -13.52 to +9.00 | -0.46 event/h (-13.52 to +9.00)
    ORDI Median Bias & LOA | Median bias within +0.15 event/h and LOA of -10.70 to +10.12 | +0.15 event/h (-10.70 to +10.12)
    Sensitivity (AHI>=5) | >= 0.99 | 0.99
    Sensitivity (AHI>=15) | >= 0.92 | 0.92
    Sensitivity (AHI>=30) | >= 0.81 | 0.81
    Specificity (AHI>=5) | >= 0.86 | 0.86
    Specificity (AHI>=15) | >= 0.94 | 0.94
    Specificity (AHI>=30) | >= 0.99 | 0.99
    Study 2 (France, n=31):
    TST Median Bias & LOA | Median bias within -10.50 min and LOA of -37.42 to +25.79 | -10.50 min (-37.42 to +25.79)
    AHI Median Bias & LOA | Median bias within +0.20 event/h and LOA of -12.30 to +6.30 | +0.20 event/h (-12.30 to +6.30)
    ORDI Median Bias & LOA | Median bias within +1.01 event/h and LOA of -11.24 to +6.21 | +1.01 event/h (-11.24 to +6.21)
    Sensitivity (AHI>=5) | >= 1.00 | 1.00
    Sensitivity (AHI>=15) | >= 0.94 | 0.94
    Sensitivity (AHI>=30) | >= 0.87 | 0.87
    Specificity (AHI>=5) | >= 0.75 | 0.75
    Specificity (AHI>=15) | >= 1.00 | 1.00
    Specificity (AHI>=30) | >= 1.00 | 1.00
    Study 3 (Belgium, n=10):
    Position Discrete States Global Accuracy | >= 93% | 93%
    Study 4 (SpO2 & Pulse Rate Accuracy):
    SpO2 Accuracy (RMS) | <= 2.70% (over range 70-100%) | 2.70% (over range of 70-100%)
    Pulse Rate Accuracy (RMS) | <= 1.95 bpm (over range 51-104 bpm) | 1.95 bpm (for a range of 51 to 104 bpm)
    Thermistor Ability to Capture Airflow | Performance equivalent to the PSG oronasal thermal airflow sensor | Equivalent to an oronasal thermal airflow sensor used in PSG
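
    The bias/limits-of-agreement and sensitivity/specificity figures above come from comparisons against PSG. The following is a minimal sketch of how such metrics are conventionally computed from per-patient (device, PSG) pairs, using hypothetical numbers and a nonparametric LOA definition; it is not the sponsor's actual statistical analysis:

        import statistics

        # Illustrative only: typical derivations when comparing a device estimate
        # (e.g., Sunrise AHI) to the PSG-scored value for each patient.
        def median_bias_and_loa(device, psg):
            """Median difference and nonparametric 95% limits of agreement
            (approx. 2.5th and 97.5th percentiles of per-patient differences)."""
            diffs = sorted(d - p for d, p in zip(device, psg))
            n = len(diffs)
            bias = statistics.median(diffs)
            lower = diffs[max(0, int(0.025 * (n - 1)))]
            upper = diffs[min(n - 1, int(round(0.975 * (n - 1))))]
            return bias, (lower, upper)

        def sensitivity_specificity(device, psg, cutoff):
            """Classify each patient as positive if AHI >= cutoff, then compare to PSG."""
            tp = sum(1 for d, p in zip(device, psg) if d >= cutoff and p >= cutoff)
            fn = sum(1 for d, p in zip(device, psg) if d < cutoff and p >= cutoff)
            tn = sum(1 for d, p in zip(device, psg) if d < cutoff and p < cutoff)
            fp = sum(1 for d, p in zip(device, psg) if d >= cutoff and p < cutoff)
            sens = tp / (tp + fn) if (tp + fn) else float("nan")
            spec = tn / (tn + fp) if (tn + fp) else float("nan")
            return sens, spec

        # Hypothetical per-patient AHI pairs (device estimate, PSG-scored value):
        pairs = [(4.0, 3.5), (7.2, 8.0), (16.0, 14.5), (28.0, 31.0), (2.1, 6.0), (40.0, 38.0)]
        device_ahi = [d for d, _ in pairs]
        psg_ahi = [p for _, p in pairs]
        print(median_bias_and_loa(device_ahi, psg_ahi))
        print(sensitivity_specificity(device_ahi, psg_ahi, cutoff=15))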

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Clinical Study 1: 289 patients, retrospective, comparative, open study. Performed in Belgium.
    • Clinical Study 2: 31 patients, retrospective, comparative, open study. Performed in France.
    • Clinical Study 3: 10 patients, retrospective, comparative, open study. Performed in Belgium.
    • Clinical Study 4 (SpO2 & Pulse Rate): Not explicitly stated, but validated in accordance with ISO 80601-2-61:2019 and FDA guidance. This is typically a controlled bench study with a human subject population, but the document does not break down the sample size for this specific validation.
    • Thermistor Validation: Not explicitly stated (the text mentions "a validation study was conducted").

    All mentioned clinical studies are described as retrospective, comparative, and open studies.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number or qualifications of experts used to establish the ground truth (PSG data). It only refers to "the gold-standard PSG" as the comparison. In typical PSG studies, the PSG data is scored by trained sleep technologists and sometimes reviewed by a sleep physician, but this detail is not provided.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method for the test set. The ground truth is stated to be "the gold-standard PSG."

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No MRMC study comparing human readers with and without AI assistance is mentioned. The studies focus on the performance of the device (algorithm) itself against PSG.

    6. Whether a Standalone Evaluation (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    Yes, the clinical studies describe the performance of the "algorithm" and the "device" against PSG, indicating a standalone (algorithm only) evaluation. The text states: "The algorithm was used to analyze sensor data and evaluate the performance of the device compared to PSG."

    7. The Type of Ground Truth Used

    The primary ground truth used for OSA parameters (TST, AHI, ORDI, OSA severity) and position discrete states was Polysomnography (PSG), referred to as the "gold-standard PSG." For SpO2 and pulse rate accuracy, the ground truth was established in accordance with ISO 80601-2-61:2019 and relevant FDA guidance, which typically involves comparison against a reference oximeter or validated measurement system. For the thermistor, it was compared to an "oronasal thermal airflow sensor used in PSG."

    8. The Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. The clinical studies mentioned (n=289, n=31, n=10) are described as performance evaluation studies for the device, not necessarily for training. It states the "Sunrise algorithm... has been updated," implying a development process that would include training, but the specifics of the training dataset are not provided.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for any potential training set was established. It focuses solely on the performance evaluation of the device against the "gold-standard PSG" for its validation studies.


    K Number: DEN210015
    Manufacturer:
    Date Cleared: 2022-01-07 (280 days)
    Product Code: QRS
    Regulation Number: 868.2376
    Type: Direct
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    "... NEW REGULATION NUMBER: 21 CFR 868.2376 ... CLASSIFICATION: Class II ... PRODUCT CODE: QRS ... BACKGROUND ... Code: QRS Device Type: Device for sleep apnea testing based on mandibular movement Regulation Number: 868.2376"

    Intended Use

    The Sunrise SDDA device is a non-invasive home care aid in the evaluation of obstructive sleep apnea (OSA) in patients 18 years and older with suspicions of sleep breathing disorders.

    Device Description

    The Sunrise SDDA device consists of a Sunrise sensor and a cloud-based software device that analyzes data from the sensor when it is placed on the patient's mandible. The device also includes a mobile application to record the patient's responses to questions about their sleep quality and to transfer sensor data to the cloud. By analyzing the patient's mandibular movements, the device detects obstructive respiratory disturbances, identifies sleep states, reports Obstructive Sleep Apnea (OSA) severity in a categorical format (non-OSA, mild-OSA, moderate-OSA, severe-OSA), and generates sleep structure information (namely, total sleep time, sleep onset latency, wake after sleep onset, sleep efficiency, arousal index) and head position discrete states. Data collected by the device is integrated into a report for further interpretation and diagnosis by the healthcare provider.
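
    For context, the sleep-structure outputs named here (total sleep time, sleep onset latency, wake after sleep onset, sleep efficiency) follow standard definitions. Below is a minimal illustrative computation from a per-epoch sleep/wake sequence, assuming 30-second epochs; it is not the device's internal algorithm, and the arousal index and OSA scoring are omitted:

        # Illustrative: standard sleep-structure parameters from a per-epoch
        # sequence scored as sleep (1) or wake (0), assuming 30-second epochs.
        EPOCH_MIN = 0.5  # minutes per epoch

        def sleep_structure(hypnogram):
            total_epochs = len(hypnogram)
            sleep_idx = [i for i, s in enumerate(hypnogram) if s == 1]
            if not sleep_idx:
                raise ValueError("No sleep scored")
            onset, last_sleep = sleep_idx[0], sleep_idx[-1]
            tst = len(sleep_idx) * EPOCH_MIN                 # Total Sleep Time (min)
            sol = onset * EPOCH_MIN                          # Sleep Onset Latency (min)
            waso = sum(1 for s in hypnogram[onset:last_sleep + 1] if s == 0) * EPOCH_MIN  # Wake After Sleep Onset (min)
            se = 100.0 * tst / (total_epochs * EPOCH_MIN)    # Sleep Efficiency (% of recording time)
            return {"TST": tst, "SOL": sol, "WASO": waso, "SE": se}

        # Example: 20 wake epochs, then 20 cycles of (35 sleep + 2 wake) epochs.
        hypnogram = [0] * 20 + ([1] * 35 + [0] * 2) * 20
        print(sleep_structure(hypnogram))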

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Sunrise Sleep Disorder Diagnostic Aid (SDDA), based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Assessment Metric | Acceptance Criteria (Implied) | Reported Device Performance (as stated in the text)
    OSA Severity Output | The clinical data must demonstrate output consistency and compare device performance with a clinical comparator device (polysomnography). Diagnostic metrics (sensitivity, specificity) for different ORDI cut-offs should be presented and deemed acceptable. | Second Study (France): Sensitivity (b)(4) and Specificity (b)(4) for ORDI cut-offs of ORDI > (b)(4), ORDI > (b)(4), and ORDI > (b)(4) events/h, respectively (specific values for each cut-off are redacted as (b)(4)). Third Study (Belgium): Sensitivity (b)(4) and Specificity (b)(4) for ORDI cut-offs of ORDI >= (b)(4), ORDI >= (b)(4), and ORDI >= (b)(4) events/h, respectively (specific values for each cut-off are redacted as (b)(4)).
    Sleep Structure Parameters: Total Sleep Time (TST), Sleep Onset Latency (SOL), Wake After Sleep Onset (WASO), Sleep Efficiency (SE), Arousal Index (ArI) | The clinical data must demonstrate output consistency and compare device performance with a clinical comparator device (polysomnography). Performance should be quantified by Root Mean Square Error (RMSE) and confidence intervals (CIs). | First Study (Belgium, retrospective): RMSE (b)(4) (CI (b)(4)) for TST, SOL, WASO, SE, and ArI, respectively (values redacted as (b)(4)). Second Study (France, prospective): RMSE (b)(4) (CI (b)(4)) for TST, SOL, WASO, SE, and ArI, respectively (values redacted as (b)(4)). Third Study (Belgium, retrospective): RMSE (b)(4) (CI (b)(4)) for TST, SOL, WASO, SE, and ArI, respectively (values redacted as (b)(4)).
    Biocompatibility | Demonstrate that patient-contacting components are biocompatible. | Test articles (skin adhesive and film dressing) found non-cytotoxic, non-sensitizing, and non-irritating per ISO 10993-5 and ISO 10993-10.
    Electromagnetic Compatibility & Electrical Safety | Performance data must be provided to demonstrate EMC and electrical, mechanical, and thermal safety. | IEC 60601-1 and IEC 60601-1-2 testing performed; results support electrical safety and electromagnetic compatibility.
    Software Validation | Appropriate documentation to support validation for a Moderate Level of Concern, including algorithms, hardware characteristics, and mitigations for subsystem failures. Cybersecurity measures also addressed. | "Appropriate documentation" provided per FDA's 2005 (Software) and 2014 (Cybersecurity) guidance documents, including workflow, handling of errors, and algorithm development steps.
    Human Factors/Usability | Usability engineering testing in accordance with IEC 62366-1:2015 should demonstrate that safety-related tasks can be successfully performed. | Formative usability testing conducted in Belgium with adult users (tech-savvy and non-tech-savvy). The majority of participants completed all tasks correctly. Outcome assessed as satisfactory, providing "adequate assurance that all tasks linked to a safety mitigation could be successfully performed." No critical tasks were identified that could result in serious harm if performed incorrectly.
    Packaging and Shelf Life | Packaging and labeling should withstand anticipated shipping conditions and preserve functionality. Shelf life determined and supported. | Drop testing, resistance to rain/humidity, and label integrity evaluations demonstrated appropriate protection. Shelf life of 2 years determined based on the adhesive shelf life.
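
    The sleep-structure performance above is expressed as RMSE with confidence intervals against PSG (the actual values are redacted as (b)(4)). Below is a minimal sketch of an RMSE calculation with a simple percentile-bootstrap CI, using hypothetical numbers rather than the redacted study data:

        import math
        import random

        # Illustrative: RMSE of a device-estimated sleep parameter (e.g., TST) against
        # PSG, with a percentile-bootstrap 95% CI computed over resampled patients.
        def rmse(device, psg):
            return math.sqrt(sum((d - p) ** 2 for d, p in zip(device, psg)) / len(device))

        def bootstrap_ci(device, psg, n_boot=2000, seed=0):
            rng = random.Random(seed)
            n = len(device)
            stats = []
            for _ in range(n_boot):
                idx = [rng.randrange(n) for _ in range(n)]
                stats.append(rmse([device[i] for i in idx], [psg[i] for i in idx]))
            stats.sort()
            return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

        # Hypothetical per-patient TST values in minutes (device, PSG):
        device_tst = [402, 377, 415, 360, 455, 390, 410, 345]
        psg_tst =    [410, 380, 400, 370, 440, 395, 405, 360]
        print(rmse(device_tst, psg_tst), bootstrap_ci(device_tst, psg_tst))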

    Study Details

    The sponsor provided three clinical studies to support the safety and effectiveness of the Sunrise SDDA device.

    1. First Clinical Study (Retrospective)

    • Sample Size: Not explicitly stated for this particular study, but described as "patients."
    • Data Provenance: Belgium, retrospective.
    • Number of Experts for Ground Truth: One experienced sleep technician.
    • Qualifications of Experts: "Experienced sleep technician."
    • Adjudication Method for Test Set: None explicitly mentioned as a multi-expert adjudication process. The PSG data was visually scored by a single experienced sleep technician.
    • MRMC Comparative Effectiveness Study: No. This study focused on algorithm performance against PSG.
    • Standalone Performance: Yes, the Sunrise Machine Learning algorithms analyzed sensor data to evaluate the device performance for sleep structure parameters compared to in-lab PSG.
    • Type of Ground Truth: Expert-scored Polysomnography (PSG) by an experienced sleep technician,
      following 2012 AASM recommendations, and blinded to the study protocol.
    • Sample Size for Training Set: Not explicitly stated; however, the text mentions that "the same datasets were used for both optimizing diagnostic thresholds (training) and performance evaluation (validation)," suggesting this study may have contributed to or been part of the training data.
    • How Ground Truth for Training Set was Established: PSG data visually scored by an experienced sleep technician according to 2012 AASM recommendations.

    2. Second Clinical Study (Prospective)

    • Sample Size: Not explicitly stated, described as "patients."
    • Data Provenance: France, prospective.
    • Number of Experts for Ground Truth: Not explicitly stated beyond "experienced sleep technicians."
    • Qualifications of Experts: "Experienced sleep technicians from two different sleep centers (Université Grenoble Alpes, Grenoble, France and Imperial College London, London, United Kingdom)."
    • Adjudication Method for Test Set: Not explicitly stated as a formal adjudication protocol (e.g., 2+1), but PSG data was scored by "experienced sleep technicians from two different sleep centers," suggesting independent scoring, though not necessarily an adjudication to resolve discrepancies.
    • MRMC Comparative Effectiveness Study: No. This study focused on algorithm performance against PSG.
    • Standalone Performance: Yes, the device's performance for all output parameters (OSA severity and sleep structure) was evaluated compared to ambulatory at-home PSG.
    • Type of Ground Truth: Expert-scored Polysomnography (PSG) by experienced sleep technicians from two different sleep centers, following 2012 AASM recommendations.
    • Sample Size for Training Set: Not mentioned as contributing to the training set. This was an independent prospective study.
    • How Ground Truth for Training Set was Established: Not applicable; this study was for validation.

    3. Third Clinical Study (Retrospective)

    • Sample Size: Not explicitly stated, described as "patients."
    • Data Provenance: Belgium, retrospective.
    • Number of Experts for Ground Truth: One experienced sleep technician.
    • Qualifications of Experts: "Experienced sleep technician."
    • Adjudication Method for Test Set: None explicitly mentioned as a multi-expert adjudication process. The PSG data was visually scored by a single experienced sleep technician.
    • MRMC Comparative Effectiveness Study: No. This study focused on algorithm performance against PSG.
    • Standalone Performance: Yes, the Sunrise SDDA algorithms analyzed sensor data to evaluate the performance of the device compared to in-lab PSG.
    • Type of Ground Truth: Expert-scored Polysomnography (PSG) by an experienced sleep technician,
      following 2012 AASM recommendations, and blinded to the study protocol.
    • Sample Size for Training Set: Not explicitly stated. The study is described as "independent clinical study with similar design as the first one," but doesn't mention its role in training.
    • How Ground Truth for Training Set was Established: Not applicable; this study was for validation.

    Summary of Training and Validation Data Distinction:

    • The First Clinical Study was noted to have used "the same datasets... for both optimizing diagnostic thresholds (training) and performance evaluation (validation)," which was deemed insufficient on its own for demonstrating reasonable assurance of safety and effectiveness.
    • The Second and Third Clinical Studies appear to serve as independent validation studies, utilizing similar methodologies but without the noted confounding factor of using the same data for training and testing. The second study used the final Sunrise sensor and was prospective, while the third was retrospective like the first.

    Key takeaway on training data: While specific training set sizes are not provided, the first study implicitly indicates that some of its data (or data from a similar source) was used for "optimizing diagnostic thresholds (training)." The methods for establishing ground truth for any training data would align with the method used for the first study's ground truth, i.e., expert-scored PSG.
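
    For illustration only, a subject-level split of the kind that avoids the training/validation overlap noted for the first study might look like the sketch below (hypothetical subject identifiers; not the sponsor's procedure):

        import random

        # Illustrative: a subject-level split so that records used to optimize
        # diagnostic thresholds (training) never appear in the performance-evaluation
        # (validation) set, avoiding the confound noted for the first study.
        def split_subjects(subject_ids, train_fraction=0.7, seed=42):
            ids = list(subject_ids)
            random.Random(seed).shuffle(ids)
            n_train = int(len(ids) * train_fraction)
            return set(ids[:n_train]), set(ids[n_train:])

        subjects = [f"S{i:03d}" for i in range(100)]   # hypothetical subject identifiers
        train_ids, val_ids = split_subjects(subjects)
        assert train_ids.isdisjoint(val_ids)           # no leakage between the two sets
        print(len(train_ids), len(val_ids))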


    K Number: K081485
    Device Name: SOMNOWATCH
    Manufacturer:
    Date Cleared: 2008-09-18 (113 days)
    Product Code:
    Regulation Number: 868.2375
    Reference & Predicate Devices:
    Predicate For: N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    "... 200 Alexandria, Virginia 22314 ... Re: K081485 ... Trade/Device Name: SOMNOwatch Regulation Number: 21 CFR 868.2376"

    Intended Use

    The SOMNOwatch is a non-life-supporting portable physiological signal recording device intended to be used for testing adult patients suspected of having movement-correlated sleep disturbances.

    Device Description

    The SOMNOwatch is a small, portable physiological signal recording system intended to be used to record, display, monitor, print and store biophysical events to aid in the diagnosis of sleep disorders. The device is intended to be prescribed for use by a physician in the office, sleep laboratory or patient's home.

    The SOMNOwatch is a small, typically wrist-worn activity monitor. The device is intended to be used to analyze circadian rhythms and to automatically collect and score data for sleep parameters. These parameters, representing the number and intensity of limb movements, are directly associated with movement-correlated sleep disturbances. The unit can also be used to assess activity in any instance where quantifiable analysis of physical motion is desired. For PLM detection, two identical SOMNOwatches may be affixed to the patient's legs, one to each leg.
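
    For context, limb-movement outputs of this kind are commonly summarized as a periodic limb movement (PLM) index, i.e., scored limb movements per hour of sleep. A minimal illustrative calculation follows; it is not the DOMINOlight scoring algorithm:

        # Illustrative: a PLM index expressed as scored limb movements per hour of sleep.
        def plm_index(n_limb_movements: int, total_sleep_time_min: float) -> float:
            return n_limb_movements / (total_sleep_time_min / 60.0)

        # Example: 96 scored limb movements over 384 minutes (6.4 h) of sleep -> 15.0 per hour
        print(plm_index(96, 384))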

    AI/ML Overview

    The SOMNOwatch 510(k) Premarket Application (K081485) describes the device, its intended use, and a summary of nonclinical and clinical testing. However, it does not explicitly define acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, accuracy) or provide a detailed study that proves the device meets such criteria. Instead, the submission focuses on establishing substantial equivalence to predicate devices primarily through nonclinical performance testing and a "clinical comparison" study, without providing quantitative results against predefined thresholds.

    Based on the provided text, here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The provided document does not contain explicit numerical acceptance criteria or quantifiable performance metrics (like sensitivity, specificity, or accuracy for diagnosing sleep disturbances) for the SOMNOwatch itself. The "reported device performance" is broadly stated as:

    Acceptance Criteria (Explicitly Stated) | Reported Device Performance
    Compliance to device specifications | All functions were verified to operate as designed.
    Compliance to international standards for electrical safety and electromagnetic compatibility | Found to be compliant with the requirements of these standards for its intended use.
    Safety and effectiveness outcomes substantially equivalent to predicate devices | Clinical comparison studies found the subject device can be expected to provide safety and effectiveness outcomes substantially equivalent to the predicates.

    2. Sample sizes used for the test set and the data provenance

    • Test Set Sample Size: Not specified. The document only mentions "clinical comparison studies" without detailing the number of participants or the data size.
    • Data Provenance: Not specified. It's unclear if the data was retrospective or prospective, or the country of origin.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not specified. The document does not describe how ground truth was established for any clinical comparison.

    4. Adjudication method for the test set

    Not specified. There is no mention of an adjudication method.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study was described. The SOMNOwatch is an activity recording device that provides raw data and analysis software (DOMINOlight), but the document does not discuss human reader performance improvement with or without AI assistance. The focus is on the device's ability to record and score data for sleep parameters related to movement-correlated sleep disturbances.

    6. Whether a standalone evaluation (i.e., algorithm only, without human-in-the-loop performance) was done

    The device is intended to "automatically collect and score data for sleep parameters," implying a standalone algorithmic analysis. The "DOMINOlight software retrieves the data from the SOMNOwatch, displays the data, and can store data for future reference and comparison. DOMINOlight also allows automatic analysis of all signals including the body position." However, no specific standalone performance metrics (e.g., for automated PLM detection accuracy) are provided in this summary. The evaluation focuses on equivalence to predicate devices rather than direct algorithmic performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not specified. The document does not describe the specific type of ground truth used in its "clinical comparison studies." Given the device's function to "automatically collect and score data for sleep parameters" related to movement, it's possible that the comparison was against similar data from predicate devices or established polysomnography (PSG) techniques, but this is not explicitly stated.

    8. The sample size for the training set

    Not applicable/Not specified. The document does not mention a training set, as it does not describe the development or validation of a new AI/ML algorithm through traditional training/testing splits with a separate training set. The clinical evaluation mentioned is a "comparison study" to predicate devices for substantial equivalence.

    9. How the ground truth for the training set was established

    Not applicable/Not specified as no training set was described.

