Search Results

Found 65 results

510(k) Data Aggregation

    K Number
    K251574
    Device Name
    Sleep Watch
    Date Cleared
    2025-07-31

    (70 days)

    Product Code
    Regulation Number
    882.5050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Ardsley, New York 10502

    Re: K251574
    Trade/Device Name: Sleep Watch
    Regulation Number: 21 CFR 882.5050
    Model Number: V1.0
    Common Name: Device, Sleep Assessment
    Regulation Number: 21 CFR 882.5050

    Intended Use

    A wrist-worn activity monitor designed for documenting physical movement associated with applications in physiological monitoring. The device is intended to monitor the activity associated with movement during sleep and make estimates of sleep quantity/quality using accelerometry, based on actigraph algorithms designed specifically for the device's unique signal processing techniques. Can be used to analyze circadian rhythms and assess activity in any instance where quantifiable analysis of physical motion is desirable.

    The results of the processed data are graphical and numerical presentations and reports of sleep latency, sleep duration, sleep quality and circadian rhythms for the use by or on the order of physicians, trained technicians, or other healthcare professionals.

    The Sleep Watch System is intended for use on a general-purpose computing platform; it does not issue any alarms.

    The Sleep Watch System is intended for use in the natural environment for passive, noninvasive, data collection of physiological parameters that will later be transmitted to a SaaS platform for remote review by a clinician. The Sleep Watch device is intended for use in children and older.

    Device Description

    The Sleep Watch is a wrist-worn device that monitors activity, temperature, and light exposure. It can be used to analyze sleep quantity and quality and circadian rhythms, automatically collect and store data for sleep parameters, and assess activity. It is intended for use by or on the order of a healthcare professional to aid in the evaluation of sleep disorders based on actigraphy recordings, typically collected during sleep.

    The results of the processed data are graphical and numerical presentations and reports of sleep latency, sleep duration, sleep quality and circadian rhythms for the use by or on the order of physicians, trained technicians, or other healthcare professionals.

    The Sleep Watch System is intended for use on a general-purpose computing platform; it does not issue any alarms.

    The Sleep Watch system consists of the components listed below (an illustrative sketch of the resulting upload and retrieval flow follows the list):

    • The Sleep Watch, with built-in accelerometer, gyroscope, PPG, temperature, and light sensors, as well as BLE and WiFi chips.
    • The Sleep Watch collects raw data from each sensor.
    • The Sleep Watch processes signals with filters and stores raw data in eMMC storage.
    • Psychomotor Vigilance Task (PVT)
    • An App manages Sleep Watches
    • A web Application Programming Interface (API) to allow authenticated users to upload data collected from the Sleep Watch to the AMI Cloud Platform
    • A database to store the inputs, intermediate outputs, final outputs, and associated data.
    • A web-based database API to access the database and get outputs.
    • A dashboard, a web-based user interface, to display, retrieve, manage, edit, verify, and summarize Sleep Watch outputs.
    • Proprietary algorithms to analyze actigraphy.
    • A reporting API to generate sleep reports.
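
    The 510(k) summary names these components but does not publish their interfaces. As a purely illustrative sketch of the upload-and-retrieval flow the list describes, the Python fragment below posts collected epochs to a cloud endpoint and fetches a processed report; the endpoint paths, payload fields, and authentication scheme are hypothetical placeholders, not the actual AMI Cloud Platform API.

```python
import json
import urllib.request

# Hypothetical endpoints and token -- the real AMI Cloud Platform API is not described in the summary.
API_BASE = "https://cloud.example.com/api/v1"
TOKEN = "replace-with-auth-token"

def upload_recording(device_id: str, epochs: list) -> int:
    """Upload collected actigraphy epochs via the (hypothetical) authenticated web API."""
    body = json.dumps({"device_id": device_id, "epochs": epochs}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/recordings",
        data=body,
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def fetch_sleep_report(device_id: str, night: str) -> dict:
    """Retrieve processed outputs (database / reporting API step) for display on a dashboard."""
    req = urllib.request.Request(
        f"{API_BASE}/reports/{device_id}?night={night}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```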

    The Sleep Watch System is intended for patients in the home environment for passive, noninvasive, data collection of physiological parameters that will later be transmitted to a SaaS platform for remote review by a clinician. The Sleep Watch device is intended for use in children and older.

    The Sleep Watch System measures and records the following (a generic sketch of how the derived activity and rhythm metrics are commonly computed appears after the list):

    • PPG (Red, Green, Infrared) raw data
    • Accelerometer (X, Y, Z) and Gyroscope (Vx, Vy, Vz) raw data
    • Light (R, G, B) data
    • ZCM (Zero Crossing Mode)
    • PIM (Proportional Integrating Measure)
    • Estimate Sleep and Wake
    • PVT test results
    • Skin Temperatures
    • MESOR (Midline Estimated Statistic of Rhythm), amplitude, and acrophase
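
    ZCM, PIM, and the cosinor parameters listed above have standard textbook definitions, but the summary does not disclose the device's proprietary implementations. The following NumPy sketch shows how such quantities are commonly derived in generic actigraphy work; it is an assumption-laden illustration, not the Sleep Watch algorithm.

```python
import numpy as np

def zcm_pim(accel: np.ndarray, fs: float, epoch_s: int = 60):
    """Generic per-epoch activity counts from a filtered, zero-centered acceleration trace.

    ZCM: number of zero crossings per epoch.
    PIM: area under the rectified signal per epoch (proportional integrating measure).
    """
    n = int(fs * epoch_s)
    epochs = accel[: len(accel) // n * n].reshape(-1, n)
    signs = (epochs >= 0).astype(int)
    zcm = np.abs(np.diff(signs, axis=1)).sum(axis=1)   # sign changes per epoch
    pim = np.abs(epochs).sum(axis=1) / fs              # rectified integral per epoch
    return zcm, pim

def cosinor(t_hours: np.ndarray, activity: np.ndarray, period: float = 24.0):
    """Single-component cosinor fit y = MESOR + A*cos(2*pi*t/period + phi) by least squares."""
    w = 2 * np.pi * t_hours / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w), np.sin(w)])
    mesor, b, c = np.linalg.lstsq(X, activity, rcond=None)[0]
    amplitude = np.hypot(b, c)
    acrophase = np.arctan2(-c, b)   # radians; phase at which the fitted rhythm peaks
    return mesor, amplitude, acrophase
```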

    The Sleep Watch allows for on-wrist and/or in-App rating scales (0 to 10), with an experimenter-selectable initial value (0, 5, or 10) and/or questionnaires (each limited by the constraints of readability). These features can be used on demand, according to an experimenter-selected schedule, or both.

    The Sleep Watch device does not provide physiological alarms.

    AI/ML Overview

    This FDA 510(k) clearance letter and summary for the Sleep Watch device focuses heavily on regulatory compliance, technological comparison, and general software/hardware verification. Crucially, it lacks specific information about clinical performance studies, particularly concerning the quantitative measures of sleep quantity/quality estimates and their accuracy against a gold standard.

    Therefore, I cannot fulfill all parts of your request with the provided information. I will construct a response based on the available data, highlighting where information is missing and inferring what would typically be required for such a device clearance.

    Here's a breakdown of the acceptance criteria and the study information based on the provided text:


    Acceptance Criteria and Device Performance for Sleep Watch

    Based on the provided 510(k) summary, the acceptance criteria are not explicitly stated in a quantitative manner (e.g., "accuracy greater than X%"). Instead, the document discusses meeting general design requirements, software verification/validation, and demonstrating substantial equivalence to the predicate device. For a device estimating sleep quantity/quality, performance would typically be assessed by comparing its output to a recognized "gold standard" for sleep measurement, such as Polysomnography (PSG).

    Given the absence of specific performance metrics in the provided text, the table below reflects what would typically be expected as acceptance criteria for a device making sleep estimates using actigraphy, and it would normally be accompanied by the device's reported performance against those criteria. As these are not present, I will denote them as "Not Specified in Document."

    | Acceptance Criteria Category | Typical Metric (Not Specified in Document) | Reported Device Performance (Not Specified in Document) |
    |---|---|---|
    | Accuracy of Sleep/Wake Estimation | Sensitivity (true positive rate for sleep) vs. PSG | Not Specified in Document |
    | | Specificity (true negative rate for wake) vs. PSG | Not Specified in Document |
    | | Overall Agreement/Accuracy vs. PSG | Not Specified in Document |
    | Accuracy of Sleep Duration | Mean Absolute Error (MAE) compared to PSG | Not Specified in Document |
    | | Bland-Altman agreement with PSG | Not Specified in Document |
    | Accuracy of Sleep Latency | Mean Absolute Error (MAE) compared to PSG | Not Specified in Document |
    | Reliability/Consistency | Test-retest reliability (e.g., ICC) | Not Specified in Document |
    | Usability | User satisfaction, ease of use (qualitative) | "Meets its requirements, performs as intended" (general statement) |
    | Safety | Compliance with electrical, biocompatibility, and cybersecurity standards | Compliant to IEC 60601-1, ISO 10993-1, ANSI/UL 2900-2-1, etc. |
    | Cybersecurity | Robustness against cyber threats, data integrity | Authentication, authorization, cryptographic controls, etc. |
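
    For context, the typical metrics in the table reduce to simple epoch-by-epoch comparisons once device sleep/wake calls and PSG-scored epochs are aligned. The sketch below is a minimal Python illustration of those calculations on made-up data; no such analysis is reported in the document.

```python
import numpy as np

def sleep_metrics_vs_psg(device: np.ndarray, psg: np.ndarray, epoch_min: float = 0.5):
    """Epoch-by-epoch agreement of device sleep/wake calls against PSG (1 = sleep, 0 = wake)."""
    tp = np.sum((device == 1) & (psg == 1))            # sleep scored as sleep
    tn = np.sum((device == 0) & (psg == 0))            # wake scored as wake
    fn = np.sum((device == 0) & (psg == 1))
    fp = np.sum((device == 1) & (psg == 0))
    sensitivity = tp / (tp + fn)                       # sleep detection
    specificity = tn / (tn + fp)                       # wake detection
    accuracy = (tp + tn) / device.size                 # overall agreement
    tst_error_min = abs(int(device.sum()) - int(psg.sum())) * epoch_min   # |total sleep time difference|
    return sensitivity, specificity, accuracy, tst_error_min

# Made-up single-night example: 960 thirty-second epochs, ~90% of epochs agreeing with PSG
rng = np.random.default_rng(0)
psg = (rng.random(960) < 0.8).astype(int)
device = np.where(rng.random(960) < 0.9, psg, 1 - psg)
print(sleep_metrics_vs_psg(device, psg))
```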

    Study Proving Device Meets Acceptance Criteria

    The provided 510(k) summary (Section 7, "Performance Data") describes the testing performed. However, it primarily focuses on non-clinical (software, electrical, and mechanical) testing and verification/validation activities, rather than a clinical performance study demonstrating the accuracy of the sleep estimation algorithms against a gold standard.

    Here's the information extracted and inferred from the document:

    1. A table of acceptance criteria and the reported device performance:

      • As detailed above, specific quantitative acceptance criteria and corresponding reported performance metrics for sleep quantity/quality estimations are not specified in the provided document. The document primarily states that "all pre-defined acceptance criteria for the Sleep Watch were met and all software test cases passed" and that the device "meets its requirements, performs as intended." This refers to internal design and software validation, not clinical performance against a gold standard like PSG.
    2. Sample sizes used for the test set and the data provenance:

      • Test Set Sample Size: Not Specified. The document refers to "system testing," "verification," and "validation" but does not provide a sample size in terms of patient data or clinical recordings used to validate the accuracy of sleep/wake estimates.
      • Data Provenance: Not Specified. There is no mention of the country of origin of any data (clinical or otherwise) or whether it was retrospective or prospective.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not Applicable/Not Specified. Since a clinical performance study comparing the device's sleep estimations to a ground truth (like PSG scored by experts) is not described in the provided text as part of the "Performance Data," there's no mention of experts establishing ground truth for a test set. This would be a critical component of a clinical validation study for sleep monitoring devices.
    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not Applicable/Not Specified. As no expert-adjudicated ground truth acquisition process is described for a clinical test set, no adjudication method is mentioned.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, not specified. The document does not describe any MRMC study involving human readers or clinicians using or being aided by the Sleep Watch. This type of study would be more relevant to AI-assisted diagnostic tools where human interpretation is central. The Sleep Watch primarily provides processed data and reports for review by clinicians; it is not described as an AI-assistance tool for human interpretation of raw signals.
    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Implicitly, yes, for the algorithm's internal function, but not for its clinical accuracy against a gold standard. The document states "Proprietary algorithms to analyze actigraphy" and "Design validation testing which simulated the intended use to confirm that the end-to-end functionality of the Sleep Watch in conjunction with the actigraphy algorithms meets the design requirements." This suggests standalone testing of the algorithms' functionality. However, it does not confirm a standalone clinical performance study where the device's estimated sleep parameters are compared directly to a clinical gold standard (like PSG) without human intervention in the data acquisition/processing chain beyond collecting the actigraphy data.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Not Specified in the context of clinical performance. For the non-clinical testing, requirements were confirmed against "design requirements." For sleep monitoring, the gold standard ground truth would typically be Polysomnography (PSG) data, often scored by certified sleep technologists and overseen by sleep physicians. The document does not state that PSG was used as ground truth for validating the sleep estimation accuracy.
    8. The sample size for the training set:

      • Not Specified. The document mentions "proprietary algorithms" but does not detail their development, including the size or nature of any training data used for these algorithms.
    9. How the ground truth for the training set was established:

      • Not Specified. Given the lack of information on training sets, the method for establishing their ground truth is also not mentioned.

    Summary of Missing Information Critical for Clinical Performance Evaluation:

    The provided 510(k) summary focuses on the technical aspects and regulatory compliance of the Sleep Watch (e.g., software, hardware, safety standards, cybersecurity, and equivalence to a predicate actigraph). It explicitly mentions "Proprietary algorithms to analyze actigraphy" but does not describe the clinical validation study that would typically be performed to demonstrate the accuracy of these algorithms in estimating sleep quantity and quality against a clinical gold standard (like PSG). For a device making sleep estimates, objective clinical performance data (e.g., sensitivity, specificity, accuracy, or agreement metrics against PSG) would be crucial for establishing its effectiveness in its intended use. Without this, the "acceptance criteria" for the clinical performance of its sleep estimation function are not transparent in this document.


    K Number
    K250515
    Device Name
    EpiMonitor
    Manufacturer
    Date Cleared
    2025-06-19

    (118 days)

    Product Code
    Regulation Number
    882.1580
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    | 882.5050

    Intended Use

    EpiMonitor is a prescription only medical device system composed of a wearable device "EmbracePlus" and paired mobile software application "EpiMonitor" intended as an adjunct to seizure monitoring in adults and children aged 6 and up in a home environment or healthcare facilities. The device is worn on the wrist and senses Electrodermal Activity (EDA) and motion data to detect patterns that may be associated with either primary or secondary generalized tonic clonic seizures in patients with epilepsy or at risk of having epilepsy. When a seizure event is detected, the wearable device component of EpiMonitor sends a command to a paired mobile device where the EpiMonitor App is programmed to initiate an alert to a designated caregiver. The EpiMonitor app incorporates additional detection sensitivity modes, "high" for use during periods of rest or sleeping or "low" for use during periods of low-intensity activity, in order to reduce false alarm incidents.

    EpiMonitor records, stores and transmits accelerometer, EDA, peripheral skin temperature and activity data for subsequent retrospective review by a trained healthcare professional via a cloud-based software.

    Device Description

    The EpiMonitor system consists of a wearable device and mobile application:

    • A wearable medical device called EmbracePlus,
    • A mobile application running on smartphones called "EpiMonitor"

    The EmbracePlus is worn on the user's wrist and continuously collects raw data via specific sensors; these data are continuously analyzed by an on-board algorithm (EpiAlgo 2.1), which assesses the physiological data and determines if the user may be undergoing a generalized tonic-clonic seizure (GTCS). The EpiAlgo has been validated through testing using the gold-standard video-electroencephalogram (EEG) methodology designed by a group of epileptologists at a top Level 4 epilepsy center, with data from epilepsy patients experiencing GTCSs in hospital Epilepsy Monitoring Units.

    When a likely GTCS is detected, EmbracePlus sends, via Bluetooth Low Energy, a message to the EpiMonitor app. The EpiMonitor app communicates with the Empatica Cloud, which, through an external provider, initiates a voice call and sends an SMS text message to summon the attention of user-designated caregiver(s).

    In addition to initiating alerts, the EpiMonitor app also continuously receives all the raw sensor data collected by the EmbracePlus. These data are analyzed by one of the EpiMonitor app software modules, EmpaDSP (paragraph 2.3.2), which computes the additional physiological parameters, such as activity during sleep and peripheral skin temperature.

    The EpiMonitor App is also responsible for transmitting, over a cellular data plan or Wi-Fi connection the sensors' raw data, device information, and computed physiological parameters to the Empatica Cloud. On the Empatica Cloud, these data are stored, and made available to healthcare providers via a specific cloud-based software called Care Monitoring Portal.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for EpiMonitor, based on the provided FDA clearance letter:


    Acceptance Criteria and Device Performance for EpiMonitor

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" for PPA and FAR in a table format. Instead, it presents the device's performance for these metrics, implying that these results were deemed acceptable by the FDA for clearance. For the purpose of this response, I'm interpreting the "reported device performance" as the achieved PPA and FAR values and will frame the "acceptance criteria" as the expectation for these metrics to be within reasonable clinical utility.

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Low-Sensitivity Mode) |
    |---|---|---|
    | Positive Percent Agreement (PPA) - During Non-Rest Activities (Epilepsy Monitoring Unit Data) | Clinically acceptable detection of GTCS | 6-21 years: 0.895 (corrected PPA: 0.791, CI: 0.619-0.925); >21 years: 1.000 (corrected PPA: 0.905, CI: 0.891-0.917) |
    | False Alarm Rate (FAR) per 24 hours - During Non-Rest Activities (Epilepsy Monitoring Unit Data) | Clinically acceptable false alarm rate | 6-21 years: Overall FAR: 0.70 (CI: 0.41-1.06), Mean FAR: 0.91 (CI: 0.44-1.57); >21 years: Overall FAR: 0.28 (CI: 0.15-0.46), Mean FAR: 0.33 (CI: 0.17-0.53) |
    | Positive Percent Agreement (PPA) - During Non-Rest Activities (Real-World Data) | Clinically acceptable detection of GTCS | 6-21 years: 0.87 (corrected PPA: 0.86, CI: 0.78-0.92); >21 years: 0.8 (corrected PPA: 0.77, CI: 0.64-0.87) |
    | False Alarm Rate (FAR) per 24 hours - During Non-Rest Activities (Real-World Data) | Clinically acceptable false alarm rate | 6-21 years: Overall FAR: 0.34 (CI: 0.23-0.50), Mean FAR: 0.35 (CI: 0.28-0.45); >21 years: Overall FAR: 0.25 (CI: 0.22-0.30), Mean FAR: 0.29 (CI: 0.26-0.33) |
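
    The summary reports PPA and FAR values but not the underlying formulas or event counts. Under the usual definitions (PPA = detected GTCS events / adjudicated GTCS events; FAR = false alarms normalized to 24 hours of monitoring), the arithmetic is as simple as the sketch below. The counts shown are illustrative: 17/19 happens to reproduce the reported 0.895 PPA for the 6-21-year EMU cohort, but the document does not state the detected-event count, and the FAR inputs are placeholders.

```python
def ppa(detected_events: int, reference_events: int) -> float:
    """Positive percent agreement: fraction of adjudicated GTCS events the device alarmed on."""
    return detected_events / reference_events

def far_per_24h(false_alarms: int, monitored_hours: float) -> float:
    """False alarms normalized to a 24-hour monitoring day."""
    return false_alarms / (monitored_hours / 24.0)

print(round(ppa(17, 19), 3))         # 0.895 -- consistent with the 6-21-year EMU figure
print(far_per_24h(21, 30 * 24.0))    # 0.7 false alarms per 24 h over a 30-day placeholder window
```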

    2. Sample Size for the Test Set and Data Provenance:

    • Epilepsy Monitoring Unit (EMU) Data (Retrospective Analysis):

      • Patients for PPA: 12 patients (6-21 years old) and 12 patients (>21 years old).
      • GTCS events for PPA: 19 GTCS events (6-21 years old) and 17 GTCS events (>21 years old).
      • Patients for FAR: 80 patients (6-21 years old) and 61 patients (>21 years old).
      • Data Provenance: Retrospective analysis of previously collected clinical data from patients observed in Epilepsy Monitoring Units. The document mentions data from "epilepsy patients experiencing GTCSs in hospital Epilepsy Monitoring Units" for the validation of the algorithm (EpiAlgo 2.1).
    • Real-World Data (Longitudinal Analysis) for Low-Sensitivity Mode:

      • Patients for PPA/FAR: 601 patients (6-21 years old) and 843 patients (>21 years old).
      • GTCS events for PPA: 1157 GTCS events (6-21 years old) and 3625 GTCS events (>21 years old).
      • Observation days for FAR: 37594.2 days (6-21 years old) and 56389.1 days (>21 years old).
      • Data Provenance: Longitudinal analysis of real-world data, based on sensor data captured using the Embrace2 wearable device. This suggests the data was collected prospectively in a real-world setting, but its analysis for this specific submission was retrospective.

    3. Number of Experts Used to Establish the Ground Truth and Qualifications:

    • For the initial validation of EpiAlgo 2.1 (which supports the predicate device and is used in the subject device), the ground truth was "designed by a group of epileptologists at a top level 4 epilepsy center." The exact number of epileptologists and their specific years of experience are not provided. The method mentioned is "gold-standard video-Electroencephalogram (EEG) methodology."
    • For the retrospective analyses presented, "adjudicated tonic-clonic seizure data" was used, implying expert review to establish the ground truth of GTCS events. The number and qualifications of the experts performing this adjudication for the analyses presented in Tables 1-4 are not explicitly stated.

    4. Adjudication Method for the Test Set:

    • The document implies clinical adjudication was performed to establish "adjudicated tonic-clonic seizure data" and the "gold-standard video-Electroencephalogram (EEG) methodology." However, it does not specify a particular adjudication method such as 2+1 or 3+1 for the test set data used in these retrospective analyses. It only mentions that the data was "adjudicated."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC comparative effectiveness study was done.
    • The document describes a standalone algorithm performance without human assistance for seizure detection.

    6. Standalone (Algorithm Only) Performance:

    • Yes, a standalone performance evaluation of the algorithm (EpiAlgo ver 2.1) was conducted. The PPA and FAR metrics presented (Tables 1-4) reflect the performance of the algorithm without human-in-the-loop assistance for seizure detection and alerting.

    7. Type of Ground Truth Used:

    • Expert Consensus / Clinical Diagnosis (Video-EEG): For the initial validation of EpiAlgo 2.1, the ground truth was established using "gold-standard video-Electroencephalogram (EEG) methodology designed by a group of epileptologists." This indicates a high standard of clinical diagnosis and expert consensus.
    • Adjudicated Data: For the retrospective analyses of EMU and real-world data, "adjudicated tonic-clonic seizure data" were used, implying expert review and decision-making on seizure events.

    8. Sample Size for the Training Set:

    • The document does not explicitly state the sample size for the training set of EpiAlgo ver 2.1. It mentions that EpiAlgo 2.1 was validated using data from epilepsy patients in EMUs, but this typically refers to validation/test sets, not specifically the training data.

    9. How the Ground Truth for the Training Set Was Established:

    • The method for establishing the ground truth for the training set is not detailed in this document. It only states that the EpiAlgo "has been validated through testing, using the gold-standard video-Electroencephalogram (EEG) methodology designed by a group of epileptologists at a top level 4 epilepsy center, from epilepsy patients experiencing GTCSs in hospital Epilepsy Monitoring Units." This description primarily refers to the validation data, not the data used for initial training.

    K Number
    K242737
    Manufacturer
    Date Cleared
    2025-06-06

    (268 days)

    Product Code
    Regulation Number
    870.2300
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Transmitters and Receivers, Physiological Signal, Radiofrequency | Class II | DRG | Cardiovascular |
    | 882.5050

    Intended Use

    The Empatica Health Monitoring Platform is a wearable device and paired mobile and cloud-based software platform intended to be used by trained healthcare professionals or researchers for retrospective remote monitoring of physiologic parameters in ambulatory individuals 18 years of age and older in home-healthcare environments. As the platform does not provide real-time alerts related to variation of physiologic parameters, users should use professional judgment in assessing patient clinical stability and the appropriateness of using a monitoring platform designed for retrospective review.

    The device is intended for continuous data collection supporting intermittent retrospective review of the following physiological parameters:

    • Pulse Rate,
    • Blood Oxygen Saturation under no-motion conditions,
    • Respiratory Rate under no motion conditions,
    • Peripheral Skin Temperature,
    • Electrodermal Activity,
    • Activity associated with movement during sleep

    The Empatica Health Monitoring Platform can be used to analyze circadian rhythms and assess activity in any instance where quantifiable analysis of physical motion is desirable.

    The Empatica Health Monitoring Platform is not intended for SpO2 monitoring in conditions of motion or low perfusion.

    The Empatica Health Monitoring Platform is intended for peripheral skin temperature monitoring, where monitoring temperature at the wrist is clinically indicated.

    The Empatica Health Monitoring Platform is not intended for Respiratory Rate monitoring in motion conditions. This device does not detect apnea and should not be used for detecting or monitoring cessation of breathing.

    The Empatica Health Monitoring Platform is not intended for Pulse Rate monitoring in patients with chronic cardiac arrhythmias, including atrial fibrillation and atrial/ventricular bigeminy and trigeminy, and is not intended to diagnose or analyze cardiac arrhythmias. The Empatica Health Monitoring Platform is not a substitute for an ECG monitor, and should not be used as the sole basis for clinical decision-making.

    Device Description

    The Empatica Health Monitoring Platform is a wearable device and software platform composed by:

    • A wearable medical device called EmbracePlus,
    • A mobile application running on smartphones called "Care App",
    • A cloud-based software platform named "Care Portal".

    The EmbracePlus is worn on the user's wrist and continuously collects raw data via specific sensors. These data are wirelessly transmitted via Bluetooth Low Energy to a paired mobile device where the Care App is up and running. The data received are analyzed by one of the Care App software modules, EmpaDSP, which computes the user's physiological parameters. Based on the version of the Care App installed, the user can visualize a subset of these physiological parameters. The Care App is also responsible for transmitting, over a cellular or Wi-Fi connection, the sensors' raw data, device information, Care App-specific information, and computed physiological parameters to the Empatica Cloud. On the Empatica Cloud, these data are stored, further analyzed, and accessible by healthcare providers or researchers via a specific cloud-based software called Care Portal.

    The Empatica Health Monitoring Platform is intended for retrospective remote monitoring of physiological parameters in ambulatory adults in home-healthcare environments. It is designed to continuously collect data to support intermittent monitoring of the following physiological parameters and digital biomarkers by trained healthcare professionals or researchers: Pulse Rate (PR), Respiratory Rate (RR), blood oxygen saturation (SpO2), peripheral skin temperature (TEMP), and electrodermal activity (EDA). Activity sensors are used to detect sleep periods and to monitor the activity associated with movement during sleep.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and its attachments describe the acceptance criteria and study that proves the Empatica Health Monitoring Platform (EHMP) meets those criteria, specifically concerning a new Predetermined Change Control Plan (PCCP) for the SpO2 quality indicator (QI) algorithm.

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria are outlined for the proposed modification to the SpO2 Quality Indicator (QI) algorithm. The reported device performance is presented as a statement of equivalence to the predicate device, implying that the acceptance criteria are met, as the 510(k) was cleared.

    | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | SpO2 QI Algorithm - Bench Testing | Sensitivity, Specificity, and False Discovery Rate of the modified SpO2 QI algorithm in discriminating low-quality and high-quality data are non-inferior to the SpO2 QI in the FDA-cleared SpO2 algorithm. | Implied to have met criteria, as the device received 510(k) clearance. Full performance metrics are not explicitly stated in this document but are described as being non-inferior. |
    | SpO2 Algorithm - Clinical Testing (Arms Error) | The Arms error of the modified SpO2 algorithm is lower or equivalent to the FDA-cleared SpO2 algorithm. | Implied to have met criteria, as the device received 510(k) clearance. Full performance metrics are not explicitly stated in this document but are described as being lower or equivalent. |
    | SpO2 QI Algorithm - Clinical Testing (Percent Agreement) | The percent agreement between the modified SpO2 QI outputs and the FDA-cleared SpO2 QI outputs must be equal to or higher than 90%. | Implied to have met criteria, as the device received 510(k) clearance. Full performance metrics are not explicitly stated in this document but are described as being equal to or higher than 90%. |
    | Software Verification Tests | All software verification tests linked to requirements and specifications must pass. | Implied to have met criteria, as the device received 510(k) clearance. |

    Note: For the pre-existing functionalities (Pulse Rate, Respiratory Rate, Peripheral Skin Temperature, Electrodermal Activity, Activity and Sleep), the document states that "no changes to the computation... compared with the cleared version" have been introduced, implying their previous acceptance criteria were met and remain valid.
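
    For reference, the two quantitative criteria above follow standard definitions: the Arms (accuracy root-mean-square) error used for pulse oximeters under ISO 80601-2-61, and a plain percent agreement between the binary quality-indicator outputs. A minimal sketch, assuming paired device/reference readings and paired QI labels are available, is shown below; it is not Empatica's test code.

```python
import numpy as np

def arms_error(spo2_device: np.ndarray, sao2_reference: np.ndarray) -> float:
    """A_rms = sqrt(mean((device - reference)^2)), the standard pulse-oximeter accuracy metric."""
    return float(np.sqrt(np.mean((spo2_device - sao2_reference) ** 2)))

def percent_agreement(qi_modified: np.ndarray, qi_cleared: np.ndarray) -> float:
    """Share of windows where the modified and cleared QI assign the same high/low-quality label."""
    return float(np.mean(qi_modified == qi_cleared) * 100.0)

# Made-up paired readings for illustration only
dev = np.array([97.0, 95.5, 92.0, 88.5])
ref = np.array([96.0, 96.0, 93.0, 90.0])
print(arms_error(dev, ref))                                                # ~1.06 %SpO2
print(percent_agreement(np.array([1, 1, 0, 1]), np.array([1, 1, 0, 0])))   # 75.0
```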

    2. Sample Sizes and Data Provenance

    • Test Set Sample Size: Not explicitly stated for the SpO2 algorithm modification. The document only mentions "enhancing the development dataset with new samples" for the ML-based algorithm, and that clinical testing was "conducted in accordance with ISO 80601-2-61... and ... FDA Guidelines for Pulse Oximeters." These standards typically require a certain number of subjects and data points, but the exact numbers are not provided in this public summary.
    • Data Provenance: Not specified in the provided document. It does not mention the country of origin, nor whether the data was retrospective or prospective.

    3. Number and Qualifications of Experts for Ground Truth

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The document states the platform is "intended to be used by trained healthcare professionals or researchers," and later discusses "professional users" and "clinical interpretation," implying that the ground truth for clinical studies would likely involve such experts, but their specific roles, numbers, and qualifications for establishing ground truth are not detailed.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing ground truth for the test set is not explicitly mentioned in the provided document.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    There is no mention of a Multi Reader Multi Case (MRMC) comparative effectiveness study being conducted, nor any effect size regarding human readers improving with AI vs. without AI assistance. The device is for "retrospective remote monitoring" by healthcare professionals, implying an AI-driven data collection/analysis with human review, but not necessarily human-AI collaboration in real-time diagnostic interpretation that an MRMC study would evaluate.

    6. Standalone (Algorithm Only) Performance

    The acceptance criteria for the SpO2 QI algorithm include "Bench testing conducted using a functional tester to simulate a range of representative signal quality issues." This falls under standalone performance, as it tests the algorithm's ability to discriminate data quality without direct human input. Clinical testing also evaluates the algorithm's accuracy (Arms error) in comparison to an established standard, which is also a standalone performance measure.

    7. Type of Ground Truth Used

    • For the SpO2 QI ML algorithm: The ground truth for low-quality and high-quality data discrimination seems to be an internal standard/reference based on the "FDA-cleared SpO2 algorithm" and potentially expert labeling of data quality during the "enhancing the development dataset."
    • For the SpO2 Accuracy (Arms Error): The ground truth for SpO2 values would be established in accordance with ISO 80601-2-61, which typically involves comparing the device's readings against a laboratory co-oximeter or a reference pulse oximeter for arterial oxygen saturation.

    8. Sample Size for the Training Set

    The document mentions "enhancing the development dataset with new samples" for the ML-based algorithm but does not specify the sample size for the training set.

    9. How Ground Truth for Training Set was Established

    The ground truth for training the ML-based SpO2 QI algorithm was established by "enhancing the development dataset with new samples." It also mentions performing "feature extraction and engineering on window lengths spanning a 10-30-second range." While it doesn't explicitly state the methodology, given the context of a "binary output" (high/low quality), it implies a labeling process, likely by human experts or based on predefined criteria derived from the previous FDA-cleared algorithm's performance on various data types. For the SpO2 accuracy, the ground truth would typically be established by a reference method consistent with the mentioned ISO standard and FDA guidance.


    K Number
    K243513
    Device Name
    DCM (PW-DCM)
    Manufacturer
    Date Cleared
    2025-04-16

    (155 days)

    Product Code
    Regulation Number
    882.5050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    4WQ
    United Kingdom

    Re: K243513
    Trade/Device Name: DCM (PW-DCM)
    Regulation Number: 21 CFR 882.5050
    Classification Name: Biofeedback Device
    Classification Codes: LEL
    Regulation Number: 21 CFR §882.5050
    Prescription or OTC | Prescription | Prescription | Equivalent |
    | Classification & Product code | 882.5050
    , LEL | 882.5050, LEL | Equivalent |
    | Where device is worn (anatomical site) | Wrist | Wrist | Equivalent

    Intended Use

    DCM is a small worn activity monitor designed for documenting physical movement associated with applications in physiological monitoring.

    The device is intended to monitor the activity associated with movement during sleep.

    DCM can be used to analyze circadian rhythms and assess activity in any instance where quantifiable analysis of physical motion is desirable.

    DCM is indicated for monitoring of adult patients only.

    Device Description

    DCM is a wrist-worn wearable device intended to continuously record high resolution digital acceleration data associated with a patient's physical movement.
    In practice, a healthcare professional or researcher can prescribe the device to collect physiological data from patients during sleep and in applications where quantifiable analysis of physical motion is desirable.
    The device is set up to collect data by the healthcare professional then placed on the subject's wrist. The device is designed to be worn during normal activities and/or during sleep over a period of days to weeks. The patient does not need to interact with the device to control data collection.
    The data stored on the device can be transmitted to the cloud for storage, and made accessible to healthcare professionals or researchers for further analysis. Downloaded data can be post-processed based on the timestamp and magnitude of acceleration along each axis.
    The DCM system comprises a system of components:

    • wearable biosensor (PW010)
    • off the shelf mobile device (PW030) running the DCM mobile app (PW400)
    • cloud-based data storage and data processing (PW100) (back-end)
    • investigator dashboard (PW500) accessed through a web browser (front-end)
    AI/ML Overview

    The provided FDA 510(k) clearance letter for the DCM (PW-DCM) device does not describe a study involving a test set, ground truth experts, or human readers for assessing device performance related to diagnostic accuracy or interpretation.

    Instead, the document focuses on the technical performance of the device as a physical activity monitor, comparing it to a predicate device (Actigraph LEAP) primarily on its physical and operational characteristics. The acceptance criteria and "study" described are more akin to verification and validation (V&V) testing of hardware and software components, rather than a clinical performance study measuring accuracy against a diagnostic gold standard involving human interpretation.

    Therefore, many of the requested categories (e.g., number of experts, adjudication method, MRMC study, effect size on human readers, type of ground truth for diagnostic accuracy) are not applicable or cannot be extracted from this document, as the device's function is data collection and not direct diagnostic interpretation.

    However, I can extract the information that is present and explain why other information is not available from this document.


    Acceptance Criteria and Reported Device Performance

    The table below summarizes the technical acceptance criteria for the DCM device and the reported outcomes, as found in the "Summary of Testing" section.

    | Requirement | Acceptance Criteria / Pass/Fail Criteria | Reported Device Performance (Result) |
    |---|---|---|
    | Acceleration Measurement Accuracy | Accuracy of 5% or better (at 1g) in 3 orthogonal directions with sensitivity to at least 0.005g. Accelerometer accuracy to be tested across extended duration data collection runs to confirm no sensor drift. | PASS |
    | Timing Accuracy (Sensor Data Capture) | Timing accuracy within ±10 seconds per hour. Data is transmitted to the cloud and timestamps are visible and accurate within requirements when viewed in the Investigator Dashboard. | PASS |
    | Data Storage upon Connectivity Issues | Data is stored on the biosensor when connection to the mobile device is interrupted and transferred when connection is restored. Data is stored on the mobile device when connection to the cloud platform is interrupted and transferred when connection is restored. | PASS |
    | Usability | Usability activities are conducted according to the IEC 62366-1 process and demonstrate that the usability of the medical device is acceptable with regard to safety. | PASS |
    | Packaging | Device meets visual inspection criteria and passes functional tests following exposure to typical shipping stresses and rough handling. | PASS |
    | EMC (Electromagnetic Compatibility) | Device meets requirements for emissions (Class B) and immunity per IEC 60601-1-2 and 47 CFR Part 15 Subpart B. | PASS |
    | Wireless Coexistence | No interruption to wireless data connections per ANSI C63.27. | PASS |
    | Radio Frequency (Radiated Spurious Emissions) | Device meets requirements for spurious emissions per 47 CFR 15.247. | PASS |
    | Electrical Safety | Device meets applicable requirements for electrical, mechanical and thermal safety, for healthcare and home use environments per IEC 60601-1 and IEC 60601-1-11. | PASS |
    | Software Verification and Validation | Software developed and maintained in accordance with the IEC 62304 lifecycle process, and all verification and validation tests passed. | PASS |
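
    The first two rows above are straightforward numeric pass/fail checks. The sketch below shows, with assumed measurement values, how the 5%-at-1-g accuracy criterion and the ±10 seconds-per-hour timing criterion could be evaluated; the thresholds come from the table, but the test harness itself is hypothetical.

```python
def acceleration_accuracy_ok(measured_g: float, reference_g: float = 1.0, tol: float = 0.05) -> bool:
    """Pass if the relative error at the 1 g static calibration point is 5% or better."""
    return abs(measured_g - reference_g) / reference_g <= tol

def timing_accuracy_ok(device_elapsed_s: float, reference_elapsed_s: float,
                       max_drift_s_per_hour: float = 10.0) -> bool:
    """Pass if clock drift stays within +/-10 seconds per hour of reference time."""
    hours = reference_elapsed_s / 3600.0
    return abs(device_elapsed_s - reference_elapsed_s) <= max_drift_s_per_hour * hours

print(acceleration_accuracy_ok(0.97))           # True: 3% error at 1 g
print(timing_accuracy_ok(86_407.0, 86_400.0))   # True: 7 s drift over 24 h (limit 240 s)
```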

    Study Details (based on available information)

    1. Sample size used for the test set and the data provenance:

      • Test set sample size: Not explicitly stated for each test. The tests described are bench tests ("Bench testing with the biosensor in a range of orientations," "Bench testing with mobile app paired to biosensor," "manual interruption and restoration of connectivity"). This implies testing of device units, not a patient cohort.
      • Data provenance: Not explicitly stated. Given the nature of the tests (bench testing, design validation), the "data" being generated is measurement data from the device itself rather than clinical patient data. The document does not refer to geographical origin or patient type for these validation tests.
      • Retrospective or Prospective: Not applicable in the context of device design verification and validation testing. These are controlled engineering tests.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not applicable for these types of tests. The "ground truth" for these engineering and software tests would be established by calibrated measurement equipment (e.g., accelerometers for accuracy, timing devices for accuracy) and adherence to international standards (e.g., IEC 62366-1 for usability, IEC 60601 series for safety, IEC 62304 for software). There is no mention of human experts interpreting data to establish a ground truth for diagnostic purposes because the device's function is data collection, not interpretation.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • None. This concept is for clinical performance studies where multiple human readers interpret medical images or data. The described tests are technical performance evaluations.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "DCM did not require clinical studies to support substantial equivalence to the predicate device." The device is a "small worn activity monitor designed for documenting physical movement," not a device that provides AI-assisted interpretations for human clinicians.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, a form of standalone testing was done for the technical performance. The "Summary of Testing" section describes tests where the device's inherent capabilities (e.g., acceleration measurement, timing accuracy, data storage) were evaluated against predetermined engineering criteria. This is performance of the algorithm/device itself, without human interpretation in the loop beyond setting up the test and interpreting the test results (e.g., "PASS").
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Technical/Engineering Standards and Calibrated Equipment: For accuracy measurements, the ground truth would be from highly accurate, calibrated reference instruments. For safety, EMC, and software, the ground truth is adherence to established international standards (e.g., IEC 60601-1, IEC 62304) and internal design specifications. There is no biological or clinical "ground truth" (e.g., pathology, outcomes data, expert consensus on patient diagnosis) applied here.
    7. The sample size for the training set:

      • Not applicable / Not disclosed. The document does not describe a machine learning algorithm that requires a "training set" in the context of clinical AI. The device collects raw acceleration data. While there might be internal algorithms for processing this data (e.g., activity counts, sleep/wake detection, circadian rhythm analysis from raw data), the document describes validation of the data collection capability, not the performance of an AI model trained on a specific dataset for diagnostic tasks.
    8. How the ground truth for the training set was established:

      • Not applicable. As no training set for a clinical AI algorithm is described, there's no ground truth establishment for such a set.

    K Number
    K241488
    Device Name
    TrainFES Advance
    Manufacturer
    Date Cleared
    2025-02-05

    (257 days)

    Product Code
    Regulation Number
    890.5850
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :


    Power Muscle Stimulator

    876.5320 882.5810

    NYN

    890.5850 / 882.5890 / 882.5050

    Intended Use

    For Prescription and Home Use by prescription from a medical professional:
    The TrainFES Advanced is a neuromuscular electrical stimulator indicated for use under medical supervision for adjunctive therapy in the treatment of medical diseases and conditions.

    As a powered muscle stimulator, TrainFES Advanced is indicated for the following conditions:

    • Relaxation of muscle spasms
    • Prevention or retardation of disuse atrophy
    • Increasing local blood circulation
    • Immediate post-surgical stimulation of calf muscles to prevent venous thrombosis
    • Maintaining or increasing range of motion
    • Muscle re-education

    As an external functional neuromuscular stimulator (FES), TrainFES Advanced is indicated for the following conditions:

    • Helps to relearn voluntary motor functions of the extremities.

    The intended population for TrainFES Advanced is anyone aged 22 or over.

    Environments of use: TrainFES Advanced devices can be used by both therapists and patients, in the clinic, hospitals or at home.

    Platform: TrainFES is a battery-powered, wireless device, accessible through software.

    Device Description

    TrainFES Advanced is a portable functional electrostimulator with 6 channels designed for use in clinics and hospitals by medical professionals as well as in the home environment by the patient. This device generates electrical impulses to stimulate the musculature of paralyzed segments and facilitate both the relearning of movement and neuromodulation of tone.

    TrainFES Advanced is a battery-powered, wireless device, configurable from the TRAINFES App, available for smartphones and tablets, which allows you to adjust different parameters and follow a training plan from your smartphone. Session settings can be retrieved from the PC or Cloud.

    AI/ML Overview

    The provided document describes the FDA 510(k) premarket notification for the "TrainFES Advanced" device, a neuromuscular electrical stimulator. The purpose of this submission is to demonstrate substantial equivalence to a predicate device (Stella BIO, K210002).

    Here's an analysis of the acceptance criteria and study information provided:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" for the device's performance in a quantitative manner (e.g., target accuracy, sensitivity, specificity). Instead, the substantial equivalence justification relies on demonstrating that the TrainFES Advanced device performs similarly to or meets the safety and effectiveness standards of the predicate device, K210002.

    The table below summarizes the comparison of key technical characteristics between the TrainFES Advanced and its predicate, Stella BIO, highlighting where "performance" is discussed in terms of meeting relevant standards or being considered equivalent. The reported "performance" for TrainFES Advanced is intrinsically linked to its compliance with these standards and the assertion that differences do not raise new safety or effectiveness concerns.

    | Characteristic / Specification | Acceptance Criteria (implied by predicate comparison) | Reported Device Performance (TrainFES Advanced) |
    |---|---|---|
    | Basic Device Characteristics | | |
    | Classification | Class II | Class II |
    | Prescription/OTC Use | Prescription and Home Use | Prescription and Home Use |
    | Environment of Use | Clinics, hospitals, and home | Clinics, hospitals, and home |
    | Indications for Use | Similar to predicate (specific conditions) | Similar to predicate, with specific conditions listed, and the functions of powered muscle stimulator and external functional neuromuscular stimulator are exactly the same |
    | Power Source | Battery-powered, compliant with IEC 62133 | Battery: Li-Ion 3.7V (4000mAh), compliant with IEC 62133 |
    | Method of Line Current Isolation | N/A (Battery) | N/A (Battery) |
    | Patient leakage current (Normal) | | |

    K Number
    K233987
    Device Name
    VERABAND™
    Date Cleared
    2024-06-17

    (182 days)

    Product Code
    Regulation Number
    882.5050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    260 Ann Arbor, Michigan 48103

    Re: K233987

    Trade/Device Name: VERABAND™
    Regulation Number: 21 CFR 882.5050
    | Regulation Number: | 21 CFR §882.5050

    Intended Use

    The VERABAND™ is a compact, lightweight, body-worn activity monitoring device designed to document physical movement associated with applications in physiological monitoring. The device is intended to monitor limb or body movements during daily living and sleep for a limited time interval (up to 30 days).

    The VERABAND™ can be used to assess activity in any instance where quantifiable analysis of physical motion is desired. VERABAND™ is not intended for diagnostic purposes.

    Device Description

    The VERABAND™ is a compact, wrist-worn battery-operated wearable device intended for collecting a patient's motion data for assessing patient activity. VERABAND™ is intended to acquire and store data while being worn during normal activities and/or during sleep. The device consists of a wearable band with compact housing for battery-powered on-board electronics with an accelerometer. The recorded activity data is timestamped and stored in non-volatile memory for later retrieval. Downloaded VERABAND™ data can be post-processed based on the timestamp and magnitude of acceleration for reporting.
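
    The description says only that downloaded data are post-processed "based on the timestamp and magnitude of acceleration for reporting," without defining the metric. As a generic illustration of that kind of post-processing, the sketch below bins timestamped tri-axial samples into fixed epochs and reports the mean acceleration-vector magnitude per epoch; the epoch length and data layout are assumptions, not VERABAND™ specifications.

```python
import numpy as np

def epoch_magnitude(timestamps_s: np.ndarray, xyz_g: np.ndarray, epoch_s: float = 60.0):
    """Mean acceleration-vector magnitude per fixed epoch, keyed by epoch start time."""
    magnitude = np.linalg.norm(xyz_g, axis=1)                        # per-sample |a| in g
    idx = ((timestamps_s - timestamps_s[0]) // epoch_s).astype(int)  # epoch index per sample
    unique_idx = np.unique(idx)
    starts = timestamps_s[0] + unique_idx * epoch_s
    means = np.array([magnitude[idx == i].mean() for i in unique_idx])
    return starts, means

# Made-up 5 minutes of 32 Hz samples
t = np.arange(0, 300, 1 / 32)
xyz = np.column_stack([np.sin(t), np.cos(t), np.ones_like(t)])
print(epoch_magnitude(t, xyz)[1])   # five per-minute magnitude means (~1.41 g each)
```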

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Veraband™ device based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Test | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Accelerometer Accuracy and Precision | Accelerometer shall meet full-range accuracy requirement. Accelerometer shall meet static calibration point accuracy requirement. Accelerometer shall meet repeatability and reproducibility requirement. | Pass |
    | Activation Trigger | Device shall activate at the required light level. Device shall not activate while in the packaging. | Pass |
    | Device Donning/Doffing | Device shall be able to survive the required donning and doffing. | Pass |
    | Device Band Separation and Elongation Forces | Device shall meet force requirements for elongation. Device shall meet force requirements for separation. | Pass |
    | Battery Life for Duration of Use | Device shall meet the battery life requirements for Expected Device Life. | Pass |
    | User Cleaning | Device shall maintain function after a required minimum number of cleanings. | Pass |
    | Device Sampling Rate and Full-Scale Dynamic Range | Device shall have a sampling rate and full-scale dynamic range that meets the requirements. | Pass |
    | Device Frequency Response | Device shall be within the requirement of the predicate's bandwidth. | Pass |
    | VERABAND™ Intended Use | Device shall meet repeatability and reproducibility requirements for activity levels. | Pass |
    | VERABAND™ Report Generation Comparison | Device output metrics shall meet the predicate comparison requirements. | Pass |
    | Usability | Device shall meet the related usability requirement survey scores. | Pass |
    | Packaging Testing | Device shall maintain function per requirements after shipping. | Pass |
    | EMC | Device shall meet the related EMC requirements. | Pass |
    | Electrical Basic Safety | Device shall meet the related electrical basic safety requirements. | Pass |
    | Biocompatibility Testing | Device shall meet the related biocompatibility requirements. | Pass |
    | Firmware Verification and Validation Testing | Device shall meet the related Firmware requirements. | Pass |
    | Software Verification and Validation Testing | Device shall meet the related Software requirements. | Pass |

    2. Sample Size and Data Provenance (for test set, if applicable)

    The document primarily describes non-clinical engineering and performance testing. It does not explicitly state a "test set" in the context of clinical data. For the "VERABAND™ intended use" test, it states "Simulated users wearing the device perform activities at different intensities of motion and wear compliance times." While it implies subjects, no specific sample size is provided. The data is non-clinical/simulated.

    3. Number of Experts for Ground Truth and Qualifications (for test set, if applicable)

    Not applicable, as the provided data focuses on non-clinical and simulated testing for performance validation rather than expert-derived ground truth for a test set of patient data.

    4. Adjudication Method (for test set, if applicable)

    Not applicable. The reported tests are primarily objective engineering and performance validations against predefined criteria, not subjective human evaluations requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document explicitly states: "The VERABAND™ did not require clinical studies to support substantial equivalence to the primary predicate device." Therefore, no MRMC study was conducted or reported.

    6. Standalone (Algorithm Only) Performance Study

    Yes, the majority of the reported testing falls under standalone performance. The "non-clinical testing summary" details various tests (e.g., accelerometer accuracy, battery life, sampling rate, frequency response, report generation comparison) that evaluate the device's technical functionality and performance in isolation or in comparison to a predicate device's output, without human-in-the-loop performance measurement.

    7. Type of Ground Truth Used

    For the non-clinical tests, the ground truth was based on:

    • Established engineering specifications and requirements (e.g., full-range accuracy, required light level, expected device life, minimum number of cleanings, sampling rate, full-scale dynamic range, predicate's bandwidth, repeatability and reproducibility for activity levels, predicate comparison requirements for output metrics).
    • Recognized consensus standards (e.g., IEC, ISO, ASTM).
    • Simulated motions and user activities.

    8. Sample Size for the Training Set

    The document does not mention a "training set" in the context of machine learning or AI. The device is described as an activity monitoring device with an accelerometer to acquire and store data, which is then post-processed. It doesn't appear to be an AI/ML-driven diagnostic device that would typically involve a training set for model development.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no mention of a "training set" for an AI/ML model. The device's operation relies on sensor data acquisition and processing based on established algorithms and engineering principles, not on a trained machine learning model.


    K Number
    K233618
    Manufacturer
    Date Cleared
    2024-04-03

    (142 days)

    Product Code
    Regulation Number
    882.5050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    4GA United Kingdom

    Re: K233618

    Trade/Device Name: Oxevision Sleep Device
    Regulation Number: 21 CFR 882.5050
    Oxford Science Park,
    Oxford, OX4 4GA
    UK |
    | Classification Name: | LEL, Biofeedback device, 21 CFR 882.5050

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    The Oxevision Sleep Device is an activity monitor designed and intended for documenting physical movements associated with applications in physiological monitoring. The device's intended use is to analyze subject activity, movement and physiological sign data associated with movement during sleep and to extract information about certain sleep parameters from these movements and physiological sign data.

    The device provides a timeline of periods when a bed space is occupied, and periods when a subject is asleep when the bed space is occupied.

    The Oxevision Sleep Device is software assessing video from a fixed-installation device for use within single occupancy bed spaces within hospitals, general care and secured environments.

    The Oxevision Sleep Device is indicated for use on subjects 18 years of age or older.

    Device Description

    Oxevision Sleep is a software-only medical device (SaMD) that provides noncontact sleep assessment in the inpatient setting based on the analysis of patient movement, activity and physiological sign data derived from video, without the need for contact devices to be attached to the patient or bed.

    The device consists of custom-designed software assessing video footage collected using off-the-shelf cameras installed within single occupancy bed spaces within hospitals, general care and secured environments. Proprietary software-controlled algorithms are used to derive patient movement, activity and physiological sign data and then to obtain information on bed occupancy and sleep state from the analysis of this data.

    The device software automates recognition of sleep periods, generation of sleep reports, and their presentation in a graphical display for use by a healthcare professional.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance for Oxevision Sleep Device

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Oxevision Sleep Device Reported Performance | Meets Criteria?
    Bed Occupancy Detection: Accuracy of periods of bed occupancy not inferior to 95% | 99% (95% CI: 99.0% - 99.7%) | Yes
    Sleep/Wake Classification (Overall Agreement): Not inferior to 82% | 90% (95% CI: 89.0% - 91.8%) | Yes
    Sleep/Wake Classification (Positive Agreement): Not inferior to 88% | 94% (95% CI: 92.3% - 95.6%) | Yes
    Sleep/Wake Classification (Negative Agreement): Not inferior to 55% | 80% (95% CI: 74.3% - 83.5%) | Yes
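
    To make the agreement metrics above concrete, here is a minimal sketch (assumed Python, not the sponsor's analysis code) of how overall, positive, and negative agreement could be computed from paired epoch-by-epoch sleep/wake labels, with PSG as the reference:

```python
# Minimal sketch (not the sponsor's analysis code): epoch-by-epoch agreement
# metrics of the kind reported above, where "sleep" is the positive class.
import numpy as np

def agreement_metrics(device_epochs, reference_epochs) -> dict:
    """Overall, positive, and negative agreement between two binary epoch
    sequences (1 = sleep, 0 = wake), aligned epoch-for-epoch."""
    device = np.asarray(device_epochs, dtype=bool)
    reference = np.asarray(reference_epochs, dtype=bool)
    overall = np.mean(device == reference)      # agreement over all epochs
    positive = np.mean(device[reference])       # agreement on reference-sleep epochs
    negative = np.mean(~device[~reference])     # agreement on reference-wake epochs
    return {"overall": overall, "positive": positive, "negative": negative}

# Example: 30-second epochs scored by the device and by majority-vote PSG.
device = np.array([1, 1, 1, 0, 1, 0, 1, 1])
psg    = np.array([1, 1, 0, 0, 1, 0, 1, 1])
print(agreement_metrics(device, psg))
# {'overall': 0.875, 'positive': 1.0, 'negative': ~0.667}
```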

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: 60 individuals, resulting in a total of 772.65 hours of data.
    • Data Provenance: The text does not explicitly state the country of origin. It mentions "a sample of 60 individuals" and "validation data collected from the 60 adults." The study appears to be prospective as it involved collecting "Reference measurements (physiological signals and video polysomnography data) ... concurrently from a standard off-the-shelf camera and hardware installed in two rooms."

    3. Number of Experts and Qualifications for Ground Truth (Test Set)

    • Polysomnography (PSG) Scoring:
      • Number of Experts: Three trained sleep physiologists.
      • Qualifications: "trained sleep physiologists, blinded to the video data collected by the standard off-the-shelf camera." They scored in accordance with the American Academy of Sleep Medicine Manual for the Scoring of Sleep and Associated Events version 2.6 of January 2020.
    • Bed Occupancy Annotation:
      • Number of Experts: Two reviewers.
      • Qualifications: "blinded to the algorithm development details."

    4. Adjudication Method for Test Set

    • Sleep State (PSG): The ground truth for sleep state was established using "triple-scored PSG data" with an "epoch-by-epoch majority vote." Epochs where no majority label was available (e.g., due to artifact) were excluded from the analysis; a minimal sketch of this majority-vote step appears after this list.
    • Bed Occupancy (Video Annotation): The ground truth for bed occupancy was established by "two reviewers, blinded to the algorithm development details" who "reviewed and annotated" the video data. The specific adjudication method beyond "annotated" by two reviewers is not explicitly detailed (e.g., if discrepancies were resolved by a third reviewer).
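
    A minimal sketch (hypothetical, not taken from the submission) of how such an epoch-by-epoch majority vote could be implemented, dropping epochs where the three scorers produce no majority label:

```python
# Hypothetical sketch of an epoch-by-epoch majority vote across three PSG
# scorers; epochs with no majority label (e.g. artifact) are excluded,
# mirroring the adjudication described above.
from collections import Counter
from typing import Optional, Sequence

def majority_label(labels: Sequence[str]) -> Optional[str]:
    """Return the label chosen by more than half the scorers, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count > len(labels) / 2 else None

def adjudicate(scorer_a, scorer_b, scorer_c):
    """Yield (epoch_index, label) for epochs where a majority exists."""
    for i, labels in enumerate(zip(scorer_a, scorer_b, scorer_c)):
        label = majority_label(labels)
        if label is not None:
            yield i, label

a = ["sleep", "sleep", "wake",     "artifact"]
b = ["sleep", "wake",  "wake",     "sleep"]
c = ["sleep", "sleep", "artifact", "wake"]
print(list(adjudicate(a, b, c)))
# [(0, 'sleep'), (1, 'sleep'), (2, 'wake')] -- epoch 3 has no majority and is excluded
```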

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document does not indicate that an MRMC comparative effectiveness study was done to evaluate the effect size of human readers improving with AI vs. without AI assistance. The study focuses on the standalone performance of the algorithm against reference standards.

    6. Standalone Performance

    • Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The entire clinical performance section describes the algorithm's performance against established reference standards for bed occupancy detection and sleep/wake classification.

    7. Type of Ground Truth Used

    • Bed Occupancy: Expert annotation of video data.
    • Sleep/Wake Classification: Expert consensus from "triple-scored PSG data" by trained sleep physiologists, adhering to AASM guidelines. This can be categorized as a type of expert consensus based on a gold-standard diagnostic tool (PSG).

    8. Sample Size for Training Set

    • The document does not explicitly state the sample size for the training set. The "Clinical Performance" section specifically focuses on the "validation data collected from the 60 adults."

    9. How Ground Truth for Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set (if distinct from the validation set) was established. It only describes the ground truth establishment for the clinical validation test set.

    K Number
    K232915
    Device Name
    EpiMonitor
    Manufacturer
    Date Cleared
    2024-02-15

    (149 days)

    Product Code
    Regulation Number
    882.1580
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    882.5050

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    EpiMonitor is a prescription only medical device system composed of a wearable device "EmbracePlus" and paired mobile software application "EpiMonitor" intended as an adjunct to seizure monitoring in adults and children aged 6 and up in a home environment or healthcare facilities. The device is worn on the wrist and senses Electrodermal Activity (EDA) and motion data to detect patterns that may be associated with either primary or secondary generalized tonic-clonic seizures in patients with epilepsy or at risk of having epilepsy. When a seizure event is detected, the wearable device component of EpiMonitor sends a command to a paired mobile device where the EpiMonitor App is programmed to initiate an alert to a designated caregiver. The EpiMonitor app incorporates additional detection sensitivity modes, "high" for use during periods of rest or sleeping or "low" for use during periods of low-intensity activity, in order to reduce false alarm incidents.

    EpiMonitor records, stores and transmits accelerometer, EDA, peripheral skin temperature and activity data for subsequent retrospective review by a trained healthcare professional via a cloud-based software.

    Device Description

    The EpiMonitor system consists of a wearable device and mobile application:

    • A wearable medical device called EmbracePlus,
    • A mobile application running on smartphones called "EpiMonitor"

    The EmbracePlus is worn on the user's wrist and continuously collects raw data via specific sensors; these data are continuously analyzed by an on-board algorithm (EpiAlgo 2.1), which assesses the physiological data and determines if the user may be undergoing a generalized tonic-clonic seizure (GTCS). The EpiAlgo has been validated through testing, using the gold-standard video-Electroencephalogram (EEG) methodology designed by a group of epileptologists at a top level 4 epilepsy center, on epilepsy patients experiencing GTCSs in hospital Epilepsy Monitoring Units. When a likely GTCS is detected, EmbracePlus sends, via Bluetooth Low Energy, a message to the EpiMonitor app. The EpiMonitor app communicates with the Empatica Cloud, which initiates, through an external provider, a voice call and SMS text message to summon the attention of user-designated caregiver(s).
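
    The alerting chain described above (on-wrist detection, BLE message to the app, cloud-initiated call and SMS) could be modeled roughly as follows; all class and function names here are hypothetical and are not Empatica's API:

```python
# Hypothetical sketch of the EpiMonitor alerting chain; names are illustrative,
# not Empatica's actual API.
from dataclasses import dataclass
from typing import List

@dataclass
class SeizureAlert:
    device_id: str
    detected_at_utc: str   # timestamp of the suspected GTCS

class EpiMonitorApp:
    """App side: receives a BLE alert message and asks the cloud to notify caregivers."""
    def __init__(self, cloud, caregivers: List[str]):
        self.cloud = cloud            # object with a notify(caregiver, alert) method (assumed)
        self.caregivers = caregivers  # user-designated caregiver contacts

    def handle_ble_alert(self, alert: SeizureAlert) -> None:
        for caregiver in self.caregivers:
            # The cloud relays the alert to an external voice-call/SMS provider.
            self.cloud.notify(caregiver, alert)
```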

    In addition to initiating alerts, the EpiMonitor app also continuously receives all the raw sensor data collected by the EmbracePlus. These data are analyzed by one of the EpiMonitor app software modules, EmpaDSP (paragraph 2.3.2), which computes the additional physiological parameters, such as activity during sleep and peripheral skin temperature.

    The EpiMonitor App is also responsible for transmitting, over a cellular data plan or Wi-Fi connection, the sensors' raw data, device information, and computed physiological parameters to the Empatica Cloud. On the Empatica Cloud, these data are stored and made available to healthcare providers via a specific cloud-based software called Care Monitoring Portal.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the EpiMonitor device, derived from the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the EpiMonitor device's Low-Sensitivity mode were evaluated based on Positive Percent Agreement (PPA) for seizure detection and False Alarm Rate (FAR) for both Epilepsy Monitoring Unit (EMU) data and real-world data.

    Metric (Low-Sensitivity Mode) | Acceptance Criteria (Implicit from "Acceptable") | Reported Device Performance (EMU Data) | Reported Device Performance (Real-World Data)
    Positive Percent Agreement (PPA) | Acceptable seizure detection accuracy | Age 6-21: 0.895 (corrected 0.791, CI: 0.619-0.925); Age >21: 1.000 (corrected 0.905, CI: 0.891-0.917) | Age 6-21: 0.87 (corrected 0.86, CI: 0.78-0.92); Age >21: 0.80 (corrected 0.77, CI: 0.64-0.87)
    False Alarm Rate (FAR) per 24 hours | Reduced rate of false alerts | Age 6-21: 0.70 (Overall), 0.91 (Mean); Age >21: 0.28 (Overall), 0.33 (Mean) | Age 6-21: 0.34 (Overall), 0.35 (Mean); Age >21: 0.25 (Overall), 0.29 (Mean)

    Note: The document explicitly states "Analysis of performance for the Low-Sensitivity alerting mode in the EpiMonitor system demonstrated acceptable seizure detection accuracy and a reduced rate of false alerts." This implies that the reported performance met the sponsor's internal acceptance criteria for these metrics. Specific numerical thresholds for "acceptable" are not explicitly stated within the provided text.
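
    As a rough illustration of the two metrics in the table, here is a hedged sketch (illustrative numbers and function names only, not the sponsor's code) of how PPA and FAR per 24 hours are typically computed:

```python
# Minimal sketch (assumed, not the sponsor's code) of the two headline metrics:
# Positive Percent Agreement (PPA) against adjudicated GTCS events, and
# False Alarm Rate (FAR) per 24 hours of monitoring.
def positive_percent_agreement(detected_events: int, reference_events: int) -> float:
    """Fraction of adjudicated GTCS events for which the device alerted."""
    return detected_events / reference_events

def false_alarm_rate_per_24h(false_alarms: int, monitoring_days: float) -> float:
    """Device alerts with no corresponding reference seizure, per 24 h of wear."""
    return false_alarms / monitoring_days

# Illustrative numbers only (not taken from the submission):
print(positive_percent_agreement(detected_events=9, reference_events=10))   # 0.9
print(false_alarm_rate_per_24h(false_alarms=7, monitoring_days=10.0))       # 0.7
```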

    2. Sample Sizes Used for the Test Set and Data Provenance

    For Epilepsy Monitoring Unit (EMU) Data (Retrospective Analysis):

    • Seizure Detection (PPA):
      • Patients: 24 (12 for age 6-21, 12 for age >21)
      • GTCS events: 36 (19 for age 6-21, 17 for age >21)
    • False Alarm Rate (FAR):
      • Patients: 141 (80 for age 6-21, 61 for age >21)
      • Days of monitoring: 241.62 (88.94 for age 6-21, 152.68 for age >21)
    • Data Provenance: The data was collected from patients observed in Epilepsy Monitoring Units. The exact geographic origin (country) is not specified, but the data was from "a top level 4 epilepsy center" (mentioned in device description for original EpiAlgo validation). This was a retrospective analysis of previously collected clinical data.

    For Real-World Data (Longitudinal Analysis) - based on Embrace2 wearable device:

    • Seizure Detection (PPA):
      • Patients: 1444 (601 for age 6-21, 843 for age >21)
      • GTCS events: 4782 (1157 for age 6-21, 3625 for age >21)
    • False Alarm Rate (FAR):
      • Patients: 1444 (601 for age 6-21, 843 for age >21)
      • Days of monitoring: 93983.3 (37594.2 for age 6-21, 56389.1 for age >21)
    • Data Provenance: "real-world data" captured using the Embrace2 wearable device, likely from home settings. The exact geographic origin (country) is not specified. This was a retrospective longitudinal analysis of real-world data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document states that the EpiAlgo was validated "using the gold-standard video-Electroencephalogram (EEG) methodology designed by a group of epileptologists at a top level 4 epilepsy center". It also refers to "adjudicated tonic-clonic seizure data" for the EMU data. This implies that epileptologists were involved in establishing the ground truth.

    • Number of experts: Not explicitly stated, but referred to as "a group of epileptologists."
    • Qualifications of experts: "epileptologists at a top level 4 epilepsy center." No specific experience (e.g., 10 years of experience) is detailed.

    4. Adjudication Method for the Test Set

    The document mentions "adjudicated tonic-clonic seizure data" for the EMU study. However, the specific adjudication method (e.g., 2+1, 3+1) is not explicitly described in the provided text.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No.
    • The study focuses on the performance of the algorithm itself (standalone and with different sensitivity modes), not on how human readers improve with or without AI assistance.
    • The effect size of human readers improving with AI vs. without AI assistance is not applicable as this type of study was not performed.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, a standalone study was done. The entire performance analysis for PPA and FAR presented in Tables 1-4 reflects the algorithm's performance (EpiAlgo ver 2.1) using the Low-Sensitivity mode, without human intervention in the detection process. The device detects an event, and the app initiates an alert; there's no mention of a human-in-the-loop directly influencing the detection sensitivity.

    7. Type of Ground Truth Used

    • Expert Consensus / Gold Standard (Video-EEG): The ground truth for seizure events was primarily established using gold-standard video-Electroencephalogram (EEG) methodology and "adjudicated tonic-clonic seizure data." This indicates expert consensus based on clinical and physiological evidence.

    8. Sample Size for the Training Set

    The provided text does not specify the sample size used for the training set of the EpiAlgo. It only describes the validation phases for the Low-Sensitivity mode.

    9. How the Ground Truth for the Training Set Was Established

    The provided text states: "The EpiAlgo has been validated through testing, using the gold-standard video-Electroencephalogram (EEG) methodology designed by a group of epileptologists at a top level 4 epilepsy center, from epilepsy patients experiencing GTCSs in hospital Epilepsy Monitoring Units."

    This implies that the training data's ground truth was established by epileptologists using video-EEG data from patients with generalized tonic-clonic seizures (GTCSs) in hospital Epilepsy Monitoring Units. This is consistent with clinical gold standards for seizure identification.


    K Number
    K230457
    Manufacturer
    Date Cleared
    2023-10-30

    (251 days)

    Product Code
    Regulation Number
    870.2300
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Physiological Signal, Radiofrequency | Class II | DRG | Cardiovascular | 882.5050

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    The Empatica Health Monitoring Platform is a wearable device and paired mobile and cloud-based software platform intended to be used by trained healthcare professionals or researchers for retrospective remote monitoring of physiologic parameters in ambulatory individuals 18 years of age and older in home-healthcare environments. As the platform does not provide real-time alerts related to variation of physiologic parameters, users should use professional judgment in assessing patient clinical stability and the appropriateness of using a monitoring platform designed for retrospective review.

    The device is intended for continuous data collection supporting intermittent retrospective review of the following physiological parameters:

    • Pulse Rate,
    • Blood Oxygen Saturation under no-motion conditions,
    • Respiratory Rate under no-motion conditions,
    • Peripheral Skin Temperature,
    • Electrodermal Activity,
    • Activity associated with movement during sleep

    The Empatica Health Monitoring Platform can be used to analyze circadian rhythms and assess activity in any instance where quantifiable analysis of physical motion is desirable.

    The Empatica Health Monitoring Platform is not intended for SpO2 monitoring in conditions of motion or low perfusion.

    The Empatica Health Monitoring Platform is intended for peripheral skin temperature monitoring, where monitoring temperature at the wrist is clinically indicated.

    The Empatica Health Monitoring Platform is not intended for Respiratory Rate monitoring in motion conditions. This device does not detect apnea and should not be used for detecting or monitoring cessation of breathing.

    The Empatica Health Monitoring Platform is not intended for Pulse Rate monitoring in patients with chronic cardiac arrhythmias, including atrial fibrillation and atrial/ventricular bigeminy and trigeminy, and is not intended to diagnose or analyze cardiac arrhythmias. The Empatica Health Monitoring Platform is not a substitute for an ECG monitor, and should not be used as the sole basis for clinical decision-making.

    Device Description

    The Empatica Health Monitoring Platform is a wearable device and software platform composed by:

    • A wearable medical device called EmbracePlus,
    • A mobile application running on smartphones called "Care App",
    • A cloud-based software platform named "Care Portal".

    The EmbracePlus is worn on the user's wrist and continuously collects raw data via specific sensors. These data are wirelessly transmitted via Bluetooth Low Energy to a paired mobile device where the Care App is running. The data received are analyzed by one of the Care App software modules, EmpaDSP, which computes the user's physiological parameters. Based on the version of the Care App installed, the user can visualize a subset of these physiological parameters. The Care App is also responsible for transmitting, over a cellular or Wi-Fi connection, sensors' raw data, device information, Care App-specific information, and computed physiological parameters to the Empatica Cloud. On the Empatica Cloud, these data are stored, further analyzed, and accessible by healthcare providers or researchers via a specific cloud-based software called Care Portal.

    The Empatica Health Monitoring Platform is intended for retrospective remote monitoring of physiological parameters in ambulatory adults in home-healthcare environments. It is designed to continuously collect data to support intermittent monitoring of the following physiological parameters and digital biomarkers by trained healthcare professionals or researchers: Pulse Rate (PR), Respiratory Rate (RR), blood oxygen saturation (SpO2), peripheral skin temperature (TEMP), and electrodermal activity (EDA). Activity sensors are used to detect sleep periods and to monitor the activity associated with movement during sleep.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Empatica Health Monitoring Platform, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Parameter | Acceptance Criteria (Subject Device) | Reported Device Performance | Comments
    Pulse Rate (PR)
    PR Range | 24 – 240 bpm | 24 – 240 bpm | Matches range
    PR Resolution | 1 bpm | 1 bpm | Matches resolution
    PR Accuracy (no-motion) | ≤ 3 bpm Arms | ≤ 3 bpm Arms | Meets criteria, tested against ECG
    PR Accuracy (motion) | ≤ 5 bpm Arms | ≤ 5 bpm Arms | Meets criteria, tested against ECG
    Respiratory Rate (RR)
    RR Range | 6 – 40 brpm | 6 – 40 brpm | Matches range
    RR Resolution | 1 brpm | 1 brpm | Matches resolution
    RR Accuracy (no-motion) | ≤ 3 brpm Arms | ≤ 3 brpm Arms | Meets criteria, tested against capnography. Not intended for motion conditions.
    Blood Oxygen Saturation (SpO2)
    SpO2 Range | 70 – 100% | 70 – 100% | Matches range. Not intended for motion or low perfusion conditions.
    SpO2 Resolution | 1% | 1% | Matches resolution
    SpO2 Accuracy | 3% Arms | 3% Arms | Meets criteria (implies compliance with ISO 80601-2-61 and FDA Guidance for Pulse Oximeters). No additional clinical data provided for this submission, relying on prior clearance K221282.
    Peripheral Skin Temperature (TEMP)
    Temperature Range | 0°C to 50°C | 0°C to 50°C | Matches range
    Temperature Resolution | 0.1°C | 0.1°C | Matches resolution
    Temperature Accuracy | ± 0.1°C within 30.0°C – 45.0°C range | ± 0.1°C within 30.0°C – 45.0°C range | Meets criteria. No additional bench tests provided for this submission, relying on prior clearance K221282.
    Electrodermal Activity (EDA)
    EDA Range | 0.01 μS – 100 μS | 0.01 μS – 100 μS | Matches range. No additional data or documentation provided for this submission, relying on prior clearance K221282.
    EDA Resolution | 1 digit ~ 55 pS | 1 digit ~ 55 pS | Matches resolution
    Activity/Sleep | Bench testing confirmed equivalence for activity counts and sleep detection with the predicate device. No additional bench testing provided for this submission, relying on prior clearance K221282.
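
    The "Arms" accuracy values in the table are root-mean-square differences between paired device and reference readings (e.g., PPG-derived pulse rate vs. ECG). A minimal sketch of that calculation, with illustrative values only:

```python
# Minimal sketch of the "Arms" accuracy metric: the root-mean-square
# difference between paired device and reference readings.
import numpy as np

def a_rms(device_readings, reference_readings) -> float:
    """Root-mean-square error of device readings against the reference."""
    device = np.asarray(device_readings, dtype=float)
    reference = np.asarray(reference_readings, dtype=float)
    return float(np.sqrt(np.mean((device - reference) ** 2)))

# Example: paired pulse-rate readings in bpm (illustrative values only).
ecg_pr    = [61, 72, 80, 95, 110]
device_pr = [63, 70, 82, 93, 112]
print(a_rms(device_pr, ecg_pr))   # 2.0 bpm, within the ≤ 3 bpm no-motion criterion
```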

    2. Sample Size Used for the Test Set and Data Provenance

    • Pulse Rate:
      • Study 1: 12 healthy adult subjects.
      • Study 2: 85 healthy adult subjects.
      • Study 3: 49 adult subjects (healthy, PVCs, other comorbidities).
      • Total N = 146 adult subjects.
      • Data Provenance: Not explicitly stated, but clinical studies are generally prospective in nature for regulatory submissions. Country of origin not specified.
    • Respiratory Rate:
      • Study 1: 14 healthy adult subjects.
      • Study 2: 46 healthy adult subjects.
      • Study 3: 17 adult subjects with various health conditions.
      • Study 4: 40 adult subjects with various health conditions.
      • Total N = 117 adult subjects.
      • Data Provenance: Not explicitly stated, but clinical studies are generally prospective in nature for regulatory submissions. Country of origin not specified.
    • SpO2, Temperature, EDA, Activity/Sleep: For these parameters, the submission relies on previous clearance (K221282), indicating no new clinical test data was provided for this specific submission. The reported performance for these parameters is thus based on the studies supporting K221282. Sample sizes and provenance for those underlying studies are not detailed in this document.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • The document does not mention the use of human experts to establish ground truth for the test sets.
    • Instead, for Pulse Rate, the ground truth was established using a reference electrocardiogram (ECG).
    • For Respiratory Rate, the ground truth was established using a capnography reference device.

    4. Adjudication Method for the Test Set

    • Not applicable as the ground truth was established against reference medical devices (ECG, capnography) rather than human expert consensus requiring adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The studies described are focused on the standalone performance of the device against reference standards.

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, standalone studies were performed. The clinical data presented for Pulse Rate and Respiratory Rate directly evaluate the accuracy of the device's computed values against reference standards, without human intervention in the measurement process. The device itself is described as a "platform intended to be used by trained healthcare professionals or researchers for retrospective remote monitoring," implying that the data collection and parameter computation are algorithmic, and review is done by humans.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, Etc.)

    • Pulse Rate: Reference Electrocardiogram (ECG).
    • Respiratory Rate: Reference Capnography device.
    • SpO2: Based on the technology description, it would typically be a co-oximeter or a clinically validated pulse oximeter meeting ISO standards. The document notes that no new clinical data for SpO2 was provided, relying on K221282, which would have established ground truth similarly.

    8. The Sample Size for the Training Set

    • The document does not provide details about the sample size for the training set used for the device's algorithms. It focuses entirely on the clinical validation (test set) data.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide details on how the ground truth for the training set was established. This information is typically not included in a 510(k) summary, which focuses on the validation of the final product.

    K Number
    K231532
    Manufacturer
    Date Cleared
    2023-06-23

    (28 days)

    Product Code
    Regulation Number
    882.5050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K231532

    Trade/Device Name: ActiGraph LEAP activity monitor (ActiGraph LEAP) Regulation Number: 21 CFR 882.5050
    Classification Name: Device, Sleep Assessment
    Regulation Number: 882.5050

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third party | Expedited review
    Intended Use

    The ActiGraph LEAP™ is a small worn activity monitor designed for documenting physical movement associated with applications in physiological monitoring. The device is intended to monitor the activity associated with movement during sleep. The ActiGraph LEAP™ can be used to analyze circadian rhythms and assess activity in any instance where quantifiable analysis of physical motion is desirable.

    Device Description

    The ActiGraph LEAP™ is a wrist-worn wearable device intended to continuously record high resolution digital acceleration data associated with a patient's physical movement. In practice, a healthcare professional or researcher can prescribe the device to collect physiological data from patients in applications where quantifiable analysis of physical motion is desirable. Having physical characteristics like those of an electronic wristwatch, the device is set to collect data by the healthcare professional then placed on the subject's wrist. The device is designed to be worn during normal activities and/or during sleep over a period of days to weeks. The patient does not need to interact with the device to control the operation or data collection. The data stored on the device can be downloaded via USB or Bluetooth Low Energy and made accessible to healthcare professionals or researchers for further analysis.

    The ActiGraph LEAP™ device is supported by accessories for recharging the battery and transferring data from the device. A USB Charging Dock with a three-foot USB A cable is provided for both charging and data transfer to a PC using the supplied communication software. The USB Charging Dock connects to the recessed electrical contacts on the back of the device. An off-the-shelf international Wall Mount AC Adapter is also supplied for optional wall charging. The USB Charging Dock can be plugged into the Wall Mount AC Adapter's USB A port for charging the device.

    The device uses a high-resolution digital accelerometer to accurately measure linear accelerations in 3 axes associated with the patient's physical movement. The accelerometer technology is a microelectromechanical system (MEMS) implemented as an integrated circuit. The accelerometer data is converted to a digital representation on the MEMS accelerometer and then recorded, with timestamp, to the device's on-board memory. The memory is an 8 Gb serial NAND flash capable of storing 30 days of accelerometer data under the default operating mode. The sample rate of the accelerometer is configurable at the following rates: 32 Hz, 64 Hz, 128 Hz and 256 Hz.
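
    As a rough consistency check on the stated memory capacity, the following back-of-the-envelope sketch (assuming 16-bit samples and no compression, neither of which is stated in the submission) estimates raw 3-axis data volume at the default 32 Hz rate:

```python
# Back-of-the-envelope check (assumptions, not from the submission) that
# 8 Gb of NAND can hold ~30 days of 3-axis data at the default 32 Hz rate,
# assuming 16-bit (2-byte) samples per axis and no compression.
SAMPLE_RATE_HZ = 32          # default operating mode
AXES = 3
BYTES_PER_SAMPLE = 2         # assumed 16-bit resolution
SECONDS_PER_DAY = 86_400

bytes_per_day = SAMPLE_RATE_HZ * AXES * BYTES_PER_SAMPLE * SECONDS_PER_DAY
total_bytes = bytes_per_day * 30
flash_bytes = 8 * (1024 ** 3) / 8    # 8 gigabits ~= 1 GiB

print(f"{bytes_per_day / 1e6:.1f} MB/day, {total_bytes / 1e9:.2f} GB for 30 days "
      f"vs. {flash_bytes / 1e9:.2f} GB of flash")
# ~16.6 MB/day, ~0.50 GB for 30 days vs. ~1.07 GB of flash
```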

    The LCD display indicates the battery level, current functional state of the device, and date and time. The device has a 30-day battery life under the default operating mode and can be charged using the USB Charging Dock accessory. The display does not provide feedback to the wearer/patient regarding data measures. There is a simple button on the side used to turn on the display so the wearer can read the date/time; button presses are recorded in the log.

    The device firmware executes on internal processors to control the device operations, display, and external communication protocols. The accelerometer sensor data can be downloaded from the device either via USB (using the dock) or via Bluetooth Low Energy.

    AI/ML Overview

    The provided text is a 510(k) summary for the ActiGraph LEAP activity monitor. It details device characteristics, intended use, and comparison to a predicate device. However, it does not contain any information about acceptance criteria or a study that proves the device meets specific performance criteria related to its functionality (e.g., accuracy of movement tracking, sleep monitoring, or circadian rhythm analysis).

    The document focuses on demonstrating substantial equivalence to a predicate device based on:

    • Same Indications for Use: Both the predicate and subject devices are intended to monitor activity associated with movement during sleep, analyze circadian rhythms, and assess activity where quantifiable analysis of physical motion is desirable.
    • Similar Technological Characteristics: Both use MEMS accelerometers, are wrist-worn, have similar displays, power sources, and data transfer methods.
    • Biocompatibility Testing: This addresses changes in patient-contacting materials, ensuring they are still safe.

    The document explicitly states: "Clinical testing is not applicable to this submission." This means that no clinical study was conducted to establish performance metrics like accuracy or effectiveness against ground truth on human subjects for this 510(k) clearance.

    Therefore, I cannot provide the requested information regarding acceptance criteria and a study proving the device meets them, as that information is not present in the provided text. The submission focuses on showing that the new device is substantially equivalent to an already cleared device, rather than proving de novo performance against specific acceptance criteria.

    To answer your request: if the document did contain a study with acceptance criteria, the information would typically be presented as follows:

    Example of how the information would be presented if available in a different document:

    1. Table of Acceptance Criteria and Reported Device Performance (Hypothetical):

    Metric | Acceptance Criteria | Reported Device Performance (Hypothetical)
    Sleep/Wake Accuracy | Sensitivity > 90%, Specificity > 85% vs. Polysomnography | Sensitivity: 92.5%, Specificity: 88.0%
    Activity Count Error | Mean Absolute Error < 0.8 vs. Actigraphy Reference Device | Pearson's r: 0.85

    2. Sample Size and Data Provenance (Hypothetical):

    • Test Set Sample Size: 150 participants (e.g., 50 healthy adults, 50 insomnia patients, 50 shift workers).
    • Data Provenance: Prospective, multi-center study conducted in the USA, UK, and Germany.

    3. Number and Qualifications of Experts (Hypothetical):

    • Experts: 3 Board-Certified Sleep Physicians (average 12 years of experience in sleep medicine, specializing in polysomnography interpretation).

    4. Adjudication Method (Hypothetical):

    • Adjudication: 2+1. Initial assessment by two experts; in cases of disagreement, a third senior expert provided a binding decision.

    5. MRMC Comparative Effectiveness Study (Hypothetical):

    • MRMC Study: Yes, an MRMC study was conducted comparing sleep staging performance of human experts with and without AI assistance from the ActiGraph LEAP data.
    • Effect Size: Human readers improved sleep stage classification accuracy by an average of 7% (from 82% to 89%) when assisted by the AI algorithm compared to performing the task unassisted.

    6. Standalone Performance (Hypothetical):

    • Standalone Performance: Yes, the algorithm achieved 91% accuracy in detecting sleep onset/offset events and 87% accuracy in differentiating wake, NREM, and REM sleep stages when compared to polysomnography.

    7. Type of Ground Truth (Hypothetical):

    • Ground Truth: Polysomnography (PSG) for sleep parameters, motion capture system for activity counts, and validated actigraphy devices for circadian rhythm analysis.

    8. Training Set Sample Size (Hypothetical):

    • Training Set Sample Size: 5,000 subjects.

    9. How Ground Truth for Training Set was Established (Hypothetical):

    • Training Ground Truth: Ground truth for the training set was established through a combination of expert-annotated polysomnography data from a diverse patient population, alongside simultaneously recorded high-resolution motion data from the device and other reference sensors. Annotations were initially made by trained technicians and then reviewed and confirmed by a panel of 5 board-certified sleep specialists using an iterative consensus approach.
