
510(k) Data Aggregation

    K Number: K250058
    Device Name: NEAT 001
    Date Cleared: 2025-04-10 (90-day review)
    Product Code:
    Regulation Number: 882.1400
    Reference & Predicate Devices: EnsoSleep (predicate)
    Intended Use

    Automatic scoring of sleep EEG data to identify stages of sleep according to the American Academy of Sleep Medicine definitions, rules, and guidelines. It is to be used with adult populations.

    Device Description

    The Neurosom EEG Assessment Technology (NEAT) is a medical device software application that allows users to perform sleep staging post-EEG acquisition. NEAT allows users to review sleep stages on scored MFF files and perform sleep scoring on unscored MFF files.

    NEAT software is designed in a client-server model and comprises a User Interface (UI) that runs in a Chrome web browser on the client computer and Command Line Interface (CLI) software that runs on a Forward-Looking Operations Workflow (FLOW) server.

    The user interacts with the NEAT UI through the FLOW front-end application to initiate the NEAT workflow on unscored MFF files and visualize sleep-scoring results. Sleep stages are scored by the containerized neat-cli software on the FLOW server using the EEG data. The sleep stages are then added to the input MFF file as an event track file in XML format. Once the new event track file is created, the NEAT UI component retrieves the sleep events from the FLOW server and displays a hypnogram (visual representation of sleep stages over time) on the screen, along with sleep statistics and other subject details. Additionally, a summary of the sleep scoring is automatically generated and added to the same participant in the FLOW server in PDF format.
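    The workflow above (neat-cli scores the EEG, then writes the stages back into the MFF file as an XML event track) can be sketched as follows. The actual MFF event-track schema is not described in the clearance summary, so the element names, attribute names, and microsecond time units below are illustrative assumptions only.

    ```python
    import xml.etree.ElementTree as ET

    # AASM sleep stages are scored per 30-second epoch.
    EPOCH_SECONDS = 30

    def stages_to_event_track(stages, start_us=0):
        """Serialize per-epoch stage labels into a simple XML event track.

        `stages` is a list of AASM labels (e.g. "W", "N1", "N2", "N3", "R").
        Element/attribute names are hypothetical, not the real MFF schema.
        """
        root = ET.Element("eventTrack", name="SleepStages")
        for i, stage in enumerate(stages):
            ev = ET.SubElement(root, "event")
            ET.SubElement(ev, "beginTime").text = str(start_us + i * EPOCH_SECONDS * 1_000_000)
            ET.SubElement(ev, "duration").text = str(EPOCH_SECONDS * 1_000_000)
            ET.SubElement(ev, "label").text = stage
        return ET.tostring(root, encoding="unicode")

    xml_out = stages_to_event_track(["W", "N1", "N2", "N3", "R"])
    ```

    A UI component could then parse this track to build the hypnogram: each `event` contributes one 30-second segment at the height of its stage label.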

    AI/ML Overview

    The FDA 510(k) Clearance Letter for NEAT 001 provides information about the device's acceptance criteria and the study conducted to demonstrate its performance.

    Acceptance Criteria and Device Performance

    The core acceptance criteria for NEAT 001, as demonstrated by the comparative clinical study, are based on its ability to classify sleep stages (Wake, N1, N2, N3, REM) with performance comparable to the predicate device, EnsoSleep, and within the variability observed among expert human raters.

    Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined numerical "acceptance criteria" for each metric (Sensitivity, Specificity, Overall Agreement) that NEAT 001 had to meet. Instead, the approach was a comparative effectiveness study against a predicate device (EnsoSleep), with the overarching criterion being "substantial equivalence" as interpreted by performance falling within the range of differences expected among expert human raters.

    Therefore, the "acceptance criteria" are implied by the findings of substantial equivalence. The "reported device performance" is given in terms of the comparison between NEAT and EnsoSleep, and their differences relative to human agreement variability.

    | Metric / Sleep Stage | NEAT Performance (vs. Predicate EnsoSleep) | Acceptance Criterion (Implied) |
    |---|---|---|
    | Wake (Wa) | Equivalent performance (1-2% difference) | Difference within range of human agreement variability |
    | REM (R) | EnsoSleep performed better (3-4% difference) | Difference within range of human agreement variability (stated as 3% for the CSF data set) |
    | N1, overall | EnsoSleep better (4-7%) | Difference within range of human agreement variability (exceeded only in the BEL data set) |
    | N1, sensitivity | NEAT substantially better (8-20%) | Not a primary equivalence metric; noted as an area where NEAT excels |
    | N1, specificity | EnsoSleep better (5-9%) | Not a primary equivalence metric; noted |
    | N2, overall | EnsoSleep marginally better (5%) for the BEL data set | Difference within range of human agreement variability |
    | N2, sensitivity | EnsoSleep more sensitive (22%) | Not a primary equivalence metric; noted |
    | N2, specificity | EnsoSleep less specific (9-11%) | Not a primary equivalence metric; noted |
    | N3, overall | Equivalent (1% difference overall) | Difference within range of human agreement variability |
    | N3, sensitivity | NEAT substantially better (15-39%) | Not a primary equivalence metric; noted as an area where NEAT excels |
    | N3, specificity | EnsoSleep marginally better (3-4%) | Not a primary equivalence metric; noted |
    | General conclusion | Statistically significant differences, but practically within the range of differences expected among expert human raters | Substantial equivalence to predicate device |
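    The per-stage sensitivity, specificity, and overall-agreement figures above are standard one-vs-rest statistics computed epoch by epoch against the gold-standard scoring. A minimal sketch of that computation (the clearance summary does not publish the exact formulas used, so this is the conventional definition, not a confirmed reproduction):

    ```python
    def stage_metrics(gold, pred, stage):
        """One-vs-rest sensitivity, specificity, and overall agreement for a
        single sleep stage, compared epoch by epoch against the gold standard."""
        tp = sum(1 for g, p in zip(gold, pred) if g == stage and p == stage)
        fn = sum(1 for g, p in zip(gold, pred) if g == stage and p != stage)
        fp = sum(1 for g, p in zip(gold, pred) if g != stage and p == stage)
        tn = sum(1 for g, p in zip(gold, pred) if g != stage and p != stage)
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        agreement = (tp + tn) / len(gold)
        return sensitivity, specificity, agreement

    # Toy example: five epochs, one N1 epoch mis-scored as N2.
    gold = ["W", "N1", "N2", "N2", "R"]
    pred = ["W", "N2", "N2", "N2", "R"]
    n2_sens, n2_spec, n2_agree = stage_metrics(gold, pred, "N2")
    ```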

    Study Details

    Here's a breakdown of the study details based on the provided text:

    1. Sample Size and Data Provenance

    • Test Set Sample Size: The exact number of participants or EEG recordings in the test set is not explicitly stated. The document refers to "two data sets" (referred to as "BEL data set" and "CSF data set") used for testing both NEAT and EnsoSleep. The large resampling number (R=2000 resamples for bootstrapping) suggests a dataset size sufficient to yield small confidence intervals.
    • Data Provenance:
      • Country of Origin: Not explicitly stated.
      • Retrospective or Prospective: Not explicitly stated, but the mention of "All data files were scored by EnsoSleep" and "All data files were scored by NEAT" implies these were pre-existing datasets, making them retrospective.
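    The R=2000 bootstrap mentioned above can be illustrated with a short sketch. Note the resampling unit is an assumption here (epochs are resampled with replacement; the study may instead have resampled whole recordings), so this shows the technique, not the study's exact procedure:

    ```python
    import random

    def bootstrap_ci(gold, pred_a, pred_b, n_resamples=2000, alpha=0.05, seed=0):
        """Bootstrap a confidence interval for the difference in epoch-level
        overall agreement between two automated scorers (A minus B), by
        resampling epochs with replacement and using the percentile method."""
        rng = random.Random(seed)
        n = len(gold)
        diffs = []
        for _ in range(n_resamples):
            idx = [rng.randrange(n) for _ in range(n)]
            agree_a = sum(gold[i] == pred_a[i] for i in idx) / n
            agree_b = sum(gold[i] == pred_b[i] for i in idx) / n
            diffs.append(agree_a - agree_b)
        diffs.sort()
        lo = diffs[int(alpha / 2 * n_resamples)]
        hi = diffs[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi
    ```

    With a large test set, 2000 resamples are enough for stable percentile endpoints, which is consistent with the document's remark that the dataset was large enough to yield small confidence intervals.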

    2. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The study refers to "established gold standard" and "human agreement variability" among "expert human raters," implying multiple experts.
    • Qualifications of Experts: Not explicitly stated beyond "expert human raters." No details are provided regarding their specific medical background (e.g., neurologists, sleep specialists), years of experience, or board certifications.

    3. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document simply refers to "the established gold standard." It does not mention whether this gold standard was derived from a single expert, consensus among multiple experts, or a specific adjudication process (like 2+1 or 3+1).

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? A direct MRMC comparative effectiveness study involving human readers assisting with AI vs. without AI assistance was not explicitly described. The study primarily focuses on comparing the standalone performance of NEAT (the AI) against the standalone performance of the predicate device (EnsoSleep), and then interpreting these differences in the context of human-to-human agreement variability.
    • Effect Size of Human Reader Improvement: Since a direct MRMC study with human readers assisting AI was not detailed, there is no information provided on the effect size of how much human readers improve with AI vs. without AI assistance.

    5. Standalone Performance (Algorithm Only)

    • Was a standalone study done? Yes. The study evaluated the "segment-by-segment" performance of NEAT and EnsoSleep algorithms directly against the "established gold standard." This is a measure of the algorithm's standalone performance without human input during the scoring process.
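    For segment-by-segment standalone comparisons like this, a common companion metric is Cohen's kappa, which corrects raw epoch agreement for chance. The clearance summary does not state that kappa was reported, so the sketch below is illustrative of the general technique only:

    ```python
    from collections import Counter

    def cohens_kappa(gold, pred):
        """Chance-corrected epoch-level agreement (Cohen's kappa) between an
        algorithm's scoring and the gold standard. Illustrative; the 510(k)
        summary does not confirm which agreement statistics were used."""
        n = len(gold)
        # Observed agreement: fraction of epochs scored identically.
        po = sum(g == p for g, p in zip(gold, pred)) / n
        # Expected chance agreement from the two scorers' stage distributions.
        gold_counts, pred_counts = Counter(gold), Counter(pred)
        pe = sum(gold_counts[s] * pred_counts[s] for s in gold_counts) / (n * n)
        return (po - pe) / (1 - pe) if pe != 1 else 1.0
    ```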

    6. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for the test set was based on an "established gold standard" for sleep stage classification. This strongly implies expert consensus or expert scoring of the EEG data according to American Academy of Sleep Medicine definitions, rules, and guidelines. Pathology or outcomes data were not used for sleep staging ground truth.

    7. Training Set Sample Size

    • Training Set Sample Size: The sample size for the training set is not explicitly stated in the provided document.

    8. How Ground Truth for Training Set Was Established

    • How Ground Truth for Training Set Was Established: The document states that neat-cli "leverages Python libraries for identifying stages of sleep on MFF files using Machine Learning (ML)." However, it does not explicitly describe how the ground truth for the training set was established. Typically, for ML models, the training data's ground truth would also be established by expert annotation or consensus, similar to the test set ground truth, but this is not confirmed in the provided text.