Search Results
Found 438 results
510(k) Data Aggregation
(108 days)
Florida 32615
Re: K251480
Trade/Device Name: PV01 PVDF Effort Sensor
Regulation Number: 21 CFR 882.1400
Polysomnograph with Electroencephalograph
FDA Product Code: SFK
CFR References: 21 CFR 882.1400
a new product code under the same CFR Reference as the Predicate Device.
| CFR Reference | 21 CFR 882.1400 - Electroencephalograph | 21 CFR 882.1400 - Electroencephalograph | 21 CFR 868.2375 - Breathing frequency monitor |
The PVDF Effort Sensor is intended to measure and output respiratory effort signals from a patient for archival in a sleep study. The sensor is an accessory to a polysomnography system which records and conditions the physiological signals for analysis and display, such that the data may be analyzed by a qualified sleep clinician to aid in the diagnosis of sleep disorders.
The PVDF Effort Sensor is intended for use on both adults and children by healthcare professionals within a hospital, laboratory, clinic, or nursing home, or outside of a medical facility under the direction of a medical professional.
The PVDF Effort Sensor does not include or trigger alarms, and is not intended to be used alone as, or as a critical component of:
- an alarm or alarm system;
- an apnea monitor or apnea monitoring system; or
- a life monitor or life monitoring system.
The PV01 PVDF Effort Sensor is a respiratory effort monitoring accessory designed for use during sleep studies to assess breathing patterns by measuring chest and abdominal wall movement. The device functions as an accessory to polysomnography (PSG) systems, enabling qualified sleep clinicians to analyze respiratory data for the diagnosis of sleep disorders.
The sensor consists of two main components: a PVDF (polyvinylidene fluoride) sensor module and an elastic belt. The sensor module contains two plastic enclosures connected by a piezoelectric PVDF sensing element encased in a silicone laminate. The PVDF material generates a tiny voltage that is output through the lead wire to the sleep amplifier. The change in voltage as the tension on the PVDF film fluctuates corresponds to the breathing of the patient. Since the PVDF material generates voltage, the sensor does not require a battery or power from the amplifier. The output signal is processed by the sleep recording system for monitoring and post-study analysis.
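For intuition about the signal chain described above, here is a minimal, hypothetical sketch of how a PSG amplifier might condition such a voltage into a respiratory effort trace. The sample rate, filter band, and simulated sensor model are all assumptions for illustration, not details from the 510(k):

```python
# Hypothetical signal-conditioning sketch; sample rate, band edges, and the
# simulated sensor model are assumptions, not details from the 510(k).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # one minute of data
rng = np.random.default_rng(0)

# Simulated raw sensor voltage: a ~0.25 Hz breathing component plus slow
# baseline drift and broadband noise.
raw_mv = (0.5 * np.sin(2 * np.pi * 0.25 * t)
          + 0.2 * np.sin(2 * np.pi * 0.01 * t)
          + 0.05 * rng.standard_normal(t.size))

# Band-pass around plausible respiratory rates (assumed 0.1-1.0 Hz).
b, a = butter(2, [0.1 / (fs / 2), 1.0 / (fs / 2)], btype="band")
effort = filtfilt(b, a, raw_mv)

# Apnea (cessation of breathing) would show up as a flattened effort trace.
print(f"peak-to-peak effort amplitude: {np.ptp(effort):.3f} (arbitrary units)")
```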
The PV01 PVDF Effort Sensor is intended for prescription use only by healthcare professionals in hospitals, sleep laboratories, clinics, nursing homes, or in home environments under medical professional direction. The device is designed for use on both adults and children participating in sleep disorder studies. The sensor is intended to be worn over clothes and not directly on the patient's skin.
The 510(k) clearance letter for the PV01 PVDF Effort Sensor does not contain the specific details required to fully address all aspects of your request regarding acceptance criteria and the study proving the device meets them. This document is a regulatory approval letter, summarizing the basis for clearance, not a detailed study report.
However, based on the provided text, the following information can be extracted or inferred:
Overview of Device Performance Study
The PV01 PVDF Effort Sensor underwent "comprehensive verification and validation testing" including "functional and performance evaluations" and "validation studies" to confirm it meets design specifications and is safe and effective. Additionally, "comparative testing against the Reference Device" was performed.
This suggests that the performance evaluation primarily focused on:
- Safety Tests: Compliance with UL 60601-1 standards to ensure electrical and liquid ingress safety.
- Usability and Validation Test: Assessment of user experience and comfort during a simulated sleep study.
- Performance Comparison Test: Electrical signal output comparison to a legally marketed predicate device under simulated breathing conditions.
- Temperature Range Test: Verification of signal output performance at extreme operating temperatures.
Acceptance Criteria and Reported Device Performance
Based on the "Summary of Tests Performed" section, the following can be inferred:
Acceptance Criteria Category | Specific Test / Method | Acceptance Criteria (Inferred from "Results" column) | Reported Device Performance |
---|---|---|---|
Safety | UL 60601-1 Dielectric Strength | Device must pass dielectric strength tests per standard. | Passed: "All tests passed" |
Safety | UL 60601-1 Ingress of Liquids | Device must pass ingress of liquids tests per standard. | Passed: "All tests passed" |
Safety | UL 60601-1 Patient Leads | Device must pass patient lead tests per standard. | Passed: "All tests passed" |
Usability/User Experience | Usability and Validation Test (Survey) | Participants to rate ease-of-use and comfort highly; no reports of use errors or adverse events. | Met: "All participants rated the sensor high for ease-of-use and comfort. There were no reports of use errors nor adverse events." |
Functional Performance | Performance Comparison Test (Simulated breathing) | Output signals must be very similar to the Reference Device and clearly show breathing and cessation of breathing. | Met: "The output signals were very similar and clearly showed breathing and the cessation of breathing." |
Environmental Performance | Temperature Range Test (Operating temperature verification) | Output signal must meet all requirements at low and high operating temperatures. | Met: "The output signal met all requirements at both temperatures." |
Missing Information and Limitations:
The provided FDA 510(k) clearance letter is a high-level summary and does not contain the granular details typically found in a full study report. Therefore, most of the following requested information cannot be extracted directly from this document.
- Sample size used for the test set and data provenance:
- Test Set Size: Not specified for any of the performance tests. For the usability test, it mentions "Participants" (plural), but no number. For the performance comparison test, it states "Both devices were placed on a rig," implying a comparison, but no human subject or case count.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The usability test mentions "participants," potentially implying prospective data collection, but this is a broad inference.
- Number of experts used to establish the ground truth for the test set and their qualifications:
- Not Applicable/Not Specified: The device is a "PVDF Effort Sensor" that measures and outputs respiratory effort signals. Its purpose is to provide raw physiological data for a "qualified sleep clinician to aid in the diagnosis of sleep disorders." The device itself does not provide a diagnosis or interpretation that would require expert ground truth labeling in the traditional sense of an AI diagnostic device (e.g., image-based AI). The performance assessment appears to be against expected signal characteristics and comparison to a known device, not against clinical ground truth established by experts.
- Adjudication method for the test set:
- Not Applicable/Not Specified: Given the nature of the device (a sensor outputting physiological signals) and the described tests, a formal adjudication process (like for interpreting medical images) is not mentioned or implied.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:
- No: This type of study (MRMC for AI assistance) is not mentioned. The device is a sensor, not an AI interpretative tool designed to assist human readers directly. It provides raw data for clinicians to analyze.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- Partially Yes (for the sensor itself): The "Performance Comparison Test" and "Temperature Range Test" assess the device's signal output performance independently without a human in the loop for interpretation. The "Safety Tests" are also standalone tests on the device's physical and electrical properties.
- The type of ground truth used:
- Physiological Simulation / Device Output Comparison: For the "Performance Comparison Test," the ground truth was essentially the simulated breathing patterns produced by a "rig" and the expected output signals of a known predicate/reference device.
- User Feedback / Self-Reported Metrics: For the "Usability and Validation Test," the ground truth was the participants' subjective feedback on comfort and ease-of-use, and the absence of reported use errors or adverse events.
- Compliance with Standards: For "Safety Tests," the ground truth was compliance with the specified clauses of the UL 60601-1 standard.
- The sample size for the training set:
- Not Applicable/Not Specified: The PV01 PVDF Effort Sensor is described as a passive hardware sensor ("generates a tiny voltage," "does not require a battery or power from the amplifier") that measures physical movement. It is not an AI/ML algorithm that requires a "training set" in the computational sense.
- How the ground truth for the training set was established:
- Not Applicable: As stated above, there is no mention or implication of a training set as this is a hardware sensor, not an AI/ML algorithm.
In summary, the provided document gives a high-level overview of the acceptance criteria met for regulatory clearance, primarily focusing on safety, basic functional performance relative to another device, and usability. It does not delve into the detailed statistical methodology and independent ground truth establishment typical of AI/ML device studies.
(315 days)
21 C.F.R. §882.1400 Electroencephalograph
21 C.F.R. §870.1100 Blood-pressure alarm
21 C.F.R. §870.1425 Programmable diagnostic computer
The monitor B105M, B125M, B155M, B105P and B125P are portable multi-parameter patient monitors intended to be used for monitoring, recording, and to generate alarms for multiple physiological parameters of adult, pediatric, and neonatal patients in a hospital environment and during intra-hospital transport.
The monitor B105M, B125M, B155M, B105P and B125P are intended for use under the direct supervision of a licensed health care practitioner.
The monitor B105M, B125M, B155M, B105P and B125P are not Apnea monitors (i.e., do not rely on the device for detection or alarm for the cessation of breathing). These devices should not be used for life sustaining/supporting purposes.
The monitor B105M, B125M, B155M, B105P and B125P are not intended for use during MRI.
The monitor B105M, B125M, B155M, B105P and B125P can be stand-alone monitors or interfaced to other devices via network.
The monitor B105M, B125M, B155M, B105P and B125P monitor and display: ECG (including ST segment, arrhythmia detection, ECG diagnostic analysis and measurement), invasive blood pressure, heart/pulse rate, oscillometric non-invasive blood pressure (systolic, diastolic and mean arterial pressure), functional oxygen saturation (SpO2) and pulse rate via continuous monitoring (including monitoring during conditions of clinical patient motion or low perfusion), temperature with a reusable or disposable electronic thermometer for continual monitoring of Esophageal/Nasopharyngeal/Tympanic/Rectal/Bladder/Axillary/Skin/Airway/Room/Myocardial/Core/Surface temperature, impedance respiration, respiration rate, airway gases (CO2, O2, N2O, anesthetic agents, anesthetic agent identification and respiratory rate), Cardiac Output (C.O.), Entropy, neuromuscular transmission (NMT) and Bispectral Index (BIS).
The monitor B105M, B125M, B155M, B105P and B125P are able to detect and generate alarms for ECG arrhythmias: Asystole, Ventricular tachycardia, VT>2, Ventricular Bradycardia, Accelerated Ventricular Rhythm, Ventricular Couplet, Bigeminy, Trigeminy, "R on T", Tachycardia, Bradycardia, Pause, Atrial Fibrillation, Irregular, Multifocal PVCs, Missing Beat, SV Tachy, Premature Ventricular Contraction (PVC), Supra Ventricular Contraction (SVC) and Ventricular fibrillation.
The proposed monitors B105M, B125M, B155M, B105P and B125P are a new version of the multi-parameter patient monitors developed from the predicate monitors B105M, B125M, B155M, B105P and B125P (K213490) to provide an additional monitored parameter, Bispectral Index (BIS), by supporting the additional optional E-BIS module (K052145), which is used in conjunction with the Covidien BISx module (K072286).
In addition to the added parameter, the proposed monitors also offer the following enhancements:
- Provided data connection with GE HealthCare anesthesia devices to display the parameters measured from anesthesia devices (Applicable for B105M, B125M and B155M).
- Modified Early Warning Score calculation provided (see the example after this list).
- Separated low priority alarms user configurable settings from the combined High/Medium/Low priority options.
- Provided additional customized notification tool to allow clinicians to configure the specific notification condition of one or more physiological parameters measured by the monitor. (Applicable for B105M, B125M and B155M).
- Enhanced User Interface in Neuromuscular Transmission (NMT), Respiration Rate and alarm overview.
- Provided Venous Stasis to assist venous catheterization with NIBP cuff inflation.
- Supported alarm light brightness adjustment.
- Supported alarm audio pause by gesture (Not applicable for B105M and B105P).
- Supported automatic screen brightness adjustment.
- Supported network laser printing.
- Continuous improvements in cybersecurity
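Where the list above only names the Modified Early Warning Score feature, a rough illustration of how such a score is commonly computed may help. The bands below follow one published MEWS variant (Subbe et al., 2001) and are assumptions for illustration, not GE HealthCare's implementation:

```python
# Hedged sketch of a MEWS-style calculation. Bands follow one published
# variant (Subbe et al., 2001); the monitor's actual tables may differ.
def band(value, bands):
    # bands: list of (upper_bound_exclusive, points); the final entry uses
    # an infinite bound so it catches everything above the last cutoff.
    for upper, points in bands:
        if value < upper:
            return points
    return bands[-1][1]

def mews(sys_bp, hr, rr, temp_c, avpu):
    score = 0
    score += band(sys_bp, [(71, 3), (81, 2), (101, 1), (200, 0), (float("inf"), 2)])
    score += band(hr,     [(41, 2), (51, 1), (101, 0), (111, 1), (130, 2), (float("inf"), 3)])
    score += band(rr,     [(9, 2), (15, 0), (21, 1), (30, 2), (float("inf"), 3)])
    score += band(temp_c, [(35.0, 2), (38.5, 0), (float("inf"), 2)])
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

print(mews(sys_bp=105, hr=95, rr=18, temp_c=37.2, avpu="alert"))  # -> 1
```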
The proposed monitors B105M, B125M, B155M, B105P and B125P retain an equivalent hardware design based on the predicate monitors, with removal of the device Trim-knob to better support cleaning and disinfecting while maintaining the same primary function and operation.
Same as the predicate device, the five models (B105M, B125M, B155M, B105P and B125P) share the same hardware platform and software platform to support the data acquisition and algorithm modules. The differences between them are the LCD screen size and configuration options. There is no change from the predicate in the display size.
As with the predicate monitors B105M, B125M, B155M, B105P and B125P (K213490), the proposed monitors B105M, B125M, B155M, B105P and B125P are multi-parameter patient monitors, utilizing an LCD display and pre-configuration basic parameters: ECG, RESP, NIBP, IBP, TEMP, SpO2, and optional parameters which include CO2 and Gas parameters provided by the E-MiniC module (K052582), CARESCAPE Respiratory modules E-sCO and E-sCAiO (K171028), Airway Gas Option module N-CAiO (K151063), Entropy parameter provided by the E-Entropy module (K150298), Cardiac Output parameter provided by the E-COP module (K052976), Neuromuscular Transmission (NMT) parameter provided by E-NMT module (K051635) and thermal recorder B1X5-REC.
The proposed monitors B105M, B125M, B155M, B105P and B125P are not Apnea monitors (i.e., do not rely on the device for detection or alarm for the cessation of breathing). These devices should not be used for life sustaining/supporting purposes. Do not attempt to use these devices to detect sleep apnea.
As with the predicate monitors B105M, B125M, B155M, B105P and B125P (K213490), the proposed monitors can interface with a variety of existing central station systems via a cabled or wireless network, implemented with an identical integrated WiFi module. (The WiFi feature is disabled in B125P/B105P.)
Moreover, as with the predicate monitors, the proposed monitors include features and subsystems that are optional or configurable, and they can be mounted in a variety of ways (e.g., shelf, countertop, table, wall, pole, or head/foot board) using existing mounting accessories.
The provided FDA 510(k) clearance letter and summary for K242562 (Monitor B105M, Monitor B125M, Monitor B155M, Monitor B105P, Monitor B125P) do not contain information about specific acceptance criteria, reported device performance metrics, or details of a study meeting those criteria for any of the listed physiological parameters or functionalities (e.g., ECG or arrhythmia detection).
Instead, the documentation primarily focuses on demonstrating substantial equivalence to a predicate device (K213490) by comparing features, technology, and compliance with various recognized standards and guidance documents for safety, EMC, software, human factors, and cybersecurity.
The summary explicitly states: "The subject of this premarket submission, the proposed monitors B105M/B125M/B155M/B105P/B125P did not require clinical studies to support substantial equivalence." This implies that the changes introduced in the new device versions were not considered significant enough to warrant new clinical performance studies or specific quantitative efficacy/accuracy acceptance criteria beyond what is covered by the referenced consensus standards.
Therefore, I cannot provide the requested information from the given text:
- A table of acceptance criteria and the reported device performance: This information is not present. The document lists numerous standards and tests performed, but not specific performance metrics or acceptance thresholds.
- Sample size used for the test set and the data provenance: Not explicitly stated for performance evaluation, as clinical studies were not required. The usability testing mentioned a sample size of 16 US clinical users, but this is for human factors, not device performance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as detailed performance studies requiring expert ground truth are not described.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
- If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not applicable. This device is a patient monitor, not an AI-assisted diagnostic tool that would typically involve human readers.
- If a standalone (i.e. algorithm only without human-in-the loop performance) was done: The document describes "Bench testing related to software, hardware and performance including applicable consensus standards," which implies standalone testing against known specifications or simulated data. However, specific results or detailed methodologies for this type of testing are not provided beyond the list of standards.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not explicitly stated for performance assessment. For the various parameters (ECG, NIBP, SpO2, etc.), it would typically involve reference equipment or validated methods as per the relevant IEC/ISO standards mentioned.
- The sample size for the training set: Not applicable, as this is not an AI/ML device that would require explicit training data in the context of this submission.
- How the ground truth for the training set was established: Not applicable.
In summary, the provided document focuses on demonstrating that the new monitors are substantially equivalent to their predicate through feature comparison, adherence to recognized standards, and various non-clinical bench tests (e.g., hardware, alarms, EMC, environmental, reprocessing, human factors, software, cybersecurity). It does not contain the detailed performance study results and acceptance criteria typically found for novel diagnostic algorithms or AI-driven devices.
(116 days)
California 94303
Re: K250239
Trade/Device Name: NeuroMatch
Regulation Number: 21 CFR 882.1400
Software For Full-Montage Electroencephalograph
Classification Name: Electroencephalograph (21 CFR 882.1400)
| Classification | 21 CFR §882.1400, Electroencephalograph | 21 CFR §882.1400, Electroencephalograph | 21 CFR §882.1400, Electroencephalograph |
- LVIS NeuroMatch Software is intended for the review, monitoring and analysis of electroencephalogram (EEG) recordings made by EEG devices using scalp electrodes and to aid neurologists in the assessment of EEG. The device is intended to be used by qualified medical practitioners who will exercise professional judgement in using the information.
- The Seizure Detection component of LVIS NeuroMatch is intended to mark previously acquired sections of adult EEG recordings from patients greater than or equal to 18 years old that may correspond to electrographic seizures, in order to assist qualified medical practitioners in the assessment of EEG traces. EEG recordings should be obtained with a full scalp montage according to the electrodes from the International Standard 10-20 placement.
- The Spike Detection component of LVIS NeuroMatch is intended to mark previously acquired sections of adult EEG recordings from patients ≥18 years old that may correspond to spikes, in order to assist qualified medical practitioners in the assessment of EEG traces. LVIS NeuroMatch Spike Detection performance has not been assessed for intracranial recordings.
- LVIS NeuroMatch includes the calculation and display of a set of quantitative measures intended to monitor and analyze EEG waveforms. These include Artifact Strength, Asymmetry Spectrogram, Autocorrelation Spectrogram, and Fast Fourier Transform (FFT) Spectrogram. These quantitative EEG measures should always be interpreted in conjunction with review of the original EEG waveforms.
- LVIS NeuroMatch displays physiological signals such as electrocardiogram (ECG/EKG) if it is provided in the EEG recording.
- The aEEG functionality included in LVIS NeuroMatch is intended to monitor the state of the brain.
- LVIS NeuroMatch Artifact Reduction (AR) is intended to reduce muscle and eye movement artifacts in EEG signals from the International Standard 10-20 placement. AR does not remove the entire artifact signal and is not effective for other types of artifacts. AR may modify portions of waveforms representing cerebral activity. Waveforms must still be read by a qualified medical practitioner trained in recognizing artifacts, and any interpretation or diagnosis must be made with reference to the original waveforms.
- LVIS NeuroMatch EEG source localization visualizes brain electrical activity on a 3D idealized head model. LVIS NeuroMatch source localization additionally calculates and displays summary trends based on source localization findings over time.
- This device does not provide any diagnostic conclusion about the patient's condition to the user.
NeuroMatch is a cloud-based software as a medical device (SaMD) intended to review, monitor, display, and analyze previously acquired and/or near real-time electroencephalogram (EEG) data from patients greater than or equal to 18 years old. The device is not intended to substitute for real-time monitoring of EEG. The software includes advanced algorithms that perform artifact reduction, seizure detection, and spike detection.
The subject device is identical to the NeuroMatch device cleared under K241390, with exception of the following additional features:
- Source localization;
- Source localization trends;
Source localization and source localization trends are substantially equivalent to the Epilog PreOp (K172858). Apart from the proposed additional software changes and associated changes to the Indications for Use and labeling, there are no changes to the intended use or to the software features that were previously cleared. Below is a description of the software functions that will be added to the cleared NeuroMatch device.
1. Source Localization
The NeuroMatch Source Localization visualization feature is used to visualize recorded EEG activity from the scalp in an idealized 3D model of the brain. The idealized brain model is based on template MR images. Each single sample of EEG-measured brain activity corresponds to a single point/pixel referred to as a source localization node (i.e., "node"). Together, the source localization nodes form a 3D cartesian grid where EEG signals with higher standardized current density are depicted in red and signals with lower standardized current density are depicted in blue. Source localization can be performed for any selected segment of the EEG data. The maximum and minimum of the source localization values are the absolute maximum and minimum values across the selected EEG signal, respectively. Users can also set an absolute threshold for the minimum value of the source localization outputs.
2. Source Localization Trends
NeuroMatch provides three automatic source localization trends to assist physicians in investigating the amplitude and frequency of the signal of interest (e.g., seizure onset) in source space. Two of the trends provide simple 3D views of the high-amplitude / high-frequency sources across the signal of interest. The third trend provides a similar 3D view of high-frequency source movement across time.
- Maximum Amplitude Projection (MAP): This metric allows clinicians to readily determine which brain regions are active and have high amplitude source localization results. The metric is determined by iterating through each node within a specified analysis time window and outputting the maximum source localization amplitude at that node within the specified analysis time window. No value is reported for nodes which have not been identified as maximum at any time during the specified window. This metric can help show brain regions that have high amplitude during a seizure.
- Node Visit Frequency (NVF): This metric is reported as the number of times that a node has been labeled as maximum over time. This metric can help clinicians identify which brain regions are frequently active during a seizure.
- Node Transition Frequency (NTF): This metric allows clinicians to determine which brain regions are active in consecutive time frames over a selected time period. A node transition is defined as a transition from one maximum point to another over time, and the node transition frequency is calculated by iterating through all time points for a specified analysis window, counting the number of times a transition between two points occurs over that time, and then dividing it by the time window of analysis. This metric can help identify pairs of brain regions that are frequently active in sequential order.
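A hedged sketch of the three trend computations as described above; this is not LVIS's implementation, and the array layout and frame rate are assumptions:

```python
# Sketch of the MAP / NVF / NTF trend metrics, following the verbal
# descriptions above. `amp` is an assumed (n_frames, n_nodes) matrix of
# source-localization amplitudes; frame rate is an assumed parameter.
import numpy as np

def sl_trends(amp: np.ndarray, frame_rate_hz: float):
    n_frames, n_nodes = amp.shape
    winners = amp.argmax(axis=1)          # node with maximum amplitude per frame
    window_s = n_frames / frame_rate_hz

    # MAP: per-node maximum amplitude over the window, reported only for
    # nodes that were labeled as maximum at some time point.
    map_vals = np.full(n_nodes, np.nan)
    for node in np.unique(winners):
        map_vals[node] = amp[:, node].max()

    # NVF: number of frames in which each node was labeled as maximum.
    nvf = np.bincount(winners, minlength=n_nodes)

    # NTF: transitions between distinct consecutive maxima, counted per
    # node pair and divided by the analysis window duration.
    counts = {}
    for a, b in zip(winners[:-1], winners[1:]):
        if a != b:
            counts[(a, b)] = counts.get((a, b), 0) + 1
    ntf = {pair: c / window_s for pair, c in counts.items()}
    return map_vals, nvf, ntf

# Example on random data: 10 s of frames at 100 Hz over 500 nodes.
rng = np.random.default_rng(0)
map_vals, nvf, ntf = sl_trends(rng.random((1000, 500)), frame_rate_hz=100.0)
```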
Here's an analysis of the acceptance criteria and study details for the NeuroMatch device, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA letter does not explicitly state "acceptance criteria" in the traditional sense of pre-defined thresholds for performance metrics. Instead, the study's primary objective for Source Localization was to demonstrate non-inferiority to a reference device (CURRY) and comparable performance to a predicate device (Epilog PreOp). Therefore, the "acceptance criteria" can be inferred from the study's conclusions regarding non-inferiority and comparability.
For Source Localization Trends, the acceptance criterion was functional correctness and clinician understanding.
Feature / Metric | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
Source Localization | ||
Non-Inferiority to CURRY (Reference Device) | Lower bound of one-sided 95% CI of success rate difference (NeuroMatch - CURRY) > pre-specified non-inferiority margin. | NeuroMatch success rate: 90.7% (39/43 concordant patients) |
CURRY success rate: 86% (37/43 concordant patients) | ||
Lower bound of one-sided 95% CI of success rate difference: -4.65% (greater than pre-specified non-inferiority margin). | ||
This established non-inferiority. | ||
Comparability to Epilog PreOp (Predicate Device) | Comparable success rate and 95% CI overlap. | NeuroMatch success rate: 91.7% (95% CI: 79.16, 100) |
Epilog PreOp success rate: 91.7% (95% CI: 79.16, 100) | ||
This indicates comparable performance. | ||
Consistency across Gender (Source Localization) | No considerable gender-related differences, consistently non-inferior to CURRY. | Male: CURRY 81.3%, NeuroMatch 87.5% |
Female: CURRY 88.9%, NeuroMatch 92.6% | ||
Observation suggests no considerable gender-related differences. | ||
Consistency across Age Groups (Source Localization) | Comparable performance to CURRY consistently across age groups. | Age [18, 30): CURRY 81.8%, NeuroMatch 81.8% |
Age [30, 40): CURRY 91.7%, NeuroMatch 91.7% | ||
Age [40, 50): CURRY 85.7%, NeuroMatch 92.9% | ||
Age [50, 75): CURRY 83.3%, NeuroMatch 100.0% | ||
Results suggest comparable performance across age groups. | ||
Source Localization Trends | Functional correctness (passes all test cases); clinician understanding and perceived clinical utility. | All test cases passed, confirming the trends functioned as intended and yielded expected results. A clinical survey of 15 clinicians showed they were able to understand the function of each trend and provided information regarding its clinical utility in their workflow. |
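To make the non-inferiority decision rule in the table concrete, here is a hedged sketch of a one-sided lower confidence bound on the difference in success rates. The letter does not state the exact statistical method or the non-inferiority margin, so a generic unpaired Wald bound and a hypothetical margin are used below; this will not reproduce the reported -4.65% exactly:

```python
# Generic one-sided 95% lower confidence bound (unpaired Wald) on the
# difference in success rates. The study's actual (likely paired) method
# and margin are not stated, so this is illustrative only.
from math import sqrt

def lower_bound_diff(x1: int, x2: int, n: int, z: float = 1.645) -> float:
    p1, p2 = x1 / n, x2 / n
    se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return (p1 - p2) - z * se

margin = -0.10  # hypothetical non-inferiority margin (not given in the letter)
lb = lower_bound_diff(39, 37, 43)  # NeuroMatch 39/43 vs. CURRY 37/43
print(f"lower bound = {lb:+.4f}; non-inferior vs. margin: {lb > margin}")
```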
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Source Localization Test Set): 43 patients.
- Data Provenance: Collected from three independent and geographically diverse medical institutions:
- Two institutions in the United States.
- One institution in South Korea.
- The study utilized retrospective data, as it focused on "previously acquired sections" of EEG recordings and "normalized post-operative MRIs with distinctive resection regions," indicating these were historical cases with established outcomes.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3) US board-certified epileptologists.
- Qualifications: "US board-certified epileptologists." (Specific years of experience are not mentioned, but board certification implies a high level of expertise in the field).
4. Adjudication Method for the Test Set
- Adjudication Method: The three board-certified epileptologists independently completed a survey. They were presented with source localization results from each device (NeuroMatch, CURRY, PreOp) and normalized post-operative MRIs with resection regions.
- Ground Truth Establishment: Each physician independently determined the resection region at the sublobar level and then assessed whether the SL output of each device overlapped with that region, answering a "Yes/No" concordance question for each patient and device. No explicit consensus or adjudication step among the three experts is described; instead, the resected brain areas served as the primary ground truth, and concordance was aggregated across the experts' individual assessments against the known resection region.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a form of MRMC study was done, but not in the traditional sense of measuring human reader improvement with AI assistance.
- The study involved multiple readers (3 epileptologists) assessing multiple cases (43 patients).
- However, the comparison was between AI algorithms (NeuroMatch vs. CURRY vs. PreOp), with the human readers acting as independent evaluators to establish concordance with a post-operative ground truth (resected brain areas).
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance: This specific metric was not assessed or reported. The study evaluated the standalone AI performance of NeuroMatch compared to other AI devices, using human experts to determine the "correctness" of the AI's output in relation to surgical outcomes. It did not measure how human readers' diagnostic accuracy or efficiency changed when using NeuroMatch as an aid.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, a standalone study was done for the Source Localization feature. The study directly compared the performance of the NeuroMatch algorithm against the CURRY reference device and the PreOp predicate device. The output was a "Yes/No" concordance with the resected brain area, as assessed by the experts. The experts evaluated the device's output, not their own performance using the device.
7. Type of Ground Truth Used
- Source Localization: The ground truth used was the resected brain areas as identified on normalized post-operative MRIs. This is a form of outcomes data combined with anatomical pathology (surgical intervention). The epileptologists were tasked with identifying whether the source localization output (from the algorithms) "overlapped" with these resected regions.
- Source Localization Trends: For the trends (MAP, NVF, NTF), the ground truth for functional correctness was EEG datasets with known solutions (i.e., simulated or carefully crafted data where the expected output of the algorithms was precisely predictable). For clinical utility, the ground truth was clinical feedback and perceived understanding from the 15 clinicians.
8. Sample Size for the Training Set
- The document does not specify the sample size for the training set for any of the algorithms. It only details the test set used for validation.
9. How the Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established. Since the training set size is not provided, this information is also absent.
(90 days)
Eugene, Oregon 97401
Re: K250058
Trade/Device Name: NEAT 001
Regulation Number: 21 CFR 882.1400
Electroencephalograph
| Classification Name, Number & Product Code: | Electroencephalograph, 21 CFR 882.1400 |

| | Subject Device | Predicate Device |
|----------------|------------|------------------|
| Regulation number | 21 CFR 882.1400 | 21 CFR 882.1400 |
| Software only | Yes | Yes |
| Indication for use | Automatic scoring of sleep | |
Automatic scoring of sleep EEG data to identify stages of sleep according to the American Academy of Sleep Medicine definitions, rules and guidelines. It is to be used with adult populations.
The Neurosom EEG Assessment Technology (NEAT) is a medical device software application that allows users to perform sleep staging post-EEG acquisition. NEAT allows users to review sleep stages on scored MFF files and perform sleep scoring on unscored MFF files.
NEAT software is designed in a client-server model and comprises a User Interface (UI) that runs on a Chrome web browser in the client computer and a Command Line Interface (CLI) software that runs on a Forward-Looking Operations Workflow (FLOW) server.
The user interacts with the NEAT UI through the FLOW front-end application to initiate the NEAT workflow on unscored MFF files and visualize sleep-scoring results. Sleep stages are scored by the containerized neat-cli software on the FLOW server using the EEG data. The sleep stages are then added to the input MFF file as an event track file in XML format. Once the new event track file is created, the NEAT UI component retrieves the sleep events from the FLOW server and displays a hypnogram (visual representation of sleep stages over time) on the screen, along with sleep statistics and other subject details. Additionally, a summary of the sleep scoring is automatically generated and added to the same participant in the FLOW server in PDF format.
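The workflow above says sleep stages are written back into the MFF file as an XML event track. As a purely illustrative sketch of the general shape of such an export (the element names here are hypothetical; the actual MFF event-track schema is not reproduced in the letter):

```python
# Illustrative only: appending sleep-stage events to an XML event track.
# Element names are hypothetical, not the real MFF event-track schema.
import xml.etree.ElementTree as ET

stages = [("W", 0, 30), ("N1", 30, 30), ("N2", 60, 30)]  # (stage, onset_s, dur_s)

track = ET.Element("eventTrack")                  # hypothetical root element
ET.SubElement(track, "name").text = "NEAT sleep stages"
for code, onset, dur in stages:
    ev = ET.SubElement(track, "event")
    ET.SubElement(ev, "code").text = code
    ET.SubElement(ev, "beginTime").text = str(onset)
    ET.SubElement(ev, "duration").text = str(dur)

ET.ElementTree(track).write("neat_events.xml", encoding="utf-8",
                            xml_declaration=True)
```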
The FDA 510(k) Clearance Letter for NEAT 001 provides information about the device's acceptance criteria and the study conducted to prove its performance.
Acceptance Criteria and Device Performance
The core acceptance criteria for NEAT 001, as demonstrated by the comparative clinical study, are based on its ability to classify sleep stages (Wake, N1, N2, N3, REM) with performance comparable to the predicate device, EnsoSleep, and within the variability observed among expert human raters.
Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined numerical "acceptance criteria" for each metric (Sensitivity, Specificity, Overall Agreement) that NEAT 001 had to meet. Instead, the approach was a comparative effectiveness study against a predicate device (EnsoSleep), with the overarching criterion being "substantial equivalence" as interpreted by performance falling within the range of differences expected among expert human raters.
Therefore, the "acceptance criteria" are implied by the findings of substantial equivalence. The "reported device performance" is given in terms of the comparison between NEAT and EnsoSleep, and their differences relative to human agreement variability.
Metric / Sleep Stage | NEAT Performance (vs. Predicate EnsoSleep) | Acceptance Criteria (Implied) |
---|---|---|
Wake (Wa) | Equivalent performance (1-2% difference) | Difference within range of human agreement variability |
REM (R) | EnsoSleep performed better (3-4% difference) | Difference within range of human agreement variability (stated as 3% for CSF dataset) |
N1 (Overall Performance) | EnsoSleep better (4-7%) | Difference within range of human agreement variability (only in BEL data set was this difference bigger than human agreement) |
N1 (Sensitivity) | NEAT substantially better (8-20%) | Not a primary equivalence metric, but noted as an area where NEAT excels. |
N1 (Specificity) | EnsoSleep better (5-9%) | Not a primary equivalence metric, but noted. |
N2 (Overall Performance) | EnsoSleep marginally better (5%) for BEL data set | Difference within range of human agreement variability |
N2 (Sensitivity) | EnsoSleep more sensitive (22%) | Not a primary equivalence metric, but noted. |
N2 (Specificity) | EnsoSleep less specific (9-11%) | Not a primary equivalence metric, but noted. |
N3 (Overall Performance) | Equivalent (1% difference overall) | Difference within range of human agreement variability |
N3 (Sensitivity) | NEAT substantially better (15-39%) | Not a primary equivalence metric, but noted as an area where NEAT excels. |
N3 (Specificity) | EnsoSleep marginally better (3-4%) | Not a primary equivalence metric, but noted. |
General Conclusion | Statistically significant differences, but practically within the range of differences expected among expert human raters. | Substantial equivalence to predicate device. |
Study Details
Here's a breakdown of the study details based on the provided text:
1. Sample Size and Data Provenance
- Test Set Sample Size: The exact number of participants or EEG recordings in the test set is not explicitly stated. The document refers to "two data sets" (referred to as "BEL data set" and "CSF data set") used for testing both NEAT and EnsoSleep. The large resampling number (R=2000 resamples for bootstrapping) suggests a dataset size sufficient to yield small confidence intervals.
- Data Provenance:
- Country of Origin: Not explicitly stated.
- Retrospective or Prospective: Not explicitly stated, but the mention of "All data files were scored by EnsoSleep" and "All data files were scored by NEAT" implies these were pre-existing datasets, making them retrospective.
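A minimal sketch of the kind of bootstrap the summary alludes to (R = 2000 resamples), here computing a percentile CI for epoch-level overall agreement; the epoch count, number of stages, and agreement rate are simulated assumptions:

```python
# Percentile-bootstrap CI for epoch-by-epoch agreement between algorithm
# scores and a gold standard. All data here are simulated assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_epochs = 1000                                  # assumed epoch count
gold = rng.integers(0, 5, n_epochs)              # 5 stages: W, N1, N2, N3, REM
pred = np.where(rng.random(n_epochs) < 0.85, gold, rng.integers(0, 5, n_epochs))

R = 2000
stats = np.empty(R)
for i in range(R):
    idx = rng.integers(0, n_epochs, n_epochs)    # resample epochs with replacement
    stats[i] = np.mean(pred[idx] == gold[idx])

lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"overall agreement: {np.mean(pred == gold):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```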
2. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated. The study refers to "established gold standard" and "human agreement variability" among "expert human raters," implying multiple experts.
- Qualifications of Experts: Not explicitly stated beyond "expert human raters." No details are provided regarding their specific medical background (e.g., neurologists, sleep specialists), years of experience, or board certifications.
3. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The document simply refers to "the established gold standard." It does not mention whether this gold standard was derived from a single expert, consensus among multiple experts, or a specific adjudication process (like 2+1 or 3+1).
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? A direct MRMC comparative effectiveness study involving human readers assisting with AI vs. without AI assistance was not explicitly described. The study primarily focuses on comparing the standalone performance of NEAT (the AI) against the standalone performance of the predicate device (EnsoSleep), and then interpreting these differences in the context of human-to-human agreement variability.
- Effect Size of Human Reader Improvement: Since a direct MRMC study with human readers assisting AI was not detailed, there is no information provided on the effect size of how much human readers improve with AI vs. without AI assistance.
5. Standalone Performance (Algorithm Only)
- Was a standalone study done? Yes. The study evaluated the "segment-by-segment" performance of NEAT and EnsoSleep algorithms directly against the "established gold standard." This is a measure of the algorithm's standalone performance without human input during the scoring process.
6. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the test set was based on an "established gold standard" for sleep stage classification. This strongly implies expert consensus or expert scoring of the EEG data according to American Academy of Sleep Medicine definitions, rules, and guidelines. Pathology or outcomes data were not used for sleep staging ground truth.
7. Training Set Sample Size
- Training Set Sample Size: The sample size for the training set is not explicitly stated in the provided document.
8. How Ground Truth for Training Set Was Established
- How Ground Truth for Training Set Was Established: The document states that `neat-cli` "leverages Python libraries for identifying stages of sleep on MFF files using Machine Learning (ML)." However, it does not explicitly describe how the ground truth for the training set was established. Typically, for ML models, the training data's ground truth would also be established by expert annotation or consensus, similar to the test set ground truth, but this is not confirmed in the provided text.
(310 days)
Re: K241589
Trade/Device Name: Ceribell Seizure Detection Software
Regulation Number: 21 CFR 882.1400
Software For Full-Montage Electroencephalograph
Classification Name: Electroencephalograph (21CFR 882.1400
The Ceribell Seizure Detection Software is intended to mark previously acquired sections of EEG recordings in patients greater than or equal to 1 year of age that may correspond to electrographic seizures in order to assist qualified clinical practitioners in the assessment of EEG traces. The Seizure Detection Software also provides notifications to the user when detected seizure prevalence is "Frequent", "Abundant", or "Continuous", per the definitions of the American Clinical Neurophysiology Society Guideline 14. Delays of up to several minutes can occur between the beginning of a seizure and when the Seizure Detection notifications will be shown to a user.
The Ceribell Seizure Detection Software does not provide any diagnostic conclusion about the subject's condition, and Seizure Detection notifications cannot be used as a substitute for real-time monitoring of the underlying EEG by a trained expert.
The Ceribell Seizure Detection Software is a software-only device that is intended to mark previously acquired sections of EEG recordings that may correspond to electrographic seizures in order to assist qualified clinical practitioners in the assessment of EEG traces.
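As context for the validation table below, the notification logic can be pictured as a mapping from detected seizure burden to the ACNS prevalence terms. This is a hedged sketch using the thresholds from that table (≥10% Frequent, ≥50% Abundant, ≥90% Continuous); the review-window length is an assumption, not stated in the letter:

```python
# Hedged sketch: mapping detected seizure burden (fraction of a review
# window occupied by marked seizure activity) to ACNS prevalence terms.
# Thresholds mirror the validation table; the window length is assumed.
def prevalence_category(seizure_seconds: float,
                        window_seconds: float = 3600.0):
    burden = seizure_seconds / window_seconds
    if burden >= 0.90:
        return "Continuous"
    if burden >= 0.50:
        return "Abundant"
    if burden >= 0.10:
        return "Frequent"
    return None  # below the notification thresholds

print(prevalence_category(720))   # 20% of an hour -> "Frequent"
```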
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria & Reported Device Performance
Metric / Category | Acceptance Criteria (95% CI) | Reported Device Performance (95% Confidence Interval) | Pass / Fail |
---|---|---|---|
Positive Percent Agreement (PPA) | Lower bound ≥ 70% PPA for each threshold | ||
Seizure Burden ≥10% (Frequent) | |||
Ages 1-11 | Lower bound ≥ 70% | 96.12% [88.35, 99.28] | Pass |
Ages 12-17 | Lower bound ≥ 70% | 87.01% [73.16, 93.55] | Pass |
Ages 18+ | Lower bound ≥ 70% | 95.71% [91.30, 98.43] | Pass |
Overall | Lower bound ≥ 70% | 93.93% [90.03, 96.52] | Pass |
Seizure Burden ≥50% (Abundant) | |||
Ages 1-11 | Lower bound ≥ 70% | 96.67% [87.50, 100.00] | Pass |
Ages 12-17 | Lower bound ≥ 70% | 95.45% [73.33, 100.00] | Pass |
Ages 18+ | Lower bound ≥ 70% | 96.72% [88.37, 100.0] | Pass |
Overall | Lower bound ≥ 70% | 96.50% [92.12, 98.77] | Pass |
Seizure Burden ≥90% (Continuous) | |||
Ages 1-11 | Lower bound ≥ 70% | 92.59% [76.00, 100] | Pass |
Ages 12-17 | Lower bound ≥ 70% | 100.0% [100, 100] | Pass |
Ages 18+ | Lower bound ≥ 70% | 93.55% [78.26, 100.0] | Pass |
Overall | Lower bound ≥ 70% | 94.12% [85.45, 98.48] | Pass |
False Positive rate per hour (FP/hr) | Upper bound ≤ 0.446 FP/hr for each threshold | ||
Seizure Burden ≥10% (Frequent) | |||
Ages 1-11 | Upper bound ≤ 0.446 | 0.2700 [0.2445, 0.2986] | Pass |
Ages 12-17 | Upper bound ≤ 0.446 | 0.2141 [0.1920, 0.2394] | Pass |
Ages 18+ | Upper bound ≤ 0.446 | 0.1343 [0.1250, 0.1445] | Pass |
Overall | Upper bound ≤ 0.446 | 0.1763 [0.1670, 0.1859] | Pass |
Seizure Burden ≥50% (Abundant) | |||
Ages 1-11 | Upper bound ≤ 0.446 | 0.1561 [0.1369, 0.1772] | Pass |
Ages 12-17 | Upper bound ≤ 0.446 | 0.0921 [0.0776, 0.1082] | Pass |
Ages 18+ | Upper bound ≤ 0.446 | 0.0547 [0.0480, 0.0615] | Pass |
Overall | Upper bound ≤ 0.446 | 0.08180 [0.0754, 0.0885] | Pass |
Seizure Burden ≥90% (Continuous) | |||
Ages 1-11 | Upper bound ≤ 0.446 | 0.0843 [0.0697, 0.1006] | Pass |
Ages 12-17 | Upper bound ≤ 0.446 | 0.0399 [0.0301, 0.0511] | Pass |
Ages 18+ | Upper bound ≤ 0.446 | 0.0249 [0.0204, 0.0299] | Pass |
Overall | Upper bound ≤ 0.446 | 0.03951 [0.0351, 0.0443] | Pass |
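For orientation, the two metrics in the table above reduce to simple ratios; the sketch below shows them together with the acceptance rule. Event-matching rules and the confidence-interval method are not described in the letter and are omitted here:

```python
# The two validation metrics as simple ratios, plus the acceptance rule.
def ppa(true_positives: int, expert_positives: int) -> float:
    # Positive percent agreement with the expert ground truth.
    return true_positives / expert_positives

def fp_per_hour(false_positives: int, monitored_hours: float) -> float:
    return false_positives / monitored_hours

def passes(ppa_ci_lower: float, fp_ci_upper: float) -> bool:
    # Acceptance rule applied per age stratum and overall, per the table.
    return ppa_ci_lower >= 0.70 and fp_ci_upper <= 0.446

# Overall values for the "Frequent" threshold, taken from the table above.
print(passes(ppa_ci_lower=0.9003, fp_ci_upper=0.1859))  # -> True
```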
2. Sample Size and Data Provenance for the Test Set
- Sample Size for Test Set:
- Total Number of Patients: 1701
- Ages 1-11: 450 patients
- Ages 12-17: 392 patients
- Ages 18+: 859 patients
- Total Number of Patients: 1701
- Data Provenance: The EEG recordings dataset used for performance validation was gathered from real-world clinical usage of the Ceribell Pocket EEG Device. The specific country of origin is not explicitly stated, but it's implied to be from acute care hospital settings where the predicate device (Ceribell Pocket EEG Device) is used. The data is retrospective as it was previously acquired. There were no patient inclusion or exclusion criteria applied, indicating a representative sample of the intended patient population.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: At least three expert neurologists. A two-thirds majority agreement was required to establish ground truth, and the tables specify "3 expert reviewers" for the seizure burden distribution, indicating a panel of three.
- Qualifications of Experts: Fellowship trained in epilepsy or neurophysiology. No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- Adjudication Method: A two-thirds majority agreement among the expert neurologists was required to establish the ground truth for seizures. This implies a method of consensus. The experts were fully blinded to the outputs of the Seizure Detection Software.
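A minimal sketch of the two-thirds-majority rule described above; per-epoch boolean votes are an assumed granularity, since the letter does not state how expert annotations were segmented:

```python
# Two-thirds-majority consensus over independent, blinded expert votes.
def majority_label(votes: list[bool]) -> bool:
    return sum(votes) / len(votes) >= 2 / 3

print(majority_label([True, True, False]))   # True  (2 of 3 agree: seizure)
print(majority_label([True, False, False]))  # False (no two-thirds majority)
```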
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The documentation describes a standalone algorithm performance study, not a comparative effectiveness study involving human readers with and without AI assistance.
- Effect Size: Not applicable, as no MRMC study was performed.
6. Standalone Algorithm Performance
- Was a standalone study done? Yes. The study directly evaluates the "performance of the Seizure Detection algorithm" by comparing its output (algorithm marks/notifications) against the expert-established ground truth. The algorithm's PPA and FP/hr metrics are presented, which are standard for standalone AI performance.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus (specifically, a two-thirds majority agreement among fellowship-trained neurologists reviewing EEG recordings). This is clinical expert ground truth based on visual review of EEG.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not explicitly stated. The document only mentions that "none of the data in the validation dataset were used for training of the Seizure Detection algorithm; the validation dataset is completely independent." This ensures the integrity of the test set but does not provide information about the training set size.
9. How Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not explicitly stated. The document focuses exclusively on the validation dataset's ground truth methodology. However, given the nature of deep learning models, it's highly probable that the training data also underwent a rigorous ground truth labeling process, likely similar to (or potentially identical in methodology to) the validation set, though this is not detailed here.
(126 days)
Norway
Re: K243743
Trade/Device Name: autoSCORE (V 2.0.0)
Regulation Number: 21 CFR 882.1400
Usual Name: autoSCORE
Classification Name and Regulation Number: Electroencephalograph, 21 CFR 882.1400
• autoSCORE is intended for the review, monitoring and analysis of EEG recordings made by electroencephalogram (EEG) devices using scalp electrodes and to aid neurologists in the assessment of EEG. This device is intended to be used by qualified medical practitioners who will exercise professional judgment in using the information.
• The spike detection component of autoSCORE is intended to mark previously acquired sections of the patient's EEG recordings that may correspond to spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces. The spike detection component is intended to be used in patients at least three months old for EEGs 4 hours. The autoSCORE component has not been assessed for intracranial recordings.
• autoSCORE is intended to assess the probability that previously acquired sections of EEG recordings contain abnormalities, and classifies these into pre-defined types of abnormalities, including epileptiform and non-epileptiform abnormalities. autoSCORE does not have a user interface. autoSCORE sends this information to the EEG reviewing software to indicate where markers indicating abnormality are to be placed in the EEG. autoSCORE also provides the probability that EEG recordings include abnormalities and the type of abnormalities. The user is required to review the EEG and exercise their clinical judgement to independently make a conclusion supporting or not supporting brain disease.
• This device does not provide any diagnostic conclusion about the patient's condition to the user. The device is not intended to detect or classify seizures.
autoSCORE is a software only device.
autoSCORE is an AI model that has been trained with standard deep learning principles using a large training dataset. The model will be locked in the field, so it cannot learn from data to which it is exposed when in use. It can only be used with a compatible electroencephalogram (EEG) reviewing software, which acquires and displays the EEG. The model has no user interface. The form of the visualization of the annotations is determined and provided by the EEG reviewing software.
autoSCORE has been trained to identify and then indicate to the user sections of EEG which may include abnormalities and to provide the level of probability of the presence of an abnormality. The algorithm also provides categorization of identified areas of abnormality into the four predefined types of abnormalities, again including a probability of that predefined abnormality type. This is performed by identifying epileptiform abnormalities/spikes (focal epileptiform and generalised epileptiform) as well as identifying non-epileptiform abnormalities (focal non-epileptiform and diffuse non-epileptiform).
This data is then provided by the algorithm to the EEG reviewing software, for it to display as part of the EEG output for the clinician to review. autoSCORE does not provide any diagnostic conclusion about the patient's condition nor treatment options to the user, and does not replace visual assessment of the EEG by the user. This device is intended to be used by qualified medical practitioners who will exercise professional judgment in using the information.
Acceptance Criteria and Study for autoSCORE (V 2.0.0)
This response outlines the acceptance criteria for autoSCORE (V 2.0.0) and the study conducted to demonstrate the device meets these criteria, based on the provided FDA 510(k) clearance letter.
1. Table of Acceptance Criteria and Reported Device Performance
The FDA clearance document does not explicitly present a table of predefined acceptance criteria (e.g., minimum PPV of X%, minimum Sensitivity of Y%). Instead, the regulatory strategy appears to be a demonstration of substantial equivalence through comparison to predicate devices and human expert consensus. The "Performance Validation" section (Section 7) outlines the metrics evaluated, and the "Validation Summary" (Section 7.2.6) states the conclusion of similarity.
Therefore, the "acceptance criteria" are implied to be that the device performs similarly to the predicate devices and/or to human experts, particularly in terms of Positive Predictive Value (PPV), as this was deemed clinically critical.
Here’s a table summarizing the reported device performance, which the manufacturer concluded met the implicit "acceptance criteria" by demonstrating substantial equivalence:
Performance Metric (Category) | autoSCORE V2 (Reported Performance) | Primary Predicate (encevis) (Reported Performance) | Secondary Predicate (autoSCORE V1.4) (Reported Performance) | Note on Comparison & Implied Acceptance |
---|---|---|---|---|
Recording Level - Accuracy (Abnormal) | 0.912 (0.850, 0.963) | - | 0.950 (0.900, 0.990) | AutoSCORE v2 comparable to autoSCORE v1.4. encevis not provided for "Abnormal." |
Recording Level - Sensitivity (Abnormal) | 0.926 (0.859, 0.985) | - | 1.000 (1.000, 1.000) | autoSCORE v2 slightly lower than v1.4, but still high. |
Recording Level - Specificity (Abnormal) | 0.833 (0.583, 1.000) | - | 0.884 (0.778, 0.974) | autoSCORE v2 comparable to v1.4. |
Recording Level - PPV (Abnormal) | 0.969 (0.922, 1.000) | - | 0.920 (0.846, 0.983) | autoSCORE v2 high PPV, comparable to v1.4. |
Recording Level - Accuracy (IED) | 0.875 (0.800, 0.938) | 0.613 (0.500, 0.713) | IED not provided for v1.4 | IED (Interictal Epileptiform Discharges) combines Focal Epi and Gen Epi. autoSCORE v2 significantly higher accuracy than encevis. |
Recording Level - Sensitivity (IED) | 0.939 (0.864, 1.000) | 1.000 (1.000, 1.000) | IED not provided for v1.4 | autoSCORE v2 high Sensitivity, similar to encevis. |
Recording Level - Specificity (IED) | 0.774 (0.618, 0.914) | 0.000 (0.000, 0.000) | IED not provided for v1.4 | autoSCORE v2 significantly higher Specificity than encevis (encevis had 0.000 specificity for IED). |
Recording Level - PPV (IED) | 0.868 (0.769, 0.952) | 0.613 (0.500, 0.713) | IED not provided for v1.4 | autoSCORE v2 significantly higher PPV than encevis (considered a key clinical metric). |
Marker Level - PPV (Focal Epi) | 0.560 (0.526, 0.594) | - | 0.626 (0.616, 0.637) (Part 1) / 0.716 (0.701, 0.732) (Part 5) | autoSCORE v2 PPV slightly lower than v1.4 in some instances, but within general range. Comparison is against earlier validation parts of autoSCORE v1.4. |
Marker Level - PPV (Gen Epi) | 0.446 (0.405, 0.486) | - | 0.815 (0.802, 0.828) (Part 1) / 0.825 (0.799, 0.849) (Part 5) | autoSCORE v2 PPV significantly lower than v1.4. This is a point of difference. |
Marker Level - PPV (Focal Non-Epi) | 0.823 (0.794, 0.852) | - | 0.513 (0.506, 0.520) (Part 1) / 0.570 (0.556, 0.585) (Part 5) | autoSCORE v2 PPV significantly higher than v1.4. |
Marker Level - PPV (Diff Non-Epi) | 0.849 (0.822, 0.876) | - | 0.696 (0.691, 0.702) (Part 1) / 0.537 (0.520, 0.554) (Part 5) | autoSCORE v2 PPV significantly higher than v1.4. |
Marker Level - PPV (IED) | 0.513 (0.486, 0.539) | 0.257 (0.166, 0.349) | 0.389 (0.281, 0.504) | autoSCORE v2 significantly higher PPV than encevis and autoSCORE v1.4. This is a key finding highlighted. |
Correlation (Prob. vs. TP Markers) | p-value | | | |
(143 days)
Product Codes / Regulation Number: PIW / 21 CFR 882.1450; OMC / 21 CFR 882.1400 (identical for subject and predicate devices)
The Nurochek-Pro System is intended for prescription use in healthcare facilities for subjects aged between 12 and 44 years old, as an aid in the diagnosis of mild traumatic brain injury (mTBI) in conjunction with a standard neurological assessment.
The Nurochek-Pro System is indicated for the generation of visual evoked potentials (VEP) and to acquire, transmit, display, and store electroencephalograms (EEG) during the generation of VEPs. Additionally, the system is indicated to analyze captured EEG signals to provide an aid in the diagnosis of mild traumatic brain injury (mTBI) in subjects aged between 12 and 44 years old who have sustained a potential head injury in the past 120 hours (5 days).
The Nurochek-Pro System is a portable system designed to generate visual evoked potentials (VEPs) in patients and acquire, transmit, display, and store the resulting electroencephalogram (EEG). It is intended for prescription use in healthcare facilities, by healthcare professionals, on subjects aged between 12 and 44 years old, to aid in the diagnosis of mild traumatic brain injury (mTBI). The primary components of the Nurochek-Pro System are the wearable headset, the Nurochek-Pro Software Application, and the Nurochek-Pro Server.
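Although the clearance summary does not describe the Nurochek-Pro classification algorithm itself, SSVEP-based analysis generally quantifies the EEG response at the visual stimulation frequency. Here's a purely illustrative sketch of that idea; the 15 Hz stimulus frequency, sampling rate, and SNR formulation are all assumptions, not details from the submission:

```python
# Illustrative sketch only: quantifying a steady-state visual-evoked
# potential (SSVEP) as spectral power at the stimulus frequency relative
# to neighboring frequency bins. All parameters here are assumptions.
import numpy as np

def ssvep_snr(eeg: np.ndarray, fs: float, stim_hz: float, nb_bins: int = 5) -> float:
    """Power at the stimulus frequency divided by mean power of neighboring bins."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    target = np.argmin(np.abs(freqs - stim_hz))
    neighbors = np.r_[target - nb_bins : target, target + 1 : target + nb_bins + 1]
    return spectrum[target] / spectrum[neighbors].mean()

# Synthetic example: 4 s of noise plus a 15 Hz component at 256 Hz sampling.
fs, stim = 256.0, 15.0
t = np.arange(0, 4, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * stim * t) + np.random.randn(t.size)
print(f"SSVEP SNR at {stim} Hz: {ssvep_snr(eeg, fs, stim):.1f}")
```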
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance (Nurochek-Pro System) |
---|---|
Sensitivity | 0.8551 (85.51%) with a 95% confidence interval of (0.7496, 0.9283) |
Specificity | 0.6705 (67.05%) with a 95% confidence interval of (0.5621, 0.7670) |
Positive Predictive Value (PPV) | 67.0% |
Negative Predictive Value (NPV) | 85.5% |
Ability to differentiate between subjects with and without mTBI | "The study demonstrated that the device can differentiate between subjects with and without mTBI." |
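Note that PPV and NPV depend on the prevalence of mTBI in the tested population, while sensitivity and specificity do not. Here's a minimal sketch of the standard relationship; the prevalence value below is a placeholder, not a figure from the submission:

```python
# Minimal sketch, assuming the standard prevalence-dependent definitions:
# PPV and NPV derived from sensitivity, specificity, and prevalence.
def ppv_npv(sensitivity: float, specificity: float, prevalence: float) -> tuple[float, float]:
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Prevalence of 0.40 is hypothetical, for illustration only:
ppv, npv = ppv_npv(sensitivity=0.8551, specificity=0.6705, prevalence=0.40)
print(f"PPV={ppv:.3f}, NPV={npv:.3f}")
```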
Study Details
1. Sample size used for the test set and data provenance:
- Sample Size: 157 individual steady-state visual-evoked potential (SSVEP) readings.
- Data Provenance: Not explicitly stated, but the study involved "study subjects (age range 12-49 years)" and the clinical research protocol required readings to be collected within 120 hours of suspected head injury, in addition to a clinical evaluation by a licensed physician. This suggests prospective data collection in a clinical setting. The manufacturer, Headsafe MFG Pty Ltd., is based in Australia, which may suggest the data originated there.
2. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Not explicitly stated as a specific number. However, the ground truth was established by "Each highly trained physician". This implies multiple physicians were involved.
- Qualifications of Experts: "highly trained physician" who used "their education and experience to deliver their mTBI determination." This likely includes neurologists or emergency medicine physicians, given the context of mTBI diagnosis.
3. Adjudication method for the test set:
- Adjudication Method: Not explicitly stated if a formal adjudication method (like 2+1 or 3+1) was used. The text mentions "Each highly trained physician used their education and experience to deliver their mTBI determination." This suggests individual clinical determination, but not necessarily a consensus or tie-breaking process.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: Not explicitly stated that an MRMC comparative effectiveness study was conducted involving human readers with and without AI assistance. The study described focuses on the device's standalone performance against clinical diagnosis.
- Effect Size: Therefore, no effect size for human reader improvement with AI assistance is provided.
5. If a standalone (i.e., algorithm only, without human-in-the-loop performance) study was done:
- Standalone Performance: Yes, the described performance (Sensitivity, Specificity, PPV, NPV) represents the standalone performance of the Nurochek-Pro System's classification algorithm. The clinical investigation aimed to "evaluate the performance of the Nurochek-Pro System against clinical diagnosis."
6. The type of ground truth used:
- Ground Truth Type: Clinical diagnosis by a licensed healthcare professional. This diagnosis was based on "a neurological examination, a concussion-related signs and symptom evaluation, and a review of all relevant information provided by the study subject in relation to their injury."
7. The sample size for the training set:
- Sample Size: 272 individual steady-state visual-evoked potential (SSVEP) readings.
8. How the ground truth for the training set was established:
- Training Set Ground Truth: The text states, "The Headsafe classification Algorithm used in this device was generated with 272 individual steady-state visual-evoked potential (SSVEP) readings, in which there was a 24.6% prevalence of mTBI." While not explicitly detailed for the training set ground truth establishment, it can be inferred that a similar process involving clinical evaluation by licensed physicians (as described for the validation set) was used to establish the mTBI status for the cases used to generate the algorithm. The "clinical research protocol required readings to be collected within 120 hours of the suspected head injury, in addition to a clinical evaluation by a licensed physician" applies to the study subjects from which the database was generated.
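(For context, a 24.6% prevalence across 272 readings corresponds to roughly 67 mTBI-positive and 205 mTBI-negative training readings.)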
(172 days)
84111
Re: K243185
Trade/Device Name: REMI Remote EEG Monitoring System
Regulation Number: 21 CFR 882.1400 (Electroencephalograph; Electrode, Cutaneous)
Regulatory Class: Class II
Product Code and Regulation Number: OMC - Sec. 882.1400
The REMI Remote EEG Monitoring System is indicated for use in healthcare settings where near real-time and/or remote EEG is warranted and in ambulatory settings where remote EEG is warranted. REMI System uses single patient, disposable, wearable sensors intended to amplify, capture, and wirelessly transmit a single channel of electrical activity of the brain for a duration up to 30 days.
REMI System uses the REMI Mobile software application that runs on qualified commercial off-the-shelf mobile computing platforms. REMI Mobile displays user setup information to trained medical professionals and provides notifications to medical professionals and ambulatory users. REMI Mobile receives and transmits data from connected REMI Sensors to the secure REMI Cloud where it is stored and prepared for review on qualified EEG viewing software.
REMI System does not make any diagnostic conclusion about the subject's condition and is intended as a physiological signal monitor. REMI System is indicated for use with adult and pediatric patients (1+ years).
The REMI System has three major components:
- REMI Sensor: A disposable EEG sensor which is placed on the patient's scalp using a conductive REMI Sticker.
- REMI Mobile: A mobile medical application designed to run on a qualified commercial-off-the-shelf mobile computing platform (an Android tablet for use in healthcare settings, and a portable/wearable Android device (phone or smartwatch) for use in ambulatory settings), acquire EEG data transmitted from REMI Sensors, and then transmit the EEG data and associated patient information via wireless encrypted transmission to REMI Cloud.
- REMI Cloud: A HIPAA-compliant secure cloud storage and data processing platform where data is processed into a qualified EEG reviewing software format for neurological review.
The provided document is a 510(k) Pre-market Notification Summary for the REMI Remote EEG Monitoring System (K243185). This document details the device's characteristics, indications for use, and the studies conducted to demonstrate its substantial equivalence to a predicate device (REMI Remote EEG Monitoring System, K230933).
Based on the provided information, here's a description of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily relies on comparisons to its own predicate device (K230933) and general performance testing against recognized standards. Specific quantitative acceptance criteria are not explicitly detailed in a table format within this summary, but the general assertion is that the device met all predetermined acceptance criteria derived from the listed tests.
Test Type | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|
General Electrical Safety, EMC, and Ingress Protection | Compliance with relevant IEC standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-2-26, IEC 60601-1-11). | Testing conducted to and met the requirements of the specified IEC standards. |
Wireless Technology Testing | Wireless connectivity can be initiated, is stable, and accurately transfers EEG signals. Connection maintained for a minimum of 48 continuous hours. | Wireless connectivity was tested (in accordance with IEC 60601-1-2 and IEC 60601-1-11 requirements) and demonstrated to initiate, maintain stability, and accurately transfer EEG signals. A wireless connection was confirmed to be maintained for a minimum of 48 continuous hours. |
Environmental/Shelf life | Device functions as intended after accelerated aging. | Accelerated aging and subsequent functional verification testing were performed. (Outcome states "met all predetermined acceptance criteria"). |
Packaging Performance | Device maintains integrity and function after ship testing. | Ship testing and subsequent functional verification testing were performed. (Outcome states "met all predetermined acceptance criteria"). |
Biocompatibility | Long-term contact with intact skin is safe (non-cytotoxic, non-sensitizing, non-irritating). | Biocompatibility testing for long-term contact with intact skin was performed per ISO-10993-1, ISO 10993-10, and ISO 10993-23 for all patient-contacting components. (Outcome states "safe and effective for its intended use" and "met all predetermined acceptance criteria"). |
Usability/Human Factors | Tasks associated with device use are safe and effective. | Human factors/usability testing was conducted to evaluate tasks associated with use of the device. (Outcome states "met all predetermined acceptance criteria"). |
Software Verification Testing | End-to-end functionality: Acquire EEG, transmit to mobile, transmit to cloud, viewable in qualified software. Essential performance met. | End-to-end testing confirmed: (1) REMI System acquires EEG signals from REMI Sensors and transmits to REMI Mobile software, (2) REMI Mobile transfers EEG data to REMI Cloud, and (3) final EEG file format within REMI Cloud is viewable in qualified EEG viewing software. This demonstrated that the REMI System meets its Essential Performance and fulfills system requirements. |
Clinical Performance (Extension to 1-6 years pediatric patients) | REMI System (including new hydrocolloid REMI Sticker) is safe and effective for monitoring EEG in pediatric patients aged 1 to |
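For illustration, here's a hypothetical sketch of the kind of end-to-end check described in the Software Verification row: EEG packets flow from a sensor through a mobile relay into cloud storage, and a verification step confirms nothing was lost or reordered. The component names and packet format are invented; the actual REMI protocol is not described in this summary.

```python
# Hypothetical sketch of an end-to-end acquire-transmit-store check for a
# single-channel wearable EEG system. Names and formats are illustrative.
from dataclasses import dataclass, field

@dataclass
class Packet:
    seq: int                 # sequence number for integrity checking
    samples: list[float]     # one buffer of single-channel EEG samples

@dataclass
class CloudStore:
    packets: list[Packet] = field(default_factory=list)

    def ingest(self, pkt: Packet) -> None:
        self.packets.append(pkt)

    def verify_contiguous(self) -> bool:
        """Every sequence number present exactly once, in order."""
        return [p.seq for p in self.packets] == list(range(len(self.packets)))

def mobile_relay(stream, cloud: CloudStore) -> None:
    for pkt in stream:       # in the real system: wireless in, encrypted out
        cloud.ingest(pkt)

sensor_stream = (Packet(seq=i, samples=[0.0] * 256) for i in range(100))
cloud = CloudStore()
mobile_relay(sensor_stream, cloud)
assert cloud.verify_contiguous(), "gap or reorder detected in EEG stream"
print(f"{len(cloud.packets)} packets stored and verified")
```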
(254 days)
Monroeville, Pennsylvania 15146
Re: K241960
Trade/Device Name: DeepRESP Regulation Number: 21 CFR 882.1400
Electroencephalograph
DeepRESP is an aid in the diagnosis of various sleep disorders where subjects are often evaluated during the initiation or follow-up of treatment of various sleep disorders. The recordings to be analyzed by DeepRESP can be performed in a hospital, patient home, or an ambulatory setting. It is indicated for use with adults (22 years and above) in a clinical environment by or on the order of a medical professional.
DeepRESP is intended to mark sleep study signals to aid in the identification of events and annotation of traces; automatically calculate measures obtained from recorded signals (e.g., magnitude, time, frequency, and statistical measures of marked events); infer sleep staging with arousals with EEG and in the absence of EEG. All output is subject to verification by a medical professional.
DeepRESP is a cloud-based software as a medical device (SaMD), designed to perform analysis of sleep study recordings, with and without EEG signals, providing data for the assessment and diagnosis of sleep-related disorders. Its algorithmic framework provides the derivation of sleep staging including arousals, scoring of respiratory events and key parameters such as the Apnea-Hypopnea Index (AHI).
DeepRESP is hosted on a serverless stack. It consists of:
- A web Application Programming Interface (API) intended to interface with a third-party client application, allowing medical professionals to access DeepRESP's analytical capabilities.
- Predefined sequences called Protocols that run data analyses, including artificial intelligence and rule-based models for the scoring of sleep studies, and a parameter calculation service.
- A Result storage using an object storage service to temporarily store outputs from the DeepRESP Protocols.
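To make the interaction pattern concrete, here's a hypothetical client sketch: submit a recording to the web API, start a Protocol run, and poll result storage. Endpoint paths, field names, and the base URL are invented for illustration; the real DeepRESP API is not documented in this summary.

```python
# Hypothetical client for a cloud-based sleep-scoring API. All endpoints,
# field names, and the base URL below are invented for illustration.
import time
import requests

BASE = "https://api.example-deepresp.invalid"  # placeholder, not a real endpoint

def score_recording(path: str, protocol: str = "psg-with-eeg") -> dict:
    # Upload the sleep recording (e.g., an EDF file).
    with open(path, "rb") as f:
        upload = requests.post(f"{BASE}/recordings", files={"edf": f}, timeout=60)
    upload.raise_for_status()
    rec_id = upload.json()["id"]

    # Trigger a predefined analysis Protocol on the uploaded recording.
    run = requests.post(f"{BASE}/protocols/{protocol}/runs",
                        json={"recording": rec_id}, timeout=30)
    run.raise_for_status()
    run_id = run.json()["id"]

    # Poll temporary result storage until the Protocol finishes.
    while True:
        result = requests.get(f"{BASE}/runs/{run_id}/results", timeout=30)
        if result.status_code == 200:
            return result.json()  # e.g., sleep stages, respiratory events, AHI
        time.sleep(5)

# scores = score_recording("night1.edf")
# print(scores["ahi"])
```

Polling a temporary result store matches the summary's description of outputs being held only transiently in object storage.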
Here's a breakdown of the acceptance criteria and the study details for the DeepRESP device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria & Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" as a separate table, but it compares DeepRESP's performance against manual scoring and predicate devices. I've extracted the performance metrics that effectively serve as acceptance criteria given the "non-inferiority" and "superiority" claims against established devices.
Metric (Against Manual Scoring) | DeepRESP Performance (95% CI) | Equivalent Predicate Performance (Nox Sleep System K192469) (95% CI) | Superiority/Non-inferiority Claim | Relevant Study Type |
---|---|---|---|---|
Severity Classification (AHI ≥ 5) | ||||
PPA% | 87.5 [86.2, 89.0] | 73.6 [PPA% reported for predicate] | Superiority | Type I/II |
NPA% | 91.9 [87.4, 95.8] | 65.8 [NPA% reported for predicate] | Non-inferiority | Type I/II |
OPA% | 87.9 [86.6, 89.3] | 73.0 [OPA% reported for predicate] | Superiority | Type I/II |
Severity Classification (AHI ≥ 15) | ||||
PPA% | 74.1 [72.0, 76.5] | 54.5 [PPA% reported for predicate] | Superiority | Type I/II |
NPA% | 94.7 [93.2, 96.2] | 89.8 [NPA% reported for predicate] | Non-inferiority | Type I/II |
OPA% | 81.5 [79.9, 83.3] | 67.2 [OPA% reported for predicate] | Superiority | Type I/II |
Respiratory Events | ||||
PPA% | 72.0 [70.9, 73.2] | 58.5 [PPA% reported for predicate] | Non-inferiority (Superiority for OPA claimed) | Type I/II |
NPA% | 94.2 [94.0, 94.5] | 95.4 [NPA% reported for predicate] | Non-inferiority | Type I/II |
OPA% | 87.2 [86.8, 87.5] | 81.7 [OPA% reported for predicate] | Superiority | Type I/II |
Sleep State Estimation (Wake) | ||||
PPA% | 95.4 [95.1, 95.6] | 56.7 [PPA% reported for predicate] | Non-inferiority | Type I/II |
NPA% | 94.6 [94.4, 94.9] | 98.1 [NPA% reported for predicate] | Non-inferiority | Type I/II |
OPA% | 94.8 [94.6, 95.0] | 89.8 [OPA% reported for predicate] | Non-inferiority | Type I/II |
Arousal Events | ||||
ArI ICC (against Sleepware G3 K202142) | 0.63 [ArI ICC] | 0.794 [ArI ICC for additional predicate] | Non-inferiority | Type I/II |
PPA% | 62.2 [61.2, 63.1] | N/A (Manual for primary predicate) | N/A | Type I/II |
NPA% | 89.3 [88.8, 89.7] | N/A (Manual for primary predicate) | N/A | Type I/II |
OPA% | 81.4 [81.1, 81.7] | N/A (Manual for primary predicate) | N/A | Type I/II |
Type III Severity Classification (AHI ≥ 5) | ||||
PPA% | 93.1 [92.2, 93.9] | 82.4 [PPA% reported for predicate] | Superiority | Type III |
NPA% | 81.1 [75.1, 86.6] | 56.6 [NPA% reported for predicate] | Non-inferiority | Type III |
OPA% | 92.5 [91.7, 93.3] | 81.1 [OPA% reported for predicate] | Non-inferiority | Type III |
Type III Respiratory Events | ||||
PPA% | 75.4 [74.6, 76.1] | 58.5 [PPA% reported for predicate] | Superiority | Type III |
NPA% | 87.8 [87.4, 88.1] | 95.4 [NPA% reported for predicate] | Non-inferiority | Type III |
OPA% | 83.7 [83.4, 84.0] | 81.7 [OPA% reported for predicate] | Superiority | Type III |
Type III Arousal Events | ||||
ArI ICC (against Sleepware G3 K202142) | 0.76 [ArI ICC] | 0.73 [ArI ICC for additional predicate] | Non-inferiority | Type III |
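For reference, the agreement metrics in the table follow the usual definitions (PPA = TP/(TP+FN), NPA = TN/(TN+FP), OPA = (TP+TN)/total, with manual scoring as the reference), and AHI severity is conventionally bucketed at cutoffs of 5, 15, and 30 events/hour. A minimal sketch under those assumptions:

```python
# Minimal sketch: epoch/event-level agreement against manual scoring, plus
# conventional AHI severity buckets. The cutoffs follow common clinical
# convention; the submission's exact event-matching rules are not stated.
def agreement(auto: list[bool], manual: list[bool]) -> dict[str, float]:
    tp = sum(a and m for a, m in zip(auto, manual))
    tn = sum(not a and not m for a, m in zip(auto, manual))
    fp = sum(a and not m for a, m in zip(auto, manual))
    fn = sum(not a and m for a, m in zip(auto, manual))
    return {
        "PPA": tp / (tp + fn),
        "NPA": tn / (tn + fp),
        "OPA": (tp + tn) / len(auto),
    }

def severity(ahi: float) -> str:
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# Hypothetical per-epoch labels, for illustration only:
manual = [True, True, False, False, True, False]
auto   = [True, False, False, False, True, True]
print(agreement(auto, manual), severity(ahi=17.3))
```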
2. Sample Size Used for the Test Set and Data Provenance:
- Type I/II Studies (EEG present): 2,224 sleep recordings
- Type III Studies (No EEG): 3,488 sleep recordings (including 2,213 Type I recordings and 1,275 Type II recordings, processed to utilize only Type III relevant signals).
- Provenance: Retrospective study. Data originated from sleep clinics in the United States, collected as part of routine clinical work for patients suspected of sleep disorders. The patient population showed diversity in age, BMI, and race/ethnicity (Caucasian or White, Black or African American, Other, Not Reported) and was considered representative of patients seeking medical services for sleep disorders in the United States.
3. Number of Experts and Qualifications for Ground Truth:
The document explicitly states that the studies used "manually scored sleep recordings" but does not specify the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). It implicitly relies on the quality of "manual scoring" from routine clinical work in US sleep clinics as the ground truth.
4. Adjudication Method for the Test Set:
The document does not describe any specific adjudication method (e.g., 2+1, 3+1). It refers to "manual scoring" as the established ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
No, an MRMC comparative effectiveness study was not reported. The study design was a retrospective data analysis comparing the algorithm's performance against existing manual scoring (ground truth) and established predicate devices. There is no information about human readers improving with AI vs. without AI assistance. The device is intended to provide automatic scoring subject to verification by a medical professional.
6. Standalone (Algorithm Only) Performance:
Yes, the study report describes the standalone performance of the DeepRESP algorithm. The reported PPA, NPA, OPA percentages, and ICC values represent the agreement of the automated scoring by DeepRESP compared to the manual ground truth. The device produces output "subject to verification by a medical professional," but the performance metrics provided are for the algorithmic output itself.
7. Type of Ground Truth Used:
The ground truth used was expert consensus (manual scoring). The document states "It used manually scored sleep recordings... The studies were done by evaluating the agreement in scoring and clinical indices resulting from the automatic scoring by DeepRESP compared to manual scoring."
8. Sample Size for the Training Set:
The document does not explicitly state the sample size used for the training set. The clinical validation study is described as a "retrospective study" used for validation, but details about the training data are not provided in this summary.
9. How the Ground Truth for the Training Set Was Established:
The document does not specify how the ground truth for the training set was established. It only describes the ground truth for the validation sets as "manually scored sleep recordings" from routine clinical work.
(94 days)
Regulation Number: 882.1400
The IceCaps (IceCap 2, IceCap 2 Small, IceCap Neonate (Sizes XS, S, & M)) are medical devices used as EEG electrodes. They are used by Healthcare Professionals on a patient in case of neurological disorders with a short or long-term EEG record (up to 72 hours).
IceCap 2 shall be placed on patients weighing at least 10 kg (22.05 lbs) and having a head circumference above 43 cm (16.93 inches).
IceCap Neonate shall be placed on the head of babies, newborns and premature babies.
The IceCaps (IceCap 2, IceCap 2 Small, IceCap Neonate (Sizes XS, S, & M)) are medical devices used as EEG electrodes.
The IceCaps are a single use cap which connects to the marketed EEG recorders using an IceAdapter or Touchproof adapter.
Electrode placement in the IceCap product line follows the 10/20 system.
The conductive tracks of the Flexible Printed Circuit are used to conduct EEG signals from the electrodes to the connectors.
The IceCap product line was not evaluated with comparative effectiveness studies involving human readers or standalone algorithm performance studies. The device is a cutaneous electrode, and its evaluation focuses on safety and performance according to relevant standards, not on AI-driven diagnostic accuracy.
Here's a breakdown of the acceptance criteria and supporting studies for the IceCap product line:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Topic | Description/Standard | Device Performance (IceCap product line) |
---|---|---|
Indications for Use | For use as EEG electrodes by Healthcare Professionals on patients with neurological disorders for short or long-term EEG record (up to 72 hours). Specific weight and head circumference ranges for IceCap 2 and IceCap Neonate. | Meets stated indications for use, including up to 72 hours of use, matching predicate device (2). |
Safety Standards | Compliance with electrical safety and electromagnetic compatibility standards (e.g., IEC 60601-1, IEC 60601-1-2, IEC 60601-1-11). | Conforms to IEC 60601-1:2005, IEC 60601-1:2005/AMD1:2012, IEC 60601-1:2005/AMD2:2020, IEC 60601-1-2: 2014 + A1 (2020), IEC 60601-1-11:2015, IEC 60601-1-11:2015/AMD1:2020. |
Biocompatibility | Materials in contact with the patient must be biocompatible (ISO 10993-1). | Biocompatible and compliant with ISO 10993-1 Fifth edition 2018-08. |
Duration of Use | Up to 72 hours of continuous use. | Qualifies for 72 hours of use. |
Fit to Form and Usability | Ability to accommodate different head sizes and proper installation. | Qualified via fit to form test and usability test for installation. |
Signal Quality (Implied) | The number of electrodes and material composition should not negatively impact the quality of EEG signal. | Qualified via impedance test and general quality of signal. |
Material Composition | Specific materials used for electrodes and adhesives. | Materials listed (PET, Ag/AgCl inks, insulation inks, stiff PETG film, skin/silicone adhesive, graphical ink, protective polyolefin foam on acrylic adhesive) are biocompatible. |
Storage Life | Expected shelf life of the device. | 12 months. (Matches predicate 2, but shorter than predicate 1. This difference does not affect safety and effectiveness.) |
Single Use/Sterility | Non-sterile, single-use device. | Single use, non-sterile. |
Montage System | Conforms to the 10/20 System for electrode placement. | 10/20 System. |
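The impedance test noted in the table implies a measurable pass/fail screen per electrode site. Here's a purely illustrative sketch; the 10 kΩ threshold and channel readings are assumptions, not values from the submission:

```python
# Illustrative sketch only: a pass/fail impedance screen of the sort an
# "impedance test" implies for cutaneous EEG electrodes. The threshold
# and readings below are assumptions, not values from the submission.
THRESHOLD_KOHM = 10.0  # assumed acceptance limit

def impedance_report(measured_kohm: dict[str, float]) -> dict[str, bool]:
    """Map each 10/20-system electrode site to pass (True) / fail (False)."""
    return {site: z <= THRESHOLD_KOHM for site, z in measured_kohm.items()}

readings = {"Fp1": 4.2, "Fp2": 5.1, "Cz": 12.8, "O1": 3.9}  # hypothetical
report = impedance_report(readings)
failing = [site for site, ok in report.items() if not ok]
print("re-prep sites:", failing or "none")
```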
2. Sample Size Used for the Test Set and Data Provenance
The provided document does not explicitly state the sample size used for specific test sets (e.g., for fit-to-form, impedance, or usability tests). It notes that "clinical data were not necessary to determine substantial equivalence," indicating that animal or human subject testing for diagnostic or comparative effectiveness was not performed as a primary means of establishing substantial equivalence for this type of device.
The document does not provide information on the country of origin of the data or whether the data was retrospective or prospective. The studies primarily involve non-clinical performance and safety testing.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
As this device is an EEG electrode, the primary "ground truth" for its performance is its ability to meet electrical and biocompatibility standards, and to effectively acquire EEG signals as confirmed by non-clinical tests. There is no mention of human experts being used to establish a ground truth for a diagnostic outcome, as the device itself does not provide diagnostic interpretations.
4. Adjudication Method for the Test Set
Not applicable. The evaluation performed is based on compliance with harmonized standards and engineering tests, not on human-based adjudication of diagnostic outcomes.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
Not applicable. The IceCap product line is an EEG electrode, not an AI-powered diagnostic tool. Therefore, MRMC studies with human readers are not relevant to its clearance.
6. If a Standalone (i.e., algorithm only, without human-in-the-loop performance) study was done
Not applicable. The device is a hardware component (EEG electrode) and does not involve a standalone algorithm for performance evaluation in a diagnostic context.
7. The type of ground truth used
The ground truth used for evaluating the IceCap product line is based on established engineering standards and regulatory requirements for medical devices, particularly for cutaneous electrodes. This includes:
- Performance standards: e.g., electrical impedance, signal integrity (implied by "general quality of signal").
- Safety standards: e.g., electrical safety (IEC 60601-1), electromagnetic compatibility (IEC 60601-1-2), usability for medical electrical equipment in the home healthcare environment (IEC 60601-1-11).
- Biocompatibility standards: (ISO 10993-1) for materials in contact with the patient.
- Functional tests: Fit-to-form, usability for installation.
The "truth" is whether the device meets these specified, measurable criteria.
8. The Sample Size for the Training Set
Not applicable. This device is a hardware product (EEG electrode) and does not involve AI or machine learning algorithms that require a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. This device is a hardware product (EEG electrode) and does not involve AI or machine learning algorithms that require a training set or ground truth for training.