510(k) Data Aggregation
(56 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The Natus BrainWatch System, including the Natus BrainWatch Headband, is intended to record and store EEG signals and present these signals visually to assist trained medical staff in making neurological diagnoses in patients aged 2 years and older.
The device does not provide any diagnostic conclusions about the subject's condition and does not provide any automated alerts of an adverse clinical event. The Natus BrainWatch System is intended for use within a professional healthcare facility or clinical research environment. The Natus BrainWatch Headband is intended for single-patient use.
The Natus BrainWatch system is a reliable, mobile, and easy-to-use EEG device intended to record, store, and visually present EEG signals to assist trained medical staff in making neurological diagnoses in patients.
The system includes a touchscreen tablet as its primary interface. The Natus BrainWatch Headband is a single-use disposable headpiece with an integrated array of 10 passive electrodes that are applied to the patient's head to record EEG signals when connected to an amplifier.
The Natus BrainWatch System consists of the following components: a tablet, IV pole handle, amplifier, headband(s), gel pods, and a mobile application:
- Touchscreen tablet with charger
- Single-use disposable elastic fabric headband with 10 electrodes (available in sizes Small, Medium, and Large), containing:
  - Hydroflex patch with 2 built-in electrodes
  - 8 electrodes, labeled L1-L4 and R1-R4, attached to gel pods used to improve impedance levels
- Wireless amplifier that attaches to the headband and connects to the tablet via Bluetooth
- IV pole handle that holds the tablet for a hands-free experience
- Gel pods that attach to the electrodes to improve impedance levels
- Mobile application
The Natus BrainWatch System is a portable 10-channel EEG monitoring system. 10 patient electrodes are used to record the 10 channels. Channels 1-5 should be used for the patient's left hemisphere, with channel 1 at the front of the patient's head and channel 5 at the back. Channels 6-10 should be used for the patient's right hemisphere, with channel 6 at the front of the patient's head and channel 10 at the back.
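The hemisphere and front-to-back ordering described above can be captured in a small helper. This is a hypothetical sketch for illustration only: the summary states only the hemisphere assignment and ordering of channels 1-10, not an electrode-label-to-channel mapping, so the function name and return convention here are assumptions.

```python
def channel_location(channel: int) -> tuple[str, int]:
    """Return (hemisphere, front-to-back position 1..5) for a BrainWatch
    channel, per the layout described in the summary: channels 1-5 cover
    the left hemisphere front (1) to back (5), channels 6-10 the right
    hemisphere front (6) to back (10)."""
    if not 1 <= channel <= 10:
        raise ValueError(f"channel must be 1-10, got {channel}")
    if channel <= 5:
        return ("left", channel)
    return ("right", channel - 5)
```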
EEG recording files are transferred wirelessly from the Natus BrainWatch Tablet to a computer using a Wi-Fi connection. The EEG sessions from Natus BrainWatch are stored using a cloud-based solution, which allows the end user to view studies at a later date. Recorded sessions can be reviewed remotely on a computer using the NeuroWorks EEG software.
The device is a portable, 10-channel EEG monitoring system. The device connects to a headband consisting of 10 patient electrodes which are used to form the 10 channels and may be used with any scalp EEG electrodes.
The system acquires the EEG signals of a patient and presents the EEG signals in visual formats in real time. The EEG recordings are displayed on a computer or tablet using an EEG viewer software. The visual signals assist trained medical staff to make neurological diagnoses. It does not provide any diagnostic conclusion about the subject's condition and does not provide any automated alerts of an adverse clinical event.
A Micro-USB cable is used to connect the Natus BrainWatch System to a power adapter for charging. Bluetooth is used to connect the amplifier to the tablet to transfer EEG recording files. When the Natus BrainWatch System is connected to a power adapter or a computer, all EEG acquisition functions are automatically disabled.
The Natus BrainWatch System is a portable 10-channel EEG monitoring system intended to record, store, and visually present EEG signals to assist trained medical staff in making neurological diagnoses in patients aged 2 years and older. It does not provide diagnostic conclusions or automated alerts of adverse clinical events.
Here's a breakdown of the acceptance criteria and the study verifying the device's performance:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text details the performance verification against various standards rather than specific quantitative acceptance criteria for clinical performance. The focus is on demonstrating safety and electrical performance equivalence to predicate devices.
Acceptance Criteria Category | Reported Device Performance (Summary) |
---|---|
Electrical Safety | Verified in accordance with IEC 60601-1-6:2010/AMD2:2020, IEC 60601-1:2005/AMD2:2020, and IEC 80601-2-26:2019. |
Electromagnetic Compatibility | Verified in accordance with IEC 60601-1-2 Ed 4.1. Underwent Wireless Coexistence testing per ANSI C63.27-2021. FCC Part 15 certified. |
Packaging & Handling | Successfully passed verification as per ASTM D4169-22. |
Bench Verification & Validation (Functional/Performance) | Successfully passed performance verification and validation in accordance with internal requirements and specifications. Met defined acceptance criteria for functional and performance characteristics. |
EEG Specific Performance | Met requirements for basic safety and essential performance of electroencephalographs per IEC 80601-2-26. Met Performance Criteria of FDA Guidance "Cutaneous Electrodes for Recording Purposes - Performance Criteria for Safety and Performance Based Pathway". |
Battery Safety | Tested per IEC 62133. |
Biocompatibility | Patient contacting components (including conductive electrolyte gel) verified with Irritation, Sensitization, and Cytotoxicity testing per ISO 10993-5:2009, ISO 10993-23:2021, and ISO 10993-10:2021. |
Shelf-life | Shelf-life testing performed. |
Wireless Technology | Utilizes Bluetooth 5.0 technology (similar to reference device CGX Quick-20m K203331). No interference with other electronic devices; coexists well in a typical medical environment. |
Analogue-to-Digital Conversion | 24-Bit Delta-Sigma (Same as Predicate 1). |
Sampling Rate | 250 Hz (Same as Predicate 1). |
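From the stated specifications (10 channels, 250 Hz sampling, 24-bit samples), a back-of-the-envelope bound on raw acquisition throughput follows. This is a sketch of the arithmetic only; actual file sizes depend on Natus's recording format, which the summary does not describe.

```python
CHANNELS = 10          # portable 10-channel EEG system
SAMPLE_RATE_HZ = 250   # stated sampling rate
BYTES_PER_SAMPLE = 3   # 24-bit delta-sigma converter output

# Raw sample data rate, before any container/metadata overhead.
bytes_per_second = CHANNELS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE
mb_per_hour = bytes_per_second * 3600 / 1e6

print(bytes_per_second)  # 7500 bytes/s
print(mb_per_hour)       # 27.0 MB per recorded hour
```

At roughly 27 MB per hour of raw samples, multi-hour studies remain small enough to transfer over Wi-Fi and archive in a cloud-based store, consistent with the workflow described above.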
2. Sample size used for the test set and the data provenance
The document does not specify a separate "test set" in the context of clinical data for algorithmic performance. The testing described is primarily bench and engineering verification against international standards. Therefore, information about sample size, country of origin, or retrospective/prospective nature of a clinical test set is not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable, as the provided information does not describe a study involving clinical ground truth establishment by experts for a test set. The device assists trained medical staff in making diagnoses but does not provide diagnostic conclusions itself.
4. Adjudication method for the test set
Not applicable, as there is no described clinical study involving a test set and ground truth adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is described. The device is a data acquisition and visualization tool; it does not explicitly feature AI for interpretation or diagnostic assistance.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
Not applicable. The "Natus BrainWatch System" is an electroencephalograph (EEG) device that records and visually presents signals for trained medical staff to interpret. It does not provide any diagnostic conclusions or automated alerts, meaning it is not a standalone diagnostic algorithm.
7. The type of ground truth used
Not applicable. The testing focuses on engineering performance, safety, and functional compliance rather than diagnostic accuracy against a clinical ground truth (e.g., pathology, outcomes data, or expert consensus).
8. The sample size for the training set
Not applicable, as the document does not describe the development or validation of an AI algorithm with a training set. The device's function is to capture and display raw EEG signals.
9. How the ground truth for the training set was established
Not applicable, as there is no described training set or AI algorithm for which ground truth would need to be established.
(115 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The ALGO Pro Newborn Hearing Screener is a mobile, noninvasive instrument used to screen infants for hearing loss. The screener uses Automated Auditory Brainstem Response (AABR®) for automated analysis of Auditory Brainstem Response (ABR) signals recorded from the patient. The screener is intended for babies between the ages of 34 weeks (gestational age) and 6 months. Babies should be well enough to be ready for discharge from the hospital, and should be asleep or in a quiet state at the time of screening. The screener is simple to operate. It does not require special technical skills or interpretation of results. Basic training with the equipment is sufficient to learn how to screen infants who are in good health and about to be discharged from the hospital. A typical screening process can be completed in 15 minutes or less. Sites appropriate for screening include the well-baby nursery, NICU, mother's bedside, audiology suite, outpatient clinic, or doctor's office.
The ALGO® Pro is a fully automated hearing screening device used to screen infants for hearing loss. It provides consistent, objective pass/refer results. The ALGO Pro utilizes Auditory Brainstem Response (ABR) as the hearing screening technology, which allows screening of the entire hearing pathway from the outer ear to the brainstem. The ABR signal is evoked by a series of acoustic broadband transient stimuli (clicks) presented to the subject's ears using acoustic transducers and recorded by sensors placed on the skin of the patient.

The ALGO Pro generates each click stimulus and presents it to the patient's ear using acoustic transducers attached to disposable acoustic earphones. The click stimulus elicits a sequence of distinguishable electrophysiological signals produced as a result of signal transmission and neural responses within the auditory nerve and brainstem of the infant. Disposable sensors applied to the infant's skin pick up this evoked response, and the signal is transmitted to the screener via the patient electrode leads. The device employs advanced signal processing technology, such as amplification, digital filtering, artifact rejection, noise monitoring, and noise-weighted averaging, to separate the ABR from background noise and from other brain activity.

The ALGO Pro uses a statistical algorithm based on binomial statistics to determine whether there is a response to the stimulus that matches the ABR template of a normal-hearing newborn. If a response is detected that is consistent with the ABR template derived from normal-hearing infants (automated auditory brainstem response technology, AABR), the device provides an automated 'Pass' result. A 'Refer' result is automatically generated if the device cannot detect an ABR response with sufficient statistical confidence or one that is consistent with the template.
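The binomial-statistics decision described above can be sketched schematically. The code below is not Natus's AABR algorithm (the actual template, noise weighting, artifact rejection, and validated thresholds are proprietary and not disclosed in the summary); it only illustrates the general idea of scoring recorded sweeps against a template and testing the match count against chance.

```python
import math

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided tail probability of observing >= `successes` template
    matches in `trials` sweeps purely by chance."""
    return sum(math.comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

def screen(sweeps, template, alpha=0.001):
    """Toy AABR-style decision (illustrative only): count sweeps whose
    correlation with the template is positive, then require that count
    to be statistically inconsistent with chance before a 'Pass'."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                        sum((b - my) ** 2 for b in y))
        return num / den if den else 0.0
    matches = sum(1 for s in sweeps if corr(s, template) > 0)
    return "Pass" if binomial_p_value(matches, len(sweeps)) < alpha else "Refer"
```

With a clear evoked response, nearly all sweeps correlate positively with the template, so the chance probability (p = 0.5 per sweep) collapses and a 'Pass' results; with no consistent response, about half the sweeps match and the device defaults to 'Refer'.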
Here's a breakdown of the acceptance criteria and study information for the ALGO Pro Newborn Hearing Screener, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA submission primarily focuses on demonstrating substantial equivalence to a predicate device (ALGO 5) rather than setting specific, numerical acceptance criteria for a new clinical performance study. The "acceptance criteria" here are implied through the comparison with the predicate device's established performance and the demonstration that the ALGO Pro performs comparably.
Acceptance Criterion (Implied) | Reported Device Performance (ALGO Pro / Comparative) |
---|---|
Safety | Complies with: IEC 60601-1 Ed. 3.2, IEC 60601-2-40 Ed. 2.0, IEC 60601-1-6, IEC 62366-1, IEC62304, IEC 62133-2, IEC 60601-1-2 Ed. 4.1, IEC 60601-4-2, FCC Part 15. |
Biocompatibility | Passed Cytotoxicity, Sensitization, and Irritation tests (ISO 10993-1:2018 for limited contact). |
Mechanical Integrity | Passed drop and tumble, cable bend cycle, electrode clip cycle, power button cycle, connector mating cycle, bassinet hook cycle, and docking station latch/pogo pin cycle testing. |
Effectiveness (AABR Algorithm Performance) | Utilizes the exact same AABR algorithm as predicate ALGO 5. |
Algorithmic Sensitivity | 99.9% for each ear (using binomial statistics, inherited from ALGO AABR algorithm). |
Overall Clinical Sensitivity | 98.4% (combined results from independent, peer-reviewed clinical studies using the ALGO AABR algorithm, e.g., Peters (1986), Herrmann et al. (1995)). |
Specificity | 96% to 98% (from independent, peer-reviewed clinical studies using the ALGO AABR algorithm). |
Performance Equivalence to Predicate | Bench testing confirmed equivalence of acoustic stimuli, recording of evoked potentials, and proper implementation of ABR template and algorithm, supporting device effectiveness. |
Software Performance | Software Verification and Validation testing conducted, Basic Documentation Level provided. |
Usability | Formative and summative human factors/usability testing conducted, no concerns regarding safety and effectiveness raised. |
2. Sample Size Used for the Test Set and Data Provenance
No new clinical "test set" was used for the ALGO Pro in the context of a prospective clinical trial. The performance data for the AABR algorithm (sensitivity and specificity) are derived from previously published, peer-reviewed clinical studies that validated the underlying ALGO AABR technology.
- Sample Size for AABR Algorithm Development: The ABR template, which forms the basis of the ALGO Pro's algorithm, was determined by superimposing responses from 35 neonates to 35 dB nHL click stimuli.
- Data Provenance for ABR Template: The data for the ABR template was collected at Massachusetts Eye and Ear Infirmary during the design and development of the original automated infant hearing screener.
- Data Provenance for Clinical Performance (Sensitivity/Specificity): The studies cited (Peters, J. G. (1986) and Herrmann, Barbara S., Aaron R. Thornton, and Janet M. Joseph (1995)) are generally long-standing research from various institutions. The document doesn't specify the exact country of origin for the studies cited beyond the development of the template in the US. These studies would be retrospective relative to the ALGO Pro submission, as they describe the development and validation of the original ALGO technology.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided in the document for the studies that established the ground truth for the ABR template or the clinical performance of the ALGO AABR algorithm. The template was derived from "normal hearing" neonates, implying a clinical assessment of their hearing status, but the specifics of how that ground truth was established (e.g., specific experts, their qualifications, or methods other than the ABR itself) are not detailed within this submission summary.
4. Adjudication Method for the Test Set
Not applicable, as no new clinical "test set" requiring adjudication for the ALGO Pro itself was conducted or reported in this submission. The historical studies developing the AABR algorithm would have defined their own ground truth and validation methods, but these are not detailed here.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
Not applicable. The ALGO Pro is an automated hearing screener that provides a "Pass" or "Refer" result without requiring human interpretation of the ABR signals themselves. It is not an AI-assisted human reading device, but rather a standalone diagnostic aid.
6. If a Standalone Performance Study (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done
Yes, a standalone performance assessment of the AABR algorithm (which is essentially the "algorithm only" component) was done indirectly through historical studies and directly through bench testing.
- The core AABR algorithm has a 99.9% algorithmic sensitivity (based on binomial statistics).
- Historically, independent clinical studies (cited) showed an overall clinical sensitivity of 98.4% and specificity of 96% to 98% for the ALGO AABR technology when used in clinical settings.
- For the ALGO Pro specifically, bench testing was performed to confirm the equivalence of the acoustical stimuli, recording of evoked potentials, and proper implementation of the ABR template and algorithm between the ALGO Pro and its predicate device (ALGO 5). This bench testing effectively confirmed the standalone performance of the ALGO Pro's algorithm against the established performance of the predicate.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For ABR Template Development: The ABR template was based on the morphology of ABR waveforms from normal hearing neonates. This implies a ground truth established by clinical assessment of "normal hearing" status.
- For Clinical Performance (Sensitivity/Specificity): The clinical studies cited (Peters, Herrmann et al.) would have established ground truth for hearing status through follow-up diagnostic audiologic evaluations, which could include behavioral audiometry, auditory steady-state response (ASSR) testing, or other objective measures (likely expert consensus based on these diagnostic tests). The document does not specify the exact ground truth methodology of these historical studies.
8. The Sample Size for the Training Set
The document states that the ABR template, which underpins the algorithm, was derived by superimposing responses from 35 neonates. This set of 35 neonates effectively served as the "training set" or foundational data for the ABR template.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the "training set" (the 35 neonates used to derive the ABR template) was established based on their status as "normal hearing" infants. This implies a determination of their hearing status through established clinical methods for neonates at the time (e.g., standard audiologic evaluation to confirm normal hearing), though the specific details of these diagnostic methods are not provided in this summary.
(134 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (Xltek)
General ophthalmic imaging including retinal, corneal, and external imaging.
Photodocumentation of pediatric ocular diseases, including retinopathy of prematurity (ROP).
Screening for Type 2 pre-threshold retinopathy of prematurity (ROP) (zone 1, stage 1 or 2, without plus disease, or zone 2, stage 3, without plus disease), or treatment-requiring ROP, defined as Type 1 ROP (zone 1, any stage, with plus disease; zone 1, stage 3 without plus disease; or zone 2, stage 2 or 3, with plus disease), or threshold ROP (at least 5 contiguous or 8 non-contiguous clock hours of stage 3 in zone 1 or 2, with plus disease) * in 35-37 week postmenstrual infants.
The RetCam Envision system is a contact type wide-field fundus ophthalmic imaging system used for general ophthalmic imaging including retinal, corneal, and external imaging. Photodocumentation of pediatric ocular diseases, including retinopathy of prematurity (ROP). Screening for Type 2 pre-threshold retinopathy of prematurity (ROP) (zone 1, stage 1 or 2, without plus disease, or zone 2, stage 3, without plus disease), or treatment-requiring ROP, defined as Type 1 ROP (zone 1, any stage, with plus disease; zone 1, stage 3 without plus disease; or zone 2, stage 2 or 3, with plus disease), or threshold ROP (at least 5 contiguous or 8 non-contiguous clock hours of stage 3 in zone 1 or 2, with plus disease) * in 35-37 week postmenstrual infants. A fundus camera comprised of a handpiece, detachable lens piece, LED light sources, control panel, footswitch, and application software running on a PC is used to acquire still images and video of the eye. The LED light is used to illuminate the retina uniformly, and the image is transferred from the handpiece to the PC for display, storage, review, and transfer. The camera focus and image light intensity, as well as the image capture, are controlled by the user via a button panel on the cart or a footswitch. Controls are also available via the keyboard, mouse, and touchscreen.
The provided text describes a 510(k) premarket notification for the RetCam Envision ophthalmic camera, outlining its indications for use, technological characteristics, and comparison to a predicate device. However, it does not include a study specifically designed to demonstrate the device's performance against detailed acceptance criteria for its clinical indications (e.g., screening for ROP using a defined metric like sensitivity/specificity). Instead, the document focuses on demonstrating substantial equivalence to a predicate device through engineering and safety testing.
Therefore, many of the requested details about a study proving the device meets clinical acceptance criteria cannot be extracted from this document. The information primarily relates to electrical safety, electromagnetic compatibility, packaging, and internal performance verification against specifications, rather than clinical performance for its stated indications for use.
Here's an attempt to answer based on the provided text, highlighting what is present and what is missing:
1. A table of acceptance criteria and the reported device performance
The document provides acceptance criteria in terms of compliance with various engineering and safety standards, and internal performance specifications. It does not provide quantitative clinical acceptance criteria (e.g., specific sensitivity, specificity, or image quality metrics for ROP screening) nor a study demonstrating performance against such criteria.
Acceptance Criteria Category | Specific Standard/Requirement | Reported Device Performance |
---|---|---|
Electrical Safety | IEC 60601-1-6: 2010, Am1: 2013; IEC 62366: 2007, Am1: 2014 | Verified for performance in accordance with the standard. |
Electromagnetic Compatibility | IEC 60601-1-2 Edition 4.0: 2014-02 | Verified for performance in accordance with the standard. |
Packaging and Handling | ISTA-2B: Packaged Products weighing over 150 lbs (68 kg) | Successfully passed packaging and handling verification. |
Performance Verification & Validation (Bench Testing) | Internal requirements and specifications (functional and performance characteristics), including: Image Comparison Test, Optics Verification and Validation Test, Software Test, Mechanical Design Test, Light Safety Test, EMC and Electrical Safety Test, Biocompatibility Test, Packaging ISTA Test | Successfully passed performance verification and validation; met defined acceptance criteria. |
Clinical Indication Performance | (Not specified in terms of quantitative metrics like sensitivity/specificity for ROP screening) | (Not a clinical study; document states "Verification and Validation were performed to ensure no new questions of safety or effectiveness are raised. The results of these activities demonstrate that the RetCam Envision is as safe, as effective, and performs as well as or better than the predicate device.") |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes engineering bench testing and internal verification/validation. It does not mention a clinical test set sample size, data provenance, or whether it was retrospective or prospective. The "Performance Testing – Bench Verification & Validation" refers to internal testing of the device itself.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Given that no clinical test set for ROP screening performance is described, there is no information provided on experts or ground truth establishment relevant to clinical indications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
As no clinical test set is detailed, no adjudication method for a clinical ground truth is specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document makes no mention of a multi-reader multi-case (MRMC) comparative effectiveness study, nor does it discuss AI assistance or improvement of human readers. The device described is an ophthalmic camera, not an AI-powered diagnostic tool.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
The device is an imaging system; it does not feature an AI algorithm for standalone diagnosis based on the provided text. Therefore, no standalone algorithm performance study was reported.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the technical performance aspects, the "ground truth" was likely established by reference to engineering specifications and standard requirements (e.g., electrical safety standards, packaging standards). For clinical ground truth for indications like ROP screening, this information is not provided because a clinical performance study is not described.
8. The sample size for the training set
Since no AI algorithm is detailed, there is no mention of a training set or its sample size.
9. How the ground truth for the training set was established
As there is no training set described, this information is not applicable.
Summary of what the document does provide regarding "studies":
The document primarily describes a comprehensive set of engineering and safety verification and validation activities to demonstrate that the RetCam Envision meets its predetermined specifications and applicable standards, and that it is substantially equivalent to existing predicate devices. These activities include:
- Electrical Safety testing: Compliance with IEC 60601-1-6 and IEC 62366.
- Electromagnetic Compatibility testing: Compliance with IEC 60601-1-2.
- Packaging and Handling Verification: Compliance with ISTA-2B.
- Bench Testing (Verification & Validation): Internal testing covering Image Comparison, Optics, Software, Mechanical design, Light Safety, EMC, Electrical Safety, Biocompatibility, and Packaging.
The "study that proves the device meets the acceptance criteria" in this context refers to these internal engineering and safety tests, which concluded that "the RetCam Envision system complies with its predetermined specifications and the applicable standards." The document does not present a clinical performance study with defined clinical acceptance criteria for its ROP screening indications.
(46 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
Natus NeuroWorks is EEG software that displays physiological signals. The intended user of this product is a qualified medical practitioner trained in Electroencephalography who will exercise professional judgment in using the information. The NeuroWorks EEG software allows acquisition, display, archive, review and analysis of physiological signals.
- The Seizure Detection component of NeuroWorks is intended to mark previously acquired sections of adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with a full scalp montage according to the standard 10/20 system.
- The Spike Detection component of NeuroWorks is intended to mark previously acquired sections of adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with a full scalp montage according to the standard 10/20 system.
- The aEEG, Burst Suppression, Envelope, Alpha Variability, and Spectral Entropy trending functionalities included in NeuroWorks are intended to assist the user while monitoring the state of the brain. The automated event marking function of NeuroWorks is not applicable to these analysis features.
- NeuroWorks also includes the display of a quantitative EEG plot, the Density Spectral Array (DSA), which is intended to help the user monitor and analyze the EEG waveform. The automated event marking function of NeuroWorks is not applicable to DSA.
- This device does not provide any diagnostic conclusion about the patient's condition to the user.
Natus NeuroWorks is electroencephalography (EEG) software that displays physiological signals. The software platform is designed to work with Xltek and other select Natus amplifiers (headboxes). Software add-ons and optional accessories let you customize your system to meet your specific clinical EEG monitoring needs.
The provided document describes the Natus NeuroWorks software, an EEG software with functionalities including seizure and spike detection, and various trending capabilities.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of quantitative acceptance criteria and corresponding reported device performance with specific metrics (e.g., sensitivity, specificity, accuracy thresholds for seizure/spike detection). Instead, the demonstration of equivalence for the new trend features relies on qualitative comparison to a predicate device.
The "Comments" column in Table 1: Substantial Equivalence, Trends and other features implicitly states the performance goal for the newly added or enhanced features: to be "Equivalent" or "Same" as the specified predicate device. For example, for Burst Suppression, Envelope Trend, Spectral Entropy, Spectral Edge, Alpha Variability, and R-R interval trend, the comment is "Equivalent: Feature added to Natus NeuroWorks Subject device. With this implementation the Natus NeuroWorks Subject device is now equivalent to NicoletOne and Moberg Predicate devices."
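As an illustration of what an "Envelope"-style trend computes, a generic sketch is shown below. The NeuroWorks implementation is not described in the document, so the windowed-RMS definition, window length, and function name here are assumptions, not the vendor's algorithm.

```python
import math

def envelope_trend(samples, sample_rate_hz, window_s=1.0):
    """Toy envelope trend: RMS amplitude over consecutive,
    non-overlapping windows of the EEG signal (illustrative only)."""
    win = max(1, int(window_s * sample_rate_hz))
    trend = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        trend.append(math.sqrt(sum(x * x for x in chunk) / win))
    return trend
```

A trend like this reduces hours of raw EEG to one amplitude value per window, which is what makes side-by-side graphical comparison against a predicate device's output practical.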
For the DSA / Spectrogram feature, the comment indicates that the feature was already available but improved in terms of common naming, additional color scales for better contrast, and improved spectral resolution (64Hz vs 30Hz), making it "equivalent to NicoletOne predicate device."
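A Density Spectral Array is conventionally a time-frequency power display built from short-time FFTs of the EEG. A minimal sketch follows, truncating at the 64 Hz display limit mentioned above; the 2 s window, Hann taper, and absence of overlap are illustrative assumptions, not the NeuroWorks implementation.

```python
import numpy as np

def dsa(eeg, fs, win_s=2.0, fmax=64.0):
    """Toy density spectral array: one power spectrum per consecutive
    epoch, truncated at fmax Hz (the subject device displays up to 64 Hz).
    Returns (frequencies, power matrix of shape (epochs, n_freqs))."""
    n = int(win_s * fs)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = freqs <= fmax
    rows = []
    for i in range(len(eeg) // n):
        seg = eeg[i * n:(i + 1) * n]
        spec = np.abs(np.fft.rfft(seg * np.hanning(n))) ** 2
        rows.append(spec[keep])
    return freqs[keep], np.array(rows)
```

With a 2 s window at 250 Hz, the frequency resolution is 0.5 Hz, which shows concretely how a wider displayed range (64 Hz vs. 30 Hz) adds contrast without changing the underlying spectrogram computation.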
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a separate "test set" for performance evaluation of the trend features as typically understood in a clinical study context. Instead, for the newly added or improved trend features (Burst Suppression, Envelope, Spectral Entropy, Spectral Edge, Alpha Variability, DSA), the performance evaluation involved showing that the "resulting graphs are identical" when using the same study data as examples from the NicoletOne predicate device.
The document does not provide information on the country of origin of this "same study data" or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number or qualifications of experts used to establish ground truth for the trend feature comparison. The comparison relies on visual identity of trend plots against a predicate device.
For the Seizure Detection and Spike Detection components, the indications for use state they are "intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures/spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces." This implies that the 'ground truth' for these features is ultimately the interpretation by "qualified clinical practitioners," but the document doesn't detail how this was established for the performance study.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any formal adjudication method for the performance evaluation of the trend features. The comparison relies on direct graphical comparisons.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described. The document states: "There were no clinical studies performed for this submission."
The device functions as "computer-assisted tools" for marking electrographic events to "assist qualified clinical practitioners." However, no study measuring improvement in human reader performance with this assistance is presented.
6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
While the document doesn't explicitly refer to "standalone performance" metrics for seizure and spike detection (e.g., sensitivity, specificity of the algorithm alone), the performance testing for the trend features was essentially a standalone comparison: the algorithm's output (trend graph) was compared to the predicate's algorithm output using the same input data. The "identical" nature of the graphs suggests successful replication of the predicate's standalone trending functionality.
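The comparison described here (same input data, outputs checked for graphical identity) reduces to a point-wise comparison of two trend time series. A minimal sketch, assuming the trends are available as numeric arrays; the function name and tolerance are illustrative, not from the submission:

```python
import numpy as np

def trends_match(subject_trend, predicate_trend, rel_tol=1e-6):
    """Point-wise comparison of two trend curves computed from the
    same input EEG: same shape and agreement within tolerance."""
    subject = np.asarray(subject_trend, dtype=float)
    predicate = np.asarray(predicate_trend, dtype=float)
    if subject.shape != predicate.shape:
        return False
    return bool(np.allclose(subject, predicate, rtol=rel_tol))
```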
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the trend features (Burst Suppression, Envelope, Spectral Entropy, Spectral Edge, Alpha Variability, DSA), the "ground truth" implicitly used for comparison was the output of the predicate device's algorithms on the same raw EEG data. The goal was to demonstrate that the Natus NeuroWorks algorithms produced graphically identical or equivalent trends.
For the Seizure Detection and Spike Detection, the ground truth is ultimately "electrographic seizures/spikes" as interpreted by "qualified clinical practitioners," but the method of establishing this ground truth for validation is not detailed.
8. The sample size for the training set
The document does not provide information on the sample size for the training set for any of the algorithms. It states that the software was "designed and developed according to a robust software development process" and "rigorously verified and validated," but omits details about machine learning model training.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for the training set was established. Given the lack of details on training data and methods, this information is not available in the provided text.
(122 days)
Natus Medical Incorporated
The neoBLUE blanket LED Phototherapy System is indicated for the treatment of unconjugated hyperbilirubinemia in a hospital environment, and administered by trained professional medical staff, on the order of a physician, or in the home environment administered by a trained caregiver. The neoBlue blanket device provides intensive phototherapy underneath the patient and can be used with a bassinet, open bed, radiant warmer, incubator, or while holding the patient.
The neoBLUE blanket LED Phototherapy System is a portable phototherapy system that consists of five components: the neoBLUE blanket phototherapy light box, fiber optic blanket with attached cable, blanket mattress, disposable mattress covers, and power supply. The neoBLUE blanket LED Phototherapy System delivers a narrow band of high-intensity blue light via a blue light emitting diode (LED), in order to provide treatment for neonatal hyperbilirubinemia.
The portable neoBLUE blanket light box contains a blue LED which emits light in the range of 400 – 550 nm (peak wavelength 450-475 nm). This blue light is directed through an optical fiber cable to the fiber optic blanket pad. The fiber optic blanket pad is composed of high performance optical fibers enclosed in a vinyl material. A polyurethane mattress is placed on top of the fiber optic blanket, and a disposable polypropylene cover is placed around the blanket pad and mattress. The fiber optic blanket generates sufficient light output to provide intensive phototherapy which is defined as ≥30µW/cm²/nm.
The provided text describes a 510(k) premarket notification for a medical device called the "neoBLUE® blanket LED Phototherapy System." This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than detailing specific clinical or standalone performance studies with acceptance criteria for an AI/ML device.
Therefore, many of the requested details regarding acceptance criteria, study design for AI/ML performance, sample sizes for training/test sets, expert involvement for ground truth, and MRMC studies are not present in this document. The device in question is a phototherapy system, not an AI/ML diagnostic or therapeutic device.
Here's an analysis based on the available information and an explanation of why other requested information is not applicable:
1. A table of acceptance criteria and the reported device performance:
The document describes performance specifications that the device meets, rather than acceptance criteria for an AI/ML model's output. The criteria are related to the physical and functional characteristics of the phototherapy system.
Acceptance Criteria (Performance Specification) | Reported Device Performance |
---|---|
Light Spectrum Range | 400 – 550 nm (peak wavelength 450-475 nm) |
Light Intensity (Intensive Phototherapy definition) | ≥30 µW/cm²/nm (for both blanket sizes) |
Factory Set Intensity | 35 ± 5 µW/cm²/nm |
Adjustable Light Output Range | 50 to 60 µW/cm²/nm (adjustable) |
Expected LED Life | >20,000 hours |
Operating Temperature (Light Box) | 41° to 86° F (5 to 30°C) |
Operating Humidity (Light Box) | 10% to 90%, non-condensing |
Operating Temperature (Blanket) | 41° to 100° F (5 to 38°C) |
Operating Humidity (Blanket) | 10% to 90%, non-condensing |
Storage Temperature | 32° to 122° F (0 to 50°C) |
Storage Humidity | 10% to 90%, non-condensing |
Altitude / Atmospheric pressure | 700 hPa to 1060 hPa |
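Verification against a spec table like the one above reduces to simple range checks. A minimal sketch with a few representative limits transcribed from the table; the dictionary layout and function name are illustrative, and `None` marks an unbounded side:

```python
# (lower, upper) limits; None means that side is unbounded.
SPEC_LIMITS = {
    "intensity_uw_cm2_nm": (30.0, None),   # intensive phototherapy: >= 30
    "peak_wavelength_nm": (450.0, 475.0),  # blue-LED peak wavelength
    "storage_temp_c": (0.0, 50.0),         # storage temperature
}

def within_spec(name, value):
    """True if a measured value falls inside the spec range."""
    lo, hi = SPEC_LIMITS[name]
    if lo is not None and value < lo:
        return False
    if hi is not None and value > hi:
        return False
    return True
```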
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not applicable as the document explicitly states "Clinical Tests: N/A" and focuses on non-clinical engineering and performance testing of the physical medical device (phototherapy unit), not a software or AI/ML-based diagnostic system that would require a test set of data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not applicable. Ground truth, expert consensus, and qualifications of experts are relevant for AI/ML models that interpret medical images or data. The neoBLUE blanket LED Phototherapy System is a physical device delivering light therapy, not an interpretive AI.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not applicable for the same reasons as point 3.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
This information is not applicable. An MRMC study is relevant for evaluating the impact of an AI system on human reader performance, typically in diagnostic imaging. The neoBLUE blanket LED Phototherapy System is a therapeutic device, not a diagnostic AI assistant.
6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
This information is not applicable as there is no algorithm/AI in this medical device. Its performance is based on its physical specifications and light output.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
This information is not applicable. For this physical device, "ground truth" relates to measurable physical properties and operational performance tested against engineering specifications and recognized standards (e.g., light intensity measurements, electrical safety).
8. The sample size for the training set
This information is not applicable. There is no "training set" as this is not an AI/ML device.
9. How the ground truth for the training set was established
This information is not applicable for the same reason as point 8.
Summary of the Study Proving Device Meets Acceptance Criteria (Non-Clinical):
The study proving the device meets its performance specifications (which serve as "acceptance criteria" for a physical device) was based on non-clinical tests and design verification and validation.
- Type of Study: Non-clinical tests, design verification and validation.
- Purpose: To ensure the device meets performance specifications and demonstrates equivalence to the predicate device (K103589).
- Tests Performed: Specific testing for device power, electrical safety, indicators, LED performance, intensity levels, intensity range, effective area, and usability.
- Standards Applied: Where appropriate, testing was performed to recognized international and industry standards, including:
- IEC 60601-2-50:2009 +A1 / ED 2.1: 2016-04 (Particular Requirements For The Basic Safety And Essential Performance Of Infant Phototherapy Equipment)
- IEC 60601-1:2005 (Third Edition) + CORR. 1:2006 + CORR. 2:2007 + A1:2012 (General Requirements For Basic Safety And Essential Performance)
- IEC 60601-1-11:2015 (Second Edition) (Requirements for medical electrical equipment and medical electrical systems used in the home healthcare environment)
- IEC 60601-1-6:2010, AMD1:2013 (Usability)
- IEC 60601-1-2 ED 4.0: 2014-02 (Electromagnetic disturbances)
- AIM Standard 7351731 Rev 2.0: 2017-02-03 (electromagnetic immunity to RFID readers)
- Conclusion: The verification and validation summary and risk analysis documentation supported the conclusion that the device is as safe and effective as the predicate device and is substantially equivalent.
No human data, expert review of images, or AI/ML model performance evaluation was part of this submission, as it is for a physical phototherapy device.
(17 days)
Natus Medical Incorporated
General ophthalmic imaging including retinal, corneal, and external imaging.
-Photodocumentation of pediatric ocular diseases including retinopathy of prematurity (ROP).
-Screening for Type 2 pre-threshold retinopathy of prematurity (ROP) (zone 1, stage 1 or 2, without plus disease, or zone 2, stage 3, without plus disease) or treatment-requiring ROP, defined as Type 1 ROP (zone 1, any stage, with plus disease; zone 1, stage 3 without plus disease; or zone 2, stage 2 or threshold ROP (at least 5 contiguous or 8 non-contiguous clock hours of stage 3 in zone 1 or 2, with plus disease)* in 35-37 week postmenstrual infants.
RetCam Ophthalmic Imaging Systems utilize a digital camera in a handpiece (with multiple field of view lenses) to capture color ophthalmic images including retinal, corneal, and external images. An on board computer (RetCam 3 Ophthalmic Imaging System) or laptop computer (RetCam Shuttle Ophthalmic Imaging System and RetCam Portable Ophthalmic Imaging System) is used to store, view, retrieve, and export the digital ophthalmic images. A standard Halogen light source is used in all RetCam Ophthalmic Imaging Systems to provide illumination to the eye through the handpiece. An optional Fluorescein light source is also available on the RetCam 3 Ophthalmic Imaging System.
Here's an analysis of the provided text regarding acceptance criteria and study information:
Summary of Device and Changes:
The document describes the Natus Medical Incorporated RetCam Ophthalmic Imaging Systems (RetCam 3, RetCam Shuttle, RetCam Portable). These are digital cameras used for general ophthalmic imaging, photodocumentation of pediatric ocular diseases (including Retinopathy of Prematurity - ROP), and screening for specific types of ROP in 35-37 week postmenstrual infants.
The current submission (K182263) is for modified versions of previously cleared RetCam systems (K090326). The modifications include:
- Upgrade of the camera (due to obsolescence).
- Addition of DICOM communication features.
- Software updates to support these modifications.
The fundamental scientific technology and indications for use remain unchanged from the predicate device.
Acceptance Criteria and Device Performance (as reported):
The document does not detail specific, quantitative acceptance criteria in a tabular format, nor does it present specific performance metrics like sensitivity, specificity, or accuracy.
Instead, it states that the modified devices were tested and found to comply with relevant consensus standards and internal tests. The performance assertion focuses on substantial equivalence to the predicate device due to unchanged indications for use, fundamental scientific technology, operating principle, basic design, and materials.
Based on the provided text, the acceptance criteria are implicitly met by:
Acceptance Criterion | Reported Device Performance |
---|---|
Regulatory Compliance (General) | Complies with IEC 60601-1:2005 (basic safety and essential performance) |
Complies with IEC 60601-1-2:2007/(R) 2012 (electromagnetic disturbances) | |
Complies with DICOM NEMA PS 3.1-3.20 (2016) | |
Software Verification & Validation | Software categorized as "moderate level of concern" and verified/validated per FDA guidance (May 11, 2005) |
Internal Performance Testing | Met defined acceptance criteria for "Image Comparison Test," "Optics verification and Validation Test," and "ISTA Test" (details of these tests and their specific acceptance metrics are not provided in this document). |
Substantial Equivalence to Predicate (K090326) | "have the same indications for use," "has the same fundamental scientific technology," "uses the same operating principle," "incorporates the same basic design," and "incorporates the same materials." |
Detailed Study Information:
-
Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- This document does not specify a separate test set or its sample size. The performance data section refers to internal testing of the modified devices (Image Comparison Test, Optics verification and Validation Test, ISTA Test) and compliance with standards. There is no mention of patient data (images) being used for a clinical or performance study to determine algorithm efficacy.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- Not applicable / not provided. The document describes a medical device (ophthalmic camera system) that captures images, not an AI algorithm that interprets them. The "screening" indication refers to the device's capability to screen by providing images, not an automated diagnostic capability from the device itself. Therefore, there's no mention of experts establishing a ground truth for an algorithm's performance on a test set.
-
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable / not provided. As above, no clinical test set for an algorithm is described.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. This document focuses on the clearance of an ophthalmic imaging system, not an AI-powered diagnostic tool. There is no mention of AI assistance or MRMC studies. The device is a camera system for clinicians to use, not an interpretation algorithm.
-
If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done:
- No. The device is an imaging system; it is not an algorithm for standalone performance.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable. No ground truth for algorithm performance is discussed. The "ground truth" for the device's function would be its ability to capture clear, diagnostically useful images, which is assessed through the internal tests (Image Comparison, Optics, ISTA) and compliance with imaging and electrical standards.
-
The sample size for the training set:
- Not applicable. This device is an imaging system, not an AI algorithm requiring a training set.
-
How the ground truth for the training set was established:
- Not applicable. No training set for an AI algorithm is discussed.
(171 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The Natus Photic Stimulator is indicated for photic activation of the EEG study and in the generation of visual evoked potentials.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Photic Stimulator is used by trained medical staff in a medical environment to apply photic flashes to the patient during neurophysiology studies such as electroencephalographic (EEG) studies, where it is used as an activation procedure to test photosensitivity related to epilepsy.
The Natus Photic Stimulator is typically used by EEG technicians in the hospital environment in fixed, mobile or portable systems, and with patients of all ages. The lamp head assembly is mounted on an arm allowing easy placement 30 cm away from the patient's face. The arm is ergonomically designed so that the lamp can be moved and rotated toward the patient using a handgrip on the lamp.
Trigger pulses applied to the trigger input of the Natus Photic Stimulator generate 1 millisecond photic flashes at specific frequencies, typically in the range of 0.5 Hz to 60 Hz by a white light emitting diode (LED) lamp. The basic operating mode consists of generating a single flash per trigger pulse applied. Flash intensity has a maximum range from 22,000 lux up to 75,000 lux measured at 30 cm from the LED lamp at the position of highest intensity and 3,520 lux to 12,000 lux calculated at 75cm. The frequency and intensity of the flash is controlled by the acquisition software.
It can also be used along with evoked potential devices for stimulating visual evoked potentials.
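The 75 cm intensity figures above are described as calculated, and they match simple inverse-square scaling of the 30 cm measurements (75,000 × (30/75)² = 12,000 and 22,000 × (30/75)² = 3,520). A sketch of that point-source approximation; the function name is illustrative:

```python
def lux_at_distance(lux_ref, d_ref_cm, d_cm):
    """Scale illuminance by the inverse-square law
    (point-source approximation)."""
    return lux_ref * (d_ref_cm / d_cm) ** 2

# The quoted 75 cm values follow from the 30 cm measurements:
assert abs(lux_at_distance(75_000, 30, 75) - 12_000) < 1e-6
assert abs(lux_at_distance(22_000, 30, 75) - 3_520) < 1e-6
```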
This document describes the Natus Photic Stimulator, a device used in neurophysiology studies. The information provided focuses on the device's technical specifications and compliance with regulatory standards, rather than clinical performance or AI integration.
Therefore, the following information regarding acceptance criteria and study details cannot be fully extracted as it pertains to clinical performance and AI, which are not the primary focus of this submission. The document primarily reports on technical and safety performance testing.
Here's what can be extracted based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not detail specific acceptance criteria for clinical performance (e.g., sensitivity, specificity, accuracy) akin to what would be provided for a diagnostic AI device. Instead, the "acceptance criteria" are compliance with various technical and safety standards.
Acceptance Criteria (Standards) | Reported Device Performance |
---|---|
Electrical Safety: | The Natus Photic Stimulator complies with: |
IEC 60601-1: 2005, Am1: 2012 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
Electromagnetic Compatibility: | The Natus Photic Stimulator complies with: |
IEC 60601-1-2: 2007 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
IEC 60601-2-40: 2016 | Results indicate that the Natus Photic Stimulator complies with the applicable standards. |
Bench Performance Testing: | The Natus Photic Stimulator complies with its predetermined specifications and applicable standards, including: |
Internal requirements | Results indicate that the Natus Photic Stimulator complies with its predetermined specifications and the applicable standards. |
IEC 60601-1-6: 2010, Am1: 2013 | Verified for performance in accordance with internal requirements and the applicable clauses of the standards. |
IEC 60601-2-40: 2016 | Verified for performance in accordance with internal requirements and the applicable clauses of the standards. |
IEC 62366: 2007, Am1: 2014 | Verified for performance in accordance with internal requirements and the applicable clauses of the standards. |
IEC 62471: 2006 | Verified for performance in accordance with internal requirements and the applicable clauses of the standards. |
ANSI Z80.36-2016 | Verified for performance in accordance with internal requirements and the applicable clauses of the standards. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided. The performance testing described is bench testing against technical and safety standards, not a clinical study on a patient population.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable as the submission describes device safety and performance testing against engineering standards, not a clinical evaluation requiring expert interpretation of ground truth in patient data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable. The performance testing involves engineering verification and validation, not clinical adjudication of patient cases.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
There is no mention of AI assistance or an MRMC study in this document. This submission is for a photic stimulator, a hardware device for neurophysiology studies, not AI software.
6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
This is not applicable. The device is a hardware photic stimulator, and the performance testing focuses on its compliance with safety and technical standards as a standalone physical device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the device's performance is compliance with established international and national technical, electrical, electromagnetic, usability, and photobiological safety standards (e.g., IEC 60601 series, IEC 62366, IEC 62471, ANSI Z80.36). This is verified through objective measurements and tests against those standards, not through clinical ground truth like pathology or expert consensus on patient outcomes.
8. The sample size for the training set
This is not applicable. This device is not an AI algorithm trained on data.
9. How the ground truth for the training set was established
This is not applicable. This device is not an AI algorithm trained on data.
(133 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (Xltek)
The Natus Brain Monitor Amplifier is intended to be used as an electroencephalograph: to acquire, display, store and archive electrophysiological signals. The amplifier should be used in conjunction with Natus NeuroWorks™/ SleepWorks™ software to acquire scalp and intracranial electroencephalographic (EEG) signals as well as polysomnographic (PSG) signals. The Natus Brain Monitor Amplifier is intended to be used by trained medical professionals, and is designed for use in clinical environments such as hospital rooms, epilepsy monitoring units, intensive care units, and operating rooms. It can be used with patients of all ages, but is not designed for fetal use.
The Natus Brain Monitor family of amplifiers are intended to be used as an electroencephalograph: to acquire, display, store and archive electrophysiological signals. The amplifier should be used in conjunction with Natus NeuroWorks™/SleepWorks™ software to acquire electroencephalographic (EEG) signals as well as polysomnographic (PSG) signals. The Natus Brain Monitor family of amplifiers are intended to be used by trained medical professionals, and are designed for use in clinical environments such as hospital rooms, clinics, epilepsy monitoring units, intensive care units, and operating rooms. It can be used with patients of all ages, but is not designed for fetal use.
The Natus Brain Monitor (Natus Embla NDx, Natus Embla SDx) are comprised of a base unit and a breakout box. It is part of a system that is made up of a personal computer, software, a photic stimulator, an isolation transformer, video and audio equipment, networking equipment, and mechanical supports. EEG and other physiological signals from electrodes placed on the head and body as well as other accessories such as pulse oximeters, respiratory effort and airflow sensors can be acquired by the amplifiers. The amplifiers include sensor inputs for respiratory effort and airflow as well as snoring. The amplifiers include an integrated pressure sensor and pulse oximeter module. These signals are digitized and transmitted to the personal computer running the Natus NeuroWorks/SleepWorks software. The signals are displayed on the personal computer and can be recorded to the computer's local storage or to remote networked storage for later review.
Here's a breakdown of the acceptance criteria and study information for the Natus Brain Monitor, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a pass/fail threshold format for each performance metric. Instead, it provides a comparison to predicate devices, implying that performance comparable to or better than the predicates is acceptable. The performance is reported in the "Comparison to Predicate Device" table.
Specification | Predicate Device (K143440, K111742, K172711) | Subject Device (Natus Brain Monitor - Natus Brain Monitor, Embla NDx, Embla SDx) |
---|---|---|
Referential Channels | 256 (K143440), 32 (K111742), 32 (K172711) | 40 (programmable to up to 64), 16 (Embla SDx Model) |
Bipolar Channels | 16 (K143440), 8 (K111742), 8 (K172711) | 12, 4 (Embla SDx Model) |
DC Inputs | 16 (+/-5Vdc or +/-2.5Vdc) | 16 (+/-5Vdc), 8 (+/-5Vdc) - For Embla SDx model |
SpO2, Pulse Rate, Plethysmogram | Yes (all predicates, some with PPG) | Yes (with PPG) |
Body Position | Uses universal sensor via DC input or integrated proprietary | Integrated proprietary, or uses universal sensor via DC input |
Resolution | 24 bit, 22 bit, 16 bit | 24 bit (16 bit stored) |
EEG Channels | 64-256, 40, 32 | 64, 20 (Embla SDx) |
Reference Channels | Dedicated separate reference and ground, Dedicated ground | Dedicated separate reference and ground |
Input Impedance | >1000 MΩ, >20 MΩ, ≥20 MΩ | >1000 MΩ
Common-Mode Rejection | 110 dB @ 60 Hz, >80 dB, >80 dB (signal ref), >100 dB (earth ref) | >106 dB @ 60 Hz
Sampling Frequency | 256-16384 Hz, 64-512 Hz, 200-800 Hz | 256, 512, 1024, 2048, 4096 Hz (256, 512 Hz for Embla SDx)
Sampling Resolution - EEG channels | 24 bits, 22 bits, 16 bits | 24 bits |
Sampling Quantization - EEG channels | 305nV, N/A, 0.06 µV/bit | 305nV |
Storage Resolution - EEG channels | 16 bits | 16 bits |
Impedance Check |
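The 305 nV sampling quantization in the table is consistent with basic ADC arithmetic: step size = input range / 2^bits. At 24 bits, a 305 nV step implies roughly a 5.12 V peak-to-peak (±2.56 V) input range, which is an inference for illustration, not a quoted specification:

```python
def quantization_step_nv(range_vpp, bits):
    """ADC least-significant-bit size in nanovolts for a given
    peak-to-peak input range (volts) and bit depth."""
    return range_vpp / (2 ** bits) * 1e9

# 5.12 V p-p at 24 bits -> ~305 nV per step, matching the table.
assert abs(quantization_step_nv(5.12, 24) - 305.2) < 0.1
# Storing at 16 bits (as noted for the subject device) coarsens
# the step by a factor of 2^8 = 256 unless the range is rescaled.
assert abs(quantization_step_nv(5.12, 16)
           - quantization_step_nv(5.12, 24) * 256) < 1e-6
```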
(26 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
The NeuroWorks is EEG software that displays physiological signals. The intended user of this product is a qualified medical practitioner trained in Electroencephalography. This device is intended to be used by qualified medical practitioners who will exercise professional judgment in using the information.
-The NeuroWorks EEG software allows acquisition, display, archive, review and analysis of physiological signals.
-The Seizure Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic seizures, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
-The Spike Detection component of NeuroWorks is intended to mark previously acquired sections of the adult (greater than or equal to 18 years) EEG recordings that may correspond to electrographic spikes, in order to assist qualified clinical practitioners in the assessment of EEG traces. EEG recordings should be obtained with full scalp montage according to the standard 10/20 system.
-The aEEG functionality included in NeuroWorks is intended to monitor the state of the brain. The automated event marking function of NeuroWorks is not applicable to aEEG.
-NeuroWorks also includes the display of a quantitative EEG plot, Compressed Spectrum Array (CSA), which is intended to help the user to monitor and analyze the EEG waveform. The automated event marking function of NeuroWorks is not applicable to CSA.
This device does not provide any diagnostic conclusion about the patient's condition to the user.
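Structurally, a Compressed Spectral Array is a stack of per-epoch power spectra arranged over time, which is what the CSA display plots. The following sketch shows only that structure; the epoch length, Hann window, and 30 Hz display limit are assumptions for illustration, not details of the NeuroWorks implementation:

```python
import numpy as np

def compressed_spectral_array(eeg, fs, epoch_s=2.0, fmax=30.0):
    """Stack per-epoch power spectra into a (time, frequency)
    array, the data structure behind a CSA display."""
    n = int(epoch_s * fs)
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = freqs <= fmax
    rows = []
    for i in range(len(eeg) // n):
        segment = eeg[i * n:(i + 1) * n]
        psd = np.abs(np.fft.rfft(segment * window)) ** 2
        rows.append(psd[keep])
    return np.array(rows)  # shape: (n_epochs, n_frequency_bins)
```

Each row is one epoch's spectrum; plotting rows as stacked traces (or as a color map) gives the familiar CSA/DSA view.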
Natus NeuroWorks is electroencephalography (EEG) software that displays physiological signals. The software platform is designed to work with Xltek and other select Natus amplifiers (headboxes). Software add-ons and optional accessories let you customize your system to meet your specific clinical EEG monitoring needs.
The provided text describes the Natus NeuroWorks EEG software and its FDA 510(k) clearance application. However, it does not contain the detailed information necessary to answer all parts of your request regarding acceptance criteria and the study proving the device meets them.
Specifically, the document states:
- "The NeuroWorks software was designed and developed according to a robust software development process, and was rigorously verified and validated."
- "Results indicate that the NeuroWorks software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards."
- "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the NeuroWorks software. The results of these activities demonstrate that the NeuroWorks software is as safe, as effective, and performs as well as or better than the predicate device."
While these statements confirm that performance testing was done and the device met its specifications, the actual acceptance criteria (e.g., sensitivity, specificity, F-score targets for seizure/spike detection) and the detailed results of those tests are not explicitly provided in this summary.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated in terms of quantitative metrics (e.g., target true positive rate, false positive rate for seizure/spike detection). The text broadly states compliance with "predetermined specifications" and "applicable standards."
- Reported Device Performance: Not explicitly stated in terms of quantitative metrics against specific acceptance criteria. The text concludes that the device "performs as well as or better than the predicate device."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: Not specified for any performance evaluation.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified. The document mentions "qualified clinical practitioners" as the intended users, but doesn't detail their involvement in establishing ground truth for testing.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified.
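Although the document specifies no adjudication method, a "2+1" scheme (the kind referenced in the question) can be sketched as follows: two primary readers label each case independently, and a third reader adjudicates only the disagreements. The function name and labels below are hypothetical, for illustration only:

```python
# Hypothetical sketch of a "2+1" adjudication scheme; nothing of the sort
# is described in this 510(k) summary.

def adjudicate_2_plus_1(reader_a, reader_b, reader_c):
    """Return per-case consensus labels under a 2+1 scheme.

    reader_a and reader_b are the primary readers' labels; reader_c is
    the adjudicator, consulted only where the first two disagree.
    """
    consensus = []
    for a, b, c in zip(reader_a, reader_b, reader_c):
        consensus.append(a if a == b else c)
    return consensus

# Hypothetical per-case labels (1 = seizure present, 0 = absent).
labels = adjudicate_2_plus_1([1, 0, 1, 0], [1, 1, 1, 0], [0, 1, 0, 1])
print(labels)  # → [1, 1, 1, 0]; only case 2 needed the adjudicator
```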
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC comparative effectiveness study is not explicitly mentioned. The "Seizure Detection" and "Spike Detection" components are described as intended to "assist qualified clinical practitioners," implying a human-in-the-loop scenario. However, the study design and results of such assistance are not detailed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- A standalone performance study focused solely on the algorithm's performance (without human interaction) is not explicitly mentioned as a separate activity with results. The device's indications for use emphasize assisting "qualified clinical practitioners," suggesting an integrated workflow.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not specified. Given the nature of EEG analysis, expert consensus would be the most probable method, but it is not stated in the document.
8. The sample size for the training set
- Not specified.
9. How the ground truth for the training set was established
- Not specified.
In summary: The provided 510(k) summary focuses on the regulatory aspects, intended use, technological comparison to the predicate, and adherence to software development and general performance standards. It lacks the specific clinical performance metrics, study designs, and detailed data provenance typically found in a comprehensive clinical validation study report for an AI/ML medical device. Further details would likely be found in the complete 510(k) submission, which is not fully provided here.
(98 days)
Natus Medical Incorporated DBA Excel-Tech Ltd. (XLTEK)
This software is intended for use by qualified research and clinical professionals with specialized training in the use of EEG and PSG recording instrumentation for the digital recording, playback, and analysis of physiological signals. It is suitable for digital acquisition, display, comparison, and archiving of EEG potentials and other rapidly changing physiological parameters.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Grass® TWin® (Grass TWin) is a comprehensive software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). TWin is incredibly powerful and flexible, but also designed for easy and efficient day-to-day use. Grass TWin is a software product only, and does not include any hardware.
This document is a 510(k) summary for the Grass TWin, a software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). The focus of the provided text is on demonstrating the device's substantial equivalence to a predicate device and its compliance with regulatory standards for software and usability.
Based on the provided text, the Grass TWin software does not appear to have detailed acceptance criteria or a specific study proving device performance against those criteria in the typical sense of a clinical performance study with metrics like sensitivity, specificity, or accuracy. Instead, the "performance testing" described focuses on software verification and validation and bench testing for compliance with pre-determined specifications and regulatory standards.
Here's an analysis of the information, addressing your requests based only on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of quantitative "acceptance criteria" and "reported device performance" in terms of clinical outcomes or diagnostic accuracy. Instead, the acceptance criteria are framed as compliance with internal requirements and regulatory standards for software development, usability, and safety.
| Acceptance Criteria Category | Reported Device Performance |
| --- | --- |
| Software Development | "The Grass TWin software was designed and developed according to a robust software development process, and was rigorously verified and validated." "Results indicate that the Grass TWin software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards." (referencing FDA guidance documents and IEC 62304: 2006) |
| Usability | "The Grass TWin was verified for performance in accordance with internal requirements and the applicable clauses of the following standards: IEC 60601-1-6: 2010, Am1: 2013, Medical electrical equipment – Part 1-6: General requirements for basic safety and essential performance – Collateral standard: Usability; IEC 62366: 2007, Am1: 2014, Medical devices – Application of usability engineering to medical devices." "Results indicate that the Grass TWin complies with its predetermined specifications and the applicable standards." |
| Safety & Effectiveness | "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the Grass TWin. The results of these activities demonstrate that the Grass TWin is as safe, as effective, and performs as well as or better than the predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not describe a "test set" in the context of clinical data or patient samples. The performance evaluation focuses on software verification/validation and bench testing. Therefore, information about sample size, data provenance, or whether it was retrospective/prospective is not applicable as described in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided because the performance testing described is not based on a clinical test set requiring expert-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided because the performance testing described is not based on a clinical test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention an MRMC comparative effectiveness study, nor does it refer to AI or assistance for human readers. The device is software for recording, playback, and analysis of physiological signals, not an AI-driven interpretive tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
While the Grass TWin is "software only" and can be considered a standalone algorithm in that it performs its functions without direct hardware integration, the performance evaluation documented here describes its compliance with specifications and standards, not a specific standalone clinical performance study with metrics like sensitivity or specificity. The "Indications for Use" explicitly state it is "intended for use by qualified research and clinical professionals with specialized training," implying human-in-the-loop operation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The concept of "ground truth" as typically used in clinical performance studies (e.g., against pathology or expert consensus) is not directly applicable to the performance testing described. The "truth" against which the software was evaluated was its predetermined specifications and compliance with regulatory standards (e.g., correct operation of software features, adherence to usability principles).
8. The sample size for the training set
The document does not refer to a "training set" for an algorithm, as it describes a software application that is verified and validated rather than trained using machine learning.
9. How the ground truth for the training set was established
As there is no mention of a training set, this information is not provided.
In summary, the provided document describes a regulatory submission for software (Grass TWin) that demonstrates substantial equivalence by focusing on:
- Technology Comparison: Showing direct equivalence in intended use and technological characteristics with a predicate device, noting minor differences that do not raise new questions of safety or effectiveness (e.g., operating system, additional features like PTT Trend Option, Montage Editor Summation Feature).
- Software Verification and Validation: Adherence to robust software development processes and compliance with general FDA guidance documents and international standards (IEC 62304 for software lifecycle processes, IEC 60601-1-6 and IEC 62366 for usability).
- Bench Performance Testing: Verification against internal requirements and applicable standards, specifically for usability.
The detailed clinical performance metrics typical for diagnostic or AI-assisted devices that you've asked about are not present in this 510(k) summary, as the device's nature (recording, playback, and analysis software for existing physiological signals, rather than a novel diagnostic algorithm) and the context of a substantial equivalence submission likely did not require them.