510(k) Data Aggregation
(188 days)
OLV
The Falcon HST is an EEG and respiratory signal recorder. The device is intended for use by adult patients in the home or clinical environment, under the direction of a qualified healthcare practitioner, to aid in the diagnosis of sleep disorders.
The Falcon HST comprises hardware and software that provide for the recording, review, and analysis of collected and stored physiological parameters, including EEG, EOG, ECG, and respiratory signals, which are then used as an aid in the diagnosis of respiratory and/or cardiac-related sleep disorders by qualified physicians.
The Falcon system consists of the main unit and the charging cradle. The main unit is a small device that is worn on the patient's chest over clothing. It is equipped with a touch-screen LCD and contains various channel inputs such as for the inductive plethysmography bands, and electrodes. The Falcon charging cradle is used to charge the device, as well as provide a USB interface for transferring study data to the PC.
The manufacturer, Compumedics Limited, demonstrates the substantial equivalence of the Falcon HST to its predicate devices for aid in the diagnosis of sleep disorders. The acceptance criteria and the study that proves the device meets the acceptance criteria are described below.
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Falcon HST are based on establishing substantial equivalence to its predicate devices, the Zmachine Synergy (K172986) and Zmachine DT-100 (K101830). The performance of the Falcon HST was deemed acceptable if its capabilities for recording EEG, respiratory effort, respiratory airflow, body position, and pulse oximetry, as well as its software's ability to produce AHI and sleep staging results, were found to be substantially equivalent to the predicate devices and gold standard polysomnography (PSG) data.
Acceptance Criteria / Performance Metric | Reported Device Performance |
---|---|
EEG Input Circuit Performance: Acquired EEG signals from Falcon HST are substantially equivalent to Zmachine Synergy with high agreement to design limits. | The EEG characteristics were found to be in high agreement with the design limits for all points of comparison. The EEG recording capabilities were found to be substantially equivalent. |
Respiratory Effort Performance: Acquired respiratory effort signals from Falcon HST are substantially equivalent to Zmachine Synergy. | Both units (Falcon HST and Zmachine Synergy) produced similar readings during simulated inhalation and exhalation. The Respiratory Effort characteristics were found to be substantially equivalent. |
Respiratory Airflow Performance: Acquired respiratory airflow signals from Falcon HST are substantially equivalent to Zmachine Synergy. | Both units produced similar readings when using nasal cannula with the same breathing rate. The Respiratory Airflow characteristics were found to be substantially equivalent. |
Body Position Performance: Acquired body position signals from Falcon HST are substantially equivalent to Zmachine Synergy and Falcon HST reports angle with regard to gravity appropriately against an angular reference. | The acquired data from Falcon HST and Zmachine Synergy was analyzed, and the Body Position recording capabilities were found to be substantially equivalent after rotating devices through 360 degrees against an angular reference. |
Pulse Oximetry Performance: Acquired pulse oximetry signals (heart rate and oxygen saturation) from Falcon HST are substantially equivalent to Zmachine Synergy. | The heart rate and oxygen saturation readings were found to be in high agreement when comparing the two systems. The Pulse Oximeter recording capabilities were found to be substantially equivalent. |
Profusion PSG Software 5.1 Performance (AHI and Sleep Staging): Produces substantially equivalent results for calculating the apnea hypopnea index (AHI) and sleep staging (N1, N2, N3, REM and Wake) when compared to expert review of gold standard polysomnography data. | Clinical performance testing validated that the performance of the Profusion PSG software 5.1 produces substantially equivalent results for calculating the apnea hypopnea index (AHI) and sleep staging (N1, N2, N3, REM and Wake) when compared to expert review of gold standard polysomnography data. (Specific metrics for "substantially equivalent" were not detailed in the provided text but implied by the successful validation statement.) |
Electrical Safety: Compliance with IEC 60601-1:2005 (Third Edition) + COR1:2006 + COR2:2007 + A1:2012 + A2:2020. | All tests passed. |
EMC: Compliance with IEC 60601-1-2:2014 + A1:2020, EN 60601-1-2:2015+A1:2021. | All tests passed. |
Mechanical and Environmental Requirements: Compliance with IEC 60601-1-11:2015+A1:2020. | All tests passed. |
Electroencephalograph Safety and Performance: Compliance with IEC 80601-2-26:2019, including accuracy of amplitude and rate of variation signal reproduction, input dynamic range and differential offset voltage, input noise, frequency response, and common mode rejection ratio. | All tests passed. |
Ambulatory Electrocardiography Systems Safety and Essential Performance: Compliance with IEC 60601-2-47:2012. | All tests passed. |
Battery Safety: Compliance with IEC 62133-2:2017/AMD1:2021 for secondary cells and batteries containing alkaline or other non-acid electrolytes (Lithium systems). | All tests passed. |
Functional Requirements: Performance meets hardware and software design specifications including functionality substantially equivalent to the Zmachine Synergy predicate device. | All tests passed with results equivalent to the Zmachine Synergy and Zmachine DT-100 and did not raise additional concerns of safety and effectiveness. |
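Several of the criteria above hinge on the apnea-hypopnea index (AHI), which is conventionally computed as scored apneas plus hypopneas per hour of sleep. A minimal illustrative sketch (the function name and inputs are ours, not from the submission):

```python
def ahi(apneas: int, hypopneas: int, total_sleep_time_min: float) -> float:
    """Apnea-hypopnea index: scored apneas + hypopneas per hour of sleep.

    Note: the denominator is scored total sleep time, not time in bed.
    """
    if total_sleep_time_min <= 0:
        raise ValueError("total sleep time must be positive")
    return (apneas + hypopneas) / (total_sleep_time_min / 60.0)

# Example: 12 apneas + 30 hypopneas over 420 min of sleep -> AHI of 6.0 events/h
```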
Study Details:
- Sample sizes used for the test set and the data provenance:
- Bench Testing (Side-by-Side Comparison): The text does not specify the exact number of devices or data points used for the side-by-side bench comparison tests between the Falcon HST and Zmachine Synergy for EEG input, respiratory effort, respiratory airflow, body position, and pulse oximetry. The description implies at least one of each device was used, subject to repeated measurements or simulated inputs.
- Clinical Performance Testing (Profusion PSG Software): The text does not provide a specific sample size for the clinical performance testing used to validate the software's ability to calculate AHI and sleep staging.
- Data Provenance: The document does not explicitly state the country of origin or whether the data for the mentioned tests was retrospective or prospective. For bench testing, it involved simulated inputs or direct comparison against predicate devices. For clinical performance testing of the software, it's compared against "gold standard polysomnography data," implying real patient data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., a radiologist with 10 years of experience):
- For the clinical performance testing of the Profusion PSG software, the ground truth for AHI and sleep staging was established through "expert review of gold standard polysomnography data." The number of experts and their specific qualifications (e.g., years of experience, specific certifications) are not specified in the provided text.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not specify an adjudication method for establishing ground truth, particularly for the clinical performance testing where expert review was used.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- The provided text does not mention an MRMC comparative effectiveness study involving human readers or AI assistance. The study focuses on the device's equivalence to existing technology and the accuracy of its software against expert-reviewed data, not on human reader performance with or without AI assistance. The Falcon HST is an EEG and respiratory signal recorder whose software aids diagnosis by processing these signals; it is not primarily an AI assistance tool for human interpretation in the sense of a typical MRMC study.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- For the hardware components (EEG, respiratory effort, airflow, body position, pulse oximetry), standalone performance testing was conducted by comparing the Falcon HST's output directly against that of the predicate devices or against angular references/simulated inputs.
- For the Profusion PSG software 5.1, its ability to calculate AHI and sleep staging was validated by comparing its outputs directly against "expert review of gold standard polysomnography data." This indicates a standalone performance evaluation of the algorithm's output against established ground truth, effectively without human-in-the-loop for the algorithm's calculation step itself. The device is intended "to aid in the diagnosis," implying that its output will be reviewed by a human practitioner.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For testing the accuracy of AHI and sleep staging calculations by the Profusion PSG software, the ground truth used was expert review of gold standard polysomnography data.
- For the bench testing of individual physiological parameters (EEG, respiratory effort, airflow, body position, pulse oximetry), the ground truth was established by comparison to the predicate device (Zmachine Synergy) or by using controlled simulated inputs and angular references.
- The sample size for the training set:
- The document does not provide any information regarding a training set size. This might be because the device's algorithms or software features (like sleep staging) may have been developed and validated previously, or the submission focuses on demonstrating equivalence to established technologies rather than novel algorithm training. The software, Profusion PSG software 5.1, is mentioned to be identical to versions previously cleared (K072201 and K093223), suggesting its core functionality and training (if any) happened prior to this submission.
- How the ground truth for the training set was established:
- As no information about a training set is provided, how its ground truth was established is not detailed in the document.
(230 days)
OLV
Noxturnal Web is intended to be used for the diagnostic evaluation by a physician to assess sleep quality and as an aid for the diagnosis of sleep and respiratory-related sleep disorders in adults only.
Noxturnal Web is a software-only medical device to be used to analyze physiological signals and manually score sleep study results, including the staging of sleep, AHI, and detection of sleep disordered breathing events including obstructive apneas.
It is intended to be used under the supervision of a clinician in a clinical environment.
Noxturnal Web is a web-based software that can be utilized to screen various sleep and respiratory-related sleep disorders. The users of Noxturnal Web are medical professionals who have received training in the areas of hospital/clinical procedures, physiological monitoring of human subjects, or sleep disorder investigation. Users can input a sleep study recording stored on the cloud (electronic medical record repository) using their established credentials. Once the sleep study data has been retrieved, the Noxturnal Web software can be used to display, manually analyze, generate reports and print the prerecorded physiological signals.
Noxturnal Web is used to read sleep study data for the display, analysis, summarization, and retrieval of physiological parameters recorded during sleep and awake. Noxturnal Web facilitates a user to review or manually score a sleep study either before the initiation of treatment or during the treatment follow-up for various sleep and respiratory-related sleep disorders.
Noxturnal Web presents information from the input sleep study data in an organized layout. Multiple visualization layouts (e.g., Study Overview, Respiratory Signal Sheet, etc.) are available to allow the users to optimize the visualization of key data components. The reports generated by Noxturnal Web allow the inclusion of custom user comments, and these reports can then be viewed on the screen and/or printed.
The provided document is a 510(k) summary for the medical device Noxturnal Web. It states that clinical data were not relied upon for a determination of substantial equivalence. Therefore, there is no information in this document regarding a clinical study or a test set with expert-established ground truth.
However, the document does describe the performance expectations and how suitability was determined through non-clinical testing, specifically software verification and validation.
Here's the information based on the provided text, focusing on the non-clinical and comparative aspects:
1. A table of acceptance criteria and the reported device performance
The document does not present explicit quantitative acceptance criteria for performance in a table format with reported numerical device performance. Instead, it describes functional equivalence to the predicate device through comparative analysis and states that the software meets its pre-specified requirements and performs as intended.
The comparison table on pages 8-9 highlights the functional equivalences:
Acceptance Criteria (Inferred from Functional Equivalence) | Reported Device Performance (as stated in document) |
---|---|
Aid/Assist in the diagnosis of sleep and respiratory-related sleep disorders | Yes (Same as predicates) |
Arousal Scoring | Yes (Same as predicates) |
Respiratory Events Scoring | Yes (Same as predicates) |
Leg Movement Events Scoring | Yes (Same as predicates) |
Sleep Study Scoring Method (Manual) | Manual (Same as primary predicate; additional predicate also has automatic) |
Sleep Stage Scoring (W, N1/N2/N3, R) | Yes (Same as predicates) |
Report Generation | Yes (Same as predicates) |
Calculation of AASM standardized indices | Yes (Same as predicates) |
Data Inputs (EEG, EOG, EMG, ECG, Chest/Abdomen movements, Airflow, Oxygen Saturation, Body Position/Activity) | All "Yes" (Same as predicates for all relevant inputs) |
Software Type (Web-based) | Web-based (Same as additional predicate; primary predicate is computer program) |
Physical Characteristics (Web-based operating in the cloud with Windows or Mac OS) | Web-based software operating in the cloud with Windows or Mac OS (Similar to additional predicate) |
Standard of Scoring Manual | The American Academy of Sleep Medicine (AASM) Manual for the Scoring of Sleep and Associated Events (Same as predicates) |
Backend implementation | Identical to corresponding qualitative and quantitative functionality implemented in the reference device (Nox Sleep System, K192469) |
Cybersecurity controls | Implemented in accordance with FDA's Guidance "Cybersecurity for Networked Medical Devices Containing Off-the-Shelf (OTS) Software" and "Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions" |
The general acceptance criterion is that the Noxturnal Web is "as safe and effective as the predicate devices" and "meets its pre-specified requirements."
2. Sample size used for the test set and the data provenance
The document explicitly states: "Clinical data were not relied upon for a determination of substantial equivalence." Therefore, there is no clinical test set of patient data with ground truth as would be used in a clinical study.
The testing performed was "Software verification and validation testing... to demonstrate safety and performance based on current industry standards," and "Verification and Validation testing of all requirement specifications defined for Noxturnal Web was conducted and passed." This implies that the 'test set' consisted of various software functions and their outputs, but not a large set of patient physiological recordings serving as a "test set" in the context of a clinical performance study. The data provenance and size of this kind of "test set" (software test cases) are not detailed in this summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Given that "Clinical data were not relied upon," there was no clinical test set requiring expert-established ground truth in the traditional sense for demonstrating substantial equivalence. The summary highlights that the device supports manual scoring completed by medical professionals who have received training in relevant areas (page 7). This implies that the human-in-the-loop performance is based on the expertise of the user, rather than the device itself establishing ground truth.
4. Adjudication method for the test set
Not applicable, as no clinical test set with established ground truth was used for assessing substantial equivalence.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC study was done, as indicated by the statement "Clinical data were not relied upon for a determination of substantial equivalence." The device's primary function is to facilitate manual scoring by a clinician, not to provide AI-assisted automated interpretations that would then be compared to human-only interpretations via an MRMC study.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
The device is described as "software-only medical device to be used to analyze physiological signals and manually score sleep study results" and "It is intended to be used under the supervision of a clinician in a clinical environment." This indicates that the device is not intended for standalone (algorithm only) performance without human-in-the-loop interaction for interpretation and scoring. The comparative table also notes that both the subject device and the primary predicate "rely on manual scoring."
7. The type of ground truth used
For the purpose of regulatory clearance, the "ground truth" for the device's functionality was its ability to replicate the features and performance of legally marketed predicate devices, as demonstrated through "comparative analysis, software and performance testing." The ground truth for interpreting sleep studies using this device resides with the trained medical professional who manually scores the data according to the "American Academy of Sleep Medicine (AASM) Manual for the Scoring of Sleep and Associated Events."
8. The sample size for the training set
Not applicable, as this device appears to be a software tool for manual scoring and analysis, rather than an AI/ML algorithm that requires a "training set" in the context of deep learning or machine learning models. The summary makes no mention of AI/ML or training data; its emphasis is on providing tools for manual clinician review.
9. How the ground truth for the training set was established
Not applicable, for the same reasons as point 8.
(269 days)
OLV
The HomeSleepTest is a non-invasive prescription device for home use with patients suspected to have sleep disorders including sleep-related breathing disorders. The HomeSleepTest is a diagnostic aid for the detection of sleep-related breathing disorders, sleep staging (REM, N1, N2, N3, Wake), and snoring level. Using three frontal electrodes and one mastoid electrode, the HomeSleepTest records electrical data. In addition, the device records triaxial accelerometer data, ambient light and acoustic data. With the help of the ComfortOxyRing, the device records plethysmographic data, as well as SpO2, heart rate and activity data. The HomeSleepTest calculates and reports to clinicians EEG/EOG/EMG, sleep stages, SpO2, plethysmography, pulse rate and snoring level. Based on this, the HomeSleepTest derives parameters such as autonomic arousal (based on the plethysmogram of the ring), the oxygen desaturation index, and hypnogram-derived indices such as time in each sleep stage, as well as other sleep-related parameters, to aid in determining sleep quality and quantity. HomeSleepTest data is not intended to be used as the sole or primary basis for diagnosing any sleep disorders or sleep-related breathing disorders, prescribing treatment, or determining whether additional diagnostic assessment is warranted. The HomeSleepTest is not intended for use in life-support and monitoring systems. The HomeSleepTest and its accessories are not intended for patients requiring monitoring and intensive care. The HomeSleepTest is a prescription device indicated for adult patients aged 21 years and over.
The HomeSleepTest is used to record vital signs to determine a hypnogram (sleep profile) and sleep parameters that can be helpful in objectively determining sleep quality and quantity. The HomeSleepTest is available in a standard HomeSleepTest (HST) version and a HomeSleepTest REM+ (HST REM+) version. The only difference between the two basic devices is the measuring point of the outer electrodes on the forehead. HomeSleepTest refers to both versions of the device. Furthermore, the HomeSleepTest can be used to detect the patient's activity, head position and snoring. A recording with the HomeSleepTest is initialized in the clinic/practice. For this purpose, the patient is given the HomeSleepTest and a tablet to take home. The recording can thus be carried out independently in the home environment. Immediately after recording, the data is automatically transferred to the HST cloud. The physician has access to this data via the HST cloud where they can analyze the data via the HST cloud in DOMINO and generate a report. A tablet app with video sequences supports the patient in correct application and guides them through the required biocalibration of the system. Once the test is started, all data from the HomeSleepTest is recorded via Bluetooth. The HomeSleepTest records nine signals (3 x frontopolar EEG, 2 x EOG, EMG, snoring, activity and head position). Snoring and snoring rhythm are recorded via the tablet's microphone. Head position, movement and light support information about time in bed (TIB) and other sleep-related parameters. After completion of the measurement the next morning by the patient, the data is automatically uploaded to the HST cloud where it can be analyzed with help of DOMINO software. After the data has been successfully transferred to the cloud, an exchange with the physician is possible via the feed-back function. 
It is also possible to release a new measurement for this patient, so that recordings can be made over several nights to capture variability. An overview of all measurements is available to the physician in the HST Cloud. A chat area is used for communication between doctor and patient. In the cloud-based evaluation software, both the pre-evaluation and the raw data of the measurement should be analyzed or edited in an AASM compliant manner. A simple report is available immediately after uploading the measurement. However, the recorded data must still be verified by a physician: for the detailed report (standard report), the measurement is opened in the DOMINO software and scored manually. After evaluation, the standard report is also available in the overview and provides additional information on the various sleep parameters.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Parameter / Metric | Acceptance Criteria | Reported Performance (HomeSleepTest) |
---|---|---|
Concordance of Manually Scored Sleep Stages (HST vs. PSG) | 82.0% | 83.0% (Cohen's kappa 0.77) |
Concordance of Automated Sleep Stages (HST Pre-evaluation vs. PSG Manual) | 50.0% | 53.7% (Cohen's kappa 0.40) |
Interrater Variability (HST Sleep Stages Scorer 1 vs. Scorer 2) | >70.0% | 71.3% |
Total Sleep Time (TST) | Not explicitly stated, but "minor differences" implied acceptance | Minor differences in TST between PSG and HST. Sustained sleep efficiency for PSG was 81.85% and for HST was 80.40% (difference of 1.45%). |
Sleep Latency, REM Latency, Sleep Period Time, Wake After Sleep Onset | Not explicitly stated, but "minor time differences" implied acceptance | Minor time differences between subject and predicate device. |
Different Sleep Stages as % TIB | ± 10% | Wake, REM, N1, N2, N3 all meet the acceptance criteria of ± 10%. |
Manually Scored Arousal Index | Not explicitly stated, but "met acceptance criteria" | Met acceptance criteria. |
Oxygen Desaturation Parameters | Not explicitly stated, but "met acceptance criteria" | Number of Desaturations (total, 90%, |
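The concordance percentages and Cohen's kappa values cited above are standard epoch-by-epoch agreement statistics that can be reproduced from two scorers' label sequences. A minimal illustrative sketch (names and data are ours, not from the submission):

```python
from collections import Counter

def agreement_and_kappa(labels_a, labels_b):
    """Overall percent agreement and Cohen's kappa between two scorers.

    labels_a / labels_b: equal-length sequences of per-epoch stage labels
    (e.g. 'Wake', 'N1', 'N2', 'N3', 'REM').
    """
    assert len(labels_a) == len(labels_b) and len(labels_a) > 0
    n = len(labels_a)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each scorer's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    kappa = 1.0 if p_chance == 1 else (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa
```

Kappa discounts the agreement expected by chance, which is why a 53.7% raw concordance can correspond to a kappa of only 0.40.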
(289 days)
OLV
The Cerebra Sleep System is an integrated diagnostic platform that acquires, transmits, analyzes, and displays physiological signals from adult patients, and then provides for scoring (automatic and manual), editing, and generating reports. The system uses polysomnography (PSG) to record the electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), accelerometry, acoustic signals, nasal airflow, thoracic and abdomen respiratory effort, pulse rate, and oxyhemoglobin saturation, depending on the sleep study configuration. The Cerebra Sleep System is for prescription use in a home or healthcare facility.
The Cerebra Sleep System is intended to be used as a support tool by physicians and PSG technologists to aid in the evaluation and diagnosis of sleep disorders. It is intended to provide sleep-related information that is interpreted by a qualified physician to render findings and/or diagnosis, but it does not directly generate a diagnosis.
The Cerebra Sleep System is an integrated diagnostic platform that acquires, transmits, analyzes, and displays physiological signals from adult patients, and then provides for scoring (automatic and manual), editing, and generating reports. It is for prescription use in a home or healthcare facility and is used by physicians and polysomnographic (PSG) technologists as a support tool to aid in the evaluation and diagnosis of sleep disorders. A PSG technologist must edit, score, and review the data before sleep reports are generated.
The Cerebra Sleep System is capable of collecting data required for Level 2 PSG and Level 3 HSAT studies. A Level 3 HSAT study is a home sleep apnea test with a minimum of 4 channels that include oxygen saturation, electrocardiogram (ECG) or heart rate, airflow (e.g., nasal flow), and respiratory effort (e.g., chest band). A Level 2 PSG study is an unattended sleep test with a minimum of 7 channels that include electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG) or heart rate, chin electromyogram (EMG), and all signals from Level 3.
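The Level 2/Level 3 channel minimums described above amount to a simple montage check; a sketch under the stated channel requirements (the channel names and helper function are illustrative, not part of the system):

```python
# Minimum channel sets per the study-level definitions in the text.
# Channel names here are illustrative placeholders.
LEVEL3_REQUIRED = {"SpO2", "ECG_or_HR", "airflow", "respiratory_effort"}
LEVEL2_REQUIRED = LEVEL3_REQUIRED | {"EEG", "EOG", "chin_EMG"}

def study_level(channels):
    """Classify a montage as 'Level 2', 'Level 3', or 'insufficient'."""
    present = set(channels)
    if LEVEL2_REQUIRED <= present:
        return "Level 2"
    if LEVEL3_REQUIRED <= present:
        return "Level 3"
    return "insufficient"
```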
The Cerebra Sleep System (CSS) is comprised of three main areas:
- Prodigy: This is a PSG recorder capable of performing Level 2 and Level 3 sleep studies. It includes the Head Mounted Unit (HMU), which is worn on the patient's head, the Chest Mounted Unit (CMU), which is affixed to a chest effort belt, the Table Top Unit (TTU), which receives data wirelessly, and third party accessories including an oximeter.
- Cerebra Analytics Suite (CAS): The CAS has four components: Web Processing (for signal processing of data), Web Scoring (for generating scoring results, which encompasses autoscoring), Cerebra Viewer (for viewing and editing PSG studies and scoring results), and Web Reporting (for generating reports). A PSG technologist must edit, score, and review the data before reports are generated. These components utilize well-defined file formats to enable communication; communication with each component is done through the internet, via the Cerebra Portal.
- Cerebra Portal: All areas of the CSS product are managed by the Portal software. The Portal is used to configure a study on the TTU and allows sleep analysis service providers to manage inventory, patients, and sleep studies. It also configures Prodigy system hardware for an individual sleep study to be performed by a patient.
The Cerebra Sleep System includes an "autoscoring" module, the performance of which was evaluated against acceptance criteria.
1. Acceptance Criteria and Reported Device Performance
Scoring Function | Acceptance Criteria (Overall % Agreement) | Reported Device Performance (Overall % Agreement) | Acceptance Criteria (Kappa) | Reported Device Performance (Kappa) |
---|---|---|---|---|
SLEEP STAGING | Not explicitly stated, but compared to Michele and Alice 5. Michele: 82.6%, Alice 5: 30.5% | 79.9% | Not explicitly stated, but compared to Michele and Alice 5. Michele: 0.77, Alice 5: 0.06 | 0.72 |
Awake | Not explicitly stated | 82.32% APPA / 97.15% ANPA | Not explicitly stated | Not applicable |
N1 | Not explicitly stated | 56.80% APPA / 91.64% ANPA | Not explicitly stated | Not applicable |
N2 | Not explicitly stated | 82.95% APPA / 88.05% ANPA | Not explicitly stated | Not applicable |
N3 | Not explicitly stated | 84.51% APPA / 97.16% ANPA | Not explicitly stated | Not applicable |
REM | Not explicitly stated | 79.41% APPA / 98.84% ANPA | Not explicitly stated | Not applicable |
AROUSALS | Not explicitly stated, but compared to Michele and Alice 5. Michele: 89.9%, Alice 5: 57.9% | 86.36% | Not explicitly stated, but compared to Michele and Alice 5. Michele: 0.54, Alice 5: 0.10 | 0.48 |
Yes | Not explicitly stated | 64.79% APPA / 89.74% ANPA | Not explicitly stated | Not applicable |
PLMs | Not explicitly stated, but compared to Michele and Alice 5. Michele: 95.7%, Alice 5: 88.3% | 95.99% | Not explicitly stated, but compared to Michele and Alice 5. Michele: 0.69, Alice 5: 0.38 | 0.69 |
Yes | Not explicitly stated | 64.96% APPA / 98.59% ANPA | Not explicitly stated | Not applicable |
RESPIRATORY EVENTS | Not explicitly stated, but compared to Michele and Alice 5. Michele: 94.0%, Alice 5: 78.0% | 89.18% | Not explicitly stated, but compared to Michele and Alice 5. Michele: 0.74, Alice 5: 0.25 | 0.54 |
Overall Apneas | Not explicitly stated | 62.60% APPA / 93.03% ANPA | Not explicitly stated | Not applicable |
Hypopneas & RERAs | Not explicitly stated | 66.52% APPA / 98.65% ANPA | Not explicitly stated | Not applicable |
Note: While specific numerical acceptance criteria for the Cerebra Sleep System's autoscoring performance are not explicitly stated as strict cut-offs, the comparison tables (Table 3 and Table 5) strongly imply that the acceptance criteria were based on achieving "moderate to substantial agreement" with manual scoring and demonstrating performance similar to or better than the identified predicate devices (Michele and Alice 5 autoscoring software systems).
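The APPA/ANPA columns in the table appear to be per-class positive and negative percent agreement against the consensus reference (the document does not spell out the abbreviations, so this reading is an assumption). A simplified single-reference sketch of how such values are derived:

```python
def stage_agreement(auto_labels, reference_labels, stage):
    """Positive/negative percent agreement for one stage vs. a reference.

    Positive agreement: of reference epochs scored as `stage`, the fraction
    the algorithm also scored as `stage`. Negative agreement: the analogue
    for reference epochs NOT scored as `stage`.
    """
    pos = [a for a, r in zip(auto_labels, reference_labels) if r == stage]
    neg = [a for a, r in zip(auto_labels, reference_labels) if r != stage]
    ppa = sum(a == stage for a in pos) / len(pos) if pos else float("nan")
    npa = sum(a != stage for a in neg) / len(neg) if neg else float("nan")
    return ppa, npa
```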
2. Sample Size and Data Provenance
- Test Set Sample Size: 138 randomly selected pre-existing sleep studies.
- Data Provenance: The studies were from both sleep laboratory (Level 1 sleep test) and in-home (Level 2 or 3 sleep test) use environments. The document does not specify the country of origin, but given the FDA submission, it's likely to encompass data relevant to US regulatory standards. The data was "pre-existing," indicating a retrospective study design.
3. Number of Experts and Qualifications
- Number of Experts: Three.
- Qualifications of Experts: Board-certified registered polysomnographic technologists (RPSGT) with a minimum of 13 years of experience scoring sleep studies.
4. Adjudication Method
- Method: Consensus manual scoring. The document states that the autoscoring software was compared to "the consensus manual scoring." For "No Consensus" epochs in sleep staging (1654 epochs) and respiratory events (833 epochs), it's explicitly mentioned that "all three technologists gave different scores," indicating that if the three experts did not agree, those epochs were labeled as "No Consensus" and likely excluded from the primary agreement calculations against the consensus. This suggests a form of 3-way agreement leading to "consensus."
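The adjudication described (three scorers, with epochs labeled "No Consensus" when all three disagree) is effectively a per-epoch majority vote; a minimal sketch:

```python
from collections import Counter

def epoch_consensus(scores):
    """Majority vote across three scorers for one epoch.

    Returns the label at least two scorers agree on, or None
    ('No Consensus') when all three scorers differ.
    """
    label, count = Counter(scores).most_common(1)[0]
    return label if count >= 2 else None
```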
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a direct MRMC comparative effectiveness study involving human readers improving with AI vs without AI assistance was not explicitly described in the provided text for the autoscoring module. The study focuses on evaluating the standalone performance of the autoscoring algorithm against manual scoring ("gold standard").
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance study was done. The "Software Verification and Validation Testing - Autoscoring" section details the comparison of the "Cerebra Sleep System (CSS) autoscoring software" to the "consensus manual scoring," which evaluates the algorithm's performance without a human-in-the-loop.
7. Type of Ground Truth Used
- Type of Ground Truth: Expert consensus manual scoring. The text states: "The testing consisted of comparing the autoscoring software with the current gold standard (manual scoring) for key sleep variables." And further, "Sleep recordings were manually scored by three board-certified registered polysomnographic technologists (RPSGT)..."
8. Sample Size for the Training Set
- The document does not explicitly state the sample size used for the training set of the autoscoring module. It only specifies the test set size of 138 studies.
9. How Ground Truth for Training Set was Established
- The document does not provide details on how the ground truth for the training set was established. It focuses solely on the validation/test set.
(113 days)
OLV
The SOMNOscreen® plus is a non-life-supporting portable physiological signal recording device intended to be used for testing adults and children (age 2 to 12 years)/adolescents (age 12 and above) suspected of having sleep related breathing disorders.
The SOMNOscreen® plus is a modular system with the following components available: thermistor, nasal cannula, effort belts with respective sensor, SpO2 sensor, microphone, headbox (EXG channels), external body position sensor, shoulder belts, activity sensor, EMG-PLM sensor, pressure sensor, gold cup electrodes, and LoFlo CO2 module (optional). The SOMNOscreen® plus device provides 13 AC channels (10 referential and 3 differential), 11 respiratory and AUX channels, and 8 internal channels.
The SOMNOscreen plus is available in 4 different configurations:
- Cardio-RESP
- Home Sleep
- PSG
- EEG 32
The purpose of this 510(k) submission is to expand the patient population to include children and adolescents. For the use in pediatric patients the DOMINO software only allows manual (visual) scoring by qualified RPSG. There is no automated analysis or highlighting available for pediatric patients.
The provided text describes a medical device, SOMNOscreen plus, and its FDA 510(k) clearance. The submission focuses on expanding the patient population to include children and adolescents, emphasizing that for this new population, only manual scoring by qualified personnel is allowed, with no automated analysis.
Here's an analysis of the acceptance criteria and study information, based only on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of "acceptance criteria" against "reported device performance" in a quantitative manner for specific diagnostic metrics (e.g., accuracy, sensitivity, specificity for detecting sleep-related breathing disorders). Instead, it discusses compliance with general safety and performance standards for electroencephalographs and polysomnographs.
However, we can infer some performance aspects based on the technical specifications and comparisons:
Performance Aspect (Implied Acceptance Criteria) | Reported Device Performance (from text) |
---|---|
Signal Recording Capabilities | Complies with performance criteria set forth by SOMNOmedics, including minimum performance specifications recommended by the American Academy of Sleep Medicine (AASM). |
Data Processing Resolution | Up to 16 Bit (consistent with predicates) |
Data Processing Storage Rate | Up to 512 Hz (higher than Alice 5's 200 Hz, supporting improved signal quality) |
Battery Life / Recording Duration | Up to 24 hours (consistent with predicates) |
Integrated Display Functionality | Allows signal check, programmable time setting, menu control directly on the main device (advantage over Alice 5, which has no internal display) |
Electrode Impedance Check | Capable (similar to Alice 5, but not on all predicates) |
Calibration Check | Capable (similar to predicates) |
Usability Engineering | Compliant with IEC 62366-1 |
Safety (Electrical) | Compliant with IEC 60601-1 |
Electromagnetic Compatibility (EMC) | Compliant with IEC 60601-1-2 |
Risk Management | Applied according to ISO 14971 |
Quality Management System | Compliant with ISO 13485 |
Biocompatibility | Established according to ISO 10993 for new components; many accessories previously cleared. |
2. Sample Size Used for the Test Set and Data Provenance
The document explicitly states: "Clinical data were not relied upon for a determination of substantial equivalence." This means no specific clinical test set was used to evaluate the device's diagnostic performance for sleep-related breathing disorders in humans for this submission. The evaluation focused on technical and safety equivalence to predicate devices and adherence to standards.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
As no clinical data was relied upon, there was no test set requiring expert-established ground truth. The "ground truth" implicitly referred to here is compliance with technical standards and AASM recommendations for polysomnography signal acquisition. For pediatric applications, the text mentions "manual (visual) scoring by qualified RPSG" (Registered Polysomnographic Technologist), implying that human experts are crucial for interpreting the device's output, especially in the expanded pediatric population. However, this is about the use of the device, not the validation of its performance in a clinical study for this submission.
4. Adjudication Method for the Test Set
Not applicable, as no clinical test set was used or described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, an MRMC comparative effectiveness study was not done, as stated by "Clinical data were not relied upon for a determination of substantial equivalence." The document indicates that for pediatric populations, the software "only allows manual (visual) scoring by qualified RPSG. There is no automated analysis or highlighting available for pediatric patients." This suggests that any comparative effectiveness of human readers with AI assistance versus without AI assistance is not relevant to this specific clearance for the pediatric population, as AI assistance for scoring is explicitly not provided for them.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
No standalone algorithm performance study was described. In fact, for the new pediatric population, the device explicitly does not offer automated analysis or highlighting, emphasizing human interpretation.
7. The Type of Ground Truth Used
For the purposes of this 510(k) submission, the "ground truth" used was primarily compliance with recognized standards (IEC 60601-1, IEC 60601-1-2, ISO 14971, ISO 10993, IEC 62366-1) and technical specifications, as well as adherence to minimum performance specifications recommended by the American Academy of Sleep Medicine (AASM) for signal types. No clinical ground truth (like pathology, expert consensus on diagnostic outcomes, or long-term outcomes data) was used in a specific clinical study for this submission.
8. The Sample Size for the Training Set
No training set information is provided, as the submission did not rely on clinical data or automated algorithms that would require a training set for this clearance. The "training" here refers to the design and verification process of the device itself against engineering and safety requirements.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set for an AI algorithm was described. The device's "training" in an engineering sense involved verification and validation against pre-specified requirement specifications and relevant performance standards, ensuring basic safety and essential performance.
(150 days)
OLV
The Cadwell ApneaTrak device is intended for home sleep testing, including the acquisition of physiological and environmental data. The recorded signals are then transmitted to a PC so that they can be viewed. ApneaTrak is intended for use on patients older than 2 years of age.
ApneaTrak is intended for use in hospitals, sleep centers and other sleep testing environments, including the patient's home. ApneaTrak is intended to be used when prescribed by a qualified healthcare provider for use on patients suspected of sleep disorders, specifically Sleep Disordered Breathing (SDB) and requires review by qualified medical personnel. ApneaTrak is NOT intended to perform automatic diagnosis.
Cadwell's ApneaTrak is a sleep diagnostic system consisting of: (1) acquisition hardware that can acquire, record, store, and transfer up to 3 channels of ExG (including EEG, EMG, ECG, and EOG signals) data, 2 respiratory effort channels, 1 thermistor channel, 1 pressure channel, 1 snore channel, and 1 oximetry channel; (2) a host electronic device (typically a PC) capable of running the software as well as charging and interfacing with the acquisition device; and (3) software that allows for device configuration and data download.
ApneaTrak is connected, by a clinical user, to a host device via USB cable for initialization. After initialization and having been given instruction on correct clinical use of the device, ApneaTrak is then used by the patient at home. The device acquires and stores physiological and/or environmental data to onboard memory. After use, the device is returned to the clinical user, who connects the device to the host PC. The software downloads and stores data from the device in European Data Format (EDF).
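Because the software stores downloaded studies in European Data Format (EDF), the file's fixed 256-byte ASCII header can be inspected directly. The field names and widths below follow the published EDF specification; the sample header bytes are fabricated for illustration and do not come from an ApneaTrak recording.

```python
# Sketch: parsing the fixed 256-byte ASCII header of an EDF file (the
# export format named above). Field widths follow the EDF specification;
# the sample header below is fabricated for illustration.
import io

EDF_FIELDS = [            # (field name, width in bytes), per the EDF spec
    ("version", 8), ("patient_id", 80), ("recording_id", 80),
    ("start_date", 8), ("start_time", 8), ("header_bytes", 8),
    ("reserved", 44), ("n_records", 8), ("record_duration_s", 8),
    ("n_signals", 4),
]

def parse_edf_header(stream):
    """Read the 256-byte EDF header from a binary stream into a dict."""
    return {name: stream.read(width).decode("ascii").strip()
            for name, width in EDF_FIELDS}

# Fabricated sample header: each value padded to its fixed field width.
values = ["0", "X X X X", "Startdate 01-JAN-2024", "01.01.24",
          "22.30.00", "768", "", "960", "30", "8"]
raw = b"".join(v.ljust(w).encode("ascii")
               for v, (_name, w) in zip(values, EDF_FIELDS))
header = parse_edf_header(io.BytesIO(raw))
print(header["n_signals"], header["record_duration_s"])  # 8 30
```

The signal-specific header records (labels, sampling rates, physical ranges) follow these 256 bytes and could be parsed the same way.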
The provided document does not contain an acceptance criteria table detailing specific performance metrics and a study describing how the device meets those criteria with clinical data. Instead, it details that the ApneaTrak device underwent performance testing against recognized electrical safety, electromagnetic disturbance, and performance standards for its various signal acquisition functionalities.
The document indicates that the device's technical specifications and intended use are similar to predicate devices (Zmachine Synergy and Nox T3 Sleep Recorder). The testing primarily focuses on demonstrating compliance with standards and equivalence in signal acquisition performance through bench testing rather than a comparative effectiveness study involving human readers or a standalone algorithm-only performance study.
Here's a breakdown of the available information based on your request:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with specific performance thresholds for diagnostic accuracy (e.g., sensitivity, specificity for identifying sleep disorders) that the device must meet against a ground truth. Instead, it lists the following performance tests and their outcomes:
Test | Test Method Summary | Reported Device Performance (Results) |
---|---|---|
ExG | ExG functionality validated by complying with essential performance requirements from IEC 60601-2-26 and IEC 60601-2-40. | All test results demonstrate compliance with the standards. |
Pulse Oximetry | Pulse oximetry functionality validated by complying with ISO 80601-2-61 Particular requirements for pulse oximeters. | All test results demonstrate compliance with the standard. |
Respiratory Effort | A known oscillating input signal was injected into the respiratory channel of the subject device. The input and output data were plotted and quantitatively compared. | Passing result based on high measure of equivalence between input and output signals. |
Airflow - Pressure | A known oscillating input signal was input to the airflow pressure channel of the subject device. The input and output data were plotted and quantitatively compared. | Passing result based on high measure of equivalence between input and output signals. |
Airflow - Thermal | A known oscillating input signal was input to the airflow thermal channel of the subject device. The input and output data were plotted and quantitatively compared. | Passing result based on high measure of equivalence between input and output signals. |
Snore | A known oscillating input signal was input to the snore channel of the subject device. The input and output data were plotted and quantitatively compared. | Passing result based on high measure of equivalence between input and output signals. |
Electrical Safety | Compliance with IEC 60601-1:2005 + CORR. 1:2006 + CORR. 2:2007 + A1:2012, AAMI ES60601-1:2005 +C1+A2 [R2012], IEC 60601-2-40:2016, IEC 60601-2-26:2012, ISO 80601-2-61:2017, IEC 60601-1-11:2015, IEC 60601-1-6:2013, IEC 62304:2006 + A1:2015. | All test results demonstrate compliance with the applicable standards. |
Electromagnetic | Compliance with IEC 60601-1-2:2014. | All test results demonstrate compliance with the applicable standards. |
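The bench tests above inject a known oscillating signal and "quantitatively compare" input and output. The document does not name the equivalence metric; the sketch below uses Pearson correlation as one plausible choice, with simulated signals standing in for the injected input and recorded output.

```python
# Sketch of an input/output equivalence check like the bench tests above.
# The actual metric is not named in the document; Pearson correlation is
# used here as one plausible "measure of equivalence". Signals simulated.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated 1 Hz injected signal sampled at 64 Hz, and a slightly
# attenuated, slightly noisy "recorded" output.
t = [i / 64 for i in range(256)]
injected = [math.sin(2 * math.pi * ti) for ti in t]
recorded = [0.98 * s + 0.01 * math.cos(7 * ti) for s, ti in zip(injected, t)]
r = pearson_r(injected, recorded)
print(r > 0.99)  # high equivalence between input and output -> pass
```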
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document describes "performance bench tests" where "a known oscillating input signal was injected" or "tested in accordance with internal software requirements, system requirements, and usability requirements." This implies laboratory-based testing with simulated data or controlled inputs rather than a clinical test set with patient data. Therefore, there is no sample size of patients or information on data provenance (country, retrospective/prospective) for a clinical test set provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Given the nature of the bench testing described (injecting known signals), the concept of "experts establishing ground truth for a test set" as typically understood in AI/clinical validation studies does not apply here. The ground truth for these tests was the "known oscillating input signal" or the performance requirements of the standards themselves.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
No adjudication method is mentioned, as the testing described focuses on device functionality and signal accuracy against known inputs or standards, not on diagnostic interpretations requiring adjudication.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study was done or reported in the provided document. The device is described as a data acquisition system for home sleep testing, and it "is NOT intended to perform automatic diagnosis." The focus is on accurate signal acquisition, not AI-assisted interpretation or improvement of human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document does not describe a standalone algorithm performance study. The device's purpose is to acquire and transmit physiological data for review by qualified medical personnel. It explicitly states, "ApneaTrak is NOT intended to perform automatic diagnosis."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the bench tests, the ground truth was known oscillating input signals or the defined specifications and requirements of the relevant IEC/ISO standards. For the purpose of regulatory substantial equivalence, the ground truth for the device's claims implicitly relies on the predicate devices and their established performance for similar data acquisition.
8. The sample size for the training set
No training set is mentioned as the document does not describe a machine learning or AI-based diagnostic algorithm that would require a training set. The device is an electrophysiological signal acquisition system.
9. How the ground truth for the training set was established
As no training set is discussed, this question is not applicable.
(141 days)
OLV
The Serenity Piezo and Serenity Thermocouple Sensors are intended to measure and output snore/limb movement signals and thermal respiratory flow signals, respectively, from a patient for archival in a polysomnography study.
The sensors are accessories to a polysomnography system which records and conditions the physiological signals for analysis and display, such that the data may be analyzed by a qualified sleep clinician to aid in the diagnosis of sleep disorders.
The Serenity Piezo and Serenity Thermocouple Sensors are intended for use on both adults and children by healthcare professionals within a hospital, laboratory, clinic, or nursing home; or outside of a medical facility under the direction of a medical professional.
The Serenity Piezo and Serenity Thermocouple Sensors are not intended for the life monitoring of high risk patients, do not include or trigger alarms, and are not intended to be used as a critical component of:
- an alarm or alarm system;
- an apnea monitor or apnea monitoring system; or
- a life monitor or life monitoring system.
Serenity sleep sensors are intended to measure and output physiologic signals used for Polysomnography (PSG) or Sleep Studies. These devices are to be used as an accessory to compatible amplifiers.
Typical sleep amplifiers use sensors and electrodes to collect physiological signals to further digitize, and the amplifiers send these signals to a host PC.
Serenity sleep sensors are worn by the patient and connected directly to compatible inputs of an amplifier. The amplifier and related software then processes the signal for review by qualified practitioners to score polysomnograms and diagnose Sleep Disorders.
The Serenity Piezo Sensor uses an embedded piezo sensing element to detect the vibrations of snoring or to sense a patient's limb movement. The sensor outputs a signal which corresponds to movements of the limbs or snore vibrations. The Piezo sensor can be placed on the skin or worn in a heel strap.
The Serenity Thermocouple Sensor uses thermocouple wire that is joined together to form sensing elements. Thermocouple junctions under each nostril and in front of the mouth output a signal which corresponds to the patient's thermal airflow. The Serenity Thermocouple sensor is available with an optional cannula hanger to aid in patient usability when worn with an airflow pressure cannula.
The Neurotronics Serenity Piezo Sensor and Serenity Thermocouple Sensors are medical devices intended to measure and output physiological signals (snore, limb movement, and thermal respiratory flow) for polysomnography studies. The provided text describes several non-clinical tests conducted to demonstrate the device's performance and safety.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance
Test Type | Acceptance Criteria | Reported Device Performance |
---|---|---|
Electrical Safety | Specified in 60601-1, Dielectric Strength (1.5 kVAC, 10s ramp, 1 min), Ingress of Liquids, Patient Leads (21CFR898) | All samples passed the acceptance criteria. |
Piezo Sensor Verification (Signal) | Signal Level: Movement and vibration clearly visible with recommended configuration. Output signal within listed specifications. | All samples passed the acceptance criteria. |
Piezo Sensor Verification (Wire Test) | Connector Retention >= 4.5 N, Tensile Strength >= 50 N, Leadwire Resistance = 3,650 | All samples passed the acceptance criteria. |
Thermocouple Sensor Verification (Signal) | Signal Level: Oral and Nasal breathing clearly visible at sensitivity of 20. Output signal within listed specifications. | All samples passed the acceptance criteria. |
Thermocouple Sensor Verification (Wire Test) | Connector Retention >= 4.5 N, Tensile Strength >= 50 N, Leadwire Resistance = 3,650 | All samples passed the acceptance criteria. |
Reference Device Comparison (Piezo) | Sensor snore response to vibration relative to noise floor (SNR) at varied frequencies and complex waveforms using recommended polysomnography montage configuration. Sensor limb movement response to movement. Output signal within listed specifications. | Comparison testing shows equivalent performance of the Serenity sensors and the reference devices. |
Reference Device Comparison (Thermocouple) | Sensor response as warm air passes (signal cessation attenuated by >=90% of pre-event baseline). Output signal within listed specifications. | Comparison testing shows equivalent performance of the Serenity sensors and the reference devices. |
Biocompatibility | Compliance with ISO 10993-1, ISO 10993-5 (in vitro Cytotoxicity), ISO 10993-10 (Irritation and Skin Sensitization). | All samples passed the acceptance criteria for the performed biocompatibility testing. |
Sterility | Not applicable (device is not sterile). | Not applicable (device is not sterile). |
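The thermocouple comparison criterion above (signal cessation attenuated by >= 90% of the pre-event baseline) reduces to a simple amplitude check. The sketch below is illustrative, with fabricated amplitude values; the device's actual analysis chain is not described in the document.

```python
# Sketch of the ">= 90% attenuation from pre-event baseline" criterion in
# the thermocouple comparison row above. Amplitudes are fabricated
# peak-to-peak values; the device's actual analysis is not described.
def is_cessation(baseline_amp, event_amp, threshold=0.90):
    """True if the event amplitude fell by >= threshold of baseline."""
    return (baseline_amp - event_amp) / baseline_amp >= threshold

print(is_cessation(1.0, 0.05))  # True: 95% attenuation from baseline
print(is_cessation(1.0, 0.30))  # False: only 70% attenuation
```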
2. Sample Size Used for the Test Set and the Data Provenance
The document does not explicitly state the specific numerical sample size used for each individual test (e.g., "50 devices were tested"). Instead, it states "All samples passed the acceptance criteria" for electrical safety, piezo sensor verification, and thermocouple sensor verification. The reference device comparison also mentions "comparison testing shows equivalent performance."
- Sample Size: Not explicitly quantified. "All samples" is used consistently, implying a sufficient number were tested to be representative and satisfy the acceptance criteria.
- Data Provenance: The studies are non-clinical testing conducted by the manufacturer, Neurotronics, Inc. There is no mention of country of origin for the data or whether it was retrospective or prospective in the context of human patient data. These are engineering and performance validation tests, not clinical trials.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This section is not applicable as the tests described are non-clinical engineering and performance validations, not studies involving human interpretation or diagnosis. Therefore, there is no "ground truth" established by human experts in the context of the device's diagnostic performance. The ground truth for these tests is defined by the technical specifications and standards (e.g., signal level, resistance, tensile strength, ISO standards).
4. Adjudication Method for the Test Set
This section is not applicable as the tests are non-clinical and do not involve human interpretation or subjective assessment that would require adjudication. The results are objective measurements against defined technical criteria.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
This section is not applicable. The device (Serenity Piezo Sensor, Serenity Thermocouple Sensor) is a sensor that collects physiological signals, not an AI-powered diagnostic tool for interpretation by a human reader. Its function is to accurately measure and output signals, not to process them with AI or aid in human interpretation improvement.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
This section is not applicable. The device is a sensor; it does not contain a standalone algorithm for diagnostic performance. It outputs raw physiological signals for analysis by a polysomnography system and qualified sleep clinician.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the non-clinical tests described, the "ground truth" is based on:
- Technical Specifications: Defined parameters for signal level, electrical properties (dielectric strength, resistance), mechanical properties (connector retention, tensile strength, mating cycles).
- International Standards: ISO 10993 for biocompatibility and IEC 60601-1 for electrical safety.
- Reference Device Performance: The performance of the Serenity sensors was compared to established performance characteristics of predicate and reference devices, implying these established devices serve as a benchmark for "ground truth" equivalence in signal output.
8. The sample size for the training set
This section is not applicable. The device is a sensor, not a machine learning model. Therefore, there is no "training set" in the context of AI/ML development.
9. How the ground truth for the training set was established
This section is not applicable for the same reasons as #8.
(194 days)
OLV
The Serenity RIP and Serenity Body Position Sensors are intended to measure and output respiratory effort signals and body position, respectively, from a patient for archival in a polysomnography study. The sensors are accessories to a polysomnography system which records and conditions the physiological signals for analysis and display, such that the data may be analyzed by a qualified sleep clinician to aid in the diagnosis of sleep disorders.
The Serenity RIP and Serenity Body Position Sensors are intended for use on both adults and children by healthcare professionals within a hospital, laboratory, clinic, or nursing home; or outside of a medical facility under the direction of a medical professional.
The Serenity RIP and Serenity Body Position Sensors are not intended for the life monitoring of high risk patients, do not include or trigger alarms, and are not intended to be used as a critical component of:
- an alarm or alarm system;
- an apnea monitor or apnea monitoring system; or
- a life monitor or life monitoring system.
Serenity sleep sensors are intended to measure and output physiologic signals used for Polysomnography (PSG) or Sleep Studies. These devices are to be used as an accessory to compatible amplifiers.
Typical sleep studies use sensors and electrodes to collect, digitize, and send physiological signals to a host PC.
Serenity sleep sensors are worn by the patient and connected directly to compatible inputs of an amplifier. The amplifier and related software then process the signal for review by qualified practitioners, who score polysomnograms and diagnose sleep disorders.
The Serenity Body Position Sensor uses a 3-axis accelerometer to track the patient's body orientation, outputting a voltage which corresponds to one of 5 positions (sitting/upright, supine, prone, left-side, and right-side).
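Classifying orientation from a 3-axis accelerometer can be sketched as picking the axis most aligned with gravity. The axis convention (x = left/right, y = along the body, z = front/back) and the dominant-axis rule below are assumptions for illustration, not the Serenity sensor's documented design.

```python
# Sketch: mapping a 3-axis accelerometer gravity vector to the five body
# positions named above. The axis convention and dominant-axis rule are
# assumptions for illustration, not the device's documented design.
def classify_position(x, y, z):
    """Map an (x, y, z) gravity reading in g to one of five positions."""
    axes = {"x": x, "y": y, "z": z}
    dominant = max(axes, key=lambda k: abs(axes[k]))
    if dominant == "y":
        return "sitting/upright"               # gravity along the body axis
    if dominant == "z":
        return "supine" if z > 0 else "prone"  # face-up vs face-down
    return "left-side" if x > 0 else "right-side"

print(classify_position(0.0, -1.0, 0.1))    # sitting/upright
print(classify_position(0.05, 0.1, 0.99))   # supine
print(classify_position(-0.98, 0.0, 0.1))   # right-side
```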
The Serenity RIP Sensor uses respiratory inductance plethysmography to output a waveform which corresponds to patient's respiratory effort. The patient wears an adjustable elastic belt which connects to the RIP driver, the RIP driver then connects to the host device. The Serenity RIP sensor is available for both thorax and abdomen. Thorax and abdomen versions are identical, except that they operate at different frequencies to avoid interference.
The provided text, K173868, details the 510(k) premarket notification for the Serenity Body Position Sensor and Serenity RIP Sensors. It focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed clinical study data with specific acceptance criteria and performance metrics for an AI-powered device. Therefore, a comprehensive answer to your request, particularly regarding AI-specific criteria, human reader improvement with AI assistance, and detailed ground truth establishment, cannot be fully extracted from this document as the device in question is a sensor, not an AI algorithm.
However, based on the information provided for the sensors, here's a breakdown of what can be inferred:
1. A table of acceptance criteria and the reported device performance:
The document doesn't provide acceptance criteria and reported performance in terms of clinical accuracy or diagnostic capabilities for the sensors themselves in a traditional table format with quantitative metrics. Instead, it demonstrates compliance with safety and performance standards and comparative performance to predicate devices.
Category | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Safety and Essential Performance | Compliance with IEC 60601-1 (Medical Electrical Equipment - Part 1: General Requirements for Basic Safety and Essential Performance) concerning: humidity preconditioning; determination of applied part and accessible parts; legibility of marking; durability of marking; patient leakage current; dielectric voltage withstand; resistance to heat; excessive temperature; ingress of liquids (IEC 60529); cleaning, disinfection and sterilization of ME equipment and ME systems; enclosure mechanical strength (push, impact, drop test for hand-held ME equipment); mold stress relief | "All samples passed the acceptance criteria." (for both the Serenity Body Position Sensor and Serenity RIP Sensor). The document notes that predicate devices were not found to publish testing to a basic safety standard. |
Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2 (Medical Electrical Equipment - Part 1-2: General Requirements for Safety - Collateral Standard: Electromagnetic Compatibility) concerning: radiated emissions (CISPR 11 ed. 5.0); electrostatic discharge immunity test (IEC 61000-4-2 ed. 2.0); conducted, radio-frequency, electromagnetic immunity test (IEC 61000-4-6 ed. 2.0); power frequency magnetic field immunity test (IEC 61000-4-8 ed. 2.0) | "All samples passed the acceptance criteria." (for both the Serenity Body Position Sensor and Serenity RIP Sensor). The document notes that predicate devices were not found to publish testing for electromagnetic compatibility. |
Body Position Sensor Specific | Accurate detection and output of signals corresponding to 5 positions (right side, left side, supine, prone, and upright/sitting), plus specified performance for: position test; dielectric strength; transition and hysteresis; output impedance; operational battery life calculation; dimensional analysis; output noise; connector tests; strap fasten/unfasten cycle; wire construction test; operational battery voltage range test | "All samples passed the acceptance criteria." The document notes that predicate devices were not found to publish testing details. Comparative testing showed "equivalent performance of the Serenity sensors and the reference devices, using the same host system configurations." |
RIP Sensor Specific | Accurate detection and output of a waveform corresponding to respiratory effort, with performance equivalent to the predicate device in detecting respiratory effort from chest or abdomen movement | Comparative testing showed "equivalent performance of the Serenity sensors and the reference devices, using the same host system configurations." |
Predicate Comparison | Substantial equivalence to the identified predicate devices (Braebon Ultima Body Position Sensor and Ambu RIPmate for technical characteristics; Neurotronics Polysmith Sleep System/Nomad Sleep System for overall intended use and integration as accessories), implying similar physical, electrical, and environmental designs and no new questions of safety or effectiveness | The document concludes: "Based on the results of the Intended Use Comparison, the Technical Comparison, and Testing Data, it is believed that the Serenity Body Position Sensor and Serenity RIP Sensors present no new questions of safety and effectiveness and, are substantially equivalent to the identified predicate. Both sensors have similar physical, electrical, and environmental designs. Both share the same intended use." |
Biocompatibility | No toxic or irritating effects from patient contact | "Not Applicable" for the regulatory submission, implying the materials are standard and well understood for patient contact, or fall under a category where specific biocompatibility testing for this 510(k) was not deemed necessary for substantial equivalence given the context of a sensor accessory. The document notes that predicate devices were not found to publish biocompatibility information. |
Sterility | If applicable, the device should meet sterility requirements | "Not applicable." The document notes that predicate devices were not found to publish sterility information. |
2. Sample sizes used for the test set and the data provenance:
- Sample Size: The document mentions "All samples passed the acceptance criteria" for the safety, EMC, and specific sensor verification tests. However, it does not specify the exact number of samples (devices) used for these tests. It's common for these types of engineering verification tests to use a small, representative sample size (e.g., 3-10 units) rather than large clinical trial numbers.
- Data Provenance: The tests appear to be retrospective engineering verification and validation tests conducted by the manufacturer, Neurotronics, Inc. The document does not specify the country of origin for the data or testing other than being performed by the applicant (Neurotronics, Inc.) located in Gainesville, Florida, USA.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not applicable/provided in the context of this 510(k) submission. This submission is for physical sensors that measure physiological signals (body position and respiratory effort) for archival in a polysomnography study. It is not an AI algorithm that performs an "analysis" or "diagnosis" by itself requiring expert consensus on ground truth for an AI test set. The sensors output raw signals, which are then analyzed by a "qualified sleep clinician." The expertise required is in the manufacturing and testing of medical devices to relevant standards, and the comparison is largely on technical specifications and intended use.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This is not applicable/provided. Adjudication methods are typically used in clinical studies where multiple human readers interpret medical images or data and their interpretations need to be reconciled to establish a ground truth or resolve discrepancies, particularly for AI algorithm validation. This document describes engineering and performance verification of physical sensors, not a clinical study involving human interpretation consensus.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
This is not applicable/provided. The device is a sensor, not an AI algorithm. Therefore, an MRMC study assessing human reader improvement with AI assistance is not relevant to this submission.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
This is not applicable/provided. The device is a sensor, not an algorithm, and its output is explicitly stated to be "for archival in a polysomnography study" and for analysis "by a qualified sleep clinician." It is not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
For the sensor performance tests, the "ground truth" is established by engineering measurements and compliance with specified physical and electrical parameters and industry standards. For comparative performance, the ground truth is simply the measured output of both the new device and the predicate device when subjected to the same inputs/conditions, demonstrating "equivalent performance." There is no mention of expert consensus, pathology, or outcomes data being used to establish ground truth for the sensor's direct output. The interpretation of the sensor data by a sleep clinician in a PSG study implicitly relies on established clinical ground truth for sleep disorders, but this is downstream from the sensor itself.
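The pass/fail pattern described here ("All samples passed the acceptance criteria" against specified physical and electrical parameters) can be illustrated with a minimal sketch. The parameter names, nominal values, and tolerances below are hypothetical placeholders, not values from the submission:

```python
def within_tolerance(measured, nominal, tol_pct):
    """Pass/fail check: measured value within +/- tol_pct percent of the nominal spec."""
    return abs(measured - nominal) <= nominal * tol_pct / 100.0

# Hypothetical spec table: parameter -> (nominal value, tolerance %).
spec = {
    "output_impedance_ohm": (1000.0, 5.0),
    "battery_voltage_v": (3.0, 10.0),
}

# Illustrative bench measurements for one sample unit.
measurements = {"output_impedance_ohm": 1023.0, "battery_voltage_v": 2.95}

results = {
    name: within_tolerance(measurements[name], nominal, tol)
    for name, (nominal, tol) in spec.items()
}
all_passed = all(results.values())  # True when every parameter meets its spec
```

Each verification test in the list above reduces, in principle, to a comparison of this kind between an engineering measurement and its specified acceptance band.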
8. The sample size for the training set:
This is not applicable/provided. This device is a physical sensor, not an AI algorithm that requires a "training set."
9. How the ground truth for the training set was established:
This is not applicable/provided, as no training set for an AI algorithm is involved.
(98 days)
OLV
This software is intended for use by qualified research and clinical professionals with specialized training in the use of EEG and PSG recording instrumentation for the digital recording, playback, and analysis of physiological signals. It is suitable for digital acquisition, display, comparison, and archiving of EEG potentials and other rapidly changing physiological parameters.
The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Grass® TWin® (Grass TWin) is a comprehensive software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). TWin is incredibly powerful and flexible, but also designed for easy and efficient day-to-day use. Grass TWin is a software product only, and does not include any hardware.
This document is a 510(k) summary for the Grass TWin, a software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). The focus of the provided text is on demonstrating the device's substantial equivalence to a predicate device and its compliance with regulatory standards for software and usability.
Based on the provided text, the Grass TWin software does not appear to have detailed acceptance criteria or a specific study proving device performance against those criteria in the typical sense of a clinical performance study with metrics like sensitivity, specificity, or accuracy. Instead, the "performance testing" described focuses on software verification and validation and bench testing for compliance with pre-determined specifications and regulatory standards.
Here's an analysis of the information, addressing your requests based only on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of quantitative "acceptance criteria" and "reported device performance" in terms of clinical outcomes or diagnostic accuracy. Instead, the acceptance criteria are framed as compliance with internal requirements and regulatory standards for software development, usability, and safety.
Acceptance Criteria Category | Reported Device Performance |
---|---|
Software Development | "The Grass TWin software was designed and developed according to a robust software development process, and was rigorously verified and validated." "Results indicate that the Grass TWin software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards." (referencing FDA guidance documents and IEC 62304: 2006) |
Usability | "The Grass TWin was verified for performance in accordance with internal requirements and the applicable clauses of the following standards: IEC 60601-1-6: 2010, Am1: 2013, Medical electrical equipment -Part 1-6: General requirements for basic safety and essential performance – Collateral standard: Usability; IEC 62366: 2007, Am1: 2014, Medical devices – Application of usability engineering to medical devices." "Results indicate that the Grass TWin complies with its predetermined specifications and the applicable standards." |
Safety & Effectiveness | "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the Grass TWin. The results of these activities demonstrate that the Grass TWin is as safe, as effective, and performs as well as or better than the predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not describe a "test set" in the context of clinical data or patient samples. The performance evaluation focuses on software verification/validation and bench testing. Therefore, information about sample size, data provenance, or whether it was retrospective/prospective is not applicable as described in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided because the performance testing described is not based on a clinical test set requiring expert-established ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided because the performance testing described is not based on a clinical test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention an MRMC comparative effectiveness study, nor does it refer to AI or assistance for human readers. The device is software for recording, playback, and analysis of physiological signals, not an AI-driven interpretive tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
While the Grass TWin is "software only" and can be considered a standalone algorithm in that it performs its functions without direct hardware integration, the performance evaluation documented here describes its compliance with specifications and standards, not a specific standalone clinical performance study with metrics like sensitivity or specificity. The "Indications for Use" explicitly state it is "intended for use by qualified research and clinical professionals with specialized training," implying human-in-the-loop operation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The concept of "ground truth" as typically used in clinical performance studies (e.g., against pathology or expert consensus) is not directly applicable to the performance testing described. The "truth" against which the software was evaluated was its predetermined specifications and compliance with regulatory standards (e.g., correct operation of software features, adherence to usability principles).
8. The sample size for the training set
The document does not refer to a "training set" for an algorithm, as it describes a software application that is verified and validated rather than trained using machine learning.
9. How the ground truth for the training set was established
As there is no mention of a training set, this information is not provided.
In summary, the provided document describes a regulatory submission for software (Grass TWin) that demonstrates substantial equivalence by focusing on:
- Technology Comparison: Showing direct equivalence in intended use and technological characteristics with a predicate device, noting minor differences that do not raise new questions of safety or effectiveness (e.g., operating system, additional features like PTT Trend Option, Montage Editor Summation Feature).
- Software Verification and Validation: Adherence to robust software development processes and compliance with general FDA guidance documents and international standards (IEC 62304 for software lifecycle processes, IEC 60601-1-6 and IEC 62366 for usability).
- Bench Performance Testing: Verification against internal requirements and applicable standards, specifically for usability.
The detailed clinical performance metrics typical for diagnostic or AI-assisted devices that you've asked about are not present in this 510(k) summary, as the device's nature (recording, playback, and analysis software for existing physiological signals, rather than a novel diagnostic algorithm) and the context of a substantial equivalence submission likely did not require them.
(83 days)
OLV
The Zmachine Synergy is an EEG and respiratory signal recorder. The device is intended for use by adult patients in the home or clinical environment, under the direction of a qualified healthcare practitioner, to aid in the diagnosis of sleep disorders.
The Zmachine Synergy system is a portable, battery operated, medical device housed within an ABS enclosure. The system combines the single-channel EEG recording capability of the Zmachine DT-100 (K101830) with the respiratory signal recording capability of the Resmed ApneaLink Air (K143272). The Zmachine Synergy is thereby capable of recording EEG, respiratory airflow, respiratory effort, blood oxygen saturation, pulse rate, and body position during sleep.
The provided text focuses on establishing substantial equivalence for the Zmachine Synergy device by comparing it to predicate devices (Zmachine DT-100 and ApneaLink Air) through technical and bench testing. It does not contain information about clinical study data, ground truth establishment by experts, or MRMC studies that would typically be associated with AI/algorithm performance claims.
Therefore, many of the requested categories cannot be fully addressed based on the provided document.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The document describes "bench comparison testing" where the Zmachine Synergy was compared against predicate devices. The "acceptance criteria" are implied to be "substantially equivalent" or "high agreement" between the Zmachine Synergy and the predicate devices for each measured parameter.
Test | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
EEG | Substantially equivalent EEG amplifier characteristics. | High agreement with design limits and each other for amplifier gain, highpass/lowpass filter cutoff frequency, DC offset, and noise floor. Found substantially equivalent. |
Respiratory Airflow | Strong linear relationship / substantially equivalent signals. | Pearson's correlation coefficient revealed a strong linear relationship. Found substantially equivalent. |
Respiratory Effort | Strong linear relationship / substantially equivalent signals. | Pearson's correlation coefficient revealed a strong linear relationship. Found substantially equivalent. |
Pulse Oximetry | High agreement and low mean squared error. | Heart rate and oxygen saturation readings were in high agreement with calibrator output levels and showed low mean squared error when comparing the two systems. Found substantially equivalent. |
Body Position | Very high agreement with angular reference. | Angular readings and angular reference positions were in very high agreement throughout 360 degrees of rotation. Found substantially equivalent. |
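The agreement metrics named in the table (Pearson's correlation coefficient for the respiratory channels, mean squared error for pulse oximetry) can be sketched as follows. This is a minimal illustration assuming the paired samples from the candidate and predicate recorders are already time-aligned; the sample values are illustrative, not data from the submission:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_squared_error(x, y):
    """Mean squared error between paired readings (e.g., SpO2 from two systems)."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Illustrative paired samples: candidate device vs. predicate under the same input.
candidate = [0.0, 1.1, 2.0, 2.9, 4.1]
predicate = [0.0, 1.0, 2.0, 3.0, 4.0]

r = pearson_r(candidate, predicate)              # near 1.0: strong linear relationship
mse = mean_squared_error(candidate, predicate)   # near 0.0: high agreement
```

A correlation near 1.0 supports the "strong linear relationship" claim for airflow and effort, while a small MSE supports the "high agreement" claim for the oximetry channels.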
2. Sample Size Used for the Test Set and Data Provenance
The study described is bench testing, not a clinical study involving patients or a test set of data in the typical sense for algorithm performance. The "test set" consisted of:
- A multi-channel EEG analog playback system for EEG.
- A variable pressure air pump for Respiratory Airflow.
- Controlled belt stretching and relaxing against a linear scale for Respiratory Effort.
- A patient simulator for Pulse Oximetry.
- Rotation against an angular reference for Body Position.
Therefore, traditional "sample size" is not applicable, as these were controlled bench tests. Data provenance is specific to the synthetic signals generated by the test equipment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
Not applicable. The "ground truth" for these bench tests was established by the precise outputs of the test equipment (e.g., specific frequencies, known pressure values, specific angles) and the design specifications of the devices themselves. No human expert adjudication was involved in establishing this "ground truth."
4. Adjudication Method for the Test Set
Not applicable. There was no human adjudication as the tests were performed against known physical inputs from laboratory equipment.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No. The provided text does not describe an MRMC study. The study focuses on comparing the fundamental signal acquisition characteristics of the Zmachine Synergy with its predicate devices at a technical, hardware level, not on assessing how human readers perform with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Partially applicable, but for hardware performance, not an algorithm's diagnostic output. The "standalone" performance described is the ability of the Zmachine Synergy's hardware components to accurately acquire and record various physiological signals, as compared to predicate devices or known inputs, without human interpretation of derived diagnostic information. The device is an "EEG and respiratory signal recorder" intended to "aid in the diagnosis of sleep disorders," implying subsequent human interpretation of the recorded signals. The described tests confirm the accuracy of the recording capabilities, which is a form of standalone performance for the signal acquisition hardware. Specific algorithm performance for detecting sleep disorders based on these signals is not detailed.
7. The Type of Ground Truth Used
The ground truth used was synthetic signals from laboratory equipment and design specifications/known outputs.
- For EEG: Broad spectrum shaped white noise and zero-level output signal.
- For Respiratory Airflow: Stepped variable air pressure signal.
- For Respiratory Effort: Controlled belt stretching and relaxing against a linear scale.
- For Pulse Oximetry: Specified calibrator output levels from a patient simulator.
- For Body Position: An angular reference the device was rotated against.
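Synthetic references of the kind listed above are straightforward to reproduce in software. A minimal sketch of a stepped reference waveform, in the spirit of the "stepped variable air pressure signal" (the specific levels and step length are illustrative, not from the submission):

```python
def stepped_reference(levels, samples_per_step):
    """Stepped reference waveform: hold each known level for a fixed number of samples.

    A bench rig drives the device under test with a known input like this; the
    recorded output can then be compared against the reference to assess accuracy.
    """
    return [level for level in levels for _ in range(samples_per_step)]

# Illustrative: five pressure steps, each held for 100 samples.
reference = stepped_reference([0.0, 0.5, 1.0, 1.5, 2.0], 100)
```

Because every sample of the reference is known exactly, the "ground truth" for these bench tests is the generated waveform itself rather than any expert interpretation.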
8. The Sample Size for the Training Set
Not applicable. This document describes a pre-market notification for a medical device (Zmachine Synergy), focusing on demonstrating substantial equivalence through technical bench testing of hardware components. It does not involve a training set for an AI or machine learning algorithm.
9. How the Ground Truth for the Training Set Was Established
Not applicable. As there is no training set mentioned or implied for an AI/ML algorithm within this document, the method of establishing its ground truth is irrelevant here.