510(k) Data Aggregation
(232 days)
SMARTMONITOR 2, MODEL 4000
The SmartMonitor® 2 Infant Apnea Monitor is intended for use in the continuous monitoring of respiration and heart rate of infant patients in a home or hospital environment. The monitor detects and alarms for periods of central apnea and high or low heart rates.
The SmartMonitor 2 is a monitoring device designed to monitor respiration and heart rate. Upon detection of abnormal events, SmartMonitor 2 alerts the caregiver via both visual and audible alarms and records the information for subsequent clinical review.
SmartMonitor 2 acquires the electrical activity of the heart via a two- or three-lead electrode configuration. The same set of electrodes is used to measure transthoracic impedance, from which a respiration signal is derived. Detection of heart beats and breaths is accomplished via software-based algorithms that analyze the ECG and respiration signals. When beats or breaths are detected, SmartMonitor 2 provides feedback by blinking the Heart and Respiration LEDs, and it calculates apnea intervals, average heart rates, and average breath rates for the purpose of identifying ECG and respiration rates that violate preset threshold values. In addition to raising alarms when abnormal ECG and respiration rates are detected, the device logs both tabular data and the associated waveforms in nonvolatile memory for subsequent review by a Health Care Professional.
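The paragraph above describes threshold-based alarm logic: apnea intervals and windowed average rates compared against preset limits. The actual algorithms are proprietary and not disclosed in the 510(k) summary, so the sketch below is purely illustrative; the function name, threshold values, and window length are all assumptions.

```python
def check_alarms(breath_times_s, beat_times_s, now_s,
                 apnea_limit_s=20.0, low_hr_bpm=80.0, high_hr_bpm=220.0,
                 hr_window_s=10.0):
    """Return the list of alarm names active at time `now_s`.

    Inputs are timestamps (in seconds) of detected breaths and heart
    beats. All limits are illustrative, not the device's actual presets.
    """
    alarms = []

    # Central apnea: time since the last detected breath exceeds the limit.
    last_breath = max(breath_times_s) if breath_times_s else None
    if last_breath is None or now_s - last_breath >= apnea_limit_s:
        alarms.append("apnea")

    # Average heart rate over a trailing window, compared to high/low limits.
    recent = [t for t in beat_times_s if now_s - hr_window_s <= t <= now_s]
    if len(recent) >= 2:
        hr = 60.0 * (len(recent) - 1) / (recent[-1] - recent[0])
        if hr < low_hr_bpm:
            alarms.append("low_heart_rate")
        elif hr > high_hr_bpm:
            alarms.append("high_heart_rate")
    return alarms
```

In a real monitor this check would run continuously on streaming beat/breath detections; here it is a single snapshot evaluation for clarity.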
Here's an analysis of the provided text regarding the Respironics SmartMonitor 2, broken down by your requested categories:
1. Table of Acceptance Criteria and Reported Device Performance
The document provides performance specifications for both Respiration and ECG monitoring. These specifications serve as the acceptance criteria. The document states that the SmartMonitor 2 "meets the requirements specified in the Product Specification and that it conforms to the required standards," indicating it achieved these criteria.
| Feature | Acceptance Criteria (SmartMonitor and SmartMonitor 2) | Reported Device Performance (SmartMonitor 2) |
|---|---|---|
| **Respiration Monitoring** | | |
| Resp. Detection Rate | 1 to 120 BrPM @ 1 Ohm, peak to peak | Meets requirements |
| Sensitivity | 0.1 to 5 Ohms, peak to peak | Meets requirements |
| Output Amplitude Accuracy | +/- 5% | Meets requirements |
| CMRR | > 75 dB at 60 Hz | Meets requirements |
| Input Impedance | > 75 KOhms | Meets requirements |
| Detection Amplitude Range | 0.2 to 5 Ohms, peak to peak | Meets requirements |
| **ECG Monitoring** | | |
| ECG Detection Rate | 25 to 300 BPM @ 1 mV, baseline to peak | Meets requirements |
| Sensitivity | +/- 0.1 to +/- 5.0 mV | Meets requirements |
| Output Amplitude Accuracy | +/- 2% | Meets requirements |
| ECG CMRR | > 75 dB at 60 Hz | Meets requirements |
| Input Impedance | > 75 KOhms | Meets requirements |
| Detection Amplitude Range | 0.2 mV to 5 mV, baseline to peak | Meets requirements |
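Since the summary reports only pass/fail ("meets requirements") rather than numeric results, a verification step amounts to checking each measured parameter against its spec interval. The sketch below illustrates that check; the measured values are hypothetical, and only the limits come from the table above.

```python
# Illustrative acceptance check against the ECG portion of the table.
# Open-ended "> 75" limits are modeled with an infinite upper bound.
ECG_SPECS = {
    "detection_rate_bpm": (25.0, 300.0),          # @ 1 mV, baseline to peak
    "amplitude_accuracy_pct": (-2.0, 2.0),        # +/- 2%
    "cmrr_db_at_60hz": (75.0, float("inf")),      # > 75 dB
    "input_impedance_kohm": (75.0, float("inf")), # > 75 KOhms
}

def meets_spec(measured, specs):
    """True if every measured parameter falls inside its (low, high) limits."""
    return all(lo <= measured[name] <= hi for name, (lo, hi) in specs.items())

# Hypothetical bench results for one unit (not from the document):
unit = {"detection_rate_bpm": 280.0, "amplitude_accuracy_pct": 1.1,
        "cmrr_db_at_60hz": 82.0, "input_impedance_kohm": 120.0}
```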
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Data were compiled from multiple sites on at least 25 patients."
- Sample size for test set: At least 25 patients.
- Data Provenance: Not explicitly stated (e.g., country of origin, specific demographics). The data were "compiled from multiple sites," implying real-world clinical data. Because the study compared the SmartMonitor 2 against predicate devices and relied on "hand scoring" of recorded signals, it was likely a retrospective analysis of pre-existing records rather than a forward-looking interventional study; however, the document does not explicitly state whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document indicates that the "gold standard" for ground truth was "hand scoring."
- Number of experts: Not specified.
- Qualifications of experts: Not specified (e.g., radiologist with 10 years of experience). It simply refers to "hand scoring," implying trained personnel who interpret the raw physiological signals.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (like 2+1 or 3+1). It only mentions "hand scoring" as the "gold standard," which could imply a single scorer or an internal process not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No MRMC study is mentioned. The clinical study described compares the device (SmartMonitor 2) against a "gold standard" (hand scoring) and aims to show substantial equivalence to a predicate device, not to evaluate human reader performance with or without AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, a standalone performance evaluation was done. The clinical protocol compared the SmartMonitor 2's performance (its algorithms for detecting apnea) "to the gold standard of hand scoring." This directly assesses the algorithm's performance without a human in the loop for the detection task itself, though a healthcare professional reviews the logged data. The device's primary function is automated detection and alarm.
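One common way to quantify such a standalone comparison is to match device-detected apnea events to hand-scored events within a time tolerance and report sensitivity and positive predictive value. The document does not describe the actual matching rule or metrics, so the tolerance and metric choice below are assumptions.

```python
def event_agreement(device_events_s, scored_events_s, tol_s=5.0):
    """Greedily match device-detected apnea onsets (seconds) to hand-scored
    onsets within `tol_s`; return (sensitivity, positive_predictive_value).
    """
    matched = set()  # indices of scored events already paired
    tp = 0
    for d in device_events_s:
        for i, g in enumerate(scored_events_s):
            if i not in matched and abs(d - g) <= tol_s:
                matched.add(i)
                tp += 1
                break
    sens = tp / len(scored_events_s) if scored_events_s else 1.0
    ppv = tp / len(device_events_s) if device_events_s else 1.0
    return sens, ppv
```

A greedy first-match rule is the simplest choice; a real protocol might use optimal (e.g., nearest-neighbor) pairing, which can differ when events cluster closely.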
7. The Type of Ground Truth Used
The type of ground truth used for the clinical study was expert consensus (implied via "hand scoring") on the physiological signals (respiration and heart rate) to identify central apnea events.
8. The Sample Size for the Training Set
The document does not specify a separate training set. The descriptions of "system qualification testing" and "software testing" refer to verification and validation activities. The clinical study described is for validation of the final device, not for training a model. Given the time period (2002) and the nature of apnea monitors, it's highly likely it was based on deterministic algorithms rather than machine learning models requiring extensive training data.
9. How the Ground Truth for the Training Set Was Established
As no training set for a machine learning model is mentioned, this question is not applicable. The device's performance was validated against clinical data after its algorithms were developed; any "ground truth" used during algorithm development would have been based on physiological principles and expert-defined thresholds, not a machine learning training set in the modern sense.