The Compumedics Sleep Monitoring System is used as an aid in the diagnosis of sleep and respiratory related sleep disorders. The use of this sleep monitoring system is to be under the supervision of a physician, sleep technologist or clinician. The Compumedics Sleep Monitoring System is an information management tool to record, display, organise, redisplay (retrieve) and generate user-defined reports based on the subject's data received from monitoring devices typically used to evaluate sleep and sleep related respiratory disorders.
The Compumedics Sleep Monitoring System is a device which integrates the monitoring and recording function of these individual devices, and provides the means to gather all these signals simultaneously into a single "box" where these signals can be viewed, stored, and retrieved in formats selected by the clinician or sleep technologist.
The Compumedics Sleep Monitoring System is compared to the SensorMedics Series 4000 Sleep System (K915856) in a clinical test to demonstrate its substantial equivalence. The document doesn't explicitly define formal "acceptance criteria" with numerical thresholds, but rather establishes equivalence based on the comparison of features, performance characteristics, and the agreement of scoring results with established benchmarks of human inter-rater variability.
Here's a breakdown of the information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document doesn't provide a table of acceptance criteria in the typical sense (e.g., "sensitivity > X%", "specificity > Y%"). Instead, it focuses on demonstrating substantial equivalence to predicate devices. The "performance" is primarily described in terms of agreement with human scoring variability.
| Criterion Type (Implicit) | Reported Device Performance |
|---|---|
| Functional Equivalence | Records EEG, EOG, EMG, ECG, Intercostal EMG, Leg EMG, Air Flow, Respiratory Effort, Snoring Sounds, External Capnograph Signal, External pH Monitor Signal, External NPT Signals, and Body Sleeping Position. Displays raw incoming data and raw data for interpretation. Provides montage flexibility, sleep stage scoring, definable sleep event criteria, adjustable sleep scoring rules, apnea/hypopnea scoring, definable flow/effort criteria, adjustable apnea/hypopnea rules, and generates a printed report. Capable of remote telephonic sleep surveillance, setting recording parameters remotely, and performing an in-home study. |
| Technological Characteristics Equivalence | A/D vertical resolution: 8 or 12 bits (same as predicates). Sampling rate: max 500 samples/sec per channel, or 250 samples/sec per channel (P-Series) (comparable to predicates). Storage rate: max 500 samples/sec per channel, or 250 samples/sec per channel (P-Series) (comparable to predicates). Storage medium: magneto-optical/Zip (comparable to predicates' optical disk and floppy/hard disk). CPU type and clock speed: Pentium 166 or greater / 16-bit / 14 MHz (comparable to predicates' 80486/Pentium and 68020 CPUs). Display resolution: 1024 x 768 or greater (comparable). Power: AC or battery (comparable). Preamplifiers: yes (same as predicates). |
| Safety Equivalence | Conforms to IEC 601.1 (explicit safety standard compliance). Does not treat, provide therapeutic effect, administer energy, or perform diagnoses; poses no measurable risk to subjects (stated as equivalent to predicates). |
| Clinical Performance Equivalence | Sleep staging of manually scored SensorMedics and Compumedics recordings shows similar appearance of traces. Distinguishable features appear in the same relative locations on the Compumedics Sleep Monitoring System as on the SensorMedics Sleep Monitoring System. Concordance between manual and manual scoring, and between manual and automatic scoring, is within the range specified as typical between two manual scorers (82% to 91.3% agreement per the cited articles). Comparison of random epochs of raw data for visualization of signal quality shows equivalence. Comparison of specific events (abnormal respiratory events, EEG arousal, different sleep stages) shows equivalence. |
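To make the throughput figures in the table concrete, here is a minimal sketch, assuming a hypothetical channel count and recording duration (neither is stated in the document), of the raw data rate and per-night storage implied by a given per-channel sampling rate and A/D resolution:

```python
def storage_estimate(channels, samples_per_sec, bits_per_sample, hours):
    """Estimate the raw (uncompressed) data rate and storage for a recording.

    channels        -- number of recorded channels (hypothetical; not stated in the document)
    samples_per_sec -- per-channel sampling/storage rate, e.g. 250 or 500
    bits_per_sample -- A/D vertical resolution, e.g. 8 or 12 bits
    hours           -- recording duration (hypothetical)
    """
    bytes_per_sample = bits_per_sample / 8   # bit-exact; real systems often pad 12-bit samples to 2 bytes
    bytes_per_sec = channels * samples_per_sec * bytes_per_sample
    total_megabytes = bytes_per_sec * hours * 3600 / 1e6
    return bytes_per_sec, total_megabytes

# Example: 16 channels at 250 samples/sec, 12-bit resolution, 8-hour overnight study
rate, megabytes = storage_estimate(channels=16, samples_per_sec=250, bits_per_sample=12, hours=8)
print(f"{rate:.0f} bytes/s, roughly {megabytes:.0f} MB per night")
```

Under these assumed numbers an overnight study lands in the low hundreds of megabytes, which is broadly consistent with the magneto-optical/Zip storage media listed in the table.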
2. Sample size used for the test set and the data provenance
- Sample Size: Not explicitly stated as a number of patients or recordings. The study involved a "side-by-side comparison" and analyzed "random epochs of raw data" and "specific events." It references inter-rater agreement studies that used 60-second epochs and 20-second epochs. It's unclear if new data was collected or if pre-existing recordings were used for the side-by-side comparison.
- Data Provenance: Not explicitly stated, but the device is manufactured in Australia (Compumedics Sleep Pty. Ltd, 1 Marine Parade, Abbotsford, Victoria 3067, Australia). The clinical test was conducted by comparing it with the SensorMedics Series 4000 Sleep System (K915856). The nature of the comparison (e.g., whether new patient data was generated on both systems simultaneously, or stored data was processed by both) is not detailed. It is implied to be a retrospective comparison of outputs.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- The study references external literature to establish benchmarks for human inter-rater variability for sleep stage scoring.
- Gaillard and Tissot (2) and Smith et al. (3) studies: Used two human scorers. No specific qualifications are provided for these scorers in this document.
- Stantus et al. (7) and Ferri et al. (14) studies: Used two independent readers for agreement. No specific qualifications are provided.
- Japanese study (16): Used 10 laboratories for inter-reader agreement. No specific qualifications are provided beyond "laboratories."
- The direct comparison of the Compumedics system against the SensorMedics system involved "manually scored SensorMedics" data and "Compumedics Manual Analysis." The individuals performing this manual scoring are referred to as "clinician or sleep technologist" in the intended use section, but their number and specific qualifications for establishing ground truth in this particular comparison are not specified.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- None specified for the direct comparison of the Compumedics system with the SensorMedics system. The agreement was based on visual appearance of traces and concordance.
- The referenced literature for human inter-rater variability (Gaillard & Tissot, Smith et al., Stantus et al., Ferri et al., Japanese study) discusses agreement rates between two human scorers or multiple laboratories, implying a comparison against each other rather than a 2+1 or 3+1 adjudication with a tie-breaker.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- No, an MRMC comparative effectiveness study was not explicitly done in the way described. The study did not evaluate human reader improvement with AI assistance. The Compumedics device's "automatic analysis" function is compared to "Compumedics Manual Analysis" and "SensorMedics Manual Analysis," and its concordance is evaluated against the variability between human scorers. The device itself is framed as an "information management tool" to aid diagnosis, rather than an AI assistant improving human reader performance.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance was evaluated
- Yes, a form of standalone performance was implicitly evaluated. The "Compumedics Automatic Analysis" was compared to "Compumedics Manual Analysis" for sleep staging and event scoring. The results stated "Concordance between manual and manual scoring, and manual and automatic scoring is within the range specified as being typical between two manual scorers." This indicates an assessment of the algorithm's performance on its own against a human-scored benchmark.
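As an illustration of what concordance "within the range typical between two manual scorers" means numerically, here is a minimal sketch of epoch-by-epoch agreement (percent agreement plus Cohen's kappa) between two hypnograms; the stage labels and data are hypothetical and not taken from the submission:

```python
def epoch_agreement(scoring_a, scoring_b):
    """Epoch-by-epoch percent agreement and Cohen's kappa between two scorings."""
    assert len(scoring_a) == len(scoring_b) and scoring_a
    n = len(scoring_a)
    observed = sum(a == b for a, b in zip(scoring_a, scoring_b)) / n

    # Chance agreement: sum over stages of the product of each scorer's marginal frequency.
    stages = set(scoring_a) | set(scoring_b)
    chance = sum((scoring_a.count(s) / n) * (scoring_b.count(s) / n) for s in stages)
    kappa = (observed - chance) / (1 - chance) if chance < 1 else 1.0
    return observed, kappa

# Hypothetical R&K stage labels for ten consecutive epochs (illustrative only)
manual_scoring    = ["W", "1", "2", "2", "3", "4", "REM", "2", "2", "W"]
automatic_scoring = ["W", "2", "2", "2", "3", "4", "REM", "2", "1", "W"]

agreement, kappa = epoch_agreement(manual_scoring, automatic_scoring)
print(f"agreement = {agreement:.1%}, kappa = {kappa:.2f}")
# The submission's claim is that such agreement falls within 82% to 91.3%,
# the range reported between two human scorers in the cited literature.
```

The 82% to 91.3% figures cited above appear to be plain agreement percentages; kappa is included here only as a common chance-corrected companion metric.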
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth for sleep staging and event scoring is based on expert consensus guided by established methodology. Specifically, the document mentions:
- "Scoring the data is based on established methodology (Rechtscaffen, A and Kales, A. A Manual of Standardised Terminology, Techniques and Scoring System for Sleep Stages of Human Subjects, United States National Institutes of Health publication No. 204, 1968)."
- The device's sleep staging functions are based on "standard Rechtscaffen and Kales (R&K) methodology."
- The "manual scoring" performed on the SensorMedics system and the "Compumedics Manual Analysis" would likely represent this expert-derived ground truth.
8. The sample size for the training set
- Not applicable / Not specified. The document does not describe the development or training of an AI/ML model in the contemporary sense. The "automatic analysis" functions (sleep staging, respiratory scoring, arousal detection, PLM detection) are stated to be based on "standard Rechtschaffen and Kales (R&K) methodology" and use "default values or clinician-selected criteria." This suggests a rule-based system, or an algorithm derived from established guidelines, rather than a data-trained machine learning model. Therefore, a distinct "training set" for a machine learning algorithm is not mentioned.
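Consistent with the description of a rule-based rather than data-trained system, here is a minimal sketch of what an apnea detector driven by "default values or clinician-selected criteria" could look like; the function name, thresholds, and signal representation are hypothetical illustrations, not the device's actual algorithm:

```python
def detect_apneas(airflow, sample_rate_hz, baseline,
                  reduction_fraction=0.9, min_duration_sec=10.0):
    """Flag candidate apnea events where airflow amplitude stays below a
    clinician-adjustable fraction of a baseline amplitude for a minimum duration.

    airflow            -- rectified airflow amplitude samples (hypothetical input format)
    baseline           -- reference amplitude, e.g. from a preceding stable breathing period
    reduction_fraction -- default 90% reduction; adjustable ("definable flow criteria")
    min_duration_sec   -- default 10 s minimum duration; adjustable ("adjustable apnea rules")
    """
    threshold = baseline * (1.0 - reduction_fraction)
    min_samples = int(min_duration_sec * sample_rate_hz)
    events, start = [], None
    for i, value in enumerate(airflow):
        if value < threshold:
            if start is None:
                start = i                      # possible event begins
        else:
            if start is not None and i - start >= min_samples:
                events.append((start / sample_rate_hz, i / sample_rate_hz))
            start = None
    if start is not None and len(airflow) - start >= min_samples:
        events.append((start / sample_rate_hz, len(airflow) / sample_rate_hz))
    return events  # list of (start_sec, end_sec) candidate events in seconds
```

The point of the sketch is the structure: fixed rules parameterized by adjustable thresholds, with no learned parameters and hence no training set.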
9. How the ground truth for the training set was established
- Not applicable / Not specified. As there is no explicitly defined training set described for a data-driven model, the method for establishing its ground truth is not relevant. The "ground truth" for the device's rule-based automatic functions is the R&K methodology.
§ 868.2375 Breathing frequency monitor.
(a) Identification. A breathing (ventilatory) frequency monitor is a device intended to measure or monitor a patient's respiratory rate. The device may provide an audible or visible alarm when the respiratory rate, averaged over time, is outside operator settable alarm limits. This device does not include the apnea monitor classified in § 868.2377.
(b) Classification. Class II (performance standards).