510(k) Data Aggregation
(104 days)
Spacelabs Healthcare patient monitors, functioning as either bedside or central monitors, passively display data generated by Spacelabs Healthcare parameter modules, Flexport interfaces, and other SDLC-based products in the form of waveform and numeric displays, trends, and alarms. Key monitored parameters available on models 91367, 91369, 91370, and 91387, when employing the Spacelabs Command Module, consist of ECG, respiration, invasive and noninvasive blood pressure, SpO2, temperature, and cardiac output. Additional parameters and interfaces to other systems are also available depending on the parameter modules employed.
Spacelabs Healthcare patient monitors are intended to alert the user to alarm conditions that are reported by Spacelabs Healthcare parameter modules and/or other physiologic monitors via Flexport interfaces. These sources determine a) when an alarm condition is violated; b) the alarm priority (i.e., high, medium, or low); c) alarm limits; and d) when to initiate and terminate alarm notifications. The patient monitors are also capable of displaying alarm conditions from other monitors on the network through the Alarm Watch feature.
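The submission gives no implementation details, but the alarm behavior described above amounts to a limit check plus a priority assignment. The following is a minimal, hypothetical Python sketch of that logic; every name (`AlarmLimit`, `evaluate_alarm`, the parameter labels) is illustrative and does not correspond to any Spacelabs API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class AlarmPriority(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AlarmLimit:
    parameter: str        # e.g., "SpO2" or "HR"; illustrative labels only
    low: float
    high: float
    priority: AlarmPriority


def evaluate_alarm(limit: AlarmLimit, value: float) -> Optional[AlarmPriority]:
    """Return the alarm priority if the value violates the limit, else None.

    A real monitor would also manage alarm initiation/termination state and
    audible/visual annunciation; this sketch covers only the limit check.
    """
    if value < limit.low or value > limit.high:
        return limit.priority
    return None


# Example: an SpO2 reading of 88% against a 90-100% limit yields a high-priority alarm.
spo2_limit = AlarmLimit(parameter="SpO2", low=90.0, high=100.0, priority=AlarmPriority.HIGH)
print(evaluate_alarm(spo2_limit, 88.0))  # AlarmPriority.HIGH
print(evaluate_alarm(spo2_limit, 97.0))  # None
```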
Spacelabs Healthcare patient monitors may also function as a generic display or computer terminal. As a generic display or terminal, the patient monitors allow network-based applications to open windows and display information on other networked monitors.
Spacelabs Healthcare patient monitors are also designed to communicate with a variety of external devices such as displays, network devices, serial devices, user input devices, audio systems, and local/remote recorders.
Spacelabs Healthcare patient monitors are intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a hospital environment.
The Spacelabs Medical Patient Monitors are a component of the Spacelabs Medical Patient Monitoring System. The four (4) monitor models (the portable models 91367, 91369, and 91370 and the stationary model 91387) are similar in that they all employ the same software and all accept inputs from the family of Spacelabs Parameter Modules. The monitors accept and display parameter information, waveform and numeric data, and alarm conditions, including arrhythmia information, received from the same family of modules.
The portable monitors are capable of operating independently of, or connected to, the Spacelabs Patient Monitoring Network. As independent, portable monitors, these devices operate from either AC or battery power. All alarm information received from the parameter modules is visually and audibly available at each monitor. When networked, either physically or wirelessly, these monitors are able to share their information with a central station or with other monitors on the network according to conditions established by the user/system administrator. They are also able to connect, via the healthcare institution's network, through Dynamic Network Access (DNA) to other applications available on the network.
The stationary monitor, model 91387, can be configured at installation as either a bedside monitor or a central station. As an independent bedside monitor, the device operates from AC power and presents waveform, numeric data, and alarm conditions, including arrhythmia information, received from parameter modules. When physically networked, these monitors are able to share their information with a central station or with other monitors on the network according to conditions established by the user/system administrator. They are also able to connect, via the healthcare institution's network, through Dynamic Network Access (DNA) to other applications available on the network.
The model 91387 central station monitor provides full monitoring control of remote parameters, including displays and alarms with both visual and audible annunciation, for up to 16 patients. All waveform and current numeric data, as well as arrhythmia, ST segment, and trend information, are available at the central station.
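The networking behavior described in the preceding paragraphs (bedside monitors sharing data with a central station or with other monitors, with the central station watching up to 16 patients) resembles a publish/subscribe arrangement. The Python sketch below is purely illustrative of that pattern under that assumption; `NetworkHub`, `CentralStation`, and the 16-bed cap enforcement are hypothetical stand-ins, not Spacelabs interfaces.

```python
from collections import defaultdict
from typing import Callable, Dict, List

MAX_CENTRAL_PATIENTS = 16  # the 91387 central station supports up to 16 patients


class NetworkHub:
    """Hypothetical hub relaying bedside events to subscribed displays."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[str, str], None]]] = defaultdict(list)

    def subscribe(self, bed_id: str, callback: Callable[[str, str], None]) -> None:
        self._subscribers[bed_id].append(callback)

    def publish(self, bed_id: str, message: str) -> None:
        for callback in self._subscribers[bed_id]:
            callback(bed_id, message)


class CentralStation:
    """Watches up to MAX_CENTRAL_PATIENTS beds on the hub."""

    def __init__(self, hub: NetworkHub) -> None:
        self._hub = hub
        self._beds: List[str] = []

    def watch(self, bed_id: str) -> None:
        if len(self._beds) >= MAX_CENTRAL_PATIENTS:
            raise ValueError(f"central station limited to {MAX_CENTRAL_PATIENTS} patients")
        self._beds.append(bed_id)
        self._hub.subscribe(bed_id, self._on_event)

    def _on_event(self, bed_id: str, message: str) -> None:
        print(f"[central] {bed_id}: {message}")  # stand-in for visual/audible annunciation


hub = NetworkHub()
central = CentralStation(hub)
central.watch("bed-03")
hub.publish("bed-03", "SpO2 low (high priority)")
```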
The provided text describes the Spacelabs Patient Monitors (models 91367, 91369, 91370, and 91387) and their substantial equivalence to predicate devices. However, it does not contain specific acceptance criteria or a detailed study report with performance metrics in the format usually associated with a medical device's performance evaluation against predefined criteria.
Instead, the document states that the devices were validated through "rigorous testing" to ensure compliance with standards and accurate presentation of parameter data, but it does not quantify this performance or present it in a table of acceptance criteria vs. reported performance.
Therefore, for aspects like "Table of acceptance criteria and reported device performance," "Sample size used for the test set," etc., the information is not present in the provided text.
Here's an analysis based on the information available in the text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Specific performance metrics for accuracy, sensitivity, specificity, etc. | Not provided in the document. The document states "Test programs verified that parameter data provided by parameter modules...to the Patient Monitors could be accurately presented and that the interface supported the intended clinical work flows and met the user's clinical needs." However, no quantifiable performance metrics, thresholds, or pass/fail criteria are given. |
| Compliance with relevant standards (unspecified, but mentioned in the "Software" section) | The device was subject to "rigorous testing that, in part, support the compliance of the software to the Standards mentioned in the Software section of this submission." No specific standards or results against them are detailed. |
| Support for intended clinical workflows and the user's clinical needs | "Test programs verified that...the interface supported the intended clinical work flows and met the user's clinical needs." No specific details on how this was verified or what criteria were met. |
| Software developed following a robust software development process | "the Spacelabs Patient Monitors' software was developed following a robust software development process that was fully specified and validated." No details about the process or validation are given. |
2. Sample Sizes Used for the Test Set and Data Provenance
- Sample Size: Not specified. The document mentions "test programs" but does not give the number of cases, patients, or data points used.
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The document describes patient monitors that display data from other modules and communicate alarm conditions. It does not involve human readers interpreting data that could be assisted by AI.
- Effect size of human readers with vs. without AI assistance: Not applicable, as no MRMC study or AI assistance is mentioned in this context.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? The device itself (the monitor) is not an "algorithm" in the sense of making diagnostic interpretations. It's a display and communication device for physiological parameters. The "rigorous testing" mentioned was for the monitor's ability to accurately present data and interface. This aligns more with functional and system integration testing rather than an AI algorithm's standalone performance. No specific standalone performance metrics for an algorithm are provided.
7. Type of Ground Truth Used
- Type of Ground Truth: Not explicitly stated for specific performance metrics. The phrasing "parameter data provided by parameter modules...could be accurately presented" suggests that the ground truth for "accuracy" would be the direct output from the source parameter modules or a reference standard for those parameters. For clinical workflow, the "ground truth" would be the subjective assessment of meeting user needs.
8. Sample Size for the Training Set
- Sample Size: Not applicable. This device is a patient monitor displaying data, not an AI/ML algorithm that is "trained" on a dataset.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth Established: Not applicable. This device is a patient monitor displaying data, not an AI/ML algorithm that is "trained" on a dataset.
Summary of Device and Study Information:
This 510(k) submission is for patient monitors that display and communicate physiological data and alarms from other external modules. The "study" described is a general validation of the monitors' ability to accurately present data, support clinical workflows, and comply with unspecified software standards. There are no detailed performance metrics, test set sizes, expert qualifications, or AI-related study components typically found in submissions for AI/ML-driven devices. The focus is on the functional equivalence and safety of the monitors as display and communication interfaces.
(90 days)
The Hewlett-Packard Viridia CMS Patient Monitoring System, Rel. K, with M1027A EEG Measurement Module is intended for measurement and display of the electroencephalogram of adult, pediatric, and neonatal patients in the Operating Room and intermediate/critical care environments.
The modification is the addition of new application software and firmware that adds the M1027A EEG Module to the HP M1175A/76A/77A Component Monitoring System, allowing the measurement of electroencephalographic signals.
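As described, the modification plugs a new measurement module into an existing component monitoring system. As a rough illustration of that plug-in pattern only, here is a hypothetical Python sketch; the class names, the `read_sample` interface, and the two-channel sample format are assumptions for illustration and do not reflect the actual HP Viridia/CMS interfaces.

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class MeasurementModule(ABC):
    """Hypothetical interface a component monitor expects from plug-in modules."""

    label: str

    @abstractmethod
    def read_sample(self) -> Dict[str, float]:
        """Return the latest sample per channel, keyed by an illustrative channel name."""


class EEGModule(MeasurementModule):
    """Stand-in for a two-channel EEG measurement module."""

    label = "EEG (2-channel)"

    def read_sample(self) -> Dict[str, float]:
        # Placeholder values; a real module would return digitized microvolt samples.
        return {"channel_1_uV": 12.5, "channel_2_uV": -4.2}


class ComponentMonitor:
    """Minimal monitor that displays whatever its attached modules report."""

    def __init__(self) -> None:
        self.modules: List[MeasurementModule] = []

    def attach(self, module: MeasurementModule) -> None:
        self.modules.append(module)

    def refresh_display(self) -> None:
        for module in self.modules:
            print(module.label, module.read_sample())


monitor = ComponentMonitor()
monitor.attach(EEGModule())   # analogous to adding the EEG module to the system
monitor.refresh_display()
```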
The provided text describes a 510(k) summary for the Hewlett-Packard Viridia M1175A/76A/77A Component Monitoring System with M1027A EEG module. However, it does not contain detailed acceptance criteria or a study design structured in the way requested by the prompt for a device performance evaluation. The document primarily focuses on the device's substantial equivalence to predicate devices and describes general validation and testing activities.
Here's an attempt to extract and infer information based on the provided text, highlighting the missing details:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| System-level tests pass | Test results showed substantial equivalence. |
| Integration tests pass | Test results showed substantial equivalence. |
| Environmental tests pass | Test results showed substantial equivalence. |
| Safety testing from hazard analysis passes | Test results showed substantial equivalence. |
| Interference testing passes | Test results showed substantial equivalence. |
| Hardware testing passes | Test results showed substantial equivalence. |
| Standards compliance | Pass/fail criteria based on standards, where applicable. |
| Specifications cleared for predicate devices met | Pass/fail criteria based on specifications cleared for predicate devices. |
| Acceptable applicability, usability, and efficiency during clinical performance evaluation | More than 90% of users found applicability, usability, and efficiency acceptable or better. |
| No adverse events (beyond minor skin irritation) | Only one instance of minor skin irritation reported. |
Missing Information: Specific quantitative thresholds for hardware, software, and clinical performance are not detailed. For example, neither what constitutes "acceptable" usability nor specific signal-to-noise ratios for EEG acquisition is provided.
2. Sample size used for the test set and the data provenance
The text mentions "Clinical performance evaluations were conducted with the EEG module to validate two channel functionality under conditions existing in the indicated hospital environments."
- Sample Size: Not specified. It only refers to "users."
- Data Provenance: The studies were conducted "under conditions existing in the indicated hospital environments" for the specified patient populations (adult, pediatric, and neonatal). No country of origin is explicitly mentioned, but the submitter is from Germany and the notification is to the US FDA. It's implied to be prospective clinical observations, but not explicitly stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The clinical evaluation focused on "applicability, usability, and efficiency" as perceived by "users," not on diagnostic accuracy requiring expert ground truth establishment.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided as the study did not focus on diagnostic accuracy requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance
- No MRMC study was done. This device is a measurement module for EEG signals, not an AI-assisted diagnostic tool for human readers. The clinical evaluation focused on system usability and functionality.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The document implies that the device's core functionality (measuring EEG signals) was evaluated as a standalone component within the larger monitoring system during "system level tests, integration tests, environmental tests, safety testing from hazard analysis, interference testing, and hardware testing." These tests would assess the algorithm's performance in signal acquisition and processing. However, a separate "algorithm only" performance study in the context of diagnostic accuracy (e.g., classifying EEG patterns) is not described.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the clinical performance evaluation mentioned, the "ground truth" was user perception of "applicability, usability, and efficiency," rather than a clinical ground truth for diagnostic accuracy (like expert consensus on EEG abnormalities or pathology). For the technical tests, the "ground truth" would be established by engineering specifications, relevant standards, and the performance of predicate devices.
8. The sample size for the training set
This information is not provided. The document describes validation and testing activities, but not the development or training of any machine learning algorithms. The device's functionality seems to be based on established signal processing and measurement principles rather than a learning-based approach requiring a "training set."
9. How the ground truth for the training set was established
This information is not applicable/not provided as there is no mention of a training set or machine learning components requiring a "ground truth" for training.