Search Results
Found 4 results
510(k) Data Aggregation
(77 days)
The IntelliVue GuardianSoftware is intended for use by healthcare providers whenever there is a need for generation of a patient record.
The IntelliVue GuardianSoftware is indicated for use in the collection, storage and management of data from Philips specified measurements, Philips Patient Monitors and qualified 3rd party measurements that are connected through networks.
The IntelliVue GuardianSoftware is a stand-alone 'Clinical Information Management System (CIMS)' software application with a client-server architecture, designed to be used in professional healthcare facilities (i.e. hospitals, nursing homes) and intended to be installed on customer-supplied, compatible off-the-shelf (OTS) information technology (IT) equipment.
The IntelliVue GuardianSoftware is a documentation, charting, and decision-support software that is configurable by the hospital to suit the needs of individual clinical units. The device collects data/vital signs from compatible Philips patient monitors and measuring devices.
Using the collected data, the device provides trending, review, reporting and notification. The 'Guardian Early Warning Score (EWS)' application is integrated into the IntelliVue GuardianSoftware to provide the healthcare professional/provider basic assessment and the ability to recognize early signs of deterioration in patients.
The IntelliVue GuardianSoftware is not an alarming device and displays alarms from patient monitors as supplemental information only.
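To illustrate the general shape of an early-warning-score calculation like the one the Guardian EWS application performs, here is a minimal sketch in Python. The threshold bands and sub-scores below are hypothetical (loosely modeled on NEWS-style scoring) and are NOT the Philips Guardian EWS algorithm, whose scoring tables are not disclosed in the 510(k) summary.

```python
# Illustrative sketch of a generic early-warning-score (EWS) aggregation.
# All bands and sub-scores here are hypothetical examples, not the
# Philips Guardian EWS algorithm.

def band_score(value, bands):
    """Return the sub-score of the first (low, high, score) band whose
    inclusive range contains value."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    raise ValueError(f"value {value} falls outside all defined bands")

# Hypothetical threshold tables: respiratory rate in breaths/min,
# SpO2 in %, heart rate in beats/min.
RESP_BANDS = [(0, 8, 3), (9, 11, 1), (12, 20, 0), (21, 24, 2), (25, 60, 3)]
SPO2_BANDS = [(0, 91, 3), (92, 93, 2), (94, 95, 1), (96, 100, 0)]
HR_BANDS   = [(0, 40, 3), (41, 50, 1), (51, 90, 0), (91, 110, 1),
              (111, 130, 2), (131, 250, 3)]

def early_warning_score(resp_rate, spo2, heart_rate):
    """Sum per-parameter sub-scores into an aggregate deterioration score."""
    return (band_score(resp_rate, RESP_BANDS)
            + band_score(spo2, SPO2_BANDS)
            + band_score(heart_rate, HR_BANDS))

print(early_warning_score(16, 98, 72))   # a normal set of vitals
print(early_warning_score(26, 90, 125))  # deteriorating vitals accumulate points
```

In a real system such a score would only flag patients for clinical review; as the summary notes, the software itself is not an alarming device.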
Here's an analysis of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not contain specific quantitative acceptance criteria or a side-by-side performance comparison of the new device versus these criteria. Instead, the document focuses on demonstrating substantial equivalence to a predicate device. The acceptance is implied by the successful software verification and validation, as well as human factors testing, which collectively show the device "met all safety and reliability requirements and performance claims."
The closest we get to "performance" in the context of the comparison is in the "Substantial Equivalence Determination" column, which consistently states "IDENTICAL" or provides explanations for differences that do not affect substantial equivalence.
Key areas assessed for equivalence (and thus, implicit "performance" against the predicate):
| Feature | Predicate Device (Rev. D.0) | Subject Device (Rev. E.0X) | Acceptance/Equivalence Determination |
|---|---|---|---|
| Intended Use | Identical | Identical | Substantially Equivalent (IDENTICAL) |
| Indications for Use | Collection, storage, management of data from Philips specified measurements & Patient monitors. | Collection, storage, management of data from Philips specified measurements, Patient Monitors, and qualified 3rd party measurements. | Substantially Equivalent (Difference in indications for use does not affect substantial equivalence, HL7 testing verified 3rd party measurements.) |
| System Platform | Client Server Architecture, Microsoft OS, OTS IT equipment | Identical | Substantially Equivalent (IDENTICAL) |
| Operating System(s) & Database | Windows 7/8.1/10, Win Server 2008R2/2012R2/2016, SQL 2014/2016/2017, Android 4.4+ | Windows 8.1/10, Win Server 2012R2/2016/2019, SQL 2014/2016/2017, Android 5.0+ | Substantially Equivalent (Updates to OS versions (removal of unsupported, addition of newer) ensure continued support and do not affect substantial equivalence.) |
| Programming Language | Microsoft® .NET C#, Microsoft® .NET C++, Java (mobile client) | Identical | Substantially Equivalent (IDENTICAL) |
| Maximum # of Supported Patients/Servers/Clients | Patients: 1200, Servers: 120, Clients: 240, SW Clients: 40 | Identical | Substantially Equivalent (IDENTICAL) |
| Compatible Devices | Philips IntelliVue Cableless Measurements, MP5/MP5SC, MX400/XG50, SureSigns VS3/VS4, Biosensor EarlySense Insight Device | Adds Philips EarlyVue VS30 (K190624) and Philips Biosensor BX100 (K192875) | Substantially Equivalent (Addition of new patient monitoring devices does not affect substantial equivalence.) |
| Software Functionality (General Overview) | Clinical Documentation, Patient Data Management, Reporting (SBAR), Calculations (Protocol Watch, EWS Scoring), Clinical decision support, Storage | Identical | Substantially Equivalent (IDENTICAL) |
| System Interfaces (IT Network Requirements) | Hospital IT (W)LAN infra, HL7, ADT, Labs, Paging | Adds HL7 data import extension | Substantially Equivalent (Addition of HL7 data import expands compatibility to 3rd party systems and does not affect substantial equivalence.) |
| Device Interfaces | Internal interface for connection to measuring devices via hospital LAN | Identical | Substantially Equivalent (IDENTICAL) |
| Remote Viewing/Operation | Independent display/operating interface, operations from host measuring device, PC UI (mouse/touchscreen), XDS Infrastructure Service | Identical | Substantially Equivalent (IDENTICAL) |
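The "HL7 data import extension" row above refers to ingesting qualified 3rd-party measurements over the hospital network. As a minimal sketch of what such an import involves, the following Python snippet parses numeric vital-sign observations out of a pipe-delimited HL7 v2 ORU^R01 message. The segment layout follows the generic HL7 v2 convention (OBX-2 value type, OBX-3 identifier, OBX-5 value, OBX-6 units); the sample message and field codes are illustrative assumptions, not Philips' actual interface specification.

```python
# Minimal sketch: extracting vital signs from an HL7 v2 ORU^R01 message.
# The sample message below is fabricated for illustration.

SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|3RDPARTY|WARD1|GUARDIAN|HOSP|202301011200||ORU^R01|MSG0001|P|2.6",
    "PID|1||12345^^^HOSP||DOE^JANE",
    "OBX|1|NM|HR^Heart Rate||72|bpm|||||F",
    "OBX|2|NM|SPO2^Oxygen Saturation||98|%|||||F",
])

def parse_vitals(message):
    """Return (code, value, unit) tuples from NM-typed OBX segments of a
    pipe-delimited HL7 v2 message (segments separated by carriage returns)."""
    vitals = []
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX" and fields[2] == "NM":   # OBX-2: numeric value type
            code = fields[3].split("^")[0]             # OBX-3: observation identifier
            value = float(fields[5])                   # OBX-5: observation value
            unit = fields[6]                           # OBX-6: units
            vitals.append((code, value, unit))
    return vitals

print(parse_vitals(SAMPLE_ORU))
```

A production interface would of course also validate message structure, handle escape sequences, and map device-specific observation codes, which is presumably what the cited HL7 testing verified.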
2. Sample size used for the test set and the data provenance:
The document does not specify a "test set" in terms of patient data. The evaluation relies on:
- Software Verification and Validation Testing: This is typically performed on software builds and simulated environments, not directly on patient data.
- Human Factors and Usability Testing: This involves human users interacting with the device. The sample size for this specific testing is not mentioned.
- Data Provenance: Not applicable in the context of patient data testing, as no patient data was used for performance evaluation of the software itself.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided. Since the evaluation focused on software verification/validation and human factors/usability, and not on clinical performance with patient data requiring expert ground truth, this type of detail is absent.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
No adjudication method is mentioned, as there was no test set requiring expert adjudication for clinical ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC comparative effectiveness study was conducted or mentioned. The device is a "Clinical Information Management System" and not an AI-assisted diagnostic tool for human readers. It collects, stores, and manages data, and provides decision support (e.g., Early Warning Score), but it doesn't appear to directly assist human readers in interpreting medical images or other complex data where "improvement" with AI would be measured.
6. If a standalone performance evaluation (i.e. algorithm only, without human-in-the-loop) was done:
The device is inherently a "Clinical Information Management System (CIMS) software application" that operates in a standalone manner as software. However, its functions of data collection, storage, management, and decision support (such as EWS) mean it is intended for use by healthcare providers and is integrated into clinical workflows. The performance testing focuses on its software functionality, reliability, and human factors rather than on a diagnostic algorithm's standalone performance. The "Guardian Early Warning Score (EWS)" is an algorithm, and its performance would be assessed for accuracy in calculating scores, but the document does not provide standalone performance metrics for it.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
For the software verification and validation, the "ground truth" would be the expected functional behavior and output of the software as defined by its requirements and specifications. For human factors testing, the ground truth would be the expected safe and effective interaction of users with the device. There isn't an external clinical ground truth (like pathology or outcomes) applied to the software's performance itself in this submission.
8. The sample size for the training set:
Not applicable. This is not an AI/machine learning model where a specific training set (of patient data) would be used. The software is developed based on engineering principles and regulatory requirements.
9. How the ground truth for the training set was established:
Not applicable, as there is no training set for an AI/machine learning model.
(138 days)
The IntelliVue GuardianSoftware is intended for use by healthcare providers whenever there is a need for generation of a patient record.
The IntelliVue GuardianSoftware is indicated for use in the collection, storage and management of data from Philips specified measurements and Philips Patient Monitors that are connected through networks.
The IntelliVue GuardianSoftware (866009) is a Clinical Information Management System. It collects and manages vital signs data acquired from the Philips specified measurements and Philips Patient Monitors. The IntelliVue GuardianSoftware provides review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics, meaning trending, notification, calculations, and clinical advisories including EWS deterioration status. The IntelliVue GuardianSoftware is a software-only product. It is intended to be installed on customer-supplied, compatible off-the-shelf information technology equipment that meets the technical requirements as specified by Philips.
The provided text primarily details an FDA 510(k) submission for the Philips IntelliVue GuardianSoftware and an administrative change letter. It does not contain a detailed study description with acceptance criteria and proof of device performance in the requested format.
However, based on the general information provided in the 510(k) summary, I can extract and infer some information, but it will not be a complete answer to all your specific questions:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or detailed device performance metrics in a table format. It broadly states:
| Acceptance Criteria (Inferred) | Reported Device Performance (Inferred) |
|---|---|
| Performance, functionality, and reliability characteristics met | All test results showed substantial equivalence to the predicate device. |
| Compliance with hazard analysis pass/fail criteria | All specified pass/fail criteria have been met. The test results confirmed the effectiveness of the implemented design risk mitigation measures. |
| Meeting safety and reliability requirements and performance claims | The Philips IntelliVue GuardianSoftware (SW Rev.D.0) meets all safety and reliability requirements and performance claims. |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified. The document only mentions "software testing on integration level (functional testing and regression testing) and software testing on system level (hazard analysis testing and dedicated software performance testing)." It does not indicate the number of patient records or data points used in these tests.
- Data Provenance: Not specified. It does not mention the country of origin of the data or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not specified. The document does not describe the establishment of a "ground truth" for the test set using human experts. The testing appears to be functional and performance-based against specifications, not clinical outcomes evaluated by experts.
4. Adjudication method for the test set
Not specified. Since no expert ground truth establishment is mentioned, there's no mention of an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No. The document describes the Philips IntelliVue GuardianSoftware as a Clinical Information Management System for collecting, storing, and managing vital signs data, and providing review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics. It is not an AI-assisted diagnostic device, and therefore, an MRMC comparative effectiveness study comparing human readers with and without AI assistance is not applicable to this device and was not mentioned.
6. If a standalone performance evaluation (i.e. algorithm only, without human-in-the-loop) was done:
Yes, implicitly. The testing described is for the software itself, independently verifying its performance, functionality, and reliability according to specifications and safety standards. The device is a "software only product" and its testing would inherently be standalone.
7. The type of ground truth used
For the software verification and validation, the "ground truth" would be the pre-defined specifications, requirements, and safety standards (e.g., ANSI/AAMI/IEC 62304:2006). The software was tested to ensure it met these established criteria. It does not involve medical ground truth like pathology, expert consensus on images, or outcomes data.
8. The sample size for the training set
Not applicable/Not specified. This device is described as a "Clinical Information Management System" that handles data collection, storage, and management, including predictive trend analytics and clinical advisories. While "predictive trend analytics" could potentially involve machine learning, the document does not elaborate on the specific algorithms used or if a training set, characteristic of machine learning models, was employed or is relevant to its substantial equivalence claim. The focus here is on the functionality and safety of the data management software itself.
9. How the ground truth for the training set was established
Not applicable/Not specified, for the same reasons as #8. If predictive trend analytics involve trained models (which is not explicitly stated but hinted at), the method for establishing ground truth for such a training set is not described. The document emphasizes testing against a predicate device's cleared specifications and general software safety standards.
(214 days)
The IntelliVue GuardianSoftware is indicated for use by healthcare providers whenever there is a need for generation of a patient record.
The IntelliVue GuardianSoftware is intended for use in the collection, storage and management of data from Philips specified measurements and Philips Patient Monitors that are connected through networks.
The IntelliVue GuardianSoftware (866009) is a Clinical Information Management System. It collects and manages vital signs data acquired from the IntelliVue Cableless Measurements and IntelliVue Patient Monitors. The IntelliVue GuardianSoftware provides review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics, meaning trending, notification, calculations, and clinical advisories including EWS deterioration status. The IntelliVue GuardianSoftware is a software-only product. It is intended to be installed on customer-supplied, compatible off-the-shelf information technology equipment that meets the technical requirements as specified by Philips.
The provided text is a 510(k) summary for the Philips IntelliVue Guardian Software, Revision C.1. This document primarily focuses on demonstrating substantial equivalence to a predicate device and does not contain detailed information about acceptance criteria or specific study results showing device performance in the way a clinical trial or algorithm validation study would.
The document states that the modified device has the same technological characteristics as the legally marketed predicate device and that "all test results showed substantial equivalence." It also mentions that "Testing involved software functional testing and regression testing on an integration and system level as well as testing from the hazard analysis," and "Testing as required by the hazard analysis was conducted and all specified pass/fail criteria have been met."
However, it does not provide quantitative performance metrics (e.g., sensitivity, specificity, AUC) or the methodology of a study that would typically be described with acceptance criteria and a detailed analysis of human-machine interaction or standalone AI performance. The device is a "Clinical Information Management System" that "collects and manages vital signs data" and provides "review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing and predictive trend analytics." This type of device's "performance" is often assessed through software verification and validation, ensuring it accurately processes and displays data, rather than through a diagnostic accuracy study.
Therefore, many of the requested details about acceptance criteria, study sample sizes, expert ground truth establishment, MRMC studies, and standalone AI performance cannot be extracted from this document, as it describes a software system for data management and display, not an AI/ML diagnostic or predictive algorithm.
Based on the provided text, here is what can be inferred or stated:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Functional and Regression Testing Pass/Fail Criteria: | "All test results showed substantial equivalence." |
| - Accuracy of data collection, storage, and management. | "meets all safety and reliability requirements and performance claims." |
| - Correct operation of review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics (trending, notification, calculations, clinical advisories, EWS deterioration status). | "confirmed the effectiveness of the implemented design risk mitigation measures." |
| Hazard Analysis Testing Pass/Fail Criteria: | "all specified pass/fail criteria have been met." |
| - Effectiveness of design risk mitigation measures. | |
| IEC 62304:2006 (Software life cycle processes) Compliance: | Verification according to this standard was conducted. |
| Safety and Reliability Requirements: | "meets all safety and reliability requirements and performance claims." |
2. Sample size used for the test set and the data provenance:
- The document does not specify a sample size for a test set in the context of clinical performance data (e.g., patient cases).
- The testing described is primarily software functional, regression, and hazard analysis testing, not a clinical study on patient data.
- Data provenance (country of origin, retrospective/prospective) is not mentioned as the study described is software verification and validation, not a clinical data study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not applicable/provided in the context of a software verification and validation study. Ground truth in this context would be adherence to functional specifications and safety requirements, typically evaluated by software testers and quality engineers, not clinical experts for diagnostic accuracy.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not applicable/provided as there is no clinical test set requiring expert adjudication mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- An MRMC study was not performed or mentioned. The device is a "Clinical Information Management System" that supports data management and provides "predictive trend analytics," but not a diagnostic AI intended for human-AI synergistic performance evaluation in the manner of an MRMC study.
6. If a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop) was done:
- Standalone performance in the diagnostic sense (e.g., algorithm sensitivity/specificity) was not performed or described. The device's "performance" is in its ability to correctly collect, store, manage, and display data, and provide trend analysis, not in generating independent diagnostic interpretations.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the software verification and validation, the "ground truth" would be the device's functional specifications, design requirements, and hazard analysis outcomes. There is no mention of clinical ground truth (e.g., pathology, outcomes) in this summary, as it's not a diagnostic AI.
8. The sample size for the training set:
- This information is not applicable/provided. The document describes a traditional software system, not a machine learning model that requires a training set.
9. How the ground truth for the training set was established:
- This information is not applicable/provided.
(117 days)
The IntelliVue GuardianSoftware is indicated for use by healthcare providers whenever there is a need for the generation of a patient record.
The IntelliVue GuardianSoftware is intended for use in the collection, storage and management of data from Philips Cableless Measurements and Philips Patient Monitors that are connected through networks.
The IntelliVue GuardianSoftware (866009) is a Clinical Information Management System. It collects and manages vital signs data acquired from the IntelliVue Cableless Measurements and IntelliVue Patient Monitors. The IntelliVue GuardianSoftware provides trending, review, reporting, notification, clinical documentation, calculations, clinical advisories including EWS deterioration status, remote viewing and operating, interfacing, storage, and printing. The IntelliVue GuardianSoftware is a software-only product. It is intended to be installed on customer-supplied, compatible off-the-shelf information technology equipment that meets the technical requirements as specified by Philips.
The IntelliVue GuardianSoftware can currently acquire physiological data from the following compatible measuring devices:
- Philips IntelliVue Measurements: CL SpO2 Pod, CL NBP Pod, and CL Resp Pod
- Philips IntelliVue Patient Monitors MP5 and MP5SC
The subject modification adds the Philips IntelliVue MX400/450 patient monitors and the Philips SureSigns VS3/VS4 patient monitors as additional optional Philips patient monitors to the list of measuring devices compatible with the IntelliVue GuardianSoftware, and updates the versions of the supported SQL databases.
To support the purposes described above, the IntelliVue GuardianSoftware was modified; to maintain a consistent numbering scheme, the common software revision of the modified device is Rev. C.0.
The provided document pertains to an FDA 510(k) premarket notification for the Philips IntelliVue Guardian Software Revision C.0, which is a Clinical Information Management System. However, the document does not contain specific details regarding acceptance criteria, reported device performance metrics, or the study used to prove the device meets such criteria.
The document focuses on:
- Administrative changes: Updating a previous SE determination letter to remove a secondary product code.
- Description of the device: Stating that the IntelliVue GuardianSoftware collects and manages vital signs data from Philips Cableless Measurements and IntelliVue Patient Monitors. It provides trending, review, reporting, notification, clinical documentation, calculations, clinical advisories (including EWS deterioration status), remote viewing and operating, interfacing, storage, and printing.
- Modifications: The subject modification adds Philips IntelliVue MX400/450 and Philips SureSigns VS3/VS4 patient monitors as compatible devices and updates supported SQL database versions.
- Technological characteristics: Stating that the modified device has the same technological characteristics as the predicate device (software-only, client-server architecture, runs on standard PC/Server with specified Microsoft OS and databases).
- Verification, Validation, and Testing: A general statement about software functional testing, regression testing, and hazard analysis testing, confirming that pass/fail criteria were met. It also mentions compliance with IEC 62304:2006 for software life cycle processes.
Therefore, I cannot provide the requested information. The document explicitly states: "The 510(k) submission was not re-reviewed" (page 0), and "Verification, validation, and testing activities establish the performance, functionality, and reliability characteristics of the subject modified devices with respect to the predicate. Testing involved software functional testing and regression testing on an integration and system level as well as testing from the hazard analysis." (page 6). It also states: "Pass/Fail criteria were based on the specifications cleared for the predicate devices and all test results showed substantial equivalence." (page 6).
This indicates that clinical performance data, specific acceptance criteria, or detailed study results are not included in this summary. The focus is on demonstrating substantial equivalence to a predicate device through software verification and validation, rather than proving performance against new, explicit acceptance criteria with detailed study data.