Indicated for use by health care professionals whenever there is a need for monitoring the physiological parameters of patients. Intended for monitoring, recording and alarming of multiple physiological parameters of adult, pediatric and neonatal patients in healthcare facilities. The MP20, MP30, MP40 and MP50 are additionally intended for use in transport situations within healthcare facilities.
ST Segment monitoring is restricted to adult patients only.
The transcutaneous gas measurement (tcpO2/tcpCO2) is restricted to neonatal patients only.
The Philips MP20, MP30, MP40, MP50, MP60, MP70, MP80 and MP90 IntelliVue Patient Monitors, Release E.03, are patient monitoring devices. The information provided does not contain specific acceptance criteria values or detailed performance metrics. The submission focuses on demonstrating substantial equivalence to previously cleared devices through verification, validation, and testing activities.
Here's an analysis based on the provided text, addressing the requested points:
1. Table of Acceptance Criteria and Reported Device Performance:
The document lacks a table explicitly stating acceptance criteria and corresponding reported device performance values for specific clinical metrics (e.g., sensitivity, specificity, accuracy for arrhythmia detection, ST-segment monitoring, or other physiological parameters).
The general statement provided is:
"Pass/Fail criteria were based on the specifications cleared for the predicate device and test results showed substantial equivalence. The results demonstrate that the Philips IntelliVue Patient Monitor meets all reliability requirements and performance claims."
This indicates that the acceptance criteria were the existing specifications of the predicate devices, and the device's performance met these specifications. However, the specific metrics (e.g., error margins, detection rates) are not quantified in this summary.
2. Sample Size Used for the Test Set and Data Provenance:
The document:
- Does not specify the sample size used for the test set.
- Does not specify the data provenance (e.g., country of origin; retrospective or prospective) for any clinical data that might have been used in performance testing. The testing described appears to be primarily system-level, performance, and safety testing rather than a clinical study with patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts:
The document:
- Does not mention the use of experts to establish ground truth for a test set. The described testing focuses on engineering verification and validation against pre-defined specifications.
- Does not specify any expert qualifications, as no experts are mentioned in the context of ground truth establishment.
4. Adjudication Method for the Test Set:
The document:
- Does not describe any adjudication method for a test set. This implies that if any human review was involved, it wasn't a formal adjudication process as typically seen in clinical studies for AI/CAD devices.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size:
- No MRMC comparative effectiveness study was mentioned or indicated. The submission pertains to a software update (Release E.03) for existing patient monitors and demonstrates equivalence through engineering and performance testing rather than a clinical effectiveness study comparing human readers with and without AI assistance. Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The document describes "system level tests, performance tests, and safety testing." While these tests evaluate the device's functions, it's unclear if a specific "standalone" performance study in the context of an algorithm (i.e., measuring the algorithm's performance on clinical data without user interaction) was conducted and reported in this summary. The device itself is a monitor that presents data to a human, implying a human-in-the-loop for interpretation and action.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
The document does not explicitly state the type of ground truth used. Given the nature of the device (physiological parameter monitoring) and the testing described (system level, performance, safety), the ground truth for "performance tests" would likely be:
- Reference standards: Calibrated instruments or validated simulators for physiological parameters (e.g., heart rate, blood pressure, oxygen saturation, ECG signals).
- Pre-defined specifications: The accepted operating ranges and accuracy limits of the predicate devices.
8. The sample size for the training set:
- Not applicable / Not mentioned. The submission describes a software update for existing patient monitors. It does not refer to a machine learning or AI algorithm development that would typically involve a "training set." The testing performed is to ensure the new software release functions equivalently to the previous versions and meets specifications.
9. How the ground truth for the training set was established:
- Not applicable / Not mentioned. As there's no mention of a training set, the establishment of ground truth for such a set is not discussed.
§ 870.1025 Arrhythmia detector and alarm (including ST-segment measurement and alarm).
(a) Identification. The arrhythmia detector and alarm device monitors an electrocardiogram and is designed to produce a visible or audible signal or alarm when atrial or ventricular arrhythmia, such as premature contraction or ventricular fibrillation, occurs.

(b) Classification. Class II (special controls). The guidance document entitled “Class II Special Controls Guidance Document: Arrhythmia Detector and Alarm” will serve as the special control. See § 870.1 for the availability of this guidance document.