Indications for Use (M3046A): For monitoring, recording and alarming of multiple physiological parameters of adults, pediatrics and neonates in hospital and/or medical transport environments.
Indications for Use (MP60 & MP70): Indicated for use by health care professionals whenever there is a need for monitoring the physiological parameters of patients. Intended for monitoring, recording and alarming of multiple physiological parameters of adults, pediatrics and neonates in health care facilities.
The modification is primarily a software-based change that incorporates a Data Export function communication protocol to transfer data from the patient monitors to an external computer or external device (a data client). This protocol is a connection-oriented, message-based request/response protocol.
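The summary does not describe the protocol's framing or message contents. Purely as an illustration of what a connection-oriented, message-based request/response exchange with a data client could look like, here is a minimal Python sketch; the port number, the 4-byte length-prefix framing, and the request payload are assumptions made for this example and are not taken from the Philips Data Export specification.

```python
# Illustrative sketch only: a generic connection-oriented, message-based
# request/response client. The port, framing (4-byte length prefix), and
# message contents are assumptions, not the actual Philips protocol.
import socket
import struct


def send_request(host: str, port: int, payload: bytes) -> bytes:
    """Open a connection, send one length-prefixed request, return the response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        # Frame the request: 4-byte big-endian length followed by the payload.
        sock.sendall(struct.pack(">I", len(payload)) + payload)

        # Read the response header to learn how many payload bytes follow.
        header = _read_exact(sock, 4)
        (length,) = struct.unpack(">I", header)
        return _read_exact(sock, length)


def _read_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or raise if the peer closes the connection early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed before full message received")
        buf += chunk
    return buf


if __name__ == "__main__":
    # Hypothetical request for the latest numeric observations.
    response = send_request("monitor.example.local", 24105, b"GET_NUMERICS")
    print(response)
```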
This document is a 510(k) summary for Philips Medical Systems' M3046A (M2/M3/M4) Compact Portable Patient Monitor and MP60 & MP70 IntelliVue Patient Monitors with Portal Technology. The submission is for a software-based change that incorporates a Data Export function communication protocol.
Analysis of the provided 510(k) summary with respect to the requested acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance & 7. Type of Ground Truth Used:
The document does not explicitly state specific acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds with corresponding numerical targets) nor does it report detailed device performance metrics against such criteria.
The submission is for a software modification related to data export, and the verification testing mentioned is focused on "functional level tests and safety testing from the risk analysis." This suggests the primary focus of the testing was to ensure the new data export function worked as intended and did not introduce new safety hazards, rather than evaluating specific clinical performance metrics like arrhythmia detection accuracy.
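The summary gives no detail on these tests. As a hedged illustration of what a functional-level check of a data export message path could look like, the sketch below round-trips a hypothetical length-prefixed message; the export_message and parse_message helpers are invented for this example and are not part of any Philips software.

```python
# Hypothetical functional-level test sketch; the helpers below are invented
# for illustration and do not represent any Philips implementation.
import struct
import unittest


def export_message(values: dict[str, float]) -> bytes:
    """Encode parameter values as a length-prefixed text message."""
    body = ";".join(f"{name}={value}" for name, value in sorted(values.items())).encode()
    return struct.pack(">I", len(body)) + body


def parse_message(message: bytes) -> dict[str, float]:
    """Decode a length-prefixed text message back into parameter values."""
    (length,) = struct.unpack(">I", message[:4])
    body = message[4:4 + length].decode()
    return {name: float(value) for name, value in (pair.split("=") for pair in body.split(";"))}


class DataExportFunctionalTest(unittest.TestCase):
    def test_round_trip_preserves_values(self):
        values = {"HR": 72.0, "SpO2": 98.0}
        self.assertEqual(parse_message(export_message(values)), values)

    def test_declared_length_matches_body(self):
        message = export_message({"HR": 72.0})
        (length,) = struct.unpack(">I", message[:4])
        self.assertEqual(length, len(message) - 4)


if __name__ == "__main__":
    unittest.main()
```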
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
The document states: "Verification testing activities were conducted to establish the performance and reliability characteristics of the new device. Testing involved functional level tests and safety testing from the risk analysis."
There is no information provided regarding:
- The sample size of any test set (e.g., number of patients, number of data points).
- Data provenance (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
There is no mention of experts or ground truth establishment in the context of a clinical validation test set. Given the nature of the submission (software change for data export) and the description of the testing, it's highly unlikely that clinical experts were used to establish a "ground truth" in the way they would be for a diagnostic algorithm.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
As no expert review for ground truth is described, there is no adjudication method provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without:
No MRMC comparative effectiveness study was conducted or reported. The device is a patient monitor with data export functionality, not an AI-assisted diagnostic tool that would typically involve human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The device itself is a patient monitor, which is inherently a standalone device. The software modification is for data export. The "verification testing activities" are likely evaluating the functionality of this data export and the overall system's safety and effectiveness. However, it's not a standalone AI algorithm in the contemporary sense. The document does not describe specific algorithmic performance metrics in isolation.
8. The sample size for the training set:
No training set is mentioned. This type of device and the described modification (data export) would not typically involve machine learning or a training set in the conventional sense.
9. How the ground truth for the training set was established:
As no training set is mentioned, there is no information on how ground truth was established for it.
In summary, the provided 510(k) summary focuses on demonstrating substantial equivalence to predicate devices for a software-based modification related to data export. It does not contain the detailed clinical validation study data (acceptance criteria, test set details, expert ground truth, MRMC studies, training set information) that would be expected for a novel diagnostic or AI-powered device. The "verification testing activities" described are more aligned with engineering performance and safety testing for the new functionality.
§ 870.1025 Arrhythmia detector and alarm (including ST-segment measurement and alarm).
(a) Identification. The arrhythmia detector and alarm device monitors an electrocardiogram and is designed to produce a visible or audible signal or alarm when atrial or ventricular arrhythmia, such as premature contraction or ventricular fibrillation, occurs.
(b) Classification. Class II (special controls). The guidance document entitled “Class II Special Controls Guidance Document: Arrhythmia Detector and Alarm” will serve as the special control. See § 870.1 for the availability of this guidance document.
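Purely for orientation, a minimal sketch of the alarm behavior the identification describes (raise a signal when a detected atrial or ventricular arrhythmia event occurs); the event labels are assumptions for the example and are not drawn from the regulation or from any device algorithm.

```python
# Illustrative sketch of the alarm behavior described in the identification:
# produce a signal when an atrial or ventricular arrhythmia event is detected.
# The event labels below are assumptions used only for this example.
ALARM_EVENTS = {"premature_contraction", "ventricular_fibrillation", "atrial_fibrillation"}


def alarm_signal(detected_events: set[str]) -> bool:
    """Return True if any detected event should trigger a visible or audible alarm."""
    return bool(ALARM_EVENTS & detected_events)


if __name__ == "__main__":
    print(alarm_signal({"ventricular_fibrillation"}))  # True
    print(alarm_signal(set()))                          # False
```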