K Number
K991773
Date Cleared
1999-06-07

(13 days)

Product Code
Regulation Number
870.1025
Panel
CV
Reference & Predicate Devices
Intended Use

The Hewlett-Packard family of patient monitor products is intended for monitoring, recording, and alarming of multiple physiological parameters. The devices are indicated for use in health care facilities by health care professionals whenever there is a need for monitoring the physiological parameters of adult, neonatal, and pediatric patients.

Device Description

The Hewlett-Packard family of Viridia Patient Monitors, individually known as the M3000A and M3046A (Viridia M3/4), has the common name patient monitor. The modification consists of added software affecting only the arrhythmia and ST measurement algorithm of each device's measurement computer processing unit.

AI/ML Overview

The provided text does not contain detailed acceptance criteria or a specific study demonstrating that the device meets them. It primarily describes a 510(k) submission for a patient monitor, focusing on its substantial equivalence to previously cleared devices.

Here's an attempt to extract and infer information based on the limited text:

1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria | Reported Device Performance |
|---|---|
| Maintain performance and reliability characteristics of the STAR algorithm | Substantial equivalence to predicate device specifications (implied) |
| Pass system-level tests | Tests conducted and passed (implied) |
| Pass integration tests | Tests conducted and passed (implied) |
| Pass safety testing derived from hazard analysis | Tests conducted and passed (implied) |
| Pass interference testing | Tests conducted and passed (implied) |
| Pass hardware testing | Tests conducted and passed (implied) |

Note: The document states that "Pass/Fail criteria were based on the specifications cleared for the predicate device and test results showed substantial equivalence." This implies that the acceptance criteria for the new device were the same as those established for the predicate device, and the new device met them. However, the specific metrics or values for these criteria are not detailed.
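To make the idea of predicate-based pass/fail criteria concrete, here is a minimal, hypothetical sketch of how bench results might be checked against specifications carried over from a predicate clearance. The metric names, threshold values, and measured results below are illustrative placeholders and are not taken from the 510(k) submission.

```python
# Hypothetical sketch: pass/fail evaluation of bench-test results against
# specifications assumed to be inherited from the predicate device.
# All metric names and numbers are illustrative, not from the submission.

predicate_specs = {
    "vfib_sensitivity_pct": 95.0,       # minimum acceptable value
    "st_measurement_error_mm": 0.2,     # maximum acceptable absolute error
    "alarm_latency_s": 10.0,            # maximum acceptable delay
}

# Results measured during bench studies of the modified algorithm.
bench_results = {
    "vfib_sensitivity_pct": 96.4,
    "st_measurement_error_mm": 0.15,
    "alarm_latency_s": 8.2,
}

# Metrics where larger is better; the rest are treated as upper limits.
higher_is_better = {"vfib_sensitivity_pct"}

def passes(metric: str, measured: float, spec: float) -> bool:
    """Return True if the measured value meets the predicate-derived spec."""
    return measured >= spec if metric in higher_is_better else measured <= spec

for metric, spec in predicate_specs.items():
    result = "PASS" if passes(metric, bench_results[metric], spec) else "FAIL"
    print(f"{metric}: measured={bench_results[metric]} spec={spec} -> {result}")
```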


2. Sample size used for the test set and the data provenance

The document mentions "bench studies" for verification, validation, and testing activities. However, it does not specify the sample size for the test set or the data provenance (e.g., country of origin, retrospective or prospective collection). There is no indication that patient data were used for these bench studies; the description instead suggests laboratory or simulated test environments.


3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

This information is not provided in the document. The testing described focuses on the algorithm's performance against predicate device specifications, not on expert-adjudicated ground truth from patient data.


4. Adjudication method for the test set

This information is not provided in the document. Given the nature of "bench studies" and evaluation against predicate specifications, an adjudication method in the context of human expert review is unlikely to have been relevant.


5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done

No, a multi-reader multi-case (MRMC) comparative effectiveness study was not mentioned or implied. The testing described is focused on the device's adherence to its own specifications and equivalence to predicate devices, not on human reader performance with or without AI assistance.


6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

Yes, the testing described appears to be a standalone algorithm evaluation. The document states, "Verification, validation, and testing activities were conducted to establish the performance and reliability characteristics of the STAR algorithm using bench studies." This implies evaluating the algorithm's performance independent of human interaction.
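As a purely illustrative aid, the sketch below shows what a standalone (algorithm-only) bench evaluation loop could look like: labeled ECG segments are fed to the classifier and outputs are tallied with no clinician in the loop. The function `star_classify` is a stand-in name for the algorithm under test, not an actual Hewlett-Packard API, and the test vectors are fabricated placeholders.

```python
# Hypothetical sketch of a standalone bench evaluation harness.
from collections import Counter

def star_classify(ecg_segment: list[float]) -> str:
    """Placeholder for the arrhythmia/ST algorithm under test (not a real API)."""
    # A real bench harness would invoke the measurement computer's algorithm here.
    return "normal" if max(ecg_segment, default=0.0) < 1.5 else "arrhythmia"

# Bench test vectors: (simulated ECG samples, expected classification).
test_vectors = [
    ([0.1, 0.4, 0.9, 0.3], "normal"),
    ([0.2, 1.8, 2.1, 0.5], "arrhythmia"),
]

tally = Counter()
for samples, expected in test_vectors:
    predicted = star_classify(samples)
    tally["match" if predicted == expected else "mismatch"] += 1

print(dict(tally))
```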


7. The type of ground truth used

The "ground truth" for the testing described seems to be the "specifications cleared for the predicate device." The device's performance was compared against these established specifications, and "test results showed substantial equivalence." There is no mention of expert consensus, pathology, or outcomes data being used as ground truth for this particular submission's testing.


8. The sample size for the training set

The document does not provide information regarding a training set or its sample size. The focus is on the performance and reliability characteristics of an existing algorithm (STAR software) that has been modified, not on the development or training of a new AI model with new data.


9. How the ground truth for the training set was established

Since no training set is mentioned or implied for the modifications described in this 510(k) summary, how its ground truth was established is not applicable or provided. The document is about a modification to an already existing and presumably validated algorithm (STAR software).

§ 870.1025 Arrhythmia detector and alarm (including ST-segment measurement and alarm).

(a) Identification. The arrhythmia detector and alarm device monitors an electrocardiogram and is designed to produce a visible or audible signal or alarm when atrial or ventricular arrhythmia, such as premature contraction or ventricular fibrillation, occurs.

(b) Classification. Class II (special controls). The guidance document entitled “Class II Special Controls Guidance Document: Arrhythmia Detector and Alarm” will serve as the special control. See § 870.1 for the availability of this guidance document.