K Number: K132036
Date Cleared: 2014-07-01 (365 days)
Product Code:
Regulation Number: 870.2300
Panel: CV
Reference & Predicate Devices: HYPERVISOR VI Central Monitoring System (K080192)

Intended Use

The CMS network transfers information between the Hypervisor Central Monitoring System and other networked devices. It also allows information transfer among several CMS units. Network connections consist of hardwired network cables and/or WLAN connections. The CMS can be used for remote monitor management and for storing, printing, reviewing, or processing information from networked devices, and it is operated by medical personnel in hospitals or medical institutions.

The Telemetry Monitoring System is a sub-system of the CMS, intended to obtain ECG and SpO2 physiological information from adult and pediatric patients and send it to the CMS over WMTS frequencies within a defined coverage area.
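
To make the described data flow concrete, the sketch below (in Python) models the kind of vital-sign message a telemetry transmitter could forward to the CMS. Every field name, unit, and the `TelemetrySample` class itself are illustrative assumptions; the actual Mindray WMTS message format is not described in this summary.

```python
# Illustrative sketch only: a hypothetical vital-sign message from a
# telemetry transmitter to the CMS. Field names and units are assumptions,
# not the actual Mindray CMS/WMTS message format.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class TelemetrySample:
    device_id: str                  # telemetry transmitter identifier
    patient_id: str                 # hospital patient identifier
    timestamp: datetime             # acquisition time (UTC)
    ecg_lead_ii_mv: List[float]     # short ECG segment, millivolts
    spo2_percent: float             # pulse oximetry saturation, %
    pulse_rate_bpm: int             # pulse rate, beats per minute

    def is_plausible(self) -> bool:
        """Basic range check before the CMS accepts the sample."""
        return 0.0 <= self.spo2_percent <= 100.0 and 0 < self.pulse_rate_bpm < 300


sample = TelemetrySample(
    device_id="TMS-0042",
    patient_id="P-1001",
    timestamp=datetime.now(timezone.utc),
    ecg_lead_ii_mv=[0.02, 0.15, 1.10, -0.25, 0.05],
    spo2_percent=97.5,
    pulse_rate_bpm=72,
)
assert sample.is_plausible()
```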

Device Description

The CMS network is a medical information system made up of networked devices that each have separate 510(k) clearance. The CMS is the hub of communication among these networked devices: it can store, print, review, or process the information they send. It also provides remote monitor management, freeing clinicians from continuous bedside monitoring and enabling centralized monitoring management.
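
As an illustration of the centralized storage-and-review role described above, here is a minimal sketch of a CMS-like store that collects readings from networked devices and lets staff review them per patient. The `CentralStation` class and its methods are hypothetical names invented for this example and do not reflect the actual CMS software design.

```python
# Illustrative sketch only: a hypothetical central store that aggregates
# readings from networked bedside/telemetry devices for later review.
# Names and structure are assumptions, not the actual CMS architecture.
from collections import defaultdict
from typing import Dict, List


class CentralStation:
    def __init__(self) -> None:
        # patient_id -> list of (device_id, parameter, value) readings
        self._records: Dict[str, List[tuple]] = defaultdict(list)

    def receive(self, patient_id: str, device_id: str,
                parameter: str, value: float) -> None:
        """Store a reading forwarded by a networked device."""
        self._records[patient_id].append((device_id, parameter, value))

    def review(self, patient_id: str) -> List[tuple]:
        """Return all stored readings for one patient."""
        return list(self._records[patient_id])


cms = CentralStation()
cms.receive("P-1001", "TMS-0042", "SpO2", 97.5)
cms.receive("P-1001", "TMS-0042", "PulseRate", 72)
print(cms.review("P-1001"))
```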

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided 510(k) summary.

Important Note: The provided document describes a Central Monitoring System and a Telemetry Monitoring System, which are hardware and software systems for patient monitoring. This 510(k) summary does NOT involve Artificial Intelligence (AI) or machine learning. Therefore, many of the questions related to AI models, such as training sets, ground truth establishment for training, or MRMC studies for AI assistance, are not applicable to this submission.

The "study" described is a series of tests to demonstrate the safety and effectiveness of the traditional medical device, particularly its substantial equivalence to a predicate device.


Table of Acceptance Criteria and Reported Device Performance

Since this is not an AI/ML device, the "acceptance criteria" are not framed as specific performance metrics like sensitivity/specificity for a diagnostic algorithm. Instead, they are compliance with recognized standards and successful completion of various engineering and validation tests. The "reported device performance" is the statement that the device passed these tests.

Acceptance Criterion | Reported Device Performance (as stated in the document)
Compliance with Recognized Safety Standards | Successfully complies.
Compliance with Recognized Performance Standards | Successfully complies.
Compliance with Recognized Electromagnetic Compatibility (EMC) Standards | Successfully complies.
Risk Analysis Development and Hazard Mitigation | Risk analysis developed; mitigation documented.
Requirements Specification Review | Completed successfully.
Hardware Testing | Completed successfully.
Software Testing | Completed successfully.
Code Design & Code Reviews | Completed successfully.
Environmental Testing | Completed successfully.
Safety Testing | Completed successfully.
Performance Testing | Completed successfully.
Hardware Validation | Completed successfully.
Software Validation | Completed successfully.
Substantial Equivalence to Predicate Device (K080192) | Determined to be substantially equivalent in safety, effectiveness, and performance.

Study Details (Applicable to Traditional Medical Devices, Not AI)

  1. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified in terms of patient data or case numbers typically associated with AI model testing. The "test set" here refers to the actual physical device and its software being subjected to various engineering and validation tests. These tests would involve specific test environments, simulated data inputs, and potentially real patient data in a controlled setting for certain functions (e.g., ECG or SpO2 signal acquisition). The document does not quantify this "sample size."
    • Data Provenance: Not explicitly stated. Given that it's a hardware/software system, testing would likely be performed in a controlled lab environment by the manufacturer (Shenzhen Mindray Bio-medical Electronics Co., LTD, China). There is no mention of external data collection (country of origin, retrospective/prospective) because it's not a diagnostic AI model.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not Applicable. For a traditional monitoring system, "ground truth" (e.g., in terms of a gold standard diagnosis) is not established by human experts in the same way it would be for an AI diagnostic algorithm. The system's performance is verified against engineering specifications, known signal properties, and the performance of the predicate device. If human review was involved for specific signals (e.g., ECG waveform review), it's not detailed in this summary.
  3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not Applicable. Adjudication methods like 2+1 or 3+1 are typically used for establishing ground truth for image interpretation or diagnostic tasks involving multiple human readers to resolve discrepancies. This is not relevant for the type of safety and performance testing described for this monitoring system.
  4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

    • Not Applicable. This device is not an AI system. Therefore, no MRMC study involving AI assistance would have been performed.
  5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Partially Applicable / Misaligned Question. The device itself is a "standalone" system in terms of its functions (monitoring, data transfer, storage). Its performance for functions like ECG and SpO2 acquisition is "algorithm only" in the sense that the internal processing is automatic. However, the question typically refers to the standalone performance of an AI diagnostic algorithm. Since this is a monitoring system, its "standalone" performance would be assessed through the various engineering and validation tests (accuracy of readings, reliability of data transfer, alarm functionality, etc.). The document indicates these tests were performed.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Engineering Specifications and Predicate Device Performance. The "ground truth" for this monitoring system is its adherence to predefined engineering specifications, recognized industry standards (e.g., for physiological measurement accuracy and network communication protocols), and its equivalence to the performance of its legally marketed predicate device (HYPERVISOR VI Central Monitoring System, K080192). For physiological parameters, "truth" would be established by calibrated measurement devices or validated simulators producing known signals; a minimal sketch of this kind of reference comparison appears after this list.
  7. The sample size for the training set:

    • Not Applicable. This is not an AI/ML device that requires a "training set."
  8. How the ground truth for the training set was established:

    • Not Applicable. As this is not an AI/ML device, there is no training set or ground truth establishment for it in the context of machine learning.
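
As referenced in item 6, the sketch below illustrates in general terms how readings from a monitor could be checked against reference values produced by a calibrated simulator, within a stated tolerance. The tolerance value and function name are assumptions chosen for illustration; they are not drawn from Mindray's test protocols or from any particular standard.

```python
# Illustrative sketch only: comparing device readings against reference
# values from a calibrated simulator. The tolerance is a made-up example,
# not a value taken from the 510(k) or any standard.
from typing import Sequence


def within_tolerance(device_readings: Sequence[float],
                     simulator_reference: Sequence[float],
                     tolerance: float) -> bool:
    """Pass if every reading is within +/- tolerance of its reference value."""
    return all(abs(d - r) <= tolerance
               for d, r in zip(device_readings, simulator_reference))


# Example: SpO2 readings taken while a simulator outputs known saturation levels.
reference = [85.0, 90.0, 95.0, 100.0]          # simulator setpoints, %
measured = [84.6, 90.3, 95.1, 99.8]            # hypothetical device readings, %
print(within_tolerance(measured, reference, tolerance=2.0))  # True
```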

§ 870.2300 Cardiac monitor (including cardiotachometer and rate alarm).

(a) Identification. A cardiac monitor (including cardiotachometer and rate alarm) is a device used to measure the heart rate from an analog signal produced by an electrocardiograph, vectorcardiograph, or blood pressure monitor. This device may sound an alarm when the heart rate falls outside preset upper and lower limits.

(b) Classification. Class II (performance standards).