510(k) Data Aggregation (92 days)
Central Monitoring System Software is intended to conduct centralized monitoring of adult, pediatric, and neonatal patients' vital sign information from compatible bedside monitors. The software collects, stores, displays, and alarms on the information provided by the bedside monitors; monitored parameters include Electrocardiogram (ECG), Heart Rate (HR), Respiration (RESP), Pulse Oxygen Saturation (SpO2), Pulse Rate (PR), Non-invasive Blood Pressure (NIBP), Invasive Blood Pressure (IBP), Impedance Cardiograph (ICG), Temperature (TEMP), Carbon Dioxide (CO2), Anesthetic Gas (AG), Fetal Heart Rate (FHR), Uterine Contraction (TOCO), and Fetal Movement (FM). It is intended for use in hospitals or medical institutions and is not intended for home use.
M6000C central monitoring system software, the subject of the risk management activities, can centrally monitor significant vital sign parameters of multiple patients, including ECG/HR, RESP, SpO2, PULSE, NIBP, IBP, ICG, TEMP, CO2, AG, FHR, TOCO, and FM. It connects to bedside units over a network and receives data from them; the data are then displayed on screen, recorded, or printed as needed. When monitored data exceed a set value, the central monitoring system software activates its alarm system and issues an alarm to alert doctors and nurses.
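The alarm behavior described above (raise an alarm when monitored data exceed a set value) is conventional threshold-based, rule-driven monitoring. A minimal sketch of that logic follows; the parameter names and limit values are illustrative assumptions, not the device's actual configuration:

```python
# Sketch of rule-based threshold alarming, as described above.
# ALARM_LIMITS values are hypothetical, not from the 510(k) document.
ALARM_LIMITS = {
    "HR": (50, 120),    # beats/min
    "SpO2": (90, 100),  # percent
    "RESP": (8, 30),    # breaths/min
}

def check_alarms(readings):
    """Return (parameter, value, reason) tuples for any reading
    outside its configured low/high limits."""
    alarms = []
    for param, value in readings.items():
        limits = ALARM_LIMITS.get(param)
        if limits is None:
            continue  # parameter not configured for alarming
        low, high = limits
        if value < low:
            alarms.append((param, value, "below low limit"))
        elif value > high:
            alarms.append((param, value, "above high limit"))
    return alarms

# Example: SpO2 of 85% is below the assumed low limit of 90%,
# so it alarms; HR of 72 is within limits and does not.
print(check_alarms({"HR": 72, "SpO2": 85}))
```

In a real central station the limits would be configurable per patient and per parameter, but the decision rule itself remains a fixed comparison against set values rather than a learned model.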
Here's a breakdown of the acceptance criteria and study information based on the provided text:
Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or a direct comparison of the device's performance against such criteria. Instead, it focuses on non-clinical testing to verify design specifications and compliance with voluntary standards. The "Performance testing" mentioned is general and doesn't provide specific numerical results or target metrics.
| Acceptance Criteria Category | Specific Criteria (from document) | Reported Device Performance (from document) |
|---|---|---|
| Compatibility | N/A (implied by testing) | Compatible with various bedside monitors |
| Data Latency | N/A (implied by testing) | Acceptable for the clinical environment |
| Software Verification | N/A (implied by testing) | Verified and validated |
| Risk Management | Compliance with EN ISO 14971:2012 | Complies with EN ISO 14971:2012 |
| Usability Engineering | Compliance with IEC 62366:2007 | Complies with IEC 62366:2007 |
| Software Lifecycle | Compliance with IEC 62304 | Complies with IEC 62304 |
Detailed Study Information:

1. A table of acceptance criteria and the reported device performance:
   See the table above.

2. Sample size used for the test set and the data provenance:
   - Sample size for test set: Not specified. The document mentions "Performance testing" but does not detail the number of test cases, patient data, or scenarios used.
   - Data provenance: Not specified. It is unclear whether the tests used simulated data, existing patient data, or newly acquired data; the country of origin is also not mentioned.

3. Number of experts used to establish the ground truth for the test set, and their qualifications:
   Not applicable. The document describes non-clinical performance and software verification testing, which typically does not involve expert-established ground truth in the way clinical studies or diagnostic AI algorithms do. The performance testing appears to focus on functional verification rather than accuracy against a clinical reference standard.

4. Adjudication method for the test set:
   Not applicable; no external experts or adjudication process against a ground truth is described for the functional and performance testing.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human-reader improvement with AI assistance versus without:
   No. The submission explicitly states: "No clinical study is included in this submission." No MRMC study or AI-assistance evaluation was therefore performed.

6. Whether standalone (algorithm-only, without human-in-the-loop) performance testing was done:
   Yes, in a sense. The described "Performance testing" and "Software verification and validation testing" evaluate the software itself, without human-in-the-loop involvement in the performance assessment. These tests confirm the software's functionality, compatibility, and data-processing capabilities rather than its diagnostic or interpretive accuracy in a clinical context.

7. The type of ground truth used:
   Not explicitly stated as "ground truth" in the diagnostic sense. For the functional and performance tests, the reference is the expected behavior or output of the software and system based on its design specifications and standard requirements. For example, for data latency the reference would be the defined acceptable latency; for compatibility, successful communication with the specified bedside monitors.

8. The sample size for the training set:
   Not applicable. The device is a central monitoring system that collects, stores, displays, and alarms on vital sign information; it is not an AI/ML algorithm "trained" on a dataset in the typical sense. It performs rule-based monitoring and alarm generation.

9. How the ground truth for the training set was established:
   Not applicable, as there is no training set for this type of monitoring system.
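To make the "expected behavior as ground truth" idea concrete, a functional verification check compares measured system behavior against a design specification. The sketch below checks display latency against a bound; the 2.0-second limit and the function name are hypothetical assumptions for illustration, not values from the submission:

```python
# Illustrative functional verification: the design specification
# (a maximum latency bound) serves as the reference, in place of a
# clinical ground truth. The bound below is an assumed example value.
MAX_LATENCY_SECONDS = 2.0  # hypothetical spec limit, not from the document

def verify_latency(timestamps_sent, timestamps_displayed):
    """Pass if every sample reaches the central display within the
    specified latency bound; also return the worst observed latency."""
    latencies = [d - s for s, d in zip(timestamps_sent, timestamps_displayed)]
    worst = max(latencies)
    return worst <= MAX_LATENCY_SECONDS, worst

# Three samples sent at t=0.0, 1.0, 2.0 s and displayed at 0.4, 1.5, 2.6 s:
ok, worst = verify_latency([0.0, 1.0, 2.0], [0.4, 1.5, 2.6])
print("PASS" if ok else "FAIL", f"worst latency {worst:.1f}s")
```

The same pattern applies to compatibility testing: the pass/fail criterion is derived from the specification (successful communication with each listed bedside monitor) rather than from expert-adjudicated reference data.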