Mural Perinatal Surveillance is a perinatal monitoring system intended for electronic collection, display, and documentation of clinical data with optional features to store, export, annotate, calculate, and retrieve clinical data. Data is acquired from medical devices, electronic health records, or other data sources on a hospital's network. This device is intended for use by healthcare professionals in clinical support settings for obstetric patients during and after pregnancy.
This product is not intended to control or alter any of the medical devices providing data across the hospital network. All information or indications provided are intended to support the judgment of medical professionals and are not intended to be the sole source of information for decision making.
Mural Perinatal Surveillance is a software-only information management system designed for the obstetrical (OB) care environment. Its use covers patients during pregnancy, labor, and birth, and it also covers newborn documentation. The software interfaces with a healthcare facility's Electronic Medical Records (EMR) and patient monitoring network to collect, display, and document relevant patient data.
The software combines patient surveillance and alarm capabilities with patient documentation and record keeping in a single application, supporting patient care across the complete obstetrical care journey.
The provided document is a 510(k) summary for the GE Medical Systems Information Technologies, Inc. Mural Perinatal Surveillance system. This document outlines the device's indications for use, comparison to a predicate device, and a summary of non-clinical tests.
However, it does not contain information about specific acceptance criteria or a study demonstrating that the device meets such criteria, particularly regarding algorithmic performance. The document focuses on software validation, risk analysis, cybersecurity, and interoperability, confirming that the software was developed according to GE Healthcare's Quality Management System and relevant IEC standards.
The "Mural Perinatal Surveillance" is described as a software-only information management system intended for electronic collection, display, and documentation of clinical data, with optional features to store, export, annotate, calculate, and retrieve clinical data primarily for obstetric patients. It explicitly states that it is "not intended to control or alter any of the medical devices providing data across the hospital network" and that "all information or indications provided are intended to support the judgment of medical professionals and are not intended to be the sole source of information for decision making."
This indicates that the device functions as a data management and display tool, rather than an AI-driven diagnostic or prognostic tool that would require extensive performance studies with acceptance criteria based on metrics like sensitivity, specificity, or AUC, or comparative effectiveness studies with human readers.
Therefore, I cannot provide the requested acceptance criteria and a study proving the device meets those criteria in the context of AI performance: no such study is described in the provided document, nor would one typically be required for a device of this nature (a perinatal monitoring system for data management, not an AI for diagnosis or risk assessment). The "computed items & assessment tools" (Shoulder Dystocia Risk, Postpartum Hemorrhage Risk Score, and Bishop Score) are stated to be derived from "standard general computes widely accepted" or "well-established industry standards or evidence-based studies and peer-reviewed research journals," implying they are based on established clinical rules or formulas, not novel AI algorithms requiring new performance validation studies.
However, based on the provided text, I can infer the "acceptance criteria" related to the device's functions and the "study" (non-clinical tests) demonstrating its adherence to regulatory and quality standards.
Here's an interpretation based on the provided information, focusing on what is present in the document rather than what is absent related to AI performance:
Inferred Acceptance Criteria and Proof of Meeting Criteria for Mural Perinatal Surveillance
Given that Mural Perinatal Surveillance is a data management and display system, not a diagnostic AI, the acceptance criteria and proof of meeting them are primarily centered around its functionality, reliability, security, and adherence to quality systems.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria/Goals Implicit in Document | Reported Device Performance (Summary from document)
---|---|---
Functional Performance | Electronic collection, display, and documentation of clinical data; optional features to store, export, annotate, calculate (based on established computes), and retrieve data | Successfully collects, displays, and documents clinical data; software capabilities for clinical annotations & record archive demonstrated; software capabilities for alarms demonstrated (capable of generating alarm conditions within the software)
Data Acquisition | Data acquisition from medical devices, EHRs, and other networked sources | Acquires physiological data from compatible measuring devices (HL7 interfaces, fetal monitors on a network)
Safety (Software & Cybersecurity) | Software operates safely without unintended harm or error | Safety classification and performance testing in accordance with IEC 62304 Edition 1.1 (2015) successfully completed; Risk Analysis / Management Requirements Reviews successfully completed; cybersecurity evaluated as recommended in the 2014 FDA guidance document, successfully completed
Effectiveness (Software & Interoperability) | Software performs as intended and integrates effectively with hospital systems | Software Verification and Software Validation successfully completed, confirming that software and user requirements have been met; interoperability evaluated as recommended in the 2017 FDA guidance document, successfully completed
Usability | Device is user-friendly for healthcare professionals | Usability Testing successfully completed
Quality System & Regulatory Compliance | Developed under a robust quality management system; adheres to relevant standards | Developed following the GE Healthcare Quality Management System (QMS); Design Reviews successfully completed; alarm-functionality testing in accordance with IEC 60601-1-8 Edition 2.2 (2020-07) successfully completed
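To make the alarm row above concrete, here is a minimal sketch of the kind of software-generated alarm condition the summary describes (alarm conditions generated within the software, with alarm behavior tested against IEC 60601-1-8). The threshold values use the widely cited normal fetal heart rate baseline of 110–160 bpm; the product's actual limits, parameters, and alarm logic are not disclosed in the document, so everything below is illustrative.

```python
# Hypothetical sketch of a software alarm condition, NOT the product's logic.
# Limits default to the commonly cited normal FHR baseline of 110-160 bpm.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AlarmCondition:
    parameter: str
    value: float
    limit: float
    kind: str  # "low" or "high"


def check_fhr_alarm(fhr_bpm: float,
                    low: float = 110.0,
                    high: float = 160.0) -> Optional[AlarmCondition]:
    """Return an AlarmCondition if the fetal heart rate is out of range, else None."""
    if fhr_bpm < low:
        return AlarmCondition("FHR", fhr_bpm, low, "low")
    if fhr_bpm > high:
        return AlarmCondition("FHR", fhr_bpm, high, "high")
    return None
```

A real implementation would also handle alarm priorities, latching, and silencing per IEC 60601-1-8; this sketch only shows the threshold check.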
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" in the context of a dataset for validating AI performance. Instead, it refers to "Non-Clinical Tests" which include software verification and validation activities. These tests typically involve:
- Unit testing: Testing individual components of the software.
- Integration testing: Testing the interaction between different software components.
- System testing: Testing the complete integrated system.
- Regression testing: Ensuring changes don't break existing functionality.
- Usability testing: Testing with intended users to ensure ease of use.
- Cybersecurity testing: Penetration testing, vulnerability scanning.
- Interoperability testing: Testing data exchange with other systems (e.g., HL7 interfaces).
The document does not specify the "sample size" of data records or patient cases used for these non-clinical tests, nor does it mention data provenance (country of origin, retrospective/prospective). This level of detail is typically not required for the type of software validation described, which focuses on code quality, functional correctness, and adherence to software engineering best practices and relevant standards rather than a clinical performance study.
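As a rough illustration of what the HL7 interoperability testing above exercises: HL7 v2 messages are carriage-return-separated segments with `|` as the field separator. The sketch below splits a message into segments and fields; the sample message content (facility names, MRN, FHR value) is invented for illustration and is not from the document.

```python
# Minimal HL7 v2 parsing sketch. Real interfaces must also honor the
# delimiters declared in MSH-1/MSH-2 and handle components/repetitions;
# this only does a naive segment/field split. Sample message is invented.

def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list-of-field-lists]}."""
    segments: dict = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments


sample = ("MSH|^~\\&|FETALMON|L&D|EMR|HOSP|202401010800||ORU^R01|123|P|2.6\r"
          "PID|1||MRN001\r"
          "OBX|1|NM|FHR^Fetal Heart Rate||142|bpm")
parsed = parse_hl7(sample)  # parsed["OBX"][0][4] holds the observation value "142"
```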
3. Number of experts used to establish the ground truth for the test set and their qualifications
This information is not applicable as the document does not describe a study involving human experts establishing "ground truth" for a performance test set in the way it would be for an AI diagnostic algorithm. The acceptance criteria are based on software engineering principles, regulatory standards, and functional specifications, not on expert consensus on clinical data for diagnostic accuracy.
4. Adjudication method for the test set
This information is not applicable for the same reasons as point 3. There is no mention of adjudication for a clinical test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and its effect size.
No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not reported. This type of study is relevant for evaluating the impact of an AI (or other support tool) on human reader performance, typically in diagnostic imaging or similar fields. Since Mural Perinatal Surveillance is a data management and display system intended to support judgment rather than provide independent diagnostic conclusions, such a study would not be expected or relevant based on the information provided.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done.
The document implies that the "computed items & assessment tools" (e.g., Shoulder Dystocia Risk, Postpartum Hemorrhage Risk Score, Bishop Score) perform "standalone" calculations based on pre-defined rules/algorithms. However, these are based on "widely accepted" or "well-established industry standards," indicating they are deterministic calculations, not AI algorithms requiring standalone performance validation against ground truth data in the context of novel algorithm output. The document explicitly states the information is "not intended to be the sole source of information for decision making," meaning the system outputs are always viewed by human clinicians.
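The Bishop Score is a good example of such a deterministic, rule-based compute. The sketch below follows the standard published scoring table (five cervical factors, maximum 13 points); it is not the product's implementation, which is not disclosed in the document.

```python
# Sketch of the classic Bishop score, a well-established rule-based compute.
# Scoring table per the standard published version; NOT the product's code.

def bishop_score(dilation_cm: float, effacement_pct: float,
                 station: int, consistency: str, position: str) -> int:
    """Sum the five standard Bishop components (0-13)."""
    if dilation_cm == 0:
        d = 0
    elif dilation_cm <= 2:
        d = 1
    elif dilation_cm <= 4:
        d = 2
    else:
        d = 3

    if effacement_pct <= 30:
        e = 0
    elif effacement_pct <= 50:
        e = 1
    elif effacement_pct <= 70:
        e = 2
    else:
        e = 3

    s = {-3: 0, -2: 1, -1: 2, 0: 2, 1: 3, 2: 3}[station]
    c = {"firm": 0, "medium": 1, "soft": 2}[consistency]
    p = {"posterior": 0, "mid": 1, "anterior": 2}[position]
    return d + e + s + c + p
```

Because the mapping is a fixed lookup table, verifying such a compute reduces to checking outputs against the published table, which is exactly the kind of deterministic validation the 510(k) summary describes.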
7. The type of ground truth used
For the non-clinical software verification and validation, the "ground truth" is typically defined by:
- Functional specifications: How the software is designed to behave.
- Requirements documents: What the software is supposed to do.
- Industry standards: Adherence to IEC standards (e.g., 62304 for software lifecycle, 60601-1-8 for alarms).
- Known good outputs: For calculations, the "ground truth" is the correct mathematical or rule-based output.
- Security best practices: For cybersecurity.
- Clinical workflows: For usability testing.
No "ground truth" in the sense of expert consensus, pathology, or outcomes data is described as being used for a performance study of the device's own outputs.
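Verification against "known good outputs" for a deterministic compute looks, in essence, like the sketch below. The compute chosen here (completed weeks of gestation counted from the last menstrual period, a standard dating rule) and the expected values are illustrative; they are not taken from the document.

```python
# Illustrative verification of a rule-based compute against hand-computed
# "known good outputs". The compute and the cases are examples, not the
# product's verification suite.
from datetime import date


def gestational_age_weeks(lmp: date, today: date) -> int:
    """Completed weeks of gestation counted from the last menstrual period."""
    return (today - lmp).days // 7


# Expected values computed by hand from the rule above.
cases = [
    (date(2024, 1, 1), date(2024, 1, 15), 2),   # 14 days -> 2 weeks
    (date(2024, 1, 1), date(2024, 10, 7), 40),  # 280 days -> 40 weeks (term)
]
results = [gestational_age_weeks(lmp, d) == expected for lmp, d, expected in cases]
```

Here the "ground truth" is simply the mathematically correct output of the specified rule, consistent with the document's description of software verification.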
8. The sample size for the training set
This information is not applicable. The device is described as operating based on "standard general computes" and "complex computes derived directly from well-established industry standards or evidence-based studies and peer-reviewed research journals." This suggests rule-based or empirically derived algorithms rather than a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
This information is not applicable for the same reason as point 8. There is no mention of a machine learning training set or associated ground truth establishment process.
§ 884.2740 Perinatal monitoring system and accessories.
(a)
Identification. A perinatal monitoring system is a device used to show graphically the relationship between maternal labor and the fetal heart rate by means of combining and coordinating uterine contraction and fetal heart monitors with appropriate displays of the well-being of the fetus during pregnancy, labor, and delivery. This generic type of device may include any of the devices subject to §§ 884.2600, 884.2640, 884.2660, 884.2675, 884.2700, and 884.2720. This generic type of device may include the following accessories: Central monitoring system and remote repeaters, signal analysis and display equipment, patient and equipment supports, and component parts.
(b)
Classification. Class II (performance standards).