K Number
K173715
Device Name
AlertWatch:OB
Manufacturer
Date Cleared
2018-04-23

(140 days)

Product Code
Regulation Number
884.2740
Panel
OB
Reference & Predicate Devices
Intended Use

AlertWatch:OB is intended for use by clinicians for secondary monitoring of maternal patients in the labor and delivery unit. AlertWatch:OB is a maternal surveillance system that combines data from validated electronic medical record systems, and displays them in one place. Once alerted by AlertWatch:OB, the clinician must refer to the primary monitor, device, or data source before making a clinical decision.

Device Description

AlertWatch:OB is a secondary monitoring system used by OB nurses, obstetricians, and OB anesthesiologists to monitor women in the Labor and Delivery (L&D) unit. The purpose of the program is to synthesize a wide range of maternal patient data and inform clinicians of potential problems. Once alerted, the clinician is instructed to refer to the primary monitoring device or EMR before making a clinical decision. AlertWatch:OB should only be connected to EMR systems that have been validated for use with AlertWatch:OB. AlertWatch, LLC performs the validation for each installation site.

AI/ML Overview

Here's an analysis of the provided text regarding the AlertWatch:OB device, addressing the requested information:

Key Takeaway: The provided document is a 510(k) summary for AlertWatch:OB, which primarily demonstrates substantial equivalence to a predicate device (AlertWatch:OR) based on similar intended use and technological characteristics. As such, it does not contain the typical acceptance criteria and a study proving the device meets those criteria that one might expect for a de novo device or an AI/ML product with novel performance claims. Instead, the focus is on verification and validation of the software and on human factors.


1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly present a table of quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy targets) and corresponding reported device performance values in the context of clinical accuracy for the AlertWatch:OB's core function of identifying patient issues. This is because its claim is for "secondary monitoring" and not for primary diagnostic capabilities or automated decision-making.

However, the document does describe performance activities related to usability, software functionality, and the establishment of "default limits and thresholds."

| Acceptance Criteria Category | Reported Device Performance |
| --- | --- |
| Software Functionality (V&V) | "Verification of AlertWatch:OR was conducted to ensure that the product works as designed, and was tested with both constructed data and data from the EMR. Validation was conducted to check the design and performance of the product." (Implies successful completion against internal specifications.) |
| Wireless Co-existence | "Wireless Co-existence testing was performed to establish that the wireless components work effectively in the hospital environment." (Implies effective operation in the intended environment.) |
| Usability / Human Factors | Formative study: conducted to identify and fix usability problems. Summative study (17 users): "The results of the study showed that users with minimal training were able to successfully perform critical tasks and use the device for its intended purpose – to clarify clinical information and support information access." (Implies successful usability.) |
| Clinical Validity of Default Limits | Phase 1 (References): "AlertWatch sought out definitive published studies that highlighted appropriate limits for certain patient conditions." Phase 2 (Expert Committee): "Obstetricians and OB anesthesia physicians at the University of Michigan Health System...reviewed the limits, provided feedback, and reviewed the final results." Phase 3 (External Experts): "External group of four anesthesiology and OB anesthesia experts...All approved the clinical limits." (Implies clinical consensus and validity of the set limits.) |
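Default limits established through this kind of literature review and expert consensus typically end up encoded as a simple configuration that the software consults at runtime. A minimal sketch of such a representation is below; the parameter names and numeric values are hypothetical illustrations, not the actual AlertWatch:OB limits.

```python
# Hypothetical default alert limits for a threshold-based maternal
# surveillance system. The values here are illustrative only; the real
# AlertWatch:OB limits were set via published references and expert review.
DEFAULT_LIMITS = {
    "heart_rate_bpm": {"low": 50, "high": 120},
    "systolic_bp_mmhg": {"low": 90, "high": 160},
    "spo2_pct": {"low": 94, "high": None},  # no upper alert threshold
}


def out_of_range(parameter: str, value: float, limits=DEFAULT_LIMITS) -> bool:
    """Return True if `value` falls outside the configured band for `parameter`."""
    band = limits[parameter]
    if band["low"] is not None and value < band["low"]:
        return True
    if band["high"] is not None and value > band["high"]:
        return True
    return False
```

Keeping the limits in data rather than in code makes the expert-reviewed thresholds easy to audit and to update without changing the alerting logic.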

2. Sample Size Used for the Test Set and Data Provenance

  • Software Verification & Validation: The document mentions "constructed data and data from the EMR" for testing but does not specify sample sizes for these test sets or their provenance (e.g., country of origin, retrospective/prospective).
  • Human Factors Study: 17 users were recruited for the summative usability study. The provenance of these users (e.g., hospital staff, general population) is not specified, nor is the origin of the data used in the usability testing (likely simulated or de-identified data).
  • Clinical Data (main performance study): "Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This implies no specific clinical test set was used to establish performance metrics like sensitivity/specificity for identifying patient issues for the 510(k) submission.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

Given the "Not applicable" for clinical data, there isn't a "ground truth" establishment in the traditional sense of a diagnostic or predictive AI device for a clinical test set.

However, for the establishment of default limits and thresholds:

  • An "Expert Committee" of obstetricians and OB anesthesia physicians from the University of Michigan Health System was involved. The number of committee members is not given, but their roles are described.
  • An "External Experts" group of four anesthesiology and OB anesthesia experts provided the final review. Their specific qualifications (e.g., years of experience) are not detailed beyond their specialty.

4. Adjudication Method for the Test Set

Not applicable, as no formal clinical "test set" with a need for adjudicated ground truth (e.g., for disease presence/absence) was used for direct device performance evaluation in the supplied document. For the "default limits and thresholds," the process involved multiple stages of expert review and approval, implying a consensus-based approach rather than formal adjudication of individual cases.


5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

No MRMC comparative effectiveness study was done or reported in the document. The device is not making claims about improving human reader performance but rather providing secondary monitoring and information synthesis.


6. Standalone Performance Study (Algorithm Only)

The document primarily focuses on the software's functionality and its role as a "secondary monitoring system" that synthesizes existing EMR data and alerts clinicians. While "Software Verification and Validation Testing" was conducted to ensure it "works as designed" and "performed a series of calculations," it does not present a standalone performance study in terms of quantifiable clinical metrics (e.g., sensitivity, specificity for detecting specific conditions) for the algorithm itself. The device is intended to be used by clinicians who then refer to primary sources.


7. Type of Ground Truth Used

  • For Software Verification & Validation: Likely internal functional specifications and expected outputs based on "constructed data and data from the EMR."
  • For Default Limits and Thresholds:
    • Published medical literature/references.
    • Expert consensus among obstetricians, OB anesthesia physicians (University of Michigan Health System), and an external group of four anesthesiology and OB anesthesia experts. This is the closest to a "ground truth" described, but it pertains to the establishment of the parameters the device uses, not the evaluation of the device's output against a diagnostic truth.

8. Sample Size for the Training Set

The document does not mention a "training set" in the context of an AI/ML algorithm that learns from data to make predictions or categorizations. The AlertWatch:OB appears to be a rule-based or threshold-based system that processes data according to predefined clinical limits.
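To make the distinction concrete, a rule-based secondary monitor of this kind can be sketched as a loop that compares EMR-derived values against predefined thresholds and emits advisory alerts. This is an illustrative assumption about the general technique, not the actual AlertWatch:OB implementation; the parameter names and limits are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    parameter: str
    value: float
    message: str


# Hypothetical thresholds (low, high) for a rule-based secondary monitor.
LIMITS = {"heart_rate_bpm": (50, 120), "systolic_bp_mmhg": (90, 160)}


def evaluate(vitals: dict) -> list:
    """Compare EMR-derived vitals against predefined thresholds.

    Any alert is advisory only: per the intended use, the clinician must
    verify against the primary monitor or data source before acting.
    """
    alerts = []
    for name, value in vitals.items():
        low, high = LIMITS.get(name, (None, None))
        if low is not None and value < low:
            alerts.append(Alert(name, value, f"{name} below {low}: verify on primary monitor"))
        elif high is not None and value > high:
            alerts.append(Alert(name, value, f"{name} above {high}: verify on primary monitor"))
    return alerts
```

Because no parameters are learned from data, a system like this has no training set; its "ground truth" is the set of expert-approved thresholds themselves, which matches the 510(k) summary's description.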


9. How the Ground Truth for the Training Set Was Established

Not applicable, as there is no mention of a training set or machine learning algorithm that requires a "ground truth" for learning in this 510(k) summary. The "ground truth" for the device's operational parameters (default limits) was established via literature review and expert consensus.

§ 884.2740 Perinatal monitoring system and accessories.

(a) Identification. A perinatal monitoring system is a device used to show graphically the relationship between maternal labor and the fetal heart rate by means of combining and coordinating uterine contraction and fetal heart monitors with appropriate displays of the well-being of the fetus during pregnancy, labor, and delivery. This generic type of device may include any of the devices subject to §§ 884.2600, 884.2640, 884.2660, 884.2675, 884.2700, and 884.2720. This generic type of device may include the following accessories: Central monitoring system and remote repeaters, signal analysis and display equipment, patient and equipment supports, and component parts.

(b) Classification. Class II (performance standards).