K Number
K210160
Device Name
AlertWatch:AC
Manufacturer
Date Cleared
2021-09-10

(232 days)

Product Code
Regulation Number
870.2300
Panel
CV
Reference & Predicate Devices
Intended Use

AlertWatch:AC is intended for use by physicians for secondary monitoring of ICU patients. AlertWatch:AC is also intended for use by physicians providing supplemental remote support to bedside care teams in the management and care of ICU patients. AlertWatch:AC is not intended for use in monitoring pediatric or neonatal patients. AlertWatch:AC is a software system that combines data from the electronic medical record, networked physiologic monitors, and ancillary systems, and displays them on a dashboard view of the unit and patient. The clinical decision support is generated to aid in understanding the patient's current condition and changes over time. Once alerted by AlertWatch:AC, the physician must refer to the primary monitor, device or data source before making a clinical decision.

Device Description

AlertWatch:AC is a secondary monitoring system used by physicians to monitor adult patients in an ICU environment. The purpose of the device is to synthesize a wide range of patient data and inform physicians of potential problems. Once alerted, the physician is instructed to refer to the primary monitoring device or EMR before making a clinical decision. The software design includes a default set of rules and alerts that can be configured by the hospital during the installation process. AlertWatch:AC is intended to supplement, not replace, a hospital's primary EMR. The device retrieves data from the electronic medical record (EMR) system and networked physiologic monitors, integrates this data, and performs a series of calculations to assess potential clinical issues. The information is conveyed both via organ colors and messages in the alert panel. Any alert can also be configured to send pages to physicians assigned to the patient.
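The configurable, rule-based alerting described above can be sketched as a set of threshold rules evaluated against incoming vitals. This is a minimal illustration only; the rule fields, parameter names, and default limits below are all assumptions, not AlertWatch's actual schema or thresholds.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """A hypothetical configurable threshold rule (illustrative, not AlertWatch's schema)."""
    vital: str    # key of the physiologic parameter, e.g. "heart_rate"
    low: float    # lower limit (alert if value < low)
    high: float   # upper limit (alert if value > high)
    message: str  # text shown in the alert panel

# Hospitals could override these defaults at installation time.
DEFAULT_RULES = [
    AlertRule("heart_rate", 50, 120, "Heart rate out of range"),
    AlertRule("map", 65, 110, "Mean arterial pressure out of range"),
]

def evaluate(vitals: dict, rules=DEFAULT_RULES) -> list[str]:
    """Return alert messages for any vital outside its configured limits."""
    alerts = []
    for r in rules:
        value = vitals.get(r.vital)
        if value is not None and not (r.low <= value <= r.high):
            alerts.append(r.message)
    return alerts

print(evaluate({"heart_rate": 130, "map": 70}))  # ['Heart rate out of range']
```

In a real system each fired rule would also drive the dashboard's organ colors and optional paging, but those outputs are omitted here for brevity.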

AI/ML Overview

The provided text describes the 510(k) clearance for AlertWatch:AC, a secondary monitoring system for ICU patients. However, it does not contain information about acceptance criteria or a specific study that proves the device meets such criteria in terms of the accuracy or performance of its clinical decision support algorithms.

The document focuses on regulatory compliance, outlining the device's intended use, technological comparison to a predicate device, and various verification and validation activities (software V&V, human factors study, default limits review, and wireless co-existence testing).

Therefore, I cannot provide a table of acceptance criteria and reported device performance with metrics such as sensitivity, specificity, or AUC, nor can I detail the sample size, ground-truth establishment, or expert qualifications for such a study, because this information is not present in the provided text.

Based on the document, here's what can be inferred or explicitly stated about the device's validation:

  1. A table of acceptance criteria and the reported device performance: Not available in the provided text. The document refers to "software verification and validation testing" and "performance testing" but does not provide specific quantitative acceptance criteria or results for the clinical decision support functionality (e.g., accuracy of alerts). It states that "the results of performance testing demonstrate that the subject device performs in accordance with specifications and meets user needs and intended uses," but no specifics are given.

  2. Sample sizes used for the test set and the data provenance:

    • Software Verification and Validation Testing: Performed with "both constructed data and data from the EMR." No specific sample size for the "data from the EMR" portion is provided.
    • Human Factors Study:
      • Summative usability study: 18 users.
      • Summative usability study on verbal alarm signals: 15 users.
    • Data Provenance: Not explicitly stated beyond "data from the EMR." No geographical origin (e.g., country) or retrospective/prospective nature is specified.
  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Expert Committee for Default Limits and Thresholds: "Acute care physicians at the University of Michigan Health System." The exact number of physicians is not given, but it implies multiple experts. Their specific qualifications (e.g., years of experience, board certifications) are not detailed beyond "acute care physicians."
  4. Adjudication method for the test set: Not explicitly mentioned for any testing related to the clinical decision support's accuracy. For the "Expert Committee" review of default limits, it states clinicians "reviewed the limits, provided feedback, and reviewed the final results," implying a consensus-based approach without detailing a specific adjudication method like 2+1 or 3+1.

  5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No, the document explicitly states "Clinical Data: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This indicates that no MRMC study comparing human readers with and without AI assistance was performed or presented.

  6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: The document mentions "Software verification and validation testing... ensure that the product works as designed, and was tested with both constructed data and data from the EMR." While this implies algorithm-only testing as part of V&V, it is not presented as a separate standalone performance study with metrics such as sensitivity or specificity for conditions detected by the algorithm. The device is positioned as clinical decision support: the physician "must refer to the primary monitor, device or data source before making a clinical decision," so it is not intended for standalone diagnostic use.

  7. The type of ground truth used:

    • For the "Default Limits and Thresholds": Ground truth was established by "Review of References" (published studies) and "Expert Committee" (consensus/feedback from acute care physicians). This suggests a form of expert consensus and literature-based validation for the rule-based alerts.
    • For "Software Verification and Validation Testing" using EMR data: The method for establishing ground truth for this EMR data is not described.
  8. The sample size for the training set: Not applicable. The AlertWatch:AC is described as a "software system that combines data... and performs a series of calculations to assess potential clinical issues." It uses "a default set of rules and alerts" and "established patient risk and acuity calculations (SOFA and SIRS)." This indicates a rule-based or calculational system rather than a machine learning model that would typically have a "training set." Therefore, no training set size is mentioned.

  9. How the ground truth for the training set was established: Not applicable, as it's not a machine learning model with a distinct training set. The "default limits and thresholds" and "established patient risk and acuity calculations" are based on literature review and expert consensus rather than labelled training data for an AI model.
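The SIRS calculation referenced in item 8 is a published, rule-based acuity score rather than a learned model. As an illustration of that kind of calculation, here is a minimal sketch of the standard four-criterion SIRS count from the 1992 ACCP/SCCM consensus definition; the function and parameter names are assumptions, not AlertWatch's actual API.

```python
def sirs_score(temp_c, heart_rate, resp_rate, paco2_mmhg, wbc_k, bands_pct):
    """Count how many of the four standard SIRS criteria are met.

    Thresholds follow the published 1992 ACCP/SCCM consensus criteria;
    the signature here is illustrative, not AlertWatch's implementation.
    """
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                  # temperature
        heart_rate > 90,                                  # tachycardia
        resp_rate > 20 or paco2_mmhg < 32,                # tachypnea / hypocapnia
        wbc_k > 12.0 or wbc_k < 4.0 or bands_pct > 10,    # white blood cell count
    ]
    return sum(criteria)

# SIRS is conventionally positive when two or more criteria are met.
score = sirs_score(temp_c=38.5, heart_rate=105, resp_rate=18,
                   paco2_mmhg=40, wbc_k=13.0, bands_pct=2)
print(score)  # 3
```

Because the thresholds come from published criteria, validating such a score reduces to verifying the rule logic against the literature, which is consistent with the document's literature-review-plus-expert-committee approach rather than a labelled training set.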

§ 870.2300 Cardiac monitor (including cardiotachometer and rate alarm).

(a)
Identification. A cardiac monitor (including cardiotachometer and rate alarm) is a device used to measure the heart rate from an analog signal produced by an electrocardiograph, vectorcardiograph, or blood pressure monitor. This device may sound an alarm when the heart rate falls outside preset upper and lower limits.

(b)
Classification. Class II (performance standards).