Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K210160
    Device Name
    AlertWatch:AC
    Manufacturer
    AlertWatch, Inc.
    Date Cleared
    2021-09-10 (232 days)

    Product Code
    Regulation Number
    870.2300
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer): AlertWatch, Inc.

    Intended Use

    AlertWatch:AC is intended for use by physicians for secondary monitoring of ICU patients. AlertWatch:AC is also intended for use by physicians providing supplemental remote support to bedside care teams in the management and care of ICU patients. AlertWatch:AC is not intended for use in monitoring pediatric or neonatal patients. AlertWatch:AC is a software system that combines data from the electronic medical record, networked physiologic monitors, and ancillary systems, and displays them on a dashboard view of the unit and patient. The clinical decision support is generated to aid in understanding the patient's current condition and changes over time. Once alerted by AlertWatch:AC, the physician must refer to the primary monitor, device or data source before making a clinical decision.

    Device Description

    AlertWatch:AC is a secondary monitoring system used by physicians to monitor adult patients in an ICU environment. The purpose of the device is to synthesize a wide range of patient data and inform physicians of potential problems. Once alerted, the physician is instructed to refer to the primary monitoring device or EMR before making a clinical decision. The software design includes a default set of rules and alerts that can be configured by the hospital during the installation process. AlertWatch:AC is intended to supplement, not replace, a hospital's primary EMR. The device retrieves data from the electronic medical record (EMR) system and networked physiologic monitors, integrates this data, and performs a series of calculations to assess potential clinical issues. The information is conveyed both via organ colors and messages in the alert panel. Any alert can also be configured to send pages to physicians assigned to the patient.
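
    The device description amounts to a configurable, rule-based architecture: data is merged from the EMR and networked monitors, checked against hospital-configured limits, and surfaced as alerts. As a rough illustration only, here is a minimal sketch of such threshold-rule evaluation; the rule schema, field names, and limits below are hypothetical, not AlertWatch's actual implementation.

        from dataclasses import dataclass
        from typing import Optional

        # Hypothetical rule schema; the 510(k) summary does not disclose
        # AlertWatch's actual configuration format or alerting code.
        @dataclass
        class AlertRule:
            name: str                     # e.g., "tachycardia"
            vital: str                    # key into the merged EMR/monitor record
            low: Optional[float]          # lower limit (None = no lower bound)
            high: Optional[float]         # upper limit (None = no upper bound)
            page_physician: bool = False  # optionally page the assigned physician

        # A default, hospital-configurable rule set, mirroring the summary's
        # "default set of rules and alerts that can be configured by the hospital."
        DEFAULT_RULES = [
            AlertRule("tachycardia", "heart_rate", low=None, high=120.0),
            AlertRule("hypotension", "map_mmhg", low=65.0, high=None,
                      page_physician=True),
        ]

        def evaluate_rules(record: dict, rules: list[AlertRule]) -> list[str]:
            """Return the names of rules violated by one merged patient record."""
            fired = []
            for rule in rules:
                value = record.get(rule.vital)
                if value is None:
                    continue  # missing data never fires an alert in this sketch
                too_low = rule.low is not None and value < rule.low
                too_high = rule.high is not None and value > rule.high
                if too_low or too_high:
                    fired.append(rule.name)
            return fired

        # Example: one record combining EMR data and a networked physiologic monitor.
        print(evaluate_rules({"heart_rate": 131, "map_mmhg": 58}, DEFAULT_RULES))
        # -> ['tachycardia', 'hypotension']

    In the device as described, a fired alert is conveyed via organ colors and messages in the alert panel, and can be configured to page the physicians assigned to the patient.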

    AI/ML Overview

    The provided text describes the 510(k) clearance for AlertWatch:AC, a secondary monitoring system for ICU patients. However, it does not contain information about acceptance criteria or a specific study that proves the device meets such criteria in terms of the accuracy or performance of its clinical decision support algorithms.

    The document focuses on regulatory compliance, outlining the device's intended use, technological comparison to a predicate device, and various verification and validation activities (software V&V, human factors study, default limits review, and wireless co-existence testing).

    Therefore, I cannot provide the requested table of acceptance criteria and reported device performance (with figures such as sensitivity, specificity, or AUC), nor the sample size, ground-truth establishment, or expert qualifications for such a study, because this information is not present in the provided text.

    Based on the document, here's what can be inferred or explicitly stated about the device's validation:

    1. A table of acceptance criteria and the reported device performance: Not available in the provided text. The document refers to "software verification and validation testing" and "performance testing" but does not provide specific quantitative acceptance criteria or results for the clinical decision support functionality (e.g., accuracy of alerts). It states that "the results of performance testing demonstrate that the subject device performs in accordance with specifications and meets user needs and intended uses," but no specifics are given.

    2. Sample sizes used for the test set and the data provenance:

      • Software Verification and Validation Testing: Performed with "both constructed data and data from the EMR." No specific sample size for the "data from the EMR" portion is provided.
      • Human Factors Study:
        • Summative usability study: 18 users.
        • Summative usability study on verbal alarm signals: 15 users.
      • Data Provenance: Not explicitly stated beyond "data from the EMR." No geographical origin (e.g., country) or retrospective/prospective nature is specified.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Expert Committee for Default Limits and Thresholds: "Acute care physicians at the University of Michigan Health System." The exact number of physicians is not given, but it implies multiple experts. Their specific qualifications (e.g., years of experience, board certifications) are not detailed beyond "acute care physicians."
    4. Adjudication method for the test set: Not explicitly mentioned for any testing related to the clinical decision support's accuracy. For the "Expert Committee" review of default limits, it states clinicians "reviewed the limits, provided feedback, and reviewed the final results," implying a consensus-based approach without detailing a specific adjudication method like 2+1 or 3+1.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done: No, the document explicitly states "Clinical Data: Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This indicates that no MRMC study comparing human readers with and without AI assistance was performed or presented.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: The document mentions "Software verification and validation testing... ensure that the product works as designed, and was tested with both constructed data and data from the EMR." While this implies algorithm-only testing as part of V&V, it doesn't present it as a separate performance study with metrics suitable for standalone performance (e.g., sensitivity, specificity for specific conditions detected by the algorithm). The device is positioned as clinical decision support where the physician "must refer to the primary monitor, device or data source before making a clinical decision," suggesting it's not intended for standalone diagnostic use.

    7. The type of ground truth used:

      • For the "Default Limits and Thresholds": Ground truth was established by "Review of References" (published studies) and "Expert Committee" (consensus/feedback from acute care physicians). This suggests a form of expert consensus and literature-based validation for the rule-based alerts.
      • For "Software Verification and Validation Testing" using EMR data: The method for establishing ground truth for this EMR data is not described.
    8. The sample size for the training set: Not applicable. AlertWatch:AC is described as a "software system that combines data... and performs a series of calculations to assess potential clinical issues." It uses "a default set of rules and alerts" and "established patient risk and acuity calculations (SOFA and SIRS)"; see the SIRS sketch after this list. This indicates a rule-based or calculational system rather than a machine learning model that would typically have a "training set." Therefore, no training set size is mentioned.

    9. How the ground truth for the training set was established: Not applicable, as it's not a machine learning model with a distinct training set. The "default limits and thresholds" and "established patient risk and acuity calculations" are based on literature review and expert consensus rather than labelled training data for an AI model.
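
    As context for item 8: SIRS is a fixed, criteria-counting score, which is consistent with a calculational rather than learned system. Below is a minimal sketch of the standard SIRS criteria count, using the commonly published thresholds; the function name and unit choices are my own, and this is not AlertWatch's implementation.

        def sirs_criteria_met(temp_c: float, hr: float, rr: float,
                              paco2_mmhg: float, wbc_k_per_ul: float,
                              bands_pct: float) -> int:
            """Count the classic SIRS criteria met; SIRS is commonly defined as >= 2."""
            criteria = [
                temp_c > 38.0 or temp_c < 36.0,            # temperature
                hr > 90.0,                                 # heart rate
                rr > 20.0 or paco2_mmhg < 32.0,            # respiration
                wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0  # white cells (10^3/uL)...
                or bands_pct > 10.0,                       # ...or >10% immature bands
            ]
            return sum(criteria)

        # Example: a febrile, tachycardic patient meets 2 criteria (the SIRS threshold).
        print(sirs_criteria_met(temp_c=38.6, hr=104, rr=16, paco2_mmhg=40,
                                wbc_k_per_ul=9.0, bands_pct=4.0))  # -> 2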


    K Number
    K173715
    Device Name
    AlertWatch:OB
    Manufacturer
    AlertWatch, Inc.
    Date Cleared
    2018-04-23 (140 days)

    Product Code
    Regulation Number
    884.2740
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer): AlertWatch, Inc.

    Intended Use

    AlertWatch:OB is intended for use by clinicians for secondary monitoring of maternal patients in the labor and delivery unit. AlertWatch:OB is a maternal surveillance system that combines data from validated electronic medical record systems, and displays them in one place. Once alerted by AlertWatch:OB, the clinician must refer to the primary monitor, device, or data source before making a clinical decision.

    Device Description

    AlertWatch:OB is a secondary monitoring system used by OB nurses, obstetricians, and OB anesthesiologists to monitor women in the Labor and Delivery (L&D) unit. The purpose of the program is to synthesize a wide range of maternal patient data and inform clinicians of potential problems. Once alerted, the clinician is instructed to refer to the primary monitoring device or EMR before making a clinical decision. AlertWatch:OB should only be connected to EMR systems that have been validated for use with AlertWatch:OB. AlertWatch, LLC performs the validation for each installation site.
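
    The per-site EMR validation requirement amounts to a deployment gate: the software should refuse data from an EMR installation that has not been validated. Purely as an illustration, here is a minimal sketch of such a gate; the registry, identifiers, and function below are hypothetical, since the summary does not describe how the requirement is enforced in software.

        # Hypothetical per-site validation registry; the summary does not describe
        # how AlertWatch actually enforces its EMR-validation requirement.
        VALIDATED_EMR_PAIRINGS = {
            ("example-hospital", "emr-vendor-x"),
            ("example-hospital-2", "emr-vendor-y"),
        }

        def connect_to_emr(site_id: str, emr_system: str) -> str:
            """Refuse to ingest data from an EMR pairing that has not been validated."""
            if (site_id, emr_system) not in VALIDATED_EMR_PAIRINGS:
                raise RuntimeError(f"EMR '{emr_system}' at site '{site_id}' is not "
                                   "validated for this device; refusing to connect.")
            return f"Connected to validated EMR '{emr_system}' at '{site_id}'."

        print(connect_to_emr("example-hospital", "emr-vendor-x"))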

    AI/ML Overview

    Here's an analysis of the provided text regarding the AlertWatch:OB device, addressing the requested information:

    Key Takeaway: The provided document is a 510(k) summary for AlertWatch:OB, which is primarily demonstrating substantial equivalence to a predicate device (AlertWatch:OR) based on similar intended use and technological characteristics. As such, it does not contain typical acceptance criteria and a study proving the device meets those criteria in the way one might expect for a de novo device or an AI/ML product with novel performance claims. Instead, the focus is on verification and validation of the software and human factors.


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy targets) and corresponding reported device performance values in the context of clinical accuracy for the AlertWatch:OB's core function of identifying patient issues. This is because its claim is for "secondary monitoring" and not for primary diagnostic capabilities or automated decision-making.
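
    For context on what such a table would contain: quantitative acceptance criteria typically pair a target (e.g., sensitivity >= 0.90) with a value measured on labeled cases. A generic sketch of the computation, using made-up labels that are not data from this submission:

        def sensitivity_specificity(y_true: list[int],
                                    y_pred: list[int]) -> tuple[float, float]:
            """Compute sensitivity (true-positive rate) and specificity (true-negative rate)."""
            tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
            fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
            tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
            fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
            return tp / (tp + fn), tn / (tn + fp)

        # Made-up ground-truth events vs. device alerts for ten monitored intervals.
        truth  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
        alerts = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
        sens, spec = sensitivity_specificity(truth, alerts)
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
        # -> sensitivity=0.75, specificity=0.83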

    However, the document does describe performance activities related to usability, software functionality, and the establishment of "default limits and thresholds."

    Acceptance criteria categories and reported device performance:

    • Software Functionality (V&V): "Verification of AlertWatch:OR was conducted to ensure that the product works as designed, and was tested with both constructed data and data from the EMR. Validation was conducted to check the design and performance of the product." (Implies successful completion against internal specifications.)
    • Wireless Co-existence: "Wireless Co-existence testing was performed to establish that the wireless components work effectively in the hospital environment." (Implies effective operation in the intended environment.)
    • Usability / Human Factors:
      • Formative Study: Conducted to identify and fix usability problems.
      • Summative Study (17 users): "The results of the study showed that users with minimal training were able to successfully perform critical tasks and use the device for its intended purpose – to clarify clinical information and support information access." (Implies successful usability.)
    • Clinical Validity of Default Limits:
      • Phase 1 (References): "AlertWatch sought out definitive published studies that highlighted appropriate limits for certain patient conditions."
      • Phase 2 (Expert Committee): "Obstetricians and OB anesthesia physicians at the University of Michigan Health System...reviewed the limits, provided feedback, and reviewed the final results."
      • Phase 3 (External Experts): "External group of four anesthesiology and OB anesthesia experts...All approved the clinical limits." (Implies clinical consensus and validity of the set limits.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Software Verification & Validation: The document mentions "constructed data and data from the EMR" for testing but does not specify sample sizes for these test sets or their provenance (e.g., country of origin, retrospective/prospective).
    • Human Factors Study: 17 users were recruited for the summative usability study. The provenance of these users (e.g., hospital staff, general population) is not specified, nor is the origin of the data used in the usability testing (likely simulated or de-identified data).
    • Clinical Data (main performance study): "Not applicable. Clinical studies are not necessary to establish the substantial equivalence of this device." This implies no specific clinical test set was used to establish performance metrics like sensitivity/specificity for identifying patient issues for the 510(k) submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    Given the "Not applicable" for clinical data, there isn't a "ground truth" establishment in the traditional sense of a diagnostic or predictive AI device for a clinical test set.

    However, for the establishment of default limits and thresholds:

    • An "Expert Committee" of Obstetricians and OB anesthesia physicians from the University of Michigan Health System were involved. Their specific number is not given, but their roles are.
    • An "External Experts" group of four anesthesiology and OB anesthesia experts provided final review. Their specific qualifications (e.g., years of experience) are not detailed beyond their specialty.

    4. Adjudication Method for the Test Set

    Not applicable, as no formal clinical "test set" with a need for adjudicated ground truth (e.g., for disease presence/absence) was used for direct device performance evaluation in the supplied document. For the "default limits and thresholds," the process involved multiple stages of expert review and approval, implying a consensus-based approach rather than formal adjudication of individual cases.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done or reported in the document. The device is not making claims about improving human reader performance but rather providing secondary monitoring and information synthesis.


    6. Standalone Performance Study (Algorithm Only)

    The document primarily focuses on the software's functionality and its role as a "secondary monitoring system" that synthesizes existing EMR data and alerts clinicians. While "Software Verification and Validation Testing" was conducted to ensure it "works as designed" and "performed a series of calculations," it does not present a standalone performance study in terms of quantifiable clinical metrics (e.g., sensitivity, specificity for detecting specific conditions) for the algorithm itself. The device is intended to be used by clinicians who then refer to primary sources.


    7. Type of Ground Truth Used

    • For Software Verification & Validation: Likely internal functional specifications and expected outputs based on "constructed data and data from the EMR."
    • For Default Limits and Thresholds:
      • Published medical literature/references.
      • Expert consensus among obstetricians, OB anesthesia physicians (University of Michigan Health System), and an external group of four anesthesiology and OB anesthesia experts. This is the closest to a "ground truth" described, but it pertains to the establishment of the parameters the device uses, not the evaluation of the device's output against a diagnostic truth.

    8. Sample Size for the Training Set

    The document does not mention a "training set" in the context of an AI/ML algorithm that learns from data to make predictions or categorizations. The AlertWatch:OB appears to be a rule-based or threshold-based system that processes data according to predefined clinical limits.


    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no mention of a training set or machine learning algorithm that requires a "ground truth" for learning in this 510(k) summary. The "ground truth" for the device's operational parameters (default limits) was established via literature review and expert consensus.

