
510(k) Data Aggregation

    K Number
    K120320
    Date Cleared
    2012-08-14

    (194 days)

    Product Code
    Regulation Number
    870.1130
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Clinical Application's intended use is to retrospectively receive, display and store monitored vital signs parameters and related data. Additionally, it can send configuration information to Watermark home monitoring devices. Watermark devices include the Connected Care Mobile Application and MiPal. The configuration information may include a patient's vitals collection schedule and parameters to be collected. The Clinical Application displays the data and system alerts for review and interpretation by a healthcare professional. The Clinical Application is not intended for emergency use or real-time monitoring.

    Device Description

    The Connected Care Clinical Application is a cloud-based web software system. It is accessed from commercially available PC systems with a web browser and minimum performance specifications consistent with typical PC hardware. The Clinical Application accepts data from Watermark Patient Monitors.

    The Connected Care Clinical Application is a medical device data system that receives, stores, and displays data received from Watermark home monitoring devices. Additionally, it can send configuration information to Watermark home monitoring devices. Watermark devices include the Mobile Application and MiPal. The configuration information may include a patient's vitals collection schedule and parameters to be collected.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Watermark Medical Connected Care Clinical Application (K120320):

    The provided document describes a Medical Device Data System (MDDS). For such systems, the "acceptance criteria" are not typically framed in terms of clinical performance metrics like sensitivity, specificity, or accuracy compared to a ground truth label. Instead, the acceptance criteria revolve around software validation and functional requirements. The "study" that proves the device meets these criteria is the software validation process itself.

    Based on the provided text, here's the information categorized:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (implied from text) and Reported Device Performance:

    Functional Requirements:
    • Receives vital signs parameters and related data
    • Stores vital signs parameters and related data
    • Displays vital signs parameters and related data
    • Sends configuration information to Watermark home monitoring devices (Mobile Application and MiPal)
    • Configuration information includes the patient's vitals collection schedule and the parameters to be collected
    • Displays data and system alerts for review and interpretation by a healthcare professional
    Reported performance: The software validation results demonstrated that the Clinical Application performed within its specifications and functional requirements for software.

    Compliance with Guidelines and Standards:
    • Adherence to FDA reviewer's guides for medical device software
    Reported performance: The software validation results demonstrated that the Clinical Application was in compliance with the guidelines and standards referenced in the FDA reviewer's guides.

    Intended Use:
    • For retrospective review only
    • Not for emergency use
    • Not for real-time monitoring
    Reported performance: The device's performance aligned with its stated intended use: retrospectively receiving, displaying, and storing monitored vital signs and related data for review and interpretation by a healthcare professional, and sending configuration information.

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not explicitly stated. The document refers to "software validation results," which implies a series of tests, but not a specific sample size of medical cases or data points.
    • Data Provenance: Not explicitly stated. Given the device's function (receiving data from Watermark home monitoring devices), the data would originate from these devices. The document does not specify country of origin or whether the data used for validation was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not applicable in the traditional sense for this type of device (MDDS). The "ground truth" for an MDDS primarily relates to whether the software correctly receives, stores, displays, and transmits data as per its specifications, not whether it correctly labels or diagnoses a medical condition. The validation would involve comparing the displayed data against the received data, and the transmitted configuration against the entered configuration. This typically involves software testers or quality assurance personnel verifying data integrity and functionality.

    4. Adjudication method for the test set

    • Not applicable in the traditional sense. Since the validation is software-centric (data integrity and functionality), adjudication by medical experts for discrepant interpretations wouldn't be relevant. Software testing typically involves predefined test cases with expected outcomes. Any discrepancies would be bugs to be fixed and re-tested, not adjudicated.
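    As an illustration only (not drawn from the submission), a deterministic software test with a predefined expected outcome might look like the following sketch. All names here (VitalsRecord, parse_vitals_message) are hypothetical, not part of the actual Watermark system:

```python
# Hypothetical sketch: a predefined test case with an expected outcome,
# the kind of deterministic check used in software validation of an MDDS.
from dataclasses import dataclass


@dataclass
class VitalsRecord:
    """Illustrative record for one monitored vital-sign value."""
    patient_id: str
    parameter: str
    value: float
    unit: str


def parse_vitals_message(message: str) -> VitalsRecord:
    """Parse a simple 'patient|parameter|value|unit' message (assumed format)."""
    patient_id, parameter, value, unit = message.split("|")
    return VitalsRecord(patient_id, parameter, float(value), unit)


def test_parse_vitals_message() -> None:
    # Predefined input and expected outcome; any mismatch is a bug to be
    # fixed and re-tested, not a discrepancy to be adjudicated.
    record = parse_vitals_message("P001|SpO2|97|%")
    assert record == VitalsRecord("P001", "SpO2", 97.0, "%")


test_parse_vitals_message()
```

    The point is the pass/fail structure: the expected outcome is fixed in advance by the specification, so there is nothing for an expert panel to adjudicate.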

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not done. This type of study is relevant for devices that assist in diagnosis or interpretation (e.g., AI for radiology). The Connected Care Clinical Application is an MDDS that primarily handles data management and display; it does not involve AI for interpretation or diagnosis. Therefore, there is no effect size related to human reader improvement with or without AI.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, in essence, the software validation for functional correctness is a standalone evaluation. The device itself is "software only" in its function, receiving and displaying data. Its performance is judged on whether it correctly executes its specified functions (receiving, storing, displaying, transmitting data) independent of human interpretation of clinical outcomes. The "algorithm" here refers to the software's logic for handling data.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The "ground truth" for this MDDS would be the expected output or behavior of the software, based on its design specifications. This means:
      • Data Integrity: The data received matches the data sent from the monitoring devices.
      • Data Storage: The stored data accurately reflects the received data.
      • Data Display: The displayed data accurately reflects the stored data according to display specifications.
      • Configuration Transmission: The configuration sent to the devices matches the configuration entered into the system.
    • This ground truth is established by software requirements specifications and design documents, against which the validated system's performance is measured.
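    The four integrity checks above can be sketched as simple round-trip assertions. This is a minimal illustration under assumed interfaces; the store, display, and transmit_config functions are stand-ins, not the real system's API:

```python
# Hypothetical sketch of the four data-integrity checks described above.
# The store/display/transmit_config functions are illustrative stand-ins.

def store(received: dict) -> dict:
    """Stand-in persistence layer: stored data must equal received data."""
    return dict(received)


def display(stored: dict) -> dict:
    """Stand-in display layer: rendered values must reflect stored data."""
    return {k: v for k, v in stored.items()}


def transmit_config(entered: dict) -> dict:
    """Stand-in config transmission: sent config must equal entered config."""
    return dict(entered)


received = {"HR": 72, "SpO2": 97}  # as sent by the monitoring device
entered_config = {"schedule": "daily", "parameters": ["HR", "SpO2"]}

stored = store(received)
assert stored == received                                  # storage integrity
assert display(stored) == stored                           # display integrity
assert transmit_config(entered_config) == entered_config   # config round-trip
```

    In each check, the "ground truth" is simply the input itself: the specification requires that data and configuration survive each hop unchanged.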

    8. The sample size for the training set

    • Not applicable. This device is an MDDS and does not employ machine learning or AI models that require a training set. Its functionality is based on deterministic software logic, not on learning from data.

    9. How the ground truth for the training set was established

    • Not applicable, as there is no training set for this device.