
510(k) Data Aggregation

    • K Number: K230214
    • Device Name: Huma RPM (RPM)
    • Manufacturer:
    • Date Cleared: 2023-06-02 (127 days)
    • Product Code:
    • Regulation Number: 870.2300
    • Predicate For: N/A
    • Reference Devices: K211949
    Intended Use

    The Huma platform is a modular software as a medical device (SaMD) which may utilize compatible devices and software to obtain data collated via a mobile app or web app and delivered to the clinician via a web portal or web app where it may be viewed to drive clinical management. It is intended to be used for the physiological and non-physiological intermittent or spot-check monitoring of all condition patients in professional healthcare facilities, such as clinics, hospitals or skilled nursing facilities, or in the patient's home setting. It is intended for the monitoring of patients by trained healthcare professionals.

    Device Description

    The Huma RPM is a digital remote patient monitoring platform that empowers patients to better manage their own health. The modular solution tracks symptoms and vital signs, flags deterioration, incorporates telemedicine functionality and can be connected to other medical devices.

    Data from the App is surfaced and analyzed in the Portal which has flagging for easy triage and decision making. The Portal enables clinicians to safely monitor patients, spot deterioration and intervene early to improve outcomes and avoid unnecessary clinic, outpatient and hospital attendance.
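The summary does not disclose how the Portal's flagging works, but the general pattern it describes (surface readings, flag out-of-range values for triage) can be sketched as a simple threshold check. The field names and thresholds below are invented for illustration and are not Huma RPM's actual rules.

```python
# Hypothetical sketch: threshold-based flagging of vital-sign readings for
# clinician triage. Thresholds and field names are illustrative only.

# Illustrative normal ranges per vital sign: (low, high)
THRESHOLDS = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (92, 100),
    "systolic_bp_mmhg": (90, 160),
}

def flag_reading(vital: str, value: float) -> bool:
    """Return True when a reading falls outside its illustrative range."""
    low, high = THRESHOLDS[vital]
    return not (low <= value <= high)

def triage(readings: dict) -> list:
    """List the vitals that would be flagged for review in the portal."""
    return [v for v, x in readings.items() if flag_reading(v, x)]

print(triage({"heart_rate_bpm": 118, "spo2_percent": 96}))
# ['heart_rate_bpm']
```

A real platform would layer trend analysis and clinician-configurable limits on top of a static check like this; the sketch only captures the spot-check case the intended use describes.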

    The portal can display information about individual patients and visualize data trends. Clinicians can add notes and collaborate with colleagues to ensure the patient receives optimal care. The platform also offers messaging and telemedicine for patient video consultations, and can be integrated with EHR systems and existing patient portals. The clinician portal additionally supports role-based access control (RBAC), granting data-view rights according to a person's role within the healthcare facility.
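The RBAC behavior described for the clinician portal amounts to mapping roles to permitted actions and checking membership before serving data. A minimal sketch, with roles and permissions invented for illustration (the actual role model is not disclosed in the summary):

```python
# Hypothetical sketch of role-based access control (RBAC) for data-view
# rights in a clinician portal. Role names and permissions are illustrative.

PERMISSIONS = {
    "physician": {"view_vitals", "view_notes", "add_notes", "video_consult"},
    "nurse": {"view_vitals", "view_notes", "add_notes"},
    "admin_staff": {"view_schedule"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role is granted a given portal action."""
    return action in PERMISSIONS.get(role, set())

print(can("nurse", "view_vitals"))      # True
print(can("admin_staff", "add_notes"))  # False
```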

    AI/ML Overview

    The provided document is a 510(k) summary for the Huma RPM device. It largely focuses on establishing substantial equivalence to predicate devices based on intended use and technological characteristics, and describes software development, testing, and compliance with standards. However, it does not contain details about specific acceptance criteria, a comparative effectiveness study (MRMC), standalone performance (algorithm only), or how ground truth was established for a test set or training set.

    The document discusses "Acceptance Testing" as part of their robust software development process, but this refers to internal quality assurance for functional requirements rather than a clinical performance study with predefined metrics. The "Anomalies" section states "No anomalies were discovered during the verification/validation testing," which is a statement about internal software quality, not clinical performance.

    Therefore, many of the requested details cannot be extracted directly from this 510(k) summary. I will answer based on the information available and explicitly state when information is not present in the provided text.


    Acceptance Criteria and Study Proving Device Meets Acceptance Criteria

    Based on the provided 510(k) summary (K230214) for Huma RPM, the document primarily focuses on establishing substantial equivalence to predicate devices through a comparison of intended use and technological characteristics, along with verification and validation of the software development process. It does not present a detailed clinical study with specific quantitative acceptance criteria for device performance (e.g., sensitivity, specificity, accuracy) relative to a ground truth.

    The "Performance Testing" section describes a robust software development process including Unit and Integration Testing, Acceptance Testing (internal quality assurance), Demo Smoke Testing, and Demo Sanity Testing. It also lists compliance with various FDA guidances and international standards related to software, risk management, cybersecurity, usability, and alarm systems. However, these are descriptions of the software development lifecycle and compliance efforts, not a clinical study demonstrating performance against specific clinical acceptance criteria.

    1. Table of Acceptance Criteria and Reported Device Performance

    As a clinical performance study with explicit quantitative acceptance criteria is not described in the provided document, a table of such criteria and performance metrics cannot be generated. The document primarily relies on demonstrating that the device is "substantially equivalent" to established predicate devices based on its intended use and technological features.

    The internal Acceptance Testing mentioned (within "Performance Testing") is described as:

    • "completed by entering test cases, organizing test suites, executing test runs, and tracking their results, all through a robust web interface."
    • "followed a centralized test management concept that helped in easy communication and enabled cross-checking of tasks across the Quality Acceptance Testers."
    • "for all agreed requirements were executed on the Quality Acceptance environment."
    • The "Anomalies" section states: "No anomalies were discovered during the verification/validation testing."

    This indicates system-level functional and non-functional requirements were tested, but no specific performance statistics are provided.

    2. Sample Size Used for the Test Set and Data Provenance

    This information is not provided in the document. The document describes software validation and verification but does not detail a clinical test set with patient data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the document. Since a clinical test set requiring expert ground truth is not described, this information is not applicable from the provided text.

    4. Adjudication Method for the Test Set

    This information is not provided in the document, as a clinical test set requiring adjudication is not described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    This information is not provided in the document. The filing focuses on demonstrating substantial equivalence of the device's function (data display, flagging, analytics) to predicate devices, rather than a comparative effectiveness study of human readers with vs. without AI assistance. The described "Performance Testing" is related to software verification and validation, not clinical efficacy or comparative effectiveness.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    The document states: "The platform may also provide data analytics, risk scores and static algorithms that may assist in the assessment of risk prediction, diagnosis, disease monitoring and prognostication." However, it does not contain details of a standalone performance study for these algorithms (e.g., how accurate the risk predictions are, or diagnostic performance metrics) against a defined ground truth. The overall device is described as "delivered to the clinician via a web portal or web app where it may be viewed to drive clinical management" and "intended for the monitoring of patients by trained healthcare professionals," indicating a human-in-the-loop context.
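A "static algorithm" in the sense used here is a fixed, rule-based computation rather than an adaptive ML model. As a hedged illustration, a static risk score can be built from hard-coded per-parameter bands, loosely in the style of early-warning-score designs; the bands below are invented and are not Huma RPM's actual algorithm.

```python
# Hypothetical sketch of a static, rule-based risk score: fixed sub-score
# bands per vital sign, summed into a total. All bands are illustrative.

def score_heart_rate(hr: float) -> int:
    if hr <= 40 or hr >= 131:
        return 3
    if 111 <= hr <= 130:
        return 2
    if 41 <= hr <= 50 or 91 <= hr <= 110:
        return 1
    return 0

def score_spo2(spo2: float) -> int:
    if spo2 <= 91:
        return 3
    if spo2 <= 93:
        return 2
    if spo2 <= 95:
        return 1
    return 0

def risk_score(hr: float, spo2: float) -> int:
    """Sum fixed per-parameter sub-scores; higher means higher risk."""
    return score_heart_rate(hr) + score_spo2(spo2)

print(risk_score(hr=115, spo2=94))  # 2 + 1 = 3
```

Because the rules are fixed at design time, such an algorithm has no training set or learned parameters, which is consistent with the summary's lack of any training-data discussion (see items 8 and 9 below).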

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    This information is not provided in the document, as a clinical study with an established ground truth is not detailed. For the internal "Acceptance Testing," the "ground truth" would be the predefined functional and non-functional requirements of the software.

    8. The Sample Size for the Training Set

    This information is not provided in the document. While the platform mentions "data analytics, risk scores and static algorithms," there is no mention of a machine learning model that would require a "training set" in the context of this 510(k) summary. The algorithms are referred to as "static algorithms," suggesting pre-defined rules rather than adaptive machine learning models.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided in the document, as a training set for machine learning is not mentioned.
