
510(k) Data Aggregation

    K Number: K161767
    Date Cleared: 2017-01-27 (214 days)
    Regulation Number: 870.2450
    Reference Devices: K151736
    Intended Use

    The IntelliVue Guardian Software is indicated for use by healthcare providers whenever there is a need for generation of a patient record.

    The IntelliVue Guardian Software is intended for use in the collection, storage, and management of data from Philips-specified measurements and Philips Patient Monitors that are connected through networks.

    Device Description

    The IntelliVue Guardian Software (866009) is a Clinical Information Management System. It collects and manages vital signs data acquired from the IntelliVue Cableless Measurements and IntelliVue Patient Monitors. The IntelliVue Guardian Software provides review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics: trending, notification, calculations, and clinical advisories, including EWS (Early Warning Score) deterioration status. The IntelliVue Guardian Software is a software-only product intended to be installed on customer-supplied, compatible off-the-shelf information technology equipment that meets the technical requirements specified by Philips.
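
    The summary does not explain how the EWS deterioration status is computed. To make the concept concrete, the sketch below shows the general shape of an early-warning-score calculation: each vital sign is mapped to a sub-score through threshold bands, and the sub-scores are summed against an advisory cutoff. The bands and cutoff are simplified, NEWS-style values invented for illustration; the actual Philips scoring and notification logic is not disclosed in the summary.

```python
# Illustrative early-warning-score (EWS) calculation over spot-check vitals.
# The threshold bands and advisory cutoff are simplified, NEWS-style values
# invented for this sketch; they are NOT the IntelliVue Guardian algorithm.

from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: float   # beats per minute
    resp_rate: float    # breaths per minute
    spo2: float         # peripheral oxygen saturation, %
    systolic_bp: float  # mmHg

def band_score(value: float, bands: list[tuple[float, int]]) -> int:
    """Return the sub-score of the first band whose upper bound exceeds value."""
    for upper, score in bands:
        if value < upper:
            return score
    return bands[-1][1]

def early_warning_score(v: Vitals) -> int:
    """Sum per-parameter sub-scores; higher totals indicate deterioration risk."""
    inf = float("inf")
    total = 0
    total += band_score(v.heart_rate, [(40, 3), (51, 1), (91, 0), (111, 1), (131, 2), (inf, 3)])
    total += band_score(v.resp_rate, [(9, 3), (12, 1), (21, 0), (25, 2), (inf, 3)])
    total += band_score(v.spo2, [(92, 3), (94, 2), (96, 1), (inf, 0)])
    total += band_score(v.systolic_bp, [(91, 3), (101, 2), (111, 1), (220, 0), (inf, 3)])
    return total

if __name__ == "__main__":
    ews = early_warning_score(Vitals(heart_rate=118, resp_rate=24, spo2=93, systolic_bp=98))
    # A site-configured threshold would drive the clinical advisory/notification.
    status = "deterioration advisory" if ews >= 5 else "no advisory"
    print(f"EWS={ews}: {status}")  # EWS=8: deterioration advisory
```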

    AI/ML Overview

    The provided text is a 510(k) summary for the Philips IntelliVue Guardian Software, Revision C.1. The document focuses on demonstrating substantial equivalence to a predicate device; it does not report acceptance criteria or device performance in the detailed way a clinical trial or algorithm validation study would.

    The document states that the modified device has the same technological characteristics as the legally marketed predicate device and that "all test results showed substantial equivalence." It also mentions that "Testing involved software functional testing and regression testing on an integration and system level as well as testing from the hazard analysis," and "Testing as required by the hazard analysis was conducted and all specified pass/fail criteria have been met."

    However, it does not provide quantitative performance metrics (e.g., sensitivity, specificity, AUC) or the methodology of a study that would typically be described with acceptance criteria and a detailed analysis of human-machine interaction or standalone AI performance. The device is a "Clinical Information Management System" that "collects and manages vital signs data" and provides "review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing and predictive trend analytics." This type of device's "performance" is often assessed through software verification and validation, ensuring it accurately processes and displays data, rather than through a diagnostic accuracy study.
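
    As a concrete illustration of that verification-and-validation style of assessment, here is a minimal, hypothetical sketch of a functional round-trip test. The VitalsStore class and its methods are invented stand-ins for the system under test; the actual Philips test protocols are not described in the summary.

```python
# Sketch of a functional verification test of the kind the summary alludes
# to: ingested vitals must be stored and retrieved without alteration.
# `VitalsStore` is a hypothetical stand-in for the system under test, not a
# real Philips interface; run with pytest.

import datetime as dt

class VitalsStore:
    """Toy in-memory store standing in for the clinical information system."""

    def __init__(self) -> None:
        self._records: dict[str, list] = {}

    def ingest(self, patient_id: str, timestamp: dt.datetime, measurements: dict) -> None:
        # Copy the dict so later caller-side mutation cannot corrupt the record.
        self._records.setdefault(patient_id, []).append((timestamp, dict(measurements)))

    def query(self, patient_id: str) -> list:
        return list(self._records.get(patient_id, []))

def test_collection_storage_roundtrip():
    """Pass/fail criterion: retrieved data exactly matches ingested data."""
    store = VitalsStore()
    ts = dt.datetime(2017, 1, 27, 8, 30)
    sent = {"HR": 72, "SpO2": 98, "NBP_sys": 120}
    store.ingest("patient-001", ts, sent)
    assert store.query("patient-001") == [(ts, sent)]

def test_query_unknown_patient_returns_empty():
    """Pass/fail criterion: querying an unknown patient yields no data, no error."""
    assert VitalsStore().query("no-such-patient") == []
```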

    Therefore, many of the requested details about acceptance criteria, study sample sizes, expert ground truth establishment, MRMC studies, and standalone AI performance cannot be extracted from this document, as it describes a software system for data management and display, not an AI/ML diagnostic or predictive algorithm.

    Based on the provided text, here is what can be inferred or stated:

    1. A table of acceptance criteria and the reported device performance:

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Functional and regression testing pass/fail criteria: accuracy of data collection, storage, and management; correct operation of review, reporting, clinical documentation, remote viewing, operating, interfacing, storage, printing, and predictive trend analytics (trending, notification, calculations, clinical advisories, EWS deterioration status). | "All test results showed substantial equivalence"; the device "meets all safety and reliability requirements and performance claims." |
    | Hazard analysis testing pass/fail criteria: effectiveness of design risk mitigation measures. | "All specified pass/fail criteria have been met"; testing "confirmed the effectiveness of the implemented design risk mitigation measures." |
    | IEC 62304:2006 (software life cycle processes) compliance. | Verification according to this standard was conducted. |
    | Safety and reliability requirements. | The device "meets all safety and reliability requirements and performance claims." |

    2. Sample size used for the test set and the data provenance:

    • The document does not specify a sample size for a test set in the context of clinical performance data (e.g., patient cases).
    • The testing described is primarily software functional, regression, and hazard analysis testing, not a clinical study on patient data.
    • Data provenance (country of origin, retrospective/prospective) is not mentioned as the study described is software verification and validation, not a clinical data study.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • This information is not applicable/provided in the context of a software verification and validation study. Ground truth in this context would be adherence to functional specifications and safety requirements, typically evaluated by software testers and quality engineers, not clinical experts for diagnostic accuracy.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • This information is not applicable/provided as there is no clinical test set requiring expert adjudication mentioned.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • An MRMC study was not performed or mentioned. The device is a "Clinical Information Management System" that supports data management and provides "predictive trend analytics"; it is not a diagnostic AI whose effect on human reader performance would be evaluated in an MRMC study.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Standalone performance in the diagnostic sense (e.g., algorithm sensitivity/specificity) was not performed or described. The device's "performance" lies in its ability to correctly collect, store, manage, and display data and to provide trend analysis, not in generating independent diagnostic interpretations. The sketch below shows the kind of metric such a study would report.
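
    Purely for contrast, this is how such standalone metrics are computed when a diagnostic-accuracy study is performed. The labels and predictions are fabricated toy values; nothing like this appears in the 510(k) summary.

```python
# Toy illustration of the standalone diagnostic metrics the summary does NOT
# report. All labels and predictions below are fabricated for illustration.

def sensitivity_specificity(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    truth = [1, 1, 1, 0, 0, 0, 0, 1]  # 1 = deterioration event actually occurred
    flags = [1, 0, 1, 0, 0, 1, 0, 1]  # 1 = hypothetical algorithm raised a flag
    sens, spec = sensitivity_specificity(truth, flags)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.75
```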

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the software verification and validation, the "ground truth" would be the device's functional specifications, design requirements, and hazard analysis outcomes. There is no mention of clinical ground truth (e.g., pathology, outcomes) in this summary, as it's not a diagnostic AI.

    8. The sample size for the training set:

    • This information is not applicable/provided. The document describes a traditional software system, not a machine learning model that requires a training set.

    9. How the ground truth for the training set was established:

    • This information is not applicable/provided.