
510(k) Data Aggregation

    K Number: K151366
    Date Cleared: 2015-10-30 (162 days)
    Product Code:
    Regulation Number: 870.2450
    Reference & Predicate Devices:
    Device Name: Philips CS770 IntelliSpace Critical Care and Anesthesia

    Intended Use

    Intended for use in data collection, storage, and management with independent bedside devices and ancillary systems that are connected either directly or through networks. The device is indicated for use whenever there is a need for generation of a patient record and computation of drug dosage in all areas where patient care is given in the hospital, including critical care and anesthesia areas.
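    The indicated use includes computation of drug dosage. As a purely illustrative sketch of that kind of function (the function name, the example drug parameters, and the cap logic are assumptions for illustration, not taken from the submission), a weight-based dose calculation with a maximum-dose ceiling might look like:

    ```python
    def weight_based_dose_mg(weight_kg: float, dose_mg_per_kg: float,
                             max_dose_mg: float) -> float:
        """Compute a weight-based dose, capped at a maximum single dose.

        All parameters here are hypothetical; a real clinical information
        system would draw them from a validated hospital formulary.
        """
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return min(weight_kg * dose_mg_per_kg, max_dose_mg)

    # e.g. a 70 kg patient at 15 mg/kg, capped at 1000 mg per dose
    print(weight_based_dose_mg(70, 15, 1000))  # 1000 (70 * 15 = 1050 exceeds the cap)
    ```

    The cap illustrates why such a system would need verification testing against fixed specifications: the correct output is fully determined by the configured limits, not by clinical judgment at runtime.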

    Device Description

    The CS770 IntelliSpace Critical Care and Anesthesia software (Release H.0) is a modified version of the IntelliVue Clinical Information Portfolio, which was last cleared under premarket notification number K100272. ICCA is a software-only product used for charting and data management; it offers clinical decision and clinical workflow support for critical care environments, intra-operative anesthesia, and the anesthesia-critical care continuum. Integrating information from patient vital sign monitors, ancillary bedside devices, hospital systems such as CPOE and laboratory, and clinical documentation, ICCA uses advisories and evidence-based medicine bundles to provide information to clinicians. In addition, ICCA provides a powerful Data Analysis and Reporting (DAR) database and reporting toolset for the critical care and anesthesia environments.

    AI/ML Overview

    This document describes a 510(k) premarket notification for the Philips CS770 IntelliSpace Critical Care and Anesthesia, Release H.0. The submission primarily focuses on establishing substantial equivalence to a previously cleared device (IntelliVue Clinical Information Portfolio, Release E.0, K100272) rather than presenting a performance study against specific acceptance criteria for a novel functionality.

    Therefore, the requested details regarding acceptance criteria, device performance, study types (MRMC, standalone), sample sizes, expert qualifications, and ground truth establishment are not explicitly provided or applicable in the traditional sense of a clinical performance study for this type of 510(k) submission.

    This 510(k) is for a software modification and upgrade, with the primary "study" being the verification, validation, and testing activities to ensure the modified device still meets the safety and performance of its predicate.

    Here's how to address the requested information based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Since this submission is about demonstrating substantial equivalence and not a new diagnostic algorithm with specific performance metrics (e.g., sensitivity, specificity), there isn't a direct table of clinical acceptance criteria and reported device performance in that sense.

    However, the "acceptance criteria" here relate to meeting the specifications of the predicate device and the successful execution of non-clinical testing.

    Acceptance Criteria (Implied) and Reported Device Performance (summary from text):

    • Device functions as intended: "Meets all reliability requirements and performance claims"
    • Maintains the predicate's safety and effectiveness: "Test results showed substantial equivalence"
    • No new safety concerns introduced by the modifications: "Substantial equivalence" implies no new unacceptable risks
    • Compatibility with updated operating systems: Verified for compatibility with Windows Server 2008 R2, 2012 R2, and SQL Server 2008 R2/2012
    • Proper implementation of new features (Medical Reference, Patient Data Security, Formulary Upload, UI Streamlining, Discharge Management Report, Medication Order co-signing): Verified and validated through testing activities

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify a "test set" in the context of clinical data for performance evaluation. The testing involved "system level tests, performance tests, and performance testing from hazard analysis." These are likely internal software verification and validation tests rather than clinical trials with patient data.

    • Sample Size: Not specified (likely refers to test cases designed for software testing rather than patient cases).
    • Data Provenance: Not applicable, as it's not a clinical data study. It describes internal software testing.
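    Since the "study" here consists of system-level software verification tests rather than clinical data, a minimal sketch of what such a test might look like may help. Everything below is an assumption for illustration: the helper function, the alarm-limit behavior, and the expected values are not taken from the actual ICCA code or the submission.

    ```python
    def chart_vital_sign(value: float, low: float, high: float) -> str:
        """Hypothetical charting helper: flag an incoming vital-sign value
        against configured alarm limits (illustrative only, not ICCA code)."""
        if value < low:
            return "LOW"
        if value > high:
            return "HIGH"
        return "NORMAL"

    # Verification-style checks: observed behavior must match the
    # specification carried over from the predicate device.
    assert chart_vital_sign(55, 60, 100) == "LOW"
    assert chart_vital_sign(80, 60, 100) == "NORMAL"
    assert chart_vital_sign(120, 60, 100) == "HIGH"
    ```

    In this style of testing, "ground truth" is simply the expected output defined by the specification, which is why the clinical-study concepts asked about in the following questions do not apply.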

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable. "Ground truth" in this context would implicitly be the established functional and performance specifications of the predicate device, against which the modified software's behavior is compared during testing. "Experts" might be involved in defining these specifications or executing tests, but they are not described as adjudicating clinical ground truth for a test set.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. This type of adjudication method is used in studies where multiple human readers or algorithms interpret medical data to establish a consensus or ground truth label. The provided documentation does not describe such a study.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?

    No. An MRMC comparative effectiveness study was not done. The device is a clinical information management system, not an AI-assisted diagnostic tool.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    Not applicable. The device is a software system for data management and clinical workflow support, not a standalone algorithm performing a diagnostic task. Its "performance" is measured by its ability to correctly process and display information, and support clinical decisions, within a human-in-the-loop context.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. For this software modification, the "ground truth" for non-clinical testing would be the expected output and behavior based on the established specifications derived from the predicate device's functionality and regulatory requirements.

    8. The sample size for the training set

    Not applicable. This device is not described as using machine learning or AI models that require a "training set." It is a traditional software system.

    9. How the ground truth for the training set was established

    Not applicable, as there is no training set for an AI/ML model mentioned.
