
510(k) Data Aggregation

    K Number: K161686
    Device Name: F&P InfoSmart
    Date Cleared: 2017-01-24 (221 days)
    Product Code:
    Regulation Number: 868.5905
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    InfoSmart™ is a software application for use with compatible Fisher & Paykel Healthcare OSA Flow generators. It allows for remote collection and management of device usage and therapeutic information. It also allows for remote therapy reporting and adjustment of device settings by a clinician.

    Device Description

    InfoSmart™ is a software reporting tool which provides reports on sleep therapy data including compliance, AHI, leak, and pressure. This software can be used to report on data from compatible Fisher & Paykel Healthcare medical devices. The software enables the Health Service Provider to:

    • Access and review a patient's compliance and efficacy reports.
    • Change device settings.
    • Manage equipment information.
    • Manage patient information.
    • Share the above data with other health service providers and organisations involved in a patient's therapy.

    InfoSmart™ may be provided as an on-premises software application or a web application. Data from a compatible device can be transferred to InfoSmart™ in a number of ways, including a serial cable, a USB stick, or wirelessly through a communications module. Data is transferred to a central database from which it can be accessed and displayed on the health service provider's computer.
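
    To make the reporting concepts above concrete, here is a minimal, purely illustrative sketch of how nightly flow-generator data might be summarized into compliance and AHI figures. Nothing here is from the 510(k) summary: the schema, field names, and the 4-hour compliance threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class NightlySession:
    """One night of flow-generator data (hypothetical schema)."""
    usage_hours: float          # hours of therapy delivered
    apnea_events: int           # apneas detected by the device
    hypopnea_events: int        # hypopneas detected by the device
    avg_leak_lpm: float         # average mask leak, litres per minute
    avg_pressure_cmh2o: float   # average delivered pressure

def summarize(sessions: list[NightlySession], min_hours: float = 4.0) -> dict:
    """Reduce raw nightly sessions to headline report figures.

    AHI is (apneas + hypopneas) per hour of use; treating nights with
    at least `min_hours` of use as compliant is a common convention,
    assumed here only for illustration.
    """
    if not sessions:
        return {"ahi": 0.0, "compliance_pct": 0.0,
                "avg_leak_lpm": 0.0, "avg_pressure_cmh2o": 0.0}
    total_hours = sum(s.usage_hours for s in sessions)
    total_events = sum(s.apnea_events + s.hypopnea_events for s in sessions)
    compliant = sum(1 for s in sessions if s.usage_hours >= min_hours)
    return {
        "ahi": total_events / total_hours if total_hours else 0.0,
        "compliance_pct": 100.0 * compliant / len(sessions),
        "avg_leak_lpm": sum(s.avg_leak_lpm for s in sessions) / len(sessions),
        "avg_pressure_cmh2o": sum(s.avg_pressure_cmh2o for s in sessions) / len(sessions),
    }
```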

    AI/ML Overview

    The provided text does not contain detailed acceptance criteria or a study proving the device meets those criteria with specific performance metrics. The document is a 510(k) summary for the F&P InfoSmart™ software, outlining its substantial equivalence to a predicate device (InfoGSM).

    However, it does mention general categories of performance testing. Based on the available information, here's an attempt to answer your questions:

    1. Table of acceptance criteria and the reported device performance

    The document states: "All tests confirmed that the software met the predetermined acceptance criteria." However, the specific quantitative acceptance criteria and detailed reported performance metrics are not provided. The testing categories are:

    Acceptance Criteria Category | Reported Device Performance (General)
    ---------------------------- | -------------------------------------
    Functionality                | Met predetermined acceptance criteria
    Reporting                    | Met predetermined acceptance criteria
    Device compatibility         | Met predetermined acceptance criteria

    The "Performance testing" section also explicitly states the focus for each:

    • Reporting: "Report testing focuses on report accuracy, ensuring all data processing performed by the software is accurate, and that this information is correctly reflected in therapy reports." (A sketch of such a check appears after this list.)
    • Functionality: "Functional acceptance testing covers the functional requirements of the product, ensuring all functions and features perform according to specification."
    • Device compatibility testing: "ensures all supported devices function correctly with the software and that data is uploaded from the device. Testing also ensures that device settings can be changed by the software and that these changes are accurately reflected within the device."
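
    As referenced in the Reporting item above, a report-accuracy check of this kind typically compares software output against hand-calculated expected values. A minimal sketch, reusing the hypothetical NightlySession and summarize helpers from the Device Description section (both assumptions, not part of the actual test protocol):

```python
def test_report_accuracy():
    """Compare computed report figures against hand-calculated values,
    mirroring the 'report accuracy' focus quoted above."""
    sessions = [
        NightlySession(usage_hours=6.0, apnea_events=12, hypopnea_events=6,
                       avg_leak_lpm=20.0, avg_pressure_cmh2o=9.0),
        NightlySession(usage_hours=3.0, apnea_events=9, hypopnea_events=0,
                       avg_leak_lpm=30.0, avg_pressure_cmh2o=10.0),
    ]
    report = summarize(sessions)
    # 27 events over 9 hours of use -> AHI of 3.0.
    assert abs(report["ahi"] - 3.0) < 1e-9
    # Only the first night reaches the 4-hour threshold -> 50% compliance.
    assert abs(report["compliance_pct"] - 50.0) < 1e-9
```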

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify the sample size for any test sets, nor does it provide information on the data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. The device is a software application for managing and reporting therapy data and device settings, not an AI or diagnostic device that typically requires expert-established ground truth for its performance assessment in a clinical context.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI assistance versus without

    No such study was conducted or reported. This device is a data management and reporting tool, not an AI for diagnostic image interpretation or similar tasks that would typically involve an MRMC study.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The "Performance testing" section describes non-clinical testing of functionality, reporting, and device compatibility. This testing assesses the algorithm's performance in terms of data processing, report accuracy, and interaction with compatible devices, which could be considered standalone performance in the context of this type of software. However, no specific "standalone" study is explicitly named or detailed beyond these general testing categories.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Given the nature of the device (software for collecting, managing, reporting usage/therapeutic information, and adjusting device settings), the "ground truth" would likely involve:

    • Expected data values: Comparing processed data to source data from the OSA flow generators to ensure accuracy.
    • Expected functionality: Verifying that software features work as designed according to specifications.
    • Expected device behavior: Ensuring that settings changes communicated by the software are correctly applied by the compatible devices (a round-trip sketch follows below).

    The document states "report accuracy" and "all data processing performed by the software is accurate," implying a comparison to known or calculated correct values.
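
    As a purely illustrative companion to the "expected device behavior" point, a compatibility check might round-trip a settings change and confirm the device reflects it. The device interface below is a hypothetical stand-in, not Fisher & Paykel's actual API:

```python
class FakeFlowGenerator:
    """Hypothetical stand-in for a compatible OSA flow generator."""
    def __init__(self):
        self._settings = {"pressure_cmh2o": 8.0}

    def apply_settings(self, settings: dict) -> None:
        # In a real system this would travel over serial, USB, or the
        # wireless communications module described above.
        self._settings.update(settings)

    def read_settings(self) -> dict:
        return dict(self._settings)

def test_settings_round_trip():
    """Push a settings change and confirm it is 'accurately reflected
    within the device', per the compatibility criterion quoted above."""
    device = FakeFlowGenerator()
    device.apply_settings({"pressure_cmh2o": 10.5})
    assert device.read_settings()["pressure_cmh2o"] == 10.5
```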

    8. The sample size for the training set

    The document does not mention a "training set." This type of software, while complex, is not typically described as using machine learning models that require training sets in the same way an AI diagnostic algorithm would. Its development would involve traditional software engineering and testing.

    9. How the ground truth for the training set was established

    As no training set is mentioned in the context of a machine learning model, this question is not applicable based on the provided text.
