
510(k) Data Aggregation

    K Number: K190800
    Manufacturer:
    Date Cleared: 2020-02-06 (315 days)
    Product Code:
    Regulation Number: 868.1890

    Reference & Predicate Devices
    Predicate For:
    Reference Devices: K162515, K071533
    Intended Use

    The Q-NRG & Q-NRG+ Portable Metabolic Monitors are indicated for the measurement of Resting Energy Expenditure (REE) for spontaneously breathing and (Q-NRG+ only) ventilated patients, within the following populations:

    • Spontaneously breathing subjects > 15 kg (33 lb) when using a canopy;

    • Spontaneously breathing subjects age > 6 and > 10 kg (22 lb) when using a face mask;

    • Ventilated subjects age > 10 and > 10 kg (22 lb).

    The Q-NRG & Q-NRG+ Portable Metabolic Monitors are intended to be used in professional healthcare facilities only (limited to ICUs for ventilated patients).

    Device Description

    The Q-NRG and Q-NRG+ devices are Portable Metabolic Monitors, designed for the measurement of resting energy expenditure (REE) in both spontaneously breathing and mechanically ventilated patients.
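
    The summary does not state how REE is derived internally. Indirect calorimeters of this class commonly compute it from measured oxygen consumption (VO2) and carbon dioxide production (VCO2) using the abbreviated Weir equation; the Python sketch below is illustrative only, and the function name and example values are ours, not taken from the submission.

        # Illustrative only: the 510(k) summary does not describe the device's
        # internal algorithm. Indirect calorimeters commonly derive REE from gas
        # exchange using the abbreviated Weir equation (REE in kcal/day).

        def ree_weir(vo2_l_min: float, vco2_l_min: float) -> float:
            """REE (kcal/day) from VO2 and VCO2, both in L/min (abbreviated Weir)."""
            kcal_per_min = 3.941 * vo2_l_min + 1.106 * vco2_l_min
            return kcal_per_min * 1440  # minutes per day

        # Hypothetical example: VO2 = 0.25 L/min, VCO2 = 0.20 L/min -> ~1737 kcal/day
        print(round(ree_weir(0.25, 0.20)))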

    AI/ML Overview

    The provided text describes the Cosmed Q-NRG & Q-NRG+ Portable Metabolic Monitors, a medical device for measuring Resting Energy Expenditure (REE). The document highlights the device's technical specifications and comparisons to a predicate device (Quark RMR Metabolic Cart) to establish substantial equivalence.

    However, the provided text does not contain the specific information needed to describe acceptance criteria, or a study proving the device meets those criteria, in the context of an AI/human comparative effectiveness study. The document focuses on regulatory clearance and on device performance and safety against general medical device standards.

    Here's a breakdown of why the information you're looking for is not present:

    • No mention of AI/machine learning: The device is described as a "Portable Metabolic Monitor," suggesting direct measurement or calculation based on sensor data, not an AI or machine learning algorithm requiring human-in-the-loop performance studies.
    • No stated acceptance criteria for an AI algorithm: The "Accuracy Validation" section mentions internal validation protocols for mask, canopy, and ventilator measurement accuracy, with specific percentages for Ventilation, VO2, and VCO2 (e.g., Ventilation: <2% or 100 ml/min). These are likely the device's performance specifications, but not "acceptance criteria" in the context of an AI's performance against a specific clinical task.
    • No details on a "test set" for an AI model: The accuracy validation refers to internal protocols, but not a defined test set for evaluating an AI model.
    • No information on experts for ground truth, adjudication, or MRMC studies: These elements are specific to AI performance validation, which is not described for this device.
    • No mention of training set size or ground truth for training set: Again, these are AI-specific details that are not in the document.

    Based on the provided text, I can only provide the device's stated performance specifications, not acceptance criteria for an AI model or details of an AI performance study.

    Here's the closest information I can extract regarding performance:

    1. A table of acceptance criteria and the reported device performance:

    Metric                 Q-NRG & Q-NRG+ Performance
    Measurement Accuracy
      Ventilation          <2% or 100 ml/min
      VO2                  ±3% or 5 ml/min
      VCO2                 ±3% or 5 ml/min

    (Note: These are performance specifications, not explicitly "acceptance criteria" in the AI validation sense. The document states accuracy validation was done "according to internal validation protocol," implying these are the targets the device met.)
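
    For readers checking measurements against these specifications, a tolerance of the form "±3% or 5 ml/min" is usually read as "within the relative bound or the absolute bound, whichever is greater." The document does not state this explicitly, so the Python sketch below is an assumed interpretation, and the function and variable names are ours.

        # Assumed reading of the accuracy specs above: a measurement passes if its
        # error is within the percentage bound OR the absolute bound, whichever is
        # larger. This interpretation is not confirmed by the 510(k) summary.

        def within_spec(measured: float, reference: float,
                        rel_pct: float, abs_tol: float) -> bool:
            """True if |measured - reference| <= max(rel_pct% of reference, abs_tol)."""
            allowed = max(abs(reference) * rel_pct / 100.0, abs_tol)
            return abs(measured - reference) <= allowed

        # VO2 spec (±3% or 5 ml/min): reference 250 ml/min, measured 256 ml/min.
        # Allowed error = max(7.5, 5.0) = 7.5 ml/min, so a 6 ml/min error passes.
        print(within_spec(256.0, 250.0, rel_pct=3.0, abs_tol=5.0))  # True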

    The following information cannot be found in the provided text:

    • Sample size used for the test set and the data provenance.
    • Number of experts used to establish the ground truth for the test set and their qualifications.
    • Adjudication method for the test set.
    • If a multi-reader multi-case (MRMC) comparative effectiveness study was done, or the effect size of human readers improving with AI vs. without AI assistance.
    • If a standalone (algorithm-only, without human-in-the-loop) performance study was done.
    • The type of ground truth used (expert consensus, pathology, outcomes data, etc.) for an AI model.
    • The sample size for the training set.
    • How the ground truth for the training set was established.