Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K103094
    Date Cleared
    2011-05-17

    (210 days)

    Product Code
    Regulation Number
    870.1435
    Reference & Predicate Devices
    Reference Devices:

    K082308

    Intended Use

    The Vigileo APCO/Oximetry Monitor is indicated for continuously measuring hemodynamic parameters such as cardiac output and oximetry to assess oxygen delivery and consumption. When connected to an Edwards oximetry catheter, the monitor measures oximetry in adults and pediatrics. The monitor also displays parameters, such as stroke volume and stroke volume variation, used to assess fluid status and vascular resistance. The Vigileo APCO/Oximetry Monitor may be used in all settings in which critical care is provided.

    Device Description

    The Vigileo Arterial Pressure Cardiac Output (APCO)/Oximetry Monitor (Vigileo Monitor) is a microprocessor-based instrument. When used with the FloTrac sensor, the Vigileo Monitor continuously measures key parameters of arterial pressure cardiac output (CO), cardiac index (CI), oxygen delivery (DO2), oxygen delivery index (DO2I), stroke volume (SV), stroke volume variation (SVV), stroke volume index (SVI), systemic vascular resistance (SVR) and systemic vascular resistance index (SVRI). When used with Edwards oximetry catheters, the Vigileo Monitor measures central venous oxygen saturation (ScvO2) and mixed venous oxygen saturation (SvO2). The instrument software has been revised to enhance the SVV algorithm, improve the GUI and add compatibility with additional external devices for data output.
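
    The 510(k) summary does not disclose the proprietary APCO/FloTrac algorithm, but the parameters it lists have standard textbook definitions. The sketch below illustrates those clinical formulas only; the function names, units, and sampling assumptions are illustrative and are not Edwards' implementation.

```python
# Illustrative textbook formulas for the hemodynamic parameters listed above.
# These are standard clinical definitions, NOT the proprietary Vigileo/FloTrac
# algorithm, whose internals are not disclosed in the 510(k) summary.

def stroke_volume_variation(stroke_volumes):
    """SVV (%) over a respiratory cycle: (SVmax - SVmin) / SVmean * 100."""
    sv_max, sv_min = max(stroke_volumes), min(stroke_volumes)
    sv_mean = sum(stroke_volumes) / len(stroke_volumes)
    return (sv_max - sv_min) / sv_mean * 100.0

def systemic_vascular_resistance(map_mmhg, cvp_mmhg, co_l_min):
    """SVR (dyn*s/cm^5) = 80 * (MAP - CVP) / CO, with CO in L/min."""
    return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

def oxygen_delivery(co_l_min, hb_g_dl, sao2_frac, pao2_mmhg=0.0):
    """DO2 (mL O2/min) = CO * CaO2 * 10, CaO2 = 1.34*Hb*SaO2 + 0.003*PaO2."""
    cao2 = 1.34 * hb_g_dl * sao2_frac + 0.003 * pao2_mmhg  # mL O2 per dL blood
    return co_l_min * cao2 * 10.0

# Example: stroke volumes of 60-70 mL across a breath; MAP 90, CVP 8, CO 5 L/min
print(round(stroke_volume_variation([60, 65, 70, 65]), 1))  # 15.4
print(systemic_vascular_resistance(90, 8, 5.0))             # 1312.0
```

    An SVV above roughly 10-13% in mechanically ventilated patients is commonly cited as predicting fluid responsiveness, which is why the submission's "enhanced SVV algorithm" is clinically relevant.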

    AI/ML Overview

    The provided text describes the Vigileo Arterial Pressure Cardiac Output (APCO)/Oximetry Monitor. This submission is a 510(k) for a revised version of an existing device, focusing on software enhancements rather than a new clinical application. As such, the study described is a comparison to a predicate device, aiming to demonstrate substantial equivalence, rather than a clinical trial establishing effectiveness against acceptance criteria in the way a novel device might.

    Here's an analysis of the available information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state formal "acceptance criteria" in terms of specific performance metrics (e.g., accuracy, sensitivity, specificity, or error bounds for CO/SVV measurements) that the device must meet against a predefined gold standard. Instead, the focus is on substantial equivalence to a predicate device (K082308, an earlier version of the Vigileo Arterial Pressure Cardiac Output/Oximetry Monitor).

    The reported device performance is described in a qualitative manner:

    Performance Aspect | Reported Performance
    Comparative Analysis (Clinical Data) | "Verification and validation testing was conducted to compare the performance and functionality of the pending and the predicate devices. This testing regimen included side-by-side bench and pre-clinical studies, and comparative analysis of clinical data. The Vigileo Monitor has been shown to be safe and effective and substantially equivalent to the cited predicate device for its intended use in the OR and ICU environments."
    Functional/Safety Testing (Software, Mechanical, Electrical, Bench, Pre-clinical) | "The Vigileo Monitor has successfully undergone functional and performance testing, including software verification and validation, mechanical and electrical testing, bench studies, pre-clinical animal studies, comparison testing of clinical cases, and clinical usability. The Vigileo Monitor has been shown to be safe and effective and substantially equivalent to the cited predicate devices for their intended use in the OR and ICU environments."

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions "comparison testing of clinical cases" and "comparative analysis of clinical data" but does not provide specific numbers for the sample size (e.g., number of patients, number of data points) used in these clinical comparisons.

    The data provenance (country of origin, retrospective/prospective) is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    This information is not provided. Given that the study is a comparison to a predicate device for substantial equivalence with minor software revisions, it's unlikely that independent experts were used to establish a new "ground truth" for the test set in the same way one might for a diagnostic imaging AI with unknown pathology. The "ground truth" for comparison would likely be the measurements obtained from the predicate device itself, or potentially highly accurate invasive measurements (e.g., thermodilution cardiac output) if those were used to validate both the predicate and the revised device in parallel. The document does not specify this.

    4. Adjudication Method for the Test Set:

    This information is not provided. As above, for a device modification showing substantial equivalence to a predicate, a complex adjudication process by multiple human annotators is less likely to be applicable.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    No, an MRMC comparative effectiveness study was not done. The device itself is a monitor for physiological parameters, not a diagnostic imaging aid that human readers interpret. Therefore, the concept of "human readers improving with AI vs. without AI assistance" does not apply to this device. The "AI" here refers to algorithms for calculating hemodynamic parameters, not an assistance tool for human interpretation.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    Yes, the primary evaluation described is a standalone performance assessment. The Vigileo Monitor, with its enhanced SVV algorithm, directly generates numerical outputs for various hemodynamic parameters. The testing involved comparing these outputs from the revised device against those from the predicate device (and potentially against a gold standard method if available, though not explicitly stated as such for this submission). There is no "human-in-the-loop" aspect to the output generation from the device itself.

    7. The Type of Ground Truth Used:

    The document implies that the ground truth for comparison was the measurements obtained from the predicate device (K082308). It states "comparative analysis of clinical data" between the "pending and the predicate devices." While it's possible that a more invasive, established gold standard (like thermodilution for cardiac output) was also used in the original validation of the predicate device, it's not explicitly stated as the ground truth for this particular submission's comparison. The software enhancements were validated against the behavior of the previous software version.

    8. The Sample Size for the Training Set:

    This information is not provided. The document makes no mention of a "training set" in the machine-learning sense. The "SVV algorithm enhancement" is more likely a traditional algorithmic improvement than a deep learning model trained on a large dataset, so the concept of a training set as understood in current AI contexts is unlikely to apply directly here.

    9. How the Ground Truth for the Training Set was Established:

    As per point 8, the concept of a "training set" for a machine learning model is not explicitly mentioned or implied. If the SVV algorithm was "enhanced," it would likely have been refined based on established physiological principles and potentially validated against existing physiological data or expert consensus on wave-form analysis, rather than through a machine learning training process with a distinct ground truth dataset. The document doesn’t provide details on the specific method of algorithm enhancement or its associated ground truth establishment.


    K Number
    K100709
    Date Cleared
    2010-12-07

    (270 days)

    Product Code
    Regulation Number
    870.1435
    Reference & Predicate Devices
    Reference Devices:

    K082308, K072735

    Intended Use

    The EV1000 Platform is indicated for use primarily for critical care patients in which the balance between cardiac function, fluid status and vascular resistance needs constant or intermittent assessment. The EV1000 Platform may be used in all settings in which critical care is provided.

    Device Description

    The EV1000 Platform consists of two components: a databox and a monitor. The databox is where all incoming signals are processed. It contains all the algorithms for parameter calculation. It is mounted to the patient bedside or to an IV pole. The databox has mounts so that pressure transducers, FloTrac sensors, and the VolumeView System can be attached to it. The databox has no graphical user interface. It connects, via an Ethernet cable, to the EV1000 monitor.

    The EV1000 monitor is a panel PC with a touchscreen interface. It is connected to the databox by an Ethernet cable. It is intended to be mounted to an IV pole. The monitor does not process any data; its sole function is to act as a user and communication interface.

    The EV1000 Platform, when connected to the VolumeView system, intermittently measures or calculates intermittent cardiac output, intermittent cardiac index, intermittent stroke volume, intermittent stroke volume index, systemic vascular resistance, and systemic vascular resistance index.

    When connected to a FloTrac sensor, the platform continuously measures or calculates arterial pressure cardiac output, stroke volume, stroke volume index, stroke volume variation, systemic vascular resistance, and systemic vascular resistance index.
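
    Several of the parameter pairs above (cardiac output/index, stroke volume/index, SVR/SVRI) differ only by normalization to body surface area. The summary does not describe the EV1000's internal calculations; the sketch below shows only the standard textbook relationships, with illustrative function names.

```python
# Illustrative BSA-indexed versions of the continuous parameters listed above.
# Standard clinical definitions, not the EV1000 databox's actual implementation.

def bsa_dubois(height_cm, weight_kg):
    """Du Bois body surface area (m^2) = 0.007184 * H^0.725 * W^0.425."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def cardiac_index(co_l_min, bsa_m2):
    """CI (L/min/m^2) = CO / BSA."""
    return co_l_min / bsa_m2

def stroke_volume(co_l_min, heart_rate_bpm):
    """SV (mL/beat) = CO / HR * 1000."""
    return co_l_min / heart_rate_bpm * 1000.0

# Example: CO 5 L/min, HR 80 bpm, 175 cm / 70 kg patient
bsa = bsa_dubois(175, 70)            # roughly 1.8-1.9 m^2
print(round(cardiac_index(5.0, bsa), 2))
print(stroke_volume(5.0, 80))        # 62.5
```

    Indexed parameters allow a single reference range to apply across patients of different body sizes, which is why platforms like this report both absolute and indexed values.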

    AI/ML Overview

    The provided FDA 510(k) summary for the EV1000 Platform focuses on its substantial equivalence to predicate devices based on functional and safety testing. It does not contain the kind of detailed information typically found in acceptance criteria and efficacy studies for AI/algorithm-based diagnostic devices, particularly regarding quantitative performance metrics, sample sizes for test/training sets, expert ground truth adjudication, or comparative effectiveness studies with human readers.

    Here's why and what can be inferred:

    Key Takeaway: The EV1000 Platform is a hardware monitoring system that calculates physiological parameters using algorithms. It is not an AI/ML diagnostic device in the modern sense that would typically have the requested data (e.g., diagnostic accuracy, sensitivity, specificity, or reader studies for image interpretation).

    The document states: "It contains all the algorithms for parameter calculation." This implies that the algorithms perform calculations based on physiological signals, rather than interpreting complex data like medical images or making a diagnosis. Therefore, the "acceptance criteria" would likely be around the accuracy of these calculations compared to a known standard or the predicate devices, and functional safety.


    Based on the provided text, here's what can be extracted and what cannot:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria Category | Reported Device Performance (from text) | Notes / Inferences
    Functional/Safety Equivalence | "The EV1000 Platform has successfully undergone functional testing. This product has been shown to be equivalent to the predicate devices." | The specific functional tests and the quantitative metrics for "equivalence" are not detailed in this summary. It is a general statement that the device functions as intended and safely, similar to its predecessors.
    Intended Use | "The EV1000 Platform is indicated for use primarily for critical care patients in which the balance between cardiac function, fluid status and vascular resistance needs constant or intermittent assessment. The EV1000 Platform may be used in all settings in which critical care is provided." | This describes the scope of application, not a performance metric.
    Comparative Analysis | "The EV1000 Platform has been demonstrated to be as safe and effective as the predicate devices for their intended use." | Similar to functional/safety, this is a summary statement. The underlying data demonstrating "as safe and effective" is not provided.
    Parameter Calculation (e.g., Cardiac Output, Stroke Volume) | Not explicitly stated in quantitative terms (e.g., accuracy, precision relative to a gold standard). | The document states the device "measures or calculates intermittent cardiac output, intermittent cardiac index...". The acceptance criteria for these calculations would typically involve comparison against a reference method (e.g., thermodilution for cardiac output), but these details are absent from the summary.

    Detailed Breakdown of Other Requested Information:

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not specified in the provided text.
    • Data Provenance: Not specified. Given the nature of a medical device submission, it would likely involve clinical data, but its origin (e.g., country, specific hospitals) and whether it was retrospective or prospective are not mentioned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

    • Number of Experts: Not applicable/not specified. For this type of device (physiological parameter calculation), "ground truth" would be established by reference methods or validated sensors, not by expert interpretation of data. If there were a need for expert review of device output for clinical acceptability, it is not mentioned.
    • Qualifications: Not applicable/not specified.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not applicable/not specified. Adjudication methods (like 2+1, 3+1) are typically used when multiple human readers interpret complex data, such as medical images, to establish a consensus ground truth. Here, output parameters are calculated numbers.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • MRMC Study: No, it was not done. MRMC studies are specifically designed to assess the diagnostic efficacy of a system (often AI-assisted) by comparing multiple human readers' performance with and without the system's help on multiple cases. This device is a physiological monitoring platform, not an image interpretation or diagnostic aid in that context.
    • Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was conducted.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study Was Done:

    • Standalone Study: The summary implies that the "functional testing" and "comparative analysis" against predicate devices would assess the algorithms within the system directly. However, the details of how this was done (e.g., comparison of calculated parameters against a gold standard or a reference device on a dataset) are not provided. It simply states the device "has been demonstrated to be as safe and effective."

    7. The Type of Ground Truth Used:

    • Ground Truth: For a device calculating physiological parameters, the ground truth would typically be established by:
      • Reference Methods: Such as thermodilution for cardiac output, or direct arterial line measurements for arterial pressure, using independently validated instruments.
      • Predicate Device Comparison: Performance relative to the legally marketed predicate devices would be a primary comparison point for demonstrating substantial equivalence.
      • The specific type of ground truth used to validate the accuracy of the calculated parameters is not detailed in this 510(k) summary.

    8. The Sample Size for the Training Set:

    • Training Set Sample Size: Not specified. This device calculates parameters using algorithms, meaning it's likely based on established physiological models and signal processing rather than machine learning algorithms that require a "training set" in the common sense (i.e., for learning from annotated data). If machine learning was used, the training set size would be crucial, but it's not mentioned.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: Not applicable/not specified. As above, this device's algorithms are likely model-based, not learned from a large annotated training set. If there were development data, its ground truth establishment is not described.

    In summary: The provided 510(k) summary for the EV1000 Platform is for a physiological monitoring device that calculates parameters. The information it contains aligns with demonstrating "substantial equivalence" based on functional and safety testing compared to predicate devices, rather than the detailed performance metrics and study designs (like MRMC or reader studies) typically associated with modern AI/ML diagnostic tools focused on pattern recognition or complex data interpretation. The summary lacks the granularity for the acceptance criteria and study details that would be present for an AI product.
