510(k) Data Aggregation
(230 days)
Hypotension Decision Assist is indicated to acquire, process and display arterial pressure and other key cardiovascular characteristics of adult patients (at least eighteen years of age) who are undergoing surgery during which their arterial pressure is continuously monitored by a vital-signs monitor. It is indicated for use to assist anesthesia healthcare professionals in managing the patient's blood pressure, hemodynamic stability and cardiovascular system during such surgery.
Hypotension Decision Assist (HDA) is a clinical decision support Software as a Medical Device (SaMD) that is installed on a medically-rated touch-screen computer. HDA connects to a multi-parameter patient monitor supplied by another manufacturer, from which it continuously acquires vital-signs data, including the arterial blood pressure waveform and cardiovascular-related numeric parameters.
HDA continually processes this data to display, in graphical charts and numeric format, vital-signs data and derived variables including mean arterial pressure (MAP), heart rate, systolic and diastolic blood pressure, cardiac output and systemic vascular resistance. HDA compares MAP to user-set targets to indicate when MAP is above or below the target range, and it allows the user to mark the administration of vasopressors and volume challenges on the MAP trend.
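The derived variables named above follow standard hemodynamic relationships. As a minimal sketch only, the snippet below uses the textbook formulas for MAP and systemic vascular resistance (SVR) and a simple target-range comparison; the device's actual internal algorithms are not disclosed in the 510(k) text, and all function names and thresholds here are illustrative assumptions.

```python
def mean_arterial_pressure(sbp_mmhg: float, dbp_mmhg: float) -> float:
    """Textbook MAP estimate: diastolic pressure plus one third of pulse pressure."""
    return dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0

def systemic_vascular_resistance(map_mmhg: float, cvp_mmhg: float,
                                 cardiac_output_l_min: float) -> float:
    """SVR in dyn*s/cm^5 using the standard formula 80 * (MAP - CVP) / CO."""
    return 80.0 * (map_mmhg - cvp_mmhg) / cardiac_output_l_min

def map_vs_target(map_mmhg: float, low: float, high: float) -> str:
    """Classify a MAP value against a user-set target range."""
    if map_mmhg < low:
        return "below"
    if map_mmhg > high:
        return "above"
    return "in range"

print(round(mean_arterial_pressure(120, 80), 1))              # 93.3
print(round(systemic_vascular_resistance(93.3, 8.0, 5.0), 1))  # 1364.8
print(map_vs_target(62, 65, 85))                               # below
```

In practice a device like this would derive MAP from the full arterial waveform rather than from spot systolic/diastolic values; the formulas here only illustrate the relationships among the displayed parameters.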
Here's a breakdown of the requested information based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document primarily focuses on demonstrating substantial equivalence to a predicate device and meeting various technical and safety standards, rather than defining specific numerical performance acceptance criteria for clinical outcomes. However, it does highlight areas of verification.
| Acceptance Criterion Type | Reported Device Performance (Verification Method) |
|---|---|
| System Functionality | Verified: Interactivity of the system interface and ability to process and display physiologic parameters for intended use. |
| Measurement Accuracy | Verified: Accuracy across the intended measuring range for each physiologic parameter, demonstrated via bench testing following IEC 60601-2-34 Edition 3.0 2011-05. Demonstrated substantial equivalence to reference devices. Verified equivalent performance when connected to specified vital signs monitors. |
| Artifact Detection | Verified: Capability to detect signal artifacts and anomalies that could impact performance, demonstrated via bench testing. |
| Predicate Comparison (Cardiac Output & SVR Events) | Comparable Performance: Demonstrated comparable performance to the predicate device with respect to the detection of cardiac output and systemic vascular resistance events via bench testing. |
| Power Interruption Tolerance | Verified: Tolerates sudden power interruption without data loss or change in operating mode, demonstrated via bench testing following IEC 60601-2-34 Edition 3.0 2011-05. |
| Summative Usability | Fulfilled Needs: Demonstrated that HDA fulfills the needs of its intended users, following FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices." |
| Software Verification & Validation (moderate level of concern) | Compliant: Documentation provided in accordance with FDA guidance for software in medical devices. |
| Electrical Safety & Electromagnetic Compatibility (EMC) | Compliant: Complies with FDA recognized standards ES60601-1-2005/(R)2012 and A1:2012 for safety and IEC60601-1-2:2014 for EMC. |
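The "Artifact Detection" row above states only that the capability was verified by bench testing; the actual detection criteria are not disclosed. As an illustrative sketch under that caveat, a minimal range-and-jump plausibility check on a MAP sample stream might look like the following, with all thresholds being hypothetical placeholders rather than the device's specifications:

```python
def flag_artifacts(map_samples, lo=20.0, hi=250.0, max_jump=40.0):
    """Flag MAP samples (mmHg) that fall outside a physiologic range or
    jump implausibly from the last accepted sample. Thresholds are
    illustrative placeholders, not the device's disclosed criteria."""
    flags = []
    prev = None  # last sample accepted as plausible
    for s in map_samples:
        bad = not (lo <= s <= hi)
        if not bad and prev is not None and abs(s - prev) > max_jump:
            bad = True
        flags.append(bad)
        if not bad:
            prev = s  # only plausible samples update the reference
    return flags

print(flag_artifacts([90, 92, 5, 91, 180]))  # [False, False, True, False, True]
```

Real artifact detection would typically operate on the raw waveform (e.g., detecting flush events, damping, or transducer disconnects), but the sample-level check above conveys the idea of screening inputs before they drive decision support.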
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated as a number of patients or cases for clinical performance assessment. The "test set" primarily refers to hardware and software testing.
- Data Provenance: "Patient data sets obtained from internationally recognized databases" were used for the original system verification and "bench testing performed to compare the performance of HDA to the predicate device." The data was "representative of the range of data input and signal quality that will be encountered in the intended use population and environment of use of the device." No specific countries of origin or whether the data was retrospective or prospective are mentioned beyond being from "internationally recognized databases."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. The document states that "Clinical studies were not performed." Therefore, there was no expert consensus or ground truth established by human experts for a clinical test set in the traditional sense. The "ground truth" for the bench testing was derived from established standards (e.g., IEC 60601-2-34) and comparison to predicate/reference device measurements.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. Since no clinical studies were performed, there was no adjudication of clinical outcomes by multiple experts.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improved with AI versus without AI assistance
No. An MRMC comparative effectiveness study was not performed. The device is clinical decision support software, not an AI for image interpretation that would typically involve human readers. Clinical studies involving human users were not performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, in a sense. The "Performance Data" section describes "Measurement accuracy verification," "Artefact Detection Verification," and "Predicate comparison testing" which evaluate the algorithm's direct output and processing capabilities against established standards or predicate device outputs. This represents a standalone performance evaluation of the algorithms and software functionality, rather than human-in-the-loop performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the device's technical performance verification (e.g., measurement accuracy) was established through:
- Bench testing methodologies derived from recognized standards (e.g., IEC 60601-2-34).
- Comparison to predicate or reference devices' established performance for specific parameters (e.g., cardiac output, SVR events, physiological parameter derivation).
- Pre-defined specifications for artifact detection and power interruption tolerance.
For claims of "clinical decision support" or "assisting healthcare professionals," the ground truth implicitly relies on the widely accepted understanding that accurate display and processing of vital signs aid clinical decision-making, rather than a specific clinical outcome study being performed with this device.
8. The sample size for the training set
The document does not explicitly mention a "training set" in the context of machine learning or AI model development. The device is described as "clinical decision support software" that "continually processes this data." If machine learning was used implicitly, no details are provided about its training data. The "patient data sets obtained from internationally recognized databases" were used for "original system verification" and "bench testing," which might imply they were used for validation or testing, but not necessarily for training a model.
9. How the ground truth for the training set was established
Not applicable, as a clear "training set" and its ground truth establishment are not described in the provided text. The device's functionality appears to be primarily based on processing established physiological parameters and rules, rather than learning from a labeled training dataset in the AI sense.