The Decisio Health Patient Dashboard is a decision support device indicated for aggregating, displaying, and managing physiologic and other patient information. This information is generated by third-party medical devices and patient information systems. The device performs automated calculations on patient data collected by third-party devices based on approved clinical protocols at patient care facilities.
The Decisio Health Patient Dashboard is intended for use by clinicians in healthcare facilities.
The Decisio Health Patient Dashboard ("Patient Dashboard") is a data aggregation and visualization software device. The Patient Dashboard is designed to display patient information, facility specific care protocols, and visual cues to care providers on a single display device. The Patient Dashboard is configured to receive patient data through the facility's Electronic Medical Record system and display information to the user on a patient monitor, computer, or a mobile device. Data received through the EMR includes input from various sources within the hospital, including manually entered data into the EMR (e.g., laboratory data), vital signs monitors, ventilators, IV pumps, and Foley catheter devices. The data the Patient Dashboard receives are then stored, filtered, and displayed through the Patient Dashboard web browser application. The Patient Dashboard is customized to individual facility's care as it is programmed with the facility's treatment protocols, which dictate the information that is displayed relative to those protocols. The device performs automated calculations on patient data collected by third party devices based on approved clinical protocols at patient care facilities.
This document (K142106) describes the Decisio Health Patient Dashboard, a decision support device designed to aggregate, display, and manage patient information from third-party medical devices and information systems.
Here's an analysis of the provided information regarding acceptance criteria and the supporting study:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of specific acceptance criteria (e.g., specific accuracy thresholds for calculations, or display fidelity metrics) with corresponding numerical device performance results as one might expect for a diagnostic or classification device.
However, based on the "Testing in Support of Substantial Equivalence Determination" section, the general acceptance criteria can be inferred as follows:
| Acceptance Criteria Category | Description | Reported Device Performance |
|---|---|---|
| Functional Performance | Device performs as intended per its specifications, including: receiving data from the EMR; processing patient data according to a facility protocol; displaying the data as expected; and performing automated calculations on patient data based on approved clinical protocols. | "Unit, integration, and system level testing demonstrated that the Patient Dashboard meets its specifications, including receiving data from the EMR, processing patient data according to a facility protocol, and displaying the data as expected." "Performance testing verified that the device performs as intended." (No specific numerical metrics are provided for these performance aspects in the summary.) |
| Usability / Human Factors | The device design meets its intended use, and the user can interpret the displayed information as intended. | "Additionally, a human factors and usability study has been performed with the Patient Dashboard... The results of the study confirmed that the Patient Dashboard design meets its intended use and the user can interpret the displayed information as intended." (No specific quantitative usability metrics or error rates are provided, only a general confirmation of meeting the goal.) |
| Safety & Effectiveness | Differences in technological characteristics between the Patient Dashboard and predicate devices do not raise new issues of safety or effectiveness. | "The differences in technological characteristics between the Patient Dashboard and the predicate devices do not raise new issues of safety or effectiveness." and "The differences in technological characteristics have been analyzed and addressed through software performance testing, and human factors and usability testing." (This is a qualitative statement of equivalence, not a direct performance metric.) |
Note: This is a 510(k) submission for a Class II device. The 510(k) pathway focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than independently proving safety and effectiveness through extensive clinical trials, as a PMA would. Therefore, the level of detail on specific performance metrics may be less extensive than for some other device types.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Functional Testing (Software Performance Testing): The document states "Unit, integration, and system level testing." It does not specify a numerical sample size for data points or test cases used in this testing.
- Human Factors and Usability Study: 45 clinicians were involved.
- Data Provenance: The document does not specify the country of origin for the data used in functional testing. For the human factors study, it implies the clinicians are from "healthcare facilities" where the device is intended for use, but no specific geographic location is mentioned. The study is prospective in nature, as it was conducted specifically for the device submission.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Functional Testing: The document does not mention external experts establishing ground truth for functional software testing. This type of testing typically relies on predefined specifications and expected outputs generated by developers or quality assurance engineers.
- Human Factors and Usability Study: The "clinicians" (45 of them) served as the "experts" or primary evaluators in this study by providing feedback on the device's design and interpretability. Their specific qualifications (e.g., tenure, specialty) are not detailed beyond "clinicians." The "ground truth" for usability is their ability to correctly interpret displayed information and interact with the device as intended.
4. Adjudication Method for the Test Set
- Functional Testing: Not applicable in the context of expert adjudication. Software testing typically involves verifying outputs against expected results defined by specifications.
- Human Factors and Usability Study: The document does not describe an explicit adjudication method (like 2+1 or 3+1 consensus). The "results of the study confirmed" suggests a summary of observations and feedback from the 45 clinicians led to the conclusion about the design's suitability and interpretability. It's likely a qualitative assessment or a design qualification activity rather than a ground truth establishment by external adjudicators for patient data.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. An MRMC comparative effectiveness study was not performed or reported. The human factors study involved multiple clinicians but focused on the usability and interpretability of the device's output, not on comparing clinician performance with and without AI assistance on specific clinical outcomes. The device is a "decision support device" that aggregates and displays information; it is not described as an AI that provides specific diagnostic interpretations of the kind that would typically warrant a full MRMC study.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance: Not applicable, as an MRMC study comparing human performance with and without the device's assistance was not conducted or reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, a form of standalone performance was assessed through "Unit, integration, and system level testing." This software performance testing directly addressed how the algorithm (the "Patient Dashboard" software) "receives data from the EMR, processes patient data according to a facility protocol, and displays the data as expected" and "performs automated calculations." This describes the algorithm's functional performance in isolation from direct human interpretive decision-making during the test itself (though the device is designed for human use). The output of these tests would be objective verification against specifications, as sketched below.
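As a rough illustration of what specification-based verification could look like, the following unit test checks a hypothetical protocol calculation against predefined expected values (the function, the formula choice, and the expected outputs are illustrative assumptions, not details from the submission):

```python
import unittest

def mean_arterial_pressure(systolic: float, diastolic: float) -> float:
    """Hypothetical protocol calculation: mean arterial pressure (MAP),
    a standard formula a dashboard might compute from EMR vitals."""
    return diastolic + (systolic - diastolic) / 3

class TestMeanArterialPressure(unittest.TestCase):
    """Verify outputs against predefined expected values, the kind of
    specification-derived 'ground truth' used in software performance testing."""

    def test_nominal_vitals(self):
        self.assertAlmostEqual(mean_arterial_pressure(120, 80), 93.33, places=2)

    def test_hypotensive_vitals(self):
        self.assertAlmostEqual(mean_arterial_pressure(90, 60), 70.0, places=2)

if __name__ == "__main__":
    unittest.main()
```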
7. Type of Ground Truth Used
- Functional Testing (Software Performance): The ground truth for this testing would be the predefined specifications for data reception, processing rules (facility protocols), calculation outputs, and display requirements. This implies the ground truth is established by internal engineering and quality assurance standards.
- Human Factors and Usability Study: The "ground truth" (or success criteria) for this study was the intended use and user interpretability, as confirmed by the clinicians. This is a form of expert consensus/feedback on usability and functional clarity, rather than a clinical ground truth like pathology or direct outcomes data.
8. Sample Size for the Training Set
- The document does not mention a training set sample size. This device is described as a "data aggregation and visualization software device" that "performs automated calculations on patient data collected by third party devices based on approved clinical protocols." This suggests it's primarily a rule-based system or a display system for pre-existing calculations/protocols, rather than a machine learning model that requires a "training set" in the conventional sense. If there are any "automated calculations," they are based on "approved clinical protocols," implying rules configured by clinical experts rather than learned from a dataset.
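To make the distinction concrete, a rule-based protocol can be expressed entirely as clinician-authored configuration, with no learned parameters. A minimal sketch, assuming a hypothetical configuration format (the field names, thresholds, and cue colors are invented for illustration):

```python
import operator

# Hypothetical "protocol as configuration": every threshold is authored by
# the facility's clinicians, not learned from a training dataset.
FACILITY_PROTOCOL = {
    "name": "Illustrative early-warning protocol",
    "rules": [
        {"field": "heart_rate", "op": ">",  "threshold": 110, "cue": "amber"},
        {"field": "spo2",       "op": "<",  "threshold": 92,  "cue": "red"},
        {"field": "resp_rate",  "op": ">=", "threshold": 25,  "cue": "amber"},
    ],
}

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def triggered_cues(protocol: dict, observation: dict) -> list[str]:
    """Return the visual cues an observation triggers under the protocol."""
    cues = []
    for rule in protocol["rules"]:
        value = observation.get(rule["field"])
        if value is not None and OPS[rule["op"]](value, rule["threshold"]):
            cues.append(f'{rule["field"]}: {rule["cue"]}')
    return cues

print(triggered_cues(FACILITY_PROTOCOL, {"heart_rate": 118, "spo2": 95, "resp_rate": 26}))
# -> ['heart_rate: amber', 'resp_rate: amber']
```

Because the "ground truth" lives in the configured thresholds themselves, verification reduces to checking the rule engine against the written protocol rather than measuring accuracy on a held-out dataset.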
9. How the Ground Truth for the Training Set Was Established
- As a training set is not explicitly mentioned or implied for a machine learning model, this question is not applicable. The device's "automated calculations" are stated to be based on "approved clinical protocols at patient care facilities," meaning these protocols themselves serve as the 'ground truth' or rules governing the calculations, established by clinical expertise at the local facility level, not derived from a training dataset.
§ 870.2300 Cardiac monitor (including cardiotachometer and rate alarm).
(a) Identification. A cardiac monitor (including cardiotachometer and rate alarm) is a device used to measure the heart rate from an analog signal produced by an electrocardiograph, vectorcardiograph, or blood pressure monitor. This device may sound an alarm when the heart rate falls outside preset upper and lower limits.
(b) Classification. Class II (performance standards).
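The rate-alarm behavior described in paragraph (a) reduces to a preset limit check. A minimal sketch of that logic (the default limits are illustrative placeholders, not values from the regulation):

```python
def rate_alarm(heart_rate_bpm: float, low: float = 50.0, high: float = 120.0) -> bool:
    """Return True when the measured heart rate falls outside the preset
    upper and lower limits, the condition under which the monitor alarms.
    The default limits here are illustrative, not regulatory values."""
    return heart_rate_bpm < low or heart_rate_bpm > high
```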