AlertWatch:OR is intended for use by clinicians for secondary monitoring of patients within operating rooms. AlertWatch:OR combines data from networked physiologic monitors and anesthesia information management system (AIMS) medical records and displays them in one place. AlertWatch:OR can only be used with physiologic monitor and AIMS versions that have been validated by AlertWatch. Once alerted, you must refer to the primary monitor or device before making a clinical decision.
AlertWatch:OR is a display and secondary alert system used by anesthesiology staff (residents, CRNAs, and attending anesthesiologists) to monitor patients in operating rooms. The purpose of the program is to synthesize a wide range of patient data and inform clinicians of potential problems that might lead to immediate or long-term complications. Once alerted, the clinician is instructed to refer to the primary monitoring device before making a clinical decision. AlertWatch:OR should only be connected to AIMS systems and physiologic monitors that have been validated for use with AlertWatch:OR. AlertWatch, Inc. performs the validation for each installation site.
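The rule-based, secondary-alert behavior described above can be sketched as follows. This is an illustrative sketch only: the field names, thresholds, and alert messages are hypothetical and are not taken from AlertWatch's Software Requirements Specification.

```python
from dataclasses import dataclass

@dataclass
class VitalsSnapshot:
    """Hypothetical combined record from the physiologic monitor and AIMS feeds."""
    heart_rate: float   # beats/min, from the physiologic monitor
    map_mmhg: float     # mean arterial pressure (mmHg), from the physiologic monitor
    case_phase: str     # e.g. "induction" or "maintenance", from the AIMS record

def secondary_alerts(v: VitalsSnapshot) -> list[str]:
    """Evaluate illustrative secondary-alert rules; thresholds are invented
    for this sketch. A real system would direct the clinician to confirm
    every alert on the primary monitor before acting."""
    alerts = []
    if v.map_mmhg < 55:
        alerts.append("Possible hypotension -- verify on primary monitor")
    if v.heart_rate > 120 and v.case_phase != "induction":
        alerts.append("Possible tachycardia -- verify on primary monitor")
    return alerts
```

The key design point the text emphasizes is that such a system is advisory: the output is a prompt to check the primary monitoring device, never a substitute for it.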
Here's a breakdown of the acceptance criteria and the study details for the AlertWatch:OR device, based on the provided document:
The document does not explicitly state formal acceptance criteria with specific performance metrics (e.g., sensitivity, specificity, accuracy thresholds). Instead, the performance testing section describes verification and validation activities designed to ensure the product works as designed, meets its stated requirements, and is clinically useful.
1. Table of Acceptance Criteria and Reported Device Performance
As specific numerical acceptance criteria (e.g., sensitivity > X%, specificity > Y%) are not provided, the table below reflects the described performance testing outcomes.
| Acceptance Criterion (Implicit from Study Design) | Reported Device Performance (from "Performance Testing" section) |
|---|---|
| Verification: Analysis Output Accuracy | Produced the desired output for each rule/algorithm using constructed data. |
| Verification: Data Display Accuracy | Produced the desired display for each test case using constructed data. |
| Verification: Data Collector Functionality | Live Collector and Data Collector returned correct data from the EMR. |
| Verification: Product Functionality with Historical Data | Product worked as designed using a set of cases from actual patients. |
| Validation: Design Review & Software Requirements Specification (SRS) Accuracy | The process and inputs for creating the product design (SRS) were reviewed, and the SRS was reviewed for clinical accuracy. |
| Validation: Clinical Utility | Clinical utility of the product was validated by analyzing case outcomes. |
| Validation: Human Factors | A summative human factors study was conducted to demonstrate that the device meets user needs. |
| Overall Performance Claim | The verification and validation results demonstrate that AlertWatch:OR complies with its stated requirements and meets user needs and intended uses. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Verification (Step 4: Historical Data): "a set of cases from actual patients" - The exact number of cases is not specified.
- Data Provenance: "data from the EMR" and "a set of cases from actual patients." The document does not specify the country of origin, or state explicitly whether the data were retrospective or prospective, though "historical data" strongly implies retrospective data.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not specify the number of experts used, or their qualifications, for establishing ground truth. It also does not describe a ground truth establishment process involving experts in the traditional sense (e.g., for diagnostic accuracy). The "clinical accuracy" review of the Software Requirements Specification implies expert involvement, but details are missing.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method (such as 2+1, 3+1) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described. The study focused on the device's functionality and utility rather than a direct comparison of human readers with and without AI assistance to quantify improvement.
6. Standalone (Algorithm Only) Performance Study
The verification steps (particularly "Verify the analysis output" and "Verify the data display" using "constructed data," and "Verify the product with historical data") indicate that the algorithm's performance was evaluated in a standalone manner, without a human in the loop, to confirm that it performs "as designed" and produces the "desired output/display." However, these are functional verifications rather than a typical standalone diagnostic performance study reporting metrics such as sensitivity, specificity, or PPV/NPV.
7. Type of Ground Truth Used
The ground truth for the performance testing appears to be established by:
- "Desired output" based on the "Software Requirements Specification" for constructed data tests (functional verification).
- "Works as designed" when tested with "a set of cases from actual patients" (implies comparison to expected system behavior based on its design, rather than a clinical outcome or expert diagnosis acting as a gold standard).
- "Clinical utility... by analyzing case outcomes" suggests that real-world patient outcomes were used to assess the value of the alerts generated. This hints at an outcome-based ground truth for the validation of clinical utility, but details are scarce.
8. Sample Size for the Training Set
The document does not specify the sample size for a training set. The descriptions of verification and validation do not refer to machine learning model training. The device seems to operate based on "rules/algorithms in the Software Requirements Specification" rather than a trained AI model.
9. How the Ground Truth for the Training Set Was Established
As there's no mention of a dedicated training set or a machine learning model requiring such, this information is not applicable based on the provided text. The device likely relies on predefined rules and algorithms.
§ 870.1025 Arrhythmia detector and alarm (including ST-segment measurement and alarm).
(a) Identification. The arrhythmia detector and alarm device monitors an electrocardiogram and is designed to produce a visible or audible signal or alarm when atrial or ventricular arrhythmia, such as premature contraction or ventricular fibrillation, occurs.

(b) Classification. Class II (special controls). The guidance document entitled “Class II Special Controls Guidance Document: Arrhythmia Detector and Alarm” will serve as the special control. See § 870.1 for the availability of this guidance document.