Intended Use Statement:
The eCareManager System is a software tool intended for use by trained medical staff providing supplemental remote support to bedside care teams in the management and care of in-hospital patients. The software collects, stores and displays clinical data obtained from the electronic medical record, patient monitoring systems and ancillary systems connected through networks. Using this data, clinical decision support notifications are generated that aid in understanding the patient's current condition and changes over time. The eCareManager System does not provide any alarms. It is not intended to replace bedside vital signs alarms or proactive patient care from clinicians.
All information and notifications provided by the eCareManager System are intended to support the judgement of a medical professional and are not intended to be the sole source of information for decision making.
Indications for Use Statement:
The eCareManager software is indicated for use in a hospital environment or remote locations with clinical professionals. It is not indicated for home use.
The eCareManager system is a software platform that enables enterprise telehealth. The system includes interface features to acquire patient data from the electronic medical record and bedside devices which can be shared between the bedside and remote care teams. Population management and communication features facilitate a collaborative approach to delivery of in-patient care. The system's clinical decision support features further aid in the proactive delivery of care. Using data received from the hospital's systems, clinical decision support algorithms provide cues that assist in the early detection of changes in patient condition.
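The summary does not describe how these cues are computed. As a rough illustration only, the following Python sketch shows one way a trend-based cue over vital-sign samples could be structured; the names (`VitalSample`, `detect_deterioration_cue`) and thresholds are hypothetical and are not taken from the eCareManager documentation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VitalSample:
    timestamp_min: float     # minutes since the start of the observation window
    heart_rate_bpm: float
    systolic_bp_mmhg: float

def detect_deterioration_cue(samples: List[VitalSample],
                             hr_rise_bpm: float = 20.0,
                             sbp_drop_mmhg: float = 20.0) -> Optional[str]:
    """Return a notification string if vitals trend past illustrative thresholds.

    This models a cue, not an alarm: it summarizes a change over time for a
    remote clinician to review alongside the full record.
    """
    if len(samples) < 2:
        return None
    first, last = samples[0], samples[-1]
    hr_delta = last.heart_rate_bpm - first.heart_rate_bpm
    sbp_delta = first.systolic_bp_mmhg - last.systolic_bp_mmhg
    if hr_delta >= hr_rise_bpm and sbp_delta >= sbp_drop_mmhg:
        window = last.timestamp_min - first.timestamp_min
        return (f"Possible deterioration: HR +{hr_delta:.0f} bpm, "
                f"SBP -{sbp_delta:.0f} mmHg over {window:.0f} min")
    return None

samples = [VitalSample(0, 78, 122), VitalSample(60, 104, 98)]
print(detect_deterioration_cue(samples))  # -> "Possible deterioration: ..."
```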
The provided document is a 510(k) premarket notification for a software device called eCareManager 4.1. It details the device's intended use, comparison with a predicate device (eCareManager 4.0), and summarizes performance testing. However, this document does not contain the specific acceptance criteria or detailed results of a study proving the device meets those criteria, as typically found in a clinical study report for an AI/ML medical device.
The eCareManager system described is a "telehealth software system" that provides "clinical decision support notifications" based on collected clinical data. It is explicitly stated that the system "does not provide any alarms" and "is not intended to replace bedside vital signs alarms or proactive patient care from clinicians," nor is it "intended to be the sole source of information for decision making." This suggests its role is primarily informational and supportive, not diagnostic or a direct intervention.
Given the nature of the device as clinical decision support software without direct diagnostic or therapeutic action, the FDA submission focuses on demonstrating substantial equivalence to a previous version and showing that the changes raise no new safety or effectiveness concerns, rather than proving specific performance metrics against a "ground truth" as would be done for an AI diagnostic device.
Therefore, much of the requested information regarding detailed acceptance criteria, specific performance metrics (like sensitivity, specificity, AUC), sample sizes for test sets, data provenance, expert adjudication, MRMC studies, standalone performance, and ground truth establishment is not present in this 510(k) summary.
The document states:
- "Changes to the Automated Acuity Score calculation have been validated using clinical data collected under an observational, non-human subject evaluation. The evaluation demonstrated substantial equivalence of the modified calculation with the unmodified, predicate device version."
- "Test results demonstrated that eCareManager software release 4.1 meets all device specifications and user needs."
This indicates that some form of validation was done, but the specifics of how "substantial equivalence" was demonstrated in terms of precise metrics for the "Automated Acuity Score" or what "device specifications and user needs" were measured are not provided in this public summary.
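The public summary reports no metrics, but an equivalence evaluation of this kind is commonly framed as an agreement analysis between the modified and predicate calculations over the same retrospective records. A minimal sketch under that assumption follows; `predicate_score` and `modified_score` are hypothetical placeholders, and the agreement statistics shown (mean difference, Pearson correlation) are assumed rather than stated in the submission.

```python
import statistics

def predicate_score(record: dict) -> float:
    # Placeholder for the unmodified (predicate, 4.0) acuity calculation.
    return 0.5 * record["hr"] / 60 + 0.5 * (120 / max(record["sbp"], 1))

def modified_score(record: dict) -> float:
    # Placeholder for the modified (4.1) acuity calculation.
    return 0.55 * record["hr"] / 60 + 0.45 * (120 / max(record["sbp"], 1))

def agreement_report(records: list) -> dict:
    """Compare the two score versions on the same retrospective records."""
    old = [predicate_score(r) for r in records]
    new = [modified_score(r) for r in records]
    diffs = [n - o for n, o in zip(new, old)]
    return {
        "mean_difference": statistics.mean(diffs),
        "sd_difference": statistics.stdev(diffs) if len(diffs) > 1 else 0.0,
        "pearson_r": statistics.correlation(old, new),  # Python 3.10+
    }

records = [{"hr": 72, "sbp": 118}, {"hr": 95, "sbp": 102}, {"hr": 110, "sbp": 88}]
print(agreement_report(records))
```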
Based on the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of explicit acceptance criteria with numerical performance targets and corresponding reported device performance values (e.g., sensitivity, specificity, accuracy, etc.) for AI/ML performance. Instead, it states that "Test results demonstrated that eCareManager software release 4.1 meets all device specifications and user needs" and that the "evaluation demonstrated substantial equivalence of the modified calculation with the unmodified, predicate device version." These are high-level conclusions without quantified metrics for specific performance criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The document mentions "clinical data collected under an observational, non-human subject evaluation" was used to validate changes to the Automated Acuity Score.
- Data Provenance: Not specified (e.g., country of origin).
- Retrospective or Prospective: "Retrospective" is mentioned for "Vital Signs Monitoring" under "Clinical Decision Support Features" in Table 5-1, but it's unclear if this refers to the data used for the validation study or just a general characteristic of how vital signs are handled by the system. The validation itself is described as "observational, non-human subject evaluation," which typically implies retrospective analysis of existing data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable/Not specified. Given the device's function as clinical decision support that generates "notifications" and "cues" to "aid in understanding the patient's current condition," and explicitly "not intended to be the sole source of information for decision making," the ground truth wouldn't typically be established by expert consensus on, for example, image interpretation, but rather on the clinical condition of the patient as recorded in their EMR or other systems. The validation focused on the "Automated Acuity Score," and it's not clear that human experts were involved in establishing a "ground truth" for this algorithm's output, beyond perhaps verifying consistency with existing clinical assessments or outcomes data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable/Not specified. There's no indication of any expert adjudication process for the "clinical data" used in validation.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No such study appears to have been performed or reported in this summary. The device is a "telehealth software system" providing "clinical decision support notifications," not an AI diagnostic tool intended to help human readers differentiate between medical conditions from images or other complex data. The validation focused on the "substantial equivalence of the modified calculation with the unmodified, predicate device version."
6. Whether a standalone study (i.e. algorithm-only performance, without a human in the loop) was done
The "Automated Acuity Score" calculation was validated using "clinical data." This suggests a standalone evaluation of the algorithm's output against some measure (likely derived from the clinical data it processes or against the predicate's output), although the specific metrics used are not stated. The device functions as a software tool that generates notifications, meaning its core function is algorithm-driven, so an algorithm-only evaluation of its calculations (like the Acuity Score) would be inherent to its validation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not explicitly stated. For the "Automated Acuity Score," the validation aimed to demonstrate "substantial equivalence... with the unmodified, predicate device version." This implies that the "ground truth" for the new version's performance was its consistency or agreement with the predicate's output or a clinical outcome measure derived from the "clinical data" itself that the score is meant to reflect (e.g., patient condition changes, length of stay, etc.). The summary only states "clinical data collected under an observational, non-human subject evaluation."
8. The sample size for the training set
Not applicable/Not specified. The document describes the device validation for a new version (eCareManager 4.1) against a predicate (eCareManager 4.0). It validates "changes to the Automated Acuity Score calculation." This would typically involve re-training or fine-tuning the algorithm, but the size of any training data used for the development of this calculation (or the updated parameters) is not disclosed in this summary, which focuses on validation of the product.
9. How the ground truth for the training set was established
Not applicable/Not specified. As mentioned, the document describes product validation for an updated version, not the initial development or training process.
§ 870.2300 Cardiac monitor (including cardiotachometer and rate alarm).
(a) Identification. A cardiac monitor (including cardiotachometer and rate alarm) is a device used to measure the heart rate from an analog signal produced by an electrocardiograph, vectorcardiograph, or blood pressure monitor. This device may sound an alarm when the heart rate falls outside preset upper and lower limits.
(b) Classification. Class II (performance standards).
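The alarm behavior described in the identification reduces to a preset-limit threshold check. A minimal sketch, with illustrative (not regulatory) limits:

```python
def rate_alarm(heart_rate_bpm: float,
               lower_limit_bpm: float = 50.0,
               upper_limit_bpm: float = 120.0) -> bool:
    """Return True when the measured heart rate falls outside the preset limits."""
    return heart_rate_bpm < lower_limit_bpm or heart_rate_bpm > upper_limit_bpm

assert rate_alarm(45.0)        # below the lower limit: alarm
assert not rate_alarm(80.0)    # within limits: no alarm
```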