510(k) Data Aggregation
(125 days)
The AITRICS-VC is stand-alone software intended to analyze patient data sourced from an EHR (Electronic Health Record) system and display data for in-hospital patients whose vital signs, blood test results, or conventional early warning scores such as MEWS, NEWS, and qSOFA exceed predefined thresholds. It is not intended to replace bedside patient monitors or clinical decision-making. It is not indicated for use in high-acuity environments such as the ICU or operating rooms for acutely or critically ill patients. It may be used by clinicians to aid in understanding a patient's current condition and changes over time. The AITRICS-VC is solely indicated for use in the general ward of a hospital environment.
AITRICS-VC is clinical decision support software that receives in-hospital patient information, including physiological parameters such as vital signs and blood test results, from the EHR and conducts rule-based calculations of conventional early warning scores, namely NEWS (National Early Warning Score), MEWS (Modified Early Warning Score), and qSOFA (quickSOFA).
The AITRICS-VC screens patients who meet predefined criteria based on single values of physiological parameters and early warning scores. It displays a multi-patient dashboard and detailed pages for individual patients via a web browser on clinicians' PCs.
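As described, the screening logic is rule-based: a patient is flagged when a single parameter value or an early warning score crosses its predefined threshold. A minimal sketch of this kind of screening (the parameter names and threshold values below are illustrative assumptions, not AITRICS-VC's actual configuration):

```python
# Illustrative rule-based screening: flag patients whose parameters or
# early-warning scores meet or exceed predefined thresholds. These
# thresholds are hypothetical examples, not AITRICS-VC's configuration.
THRESHOLDS = {
    "heart_rate": 120,  # beats/min
    "mews": 4,          # Modified Early Warning Score
    "qsofa": 2,         # quickSOFA
}

def screen_patient(params: dict) -> list:
    """Return the names of parameters that meet or exceed their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if params.get(name) is not None and params[name] >= limit]

ward = {
    "patient_a": {"heart_rate": 88, "mews": 2, "qsofa": 0},
    "patient_b": {"heart_rate": 132, "mews": 5, "qsofa": 2},
}
# Build the multi-patient dashboard view: only patients with at least one hit.
flagged = {pid: hits for pid, p in ward.items() if (hits := screen_patient(p))}
# patient_b is flagged on heart_rate, mews, and qsofa; patient_a is not.
```

The screening step is pure threshold comparison; the scores themselves (NEWS, MEWS, qSOFA) are computed upstream by the rule-based calculations the summary describes.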
Here's an analysis of the AITRICS-VC device based on the provided FDA 510(k) summary, structured to address your specific questions.
Currently, this document lacks specific details on the acceptance criteria and performance of the AITRICS-VC, as well as the specifics of any studies conducted to validate its performance. The document focuses on regulatory approval based on substantial equivalence to a predicate device and adherence to harmonized non-clinical standards.
Therefore, many sections of your request cannot be fully answered with the provided information. However, I will extract and present all available relevant information and note where data is missing.
Description of Acceptance Criteria and Device Performance
The provided document describes the AITRICS-VC as a standalone software intended to analyze patient data from an EHR and display data when vital signs, blood test results, or conventional early warning scores (MEWS, NEWS, qSOFA) exceed predefined thresholds.
Crucially, the document explicitly states: "No new issues of safety or effectiveness are introduced as a result of using this device. This device does not require clinical data." This indicates that a detailed clinical performance study with acceptance criteria and reported performance metrics was not deemed necessary by the FDA for this 510(k) clearance, as the device's function is primarily to aggregate and display data based on predefined thresholds, not to make diagnostic or treatment recommendations or to generate novel risk scores.
Given this, the "acceptance criteria" primarily relate to the software's functionality, safety, and adherence to specified standards, rather than direct clinical efficacy metrics.
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided document, specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy) and corresponding reported device performance metrics from a clinical study are NOT available. The document focuses on non-clinical performance data and substantial equivalence to a predicate device.
| Acceptance Criteria Category | Specific Criteria (as inferred from document) | Reported Device Performance (as inferred from document) |
|---|---|---|
| Software Life Cycle | Compliance with IEC 62304 Ed 1.1 2015-06 | Passed non-clinical tests |
| Usability Engineering | Compliance with IEC 62366-1 Ed 1.1 2020-06 | Passed non-clinical tests |
| Alarm Systems (if applicable) | Compliance with IEC 60601-1-8 Ed 2.2 2020-07 | Passed non-clinical tests |
| Functional Performance | Accurately analyze and display patient data when vital signs, blood test results, or conventional early warning scores exceed predefined thresholds. | Implied to function as intended based on non-clinical testing and regulatory clearance. Specific performance metrics are not provided. |
| Safety and Effectiveness | No new issues of safety or effectiveness compared to predicate device. | Demonstrated substantial equivalence and adherence to standards. |
2. Sample size used for the test set and the data provenance
The document states, "This device does not require clinical data." Therefore, there is no mention of a clinical "test set" in terms of patient data used for performance evaluation in a hypothesis-driven study.
The "tests" mentioned are non-clinical and relate to software engineering, usability, and alarm systems. No sample size for a patient-based test set is provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. As no clinical "test set" and corresponding performance study are described, there is no mention of experts establishing ground truth for such a set. The device operates on predefined thresholds for existing clinical scores and vital signs, not on novel diagnostic interpretations requiring expert consensus.
4. Adjudication method for the test set
Not applicable. No clinical test set or adjudication process is mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document does not mention any MRMC comparative effectiveness study. The AITRICS-VC is described as a display tool that aids clinicians in understanding a patient's condition, not as an AI that directly assists in diagnosis or interpretation to improve human reader performance in a quantifiable way.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is inherently "standalone" in that it is software only ("AITRICS-VC is stand-alone software"). However, its intended use is to "display data" for clinicians "to aid in understanding a patient's current condition." It is explicitly stated that it is "not intended to replace bedside patient monitors or clinicians' clinical decision-making."
Therefore, its performance is as an information display system based on predefined rules/thresholds, rather than a diagnostic algorithm generating an output that would typically undergo standalone performance evaluation like sensitivity and specificity. The non-clinical tests confirm that the software "performs as intended," which implies standalone functionality as designed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not applicable in the context of a clinical performance study. The "ground truth" for the device's function is the predefined thresholds for vital signs, blood test results, and conventional early warning scores (MEWS, NEWS, qSOFA). The software's "truth" is whether it correctly identifies and displays patients whose data meet these established clinical thresholds.
8. The sample size for the training set
Not explicitly mentioned in the document for the AITRICS-VC's core functionality. The AITRICS-VC identifies patients where "vital signs, blood test results, or conventional early warning scores such as MEWS, NEWS, and qSOFA exceed predefined thresholds." These scores (MEWS, NEWS, qSOFA) are established clinical tools, and their underlying models (if any) would have been "trained" on historical data as part of their initial development, separate from the AITRICS-VC software. The AITRICS-VC itself seems to implement these existing rule-based calculations.
If there is any machine learning component beyond rule-based calculations that involved training, the document does not elaborate on it or provide a training set size.
9. How the ground truth for the training set was established
Not applicable/Not mentioned. Similar to the training set size, the "ground truth" for the existing conventional early warning scores (MEWS, NEWS, qSOFA) would have been established during their development. The AITRICS-VC's function is to apply these scores and thresholds, not to develop new ones.
(145 days)
NAVOYCDS is intended to be used with patient data from already cleared patient monitoring devices which measure respiratory rate, systolic blood pressure, and Glasgow Coma Scale (GCS) in adult patients being monitored in a healthcare facility. The device uses an algorithm to calculate a qSOFA score (also known as quickSOFA) which indicates adult patients with suspected infection who are at greater risk for a poor outcome. It uses three criteria, assigning one point for low blood pressure (SBP ≤100 mmHg), high respiratory rate (≥22 breaths per min), or altered mentation (Glasgow Coma Scale <15).
NAVOYCDS is an adjunct to and is not intended to replace vital signs monitoring.
NAVOYCDS is intended to provide additional information for use during patient monitoring in a healthcare facility. NAVOYCDS is not intended for making clinical decisions regarding patient treatment or for diagnostic purposes.
The device is intended for an adult population.
NAVOYCDS consists of:
- A gateway module for handling clinical data supporting the HL7 FHIR format.
- A qSOFA score module to calculate patients' qSOFA scores and offer their last 72 hours of history data.
- A web-based dashboard to render patients' qSOFA scores in a visually distinctive way depending on their value and enable intended users to view patients with suspected infection.
The NAVOYCDS system works in the following sequence:
- Receive patient data from the HL7 gateway system.
- Extract 3 vital signs from the patient data and store them in the database.
- The Glasgow Coma Scale (GCS) of a patient is stored in the database when a user submits it.
- NAVOYCDS calculates the qSOFA score automatically and stores the result in the database.
- NAVOYCDS delivers the 3 vital signs, qSOFA score, and GCS to users.
- The qSOFA score is presented in a visually distinctive way depending on the value and enables users to view patients with suspected infection.
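The qSOFA calculation in this sequence is a simple rule-based sum, as defined in the indications above: one point each for SBP ≤100 mmHg, respiratory rate ≥22 breaths/min, and GCS <15. A sketch of that calculation:

```python
def qsofa_score(sbp_mmhg: float, resp_rate_bpm: float, gcs: int) -> int:
    """qSOFA (quickSOFA): one point each for low blood pressure, high
    respiratory rate, and altered mentation, per the stated criteria."""
    score = 0
    if sbp_mmhg <= 100:      # low systolic blood pressure
        score += 1
    if resp_rate_bpm >= 22:  # high respiratory rate
        score += 1
    if gcs < 15:             # altered mentation (Glasgow Coma Scale)
        score += 1
    return score

qsofa_score(95, 24, 14)   # all three criteria met -> 3
qsofa_score(120, 16, 15)  # no criteria met -> 0
```

Because the score is fully determined by three fixed comparisons, verifying it reduces to software testing of the arithmetic, which is consistent with the summary's reliance on non-clinical performance testing.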
The provided document is a 510(k) summary for the device NAVOYCDS, a software that calculates qSOFA scores. Based on the content, here's an analysis of the acceptance criteria and study information:
Acceptance Criteria and Reported Device Performance
The document states that NAVOYCDS uses an algorithm to calculate a qSOFA score based on three criteria: low blood pressure (SBP ≤100 mmHg), high respiratory rate (≥22 breaths per minute), or altered mentation (Glasgow Coma Scale <15). However, no specific acceptance criteria (e.g., accuracy, sensitivity, specificity thresholds) are defined in the provided text, nor is there a table comparing acceptance criteria to reported device performance metrics. The document focuses on the method of calculation rather than quantitative performance.
Study Information
It's explicitly stated that Clinical Performance Testing was not needed in this 510(k) to support the substantial equivalence of the subject device to the predicate device. This means there was no standalone clinical study conducted to establish the device's diagnostic or predictive performance against a ground truth.
Therefore, the following details related to clinical studies are not applicable or provided in this document:
- Sample size used for the test set and the data provenance: Not applicable as no clinical performance study was conducted.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. The device calculates a score based on fixed criteria; it is not an AI assisting human readers.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. The device's function is a direct calculation of the qSOFA score based on input vital signs, not a complex algorithm requiring standalone performance validation against clinical outcomes. The document indicates "Non-clinical Performance Testing involved system-level tests, performance tests and safety testing based on hazard analysis," but this refers to software validation and not clinical performance.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable, as no external ground truth was used for clinical performance validation of the qSOFA calculation itself. The qSOFA score is a defined clinical scoring system.
- The sample size for the training set: Not applicable. As the device calculates a defined clinical score (qSOFA) based on specified input criteria, it is not a machine learning model that undergoes training on a dataset.
- How the ground truth for the training set was established: Not applicable, as there is no training set for this type of rule-based calculation.
In summary, the NAVOYCDS device calculates a pre-defined clinical score (qSOFA) based on established criteria. The FDA clearance for this device appears to be based on demonstrating that the software accurately calculates this score according to its definition and meets software development and safety standards, rather than proving the clinical effectiveness or diagnostic accuracy of the qSOFA tool itself (which is already established in clinical practice). Therefore, traditional clinical performance study details (like sensitivity, specificity, expert ground truth, etc.) are not present in this 510(k) summary.
(121 days)
The SpassageQ is intended to be used with patient data from already cleared patient monitoring devices which measure respiratory rate, systolic blood pressure, and Glasgow Coma Scale (GCS) in patients being monitored in a healthcare facility. The device provides a qSOFA score (also known as quickSOFA) which indicates patients with suspected infection who are at greater risk for a poor outcome. It uses three criteria, assigning one point for low blood pressure (SBP ≤100 mmHg), high respiratory rate (≥22 breaths per min), or altered mentation (Glasgow Coma Scale <15).
The SpassageQ is an adjunct to and is not intended to replace vital signs monitoring. The device is intended to provide additional information for use during patient monitoring in a healthcare facility. The device is not intended for making clinical decisions regarding patient treatment or for diagnostic purposes.
The device is intended for an adult population.
The SpassageQ consists of:
- An automated algorithm to calculate data, generate the qSOFA score, and raise an alarm when needed.
- An HL7 message receiver to handle incoming connection attempts from HL7 gateway systems, parse HL7 messages, and check the validity of HL7 messages.
- A qSOFA score module to calculate patients' qSOFA scores and offer their last 72 hours of history data.
- A web-based dashboard to render patients' qSOFA scores in a visually distinctive way depending on their value and enable intended users to be notified of patients with suspected infection.
The SpassageQ system works in the following sequence:
- Receive patient data from the HL7 gateway system.
- Extract 6 vital signs from the patient data and store them in the database.
- The Glasgow Coma Scale (GCS) of a patient is stored in the database when a user submits it.
- SpassageQ calculates the qSOFA score automatically and stores the result in the database.
- SpassageQ delivers the 6 vital signs, qSOFA score, and GCS to users.
- When the patient's qSOFA score is 2 points or higher, the users are notified of the patient through a visual alarm, and the alarm shall be reviewed by a qualified practitioner.
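The notification rule in the last step is a fixed threshold on the computed score: a visual alarm fires when qSOFA reaches 2 or more, and the alarm is then reviewed by a practitioner rather than triggering any automated action. A sketch (the function and constant names are illustrative, not from the submission):

```python
# Per the description: a visual alarm is raised when qSOFA is 2 points
# or higher. The alarm is informational and reviewed by a practitioner.
ALARM_THRESHOLD = 2

def needs_visual_alarm(qsofa: int) -> bool:
    """Flag a patient for the dashboard's visual alarm when the qSOFA
    score meets or exceeds the 2-point threshold."""
    return qsofa >= ALARM_THRESHOLD

needs_visual_alarm(3)  # -> True
needs_visual_alarm(1)  # -> False
```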
This FDA 510(k) summary for the SpassageQ device does not explicitly describe acceptance criteria or a dedicated study for proving the device meets performance claims through clinical evaluation.
The document states that "Clinical testing was not needed in this 510(k) to support the substantial equivalence of the subject device to the predicate device." Therefore, information regarding acceptance criteria, reported performance, sample sizes, ground truth establishment, or multi-reader studies for device performance evaluation is absent from this submission.
The "Performance Data" section primarily focuses on non-clinical performance and adherence to various ISO and IEC standards related to risk management, software lifecycle, usability, and alarm systems. These are essential for software development and safety but do not assess the accuracy or effectiveness of the qSOFA score calculation itself directly against a clinical outcome.
Based on the provided text, here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The document does not define specific acceptance criteria for the qSOFA score calculation's accuracy or outcome prediction. It also does not report specific performance metrics for the device's ability to identify patients at greater risk for a poor outcome, as no clinical performance study was submitted.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Cannot be provided. No clinical test set description is available. The device receives patient data from "already cleared patient monitoring devices."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Cannot be provided. No clinical test set with expert-established ground truth was presented.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Cannot be provided. No clinical test set with ground truth adjudication was presented.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Cannot be provided. No MRMC study was conducted or reported.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Partially answerable, but not with performance metrics. The SpassageQ is a standalone software device that calculates the qSOFA score automatically. However, there's no reported standalone performance study assessing its accuracy against a clinical ground truth. The device effectively performs its function (calculating qSOFA) as a standalone algorithm based on input vital signs.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Cannot be provided for device performance. No ground truth for evaluating the qSOFA score's predictive ability was used in a performance study for this 510(k). The qSOFA score itself is a standardized calculation based on specific vital sign thresholds, and the "ground truth" for its calculation is simply correct arithmetic. The clinical relevance of the qSOFA score is established in medical literature, not via a specific clinical study for this device's submission.
8. The sample size for the training set
- Cannot be provided. The SpassageQ calculates a rule-based score (qSOFA) and does not appear to use a machine learning model that requires a "training set" in the traditional sense. Its function is based on predefined criteria (SBP ≤ 100 mmHg, RR ≥ 22 bpm, GCS < 15).
9. How the ground truth for the training set was established
- Not applicable. As the device calculates a rule-based score (qSOFA), there is no training set or ground truth establishment for a machine learning model. The qSOFA criteria themselves are the "ground truth" for the calculation.
Summary of what's described in the document regarding Device Performance:
The SpassageQ's performance data focused on non-clinical aspects:
- System-level tests: To verify the software's functionality and that it calculates data as expected.
- Performance tests: Likely referring to software performance (e.g., speed, reliability) rather than clinical accuracy.
- Safety testing based on hazard analysis: To identify potential risks.
- Cybersecurity issues: Addressed to ensure data security.
- Adherence to standards: ISO 14971 (risk management), IEC 62304 (software life-cycle), IEC 60601-1-6 & IEC 62366-1 (usability), IEC 60601-1-8 (alarm systems).
The document explicitly states "Clinical testing was not needed in this 510(k) to support the substantial equivalence of the subject device to the predicate device." This indicates that the FDA considered the device's function (calculating a well-established score) and its non-clinical performance sufficient to demonstrate substantial equivalence without requiring a specific clinical validation study of the qSOFA score's predictive accuracy by the device itself. The clinical utility of the qSOFA score is presumed based on existing medical literature.
(443 days)
The T3 Platform™ software features the T3 Data Aggregation & Visualization software module version 5.0 and the T3 Adult Risk Analytics Engine software module version 1.0.
The T3 Data Aggregation & Visualization software module is intended for the recording and display of multiple physiological parameters of the adult, pediatric, and neonatal patients from supported bedside devices. The software module is not intended for alarm notification or waveform display, nor is it intended to control any of the independent bedside devices to which it is connected. The software module is intended to be used by healthcare professionals for the following purposes:
- To remotely consult regarding a patient's status, and
- To remotely review other standard or critical near real-time patient data in order to aid in clinical decisions and deliver patient care in a timely manner.
The T3 Data Aggregation & Visualization software module can display numeric physiologic data captured by other medical devices:
- Airway flow, volume, and pressure
- Arterial blood pressure (invasive and non-invasive; systolic, diastolic, and mean)
- Bispectral index (BIS, signal quality index, suppression ratio)
- Cardiac index
- Cardiac output
- Central venous pressure
- Cerebral perfusion pressure
- End-tidal CO2
- Heart rate
- Heart rate variability
- Intracranial pressure
- Left atrium pressure
- Oxygen saturation (intravascular, regional, SpO2)
- Premature ventricular counted beats
- Pulmonary artery pressure (systolic, diastolic, and mean)
- Pulse pressure variation
- Pulse rate
- Respiratory rate
- Right atrium pressure
- Temperature (rectal, esophageal, tympanic, blood, core, nasopharyngeal, skin)
- Umbilical arterial pressure (systolic, diastolic, and mean)
The T3 Data Aggregation & Visualization software module can display laboratory measurements including arterial and venous blood gases, complete blood count, and lactic acid.
The T3 Data Aggregation & Visualization software module can display information captured by the T3 Adult Risk Analytics Engine software module.
The T3 Adult Risk Analytics Engine software module calculates the Adult IDO2 Index for inadequate delivery of oxygen. The Adult IDO2 Index is indicated for use by health care professionals with post-surgical patients 18 years of age or older under intensive care and not on Mechanical Circulatory Support. The Adult IDO2 Index is derived by mathematical manipulations of the physiologic data and laboratory measurements received by the T3 Data Aggregation & Visualization software module. When the Adult IDO2 Index is increasing, it means that there is an increasing risk of inadequate oxygen delivery and attention should be brought to the patient. The Adult IDO2 Index presents partial quantitative information about the patient's cardiovascular condition, and no therapy or drugs can be administered based solely on the interpretation statements.
The Tracking, Trajectory, Trigger (T3) intensive care unit software solution allows clinicians and quality improvement teams in the ICU to aggregate data from multiple sources, store it in a database for analysis, and view the streaming data. System features include:
- Customizable display of physiologic parameters over the entire patient stay
- Configurable annotation
- Web-based visualization that may be used on any standard browser
- Minimal IT footprint
- Software-only solution; no new bedside hardware required
- Highly reliable and robust operation
- Auditable data storage
Here's a breakdown of the acceptance criteria and study details for the T3 Platform™ software:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Discriminatory Power | Met acceptance criteria |
| Range Utilization | Met acceptance criteria |
| Resolution/Limitation | Met acceptance criteria |
| Robustness | Met acceptance criteria |
2. Sample Size and Data Provenance for Test Set
- Sample Size: 4251 mixed venous oxygen saturation measurements from 634 patients.
- Data Provenance: Clinical data from different clinical sites in the US (retrospective). The data were obtained by the T3 Platform software and were de-identified.
3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)
This information is not explicitly provided in the document. The document states that the Adult IDO2 Index was "retrospectively computed on all de-identified patients" and "evaluated against the same acceptance criteria as the supportive predicate device," implying an objective ground truth related to mixed venous oxygen saturation, but not explicitly stating an expert consensus process for the test set.
4. Adjudication Method for Test Set
This information is not explicitly provided in the document. The evaluation was against objective acceptance criteria and not described as involving human adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned for this device. The study focused on the standalone performance of the Adult IDO2 Index against established acceptance criteria.
6. Standalone Performance Study
Yes, a standalone study was done. The Adult IDO2 Index algorithm's performance was validated using clinical datasets.
7. Type of Ground Truth Used
The ground truth used was based on mixed venous oxygen saturation measurements, a physiological parameter that the Adult IDO2 Index aims to reflect (likelihood of inadequate delivery of oxygen, specifically defined as mixed venous oxygen saturation below a threshold of 50%).
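Given that definition, the ground-truth label is a simple binarization of each mixed venous oxygen saturation (SvO2) measurement at the 50% threshold. A sketch of that labeling step (variable names are illustrative; the summary does not describe the actual code):

```python
# Ground truth per the summary: inadequate oxygen delivery is defined as
# mixed venous oxygen saturation (SvO2) below a threshold of 50%.
SVO2_THRESHOLD = 50.0  # percent

def inadequate_do2(svo2_percent: float) -> bool:
    """Ground-truth label for one SvO2 measurement: True if below 50%."""
    return svo2_percent < SVO2_THRESHOLD

labels = [inadequate_do2(v) for v in (62.0, 48.5, 50.0)]
# -> [False, True, False]
```

Because the label is an objective measurement threshold rather than a human interpretation, no expert adjudication would be needed, which matches points 3 and 4 above.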
8. Sample Size for Training Set
The document does not explicitly state the sample size used for the training set. It mentions that the Adult Risk Analytics Engine version 1.0 employs the "same model of human physiology as the one utilized in Risk Analytics Engine version 8.0 cleared under K213230 for the computation of the IDO2 index in pediatric patients (0 to 12 years of age), however, the physiology model has been modified to extend the age-based parameterization." This suggests that a previous model was adapted, but details of the original or updated training data size are not provided.
9. How Ground Truth for Training Set Was Established
The document does not explicitly describe how the ground truth for the training set was established. It states that the Adult Risk Analytics Engine version 1.0 "employs the same model of human physiology" as its predecessor, which was "modified to extend the age-based parameterization." This implies the model was developed based on physiological principles and likely validated against clinical data that included mixed venous oxygen saturation measurements. However, the specific process for establishing ground truth during the training phase is not detailed.
(262 days)
The Biovitals Analytic Engine (BA Engine) is intended to be used with continuous biometric data from already cleared sensors measuring heart rate, respiratory rate, and activity in ambulatory patients being monitored in a healthcare facility or at home, during periods of minimal activity. The device learns the correlation between multiple vital signs during the patient's daily activity and builds an individualized biometric signature which is dynamically updated based on incoming data. The device computes a time series Biovitals Index (BI), which reflects changes in the patient's measured vital signs from their measured baseline, which is derived from the individualized biometric signature of the patient.
The BA Engine is a cloud-based software engine, intended to be an adjunct to and is not intended to replace vital signs monitoring. The BI is intended for daily intermittent, retrospective review by a qualified practitioner. The BA Engine is intended to provide additional information for use during routine patient monitoring. The BI is not intended for making clinical decisions regarding patient treatment or for diagnostic purposes.
The device is intended for an adult population.
Biovitals Analytics Engine consists of:
- An automated proprietary algorithm to analyze data and generate the Biovitals Index.
- A cloud-based database to store the input, intermediate output, and final output.
- A web application programming interface (API) which handles the continuous physiology data.
- A web API to query the databases and retrieve output.
- A web dashboard to render the BA Engine output in a continuous graph format, which can help intended users monitor a patient's Biovitals Index.
Biovitals Analytics Engine works in the following sequence:
- Accept input data via a secure API.
- Analyze the input data using the Biovitals Analytics Engine's proprietary algorithm, which generates the Biovitals Index:
  - Personal physiology signature database initialization (at the early stage of the algorithm, when the engine learns the patient and builds the personal baseline)
  - Biovitals Index calculation
- The Biovitals Analytics Engine generates the Biovitals Index, which is a time-series scalar value from 0 to 1.
- The output of the BA Engine is stored in a cloud-based database.
- The output API queries the databases and retrieves the output, which shall be reviewed by a qualified practitioner via a dashboard.
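The algorithm itself is proprietary and not described in the document. Purely as an illustration of the general idea of a baseline-deviation index scaled to [0, 1], and emphatically not the Biovitals method, one could imagine something like:

```python
import math

def deviation_index(current: dict, baseline_mean: dict, baseline_std: dict) -> float:
    """ILLUSTRATIVE ONLY: squash the mean absolute z-score of current
    vitals (vs. an individualized baseline) into [0, 1]. This is NOT the
    proprietary Biovitals algorithm, whose details are not disclosed."""
    zs = [abs(current[k] - baseline_mean[k]) / baseline_std[k]
          for k in current if baseline_std.get(k)]
    mean_z = sum(zs) / len(zs) if zs else 0.0
    # 0 at baseline, approaching 1 as vitals deviate from the signature.
    return 1.0 - math.exp(-mean_z)

deviation_index({"hr": 72, "rr": 14}, {"hr": 70, "rr": 14}, {"hr": 5, "rr": 2})
# small deviation from baseline -> index near 0
```

The key property this sketch shares with the described output is the bounded [0, 1] range and the dependence on an individualized baseline rather than fixed population thresholds.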
The Biovitals Analytics Engine (BA Engine) computes a time series Biovitals Index (BI) that reflects changes in a patient's vital signs from their measured baseline. The device's performance was evaluated through clinical testing to show its correlation with changes in the relationship among vital sign parameters as assessed by a panel of physicians.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Correlation with changes in vital sign parameters (PPA) | The Biovitals Index (BI) was correlated to the changes in relationship among vital sign parameters, with a lower bound of the 95% confidence interval of the positive percent agreement (PPA) greater than 0.7. |
| Within-subject variability for BI categories | Low (BI ≤ 0.3): 0.11; Moderate (0.3 < BI ≤ 0.7): 0.27; Significant (BI > 0.7): 0.17 |
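The stated acceptance criterion is on the lower bound of the 95% confidence interval of the positive percent agreement (PPA). A sketch of checking such a criterion using a Wilson score interval (the document does not say which interval method was actually used, and the counts below are hypothetical):

```python
import math

def ppa_wilson_lower(true_pos: int, false_neg: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for
    PPA = TP / (TP + FN). The choice of the Wilson method is an
    assumption; the 510(k) summary does not specify the CI method."""
    n = true_pos + false_neg
    p = true_pos / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom

# Hypothetical example: the index agrees with the physician panel on
# 45 of 50 positive cases. The criterion is met if the lower bound > 0.7.
lower = ppa_wilson_lower(45, 5)
```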
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 50 subjects.
- Data Provenance: The subjects were patients presenting at an Emergency Department and were deemed appropriate for home monitoring. The document does not specify the country of origin, but given the FDA review, it is likely from the US, or data collected in a manner suitable for US regulatory submission. The study appears to be prospective as it describes patients presenting at an ED and being monitored.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: A panel of three physicians.
- Qualifications: The document only states "a panel of three physicians" and does not provide specific qualifications (e.g., years of experience, sub-specialty).
4. Adjudication Method for the Test Set
- The document states that the performance was compared "against a panel of three physicians evaluating the changes in the relationship among the patients' vital sign parameters." This implies a consensus or majority opinion (e.g., 2+1, where at least two physicians agree). However, the specific adjudication method (e.g., 2+1, 3+1, none) is not explicitly stated.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- No, a formal MRMC comparative effectiveness study was not explicitly described for evaluating how human readers improve with AI vs. without AI assistance. The study described focuses on the standalone performance of the BI in correlating with physician assessments of vital sign changes, rather than comparing human reader performance with and without AI assistance. The device is intended as an "adjunct" for "daily intermittent, retrospective review," suggesting it provides additional information to a practitioner, which could imply human-in-the-loop, but the study design does not directly evaluate this human improvement.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, a standalone performance evaluation of the algorithm was conducted. The study evaluated the Biovitals Index (BI) directly against the assessment of a panel of physicians. The PPA (Positive Percent Agreement) of the BI itself was measured against the physician panel's assessment, indicating a standalone performance evaluation.
7. The Type of Ground Truth Used
- The ground truth used was expert consensus / clinical assessment from a panel of three physicians. They evaluated the "changes in the relationship among the patients' vital sign parameters."
8. The Sample Size for the Training Set
- The document does not specify the sample size used for the training set. It only describes the methodology for establishing the personalized baseline: "The baseline is initially established using data from the first 24 hours and updated periodically as new data is received." And for the within-subject variability calculation: "The expected within-subject variability was computed using data from a clinical study involving 50 emergency department patients that were deemed appropriate for home monitoring." This suggests the 50 patients mentioned for the test set might also be involved in data used for variability computation, but it doesn't describe the training set for the core algorithm that generates the BI.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how ground truth (or the equivalent "gold standard") for the training set was established. Instead, it explains how the BA Engine "learns the correlation between multiple vital signs during the patient's daily activity and builds an individualized biometric signature which is dynamically updated based on incoming data." This is described as a personalized, dynamic baseline for each patient, rather than a pre-established, universal ground truth from a large training dataset. The algorithm "learns" from the individual patient's own physiological data to establish their personal baseline.
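As a hedged illustration of the baseline behavior described above ("initially established using data from the first 24 hours and updated periodically as new data is received"), a minimal sketch using exponentially weighted statistics follows. The class, parameters, and update rule are all hypothetical; the actual learning mechanism is not disclosed:

```python
import numpy as np

class PersonalBaseline:
    """Dynamically updated per-patient baseline (exponentially weighted).

    The mean and variance of each vital sign are seeded from an initial
    window (e.g. the first 24 hours) and nudged toward each new sample.
    """

    def __init__(self, initial_window: np.ndarray, alpha: float = 0.01):
        self.alpha = alpha
        self.mean = initial_window.mean(axis=0)
        self.var = initial_window.var(axis=0)

    def update(self, sample: np.ndarray) -> None:
        """Fold one new observation into the running baseline."""
        delta = sample - self.mean
        self.mean = self.mean + self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta**2)

    def deviation(self, sample: np.ndarray) -> np.ndarray:
        """Per-parameter z-score of a sample against the current baseline."""
        return (sample - self.mean) / np.sqrt(self.var + 1e-9)
```

Under this scheme the baseline drifts with the patient's own data, which matches the document's description of an individualized signature "dynamically updated based on incoming data" rather than a fixed, externally labeled ground truth.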
(233 days)
The T3 Software is intended for the recording and display of multiple physiological parameters of adult, pediatric and neonatal patients from supported bedside devices. T3 is not intended for alarm notification or waveform display, nor is it intended to control any of the independent bedside devices to which it is connected. T3 is intended to be used by healthcare professionals for the following purposes:
- To remotely consult regarding a patient's status, and
- To remotely review other standard or critical near real-time patient data in order to aid in clinical decisions and deliver patient care in a timely manner.
T3 can display numeric physiologic data captured by other medical devices:
- Airway flow, volume and pressure
- Arterial blood pressure (invasive and non-invasive, systolic, diastolic, and mean)
- Bispectral index (BIS, signal quality index, suppression ratio)
- Cardiac Index
- Cardiac output
- Central venous pressure
- Cerebral perfusion pressure
- End-tidal CO2
- Heart rate
- Heart rate variability
- Intracranial pressure
- Left atrium pressure
- Oxygen saturation (intravascular, regional, SpO2)
- Premature ventricular counted beats
- Pulmonary artery pressure (systolic, diastolic, and mean)
- Pulse pressure variation
- Pulse Rate
- Respiratory rate
- Right atrium pressure
- Temperature (rectal, esophageal, tympanic, blood, core, nasopharyngeal, skin)
- Umbilical arterial pressure (systolic, diastolic, and mean)
It can also display laboratory measurements including arterial and venous blood count, and lactic acid.
T3 includes a Patient Risk Analytics Engine that calculates an index (the Inadequate Oxygen Delivery Index) that is indicated for use by health care professionals with post-surgical neonatal patients weighing 2 kg or more under intensive care. The Inadequate Oxygen Delivery Index is derived by mathematical manipulations of the physiologic data and laboratory measurements received by T3. When the index is elevated, it means that there is increased risk of inadequate oxygen delivery and attention should be brought to the patient. The index presents partial quantitative information about the patient's cardiovascular condition, and no therapy or drugs can be administered based solely on the interpretation statements.
WARNING: T3 Software is not an active patient monitoring system. It is intended to supplement and not replace any part of the hospital's device monitoring. Do not rely on the T3 Software Solution as the sole source of patient status information.
The Tracking, Trajectory, Trigger (T3) intensive care unit software solution allows clinicians and quality improvement teams in the ICU to aggregate data from multiple sources, store it in a database for analysis, and view the streaming data in real-time. System features include:
- Customizable display of physiologic parameters over entire patient stay.
- Configurable annotation
- Web-based visualization that may be used on any standard browser
- Minimal IT footprint
- Software-only solution: no new bedside hardware required
- Highly reliable and robust operation.
- Auditable data storage.
The subject device is a modification of the T3 Software that includes a Risk Analytics Engine that computes an Inadequate Oxygen Delivery Index (IDO2). The IDO2 Index is derived by mathematical manipulations of the physiologic data and laboratory measurements received by T3. This index provides an interpretation of how different the patient's physiologic measures are from normality.
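The actual mathematical manipulations behind the IDO2 Index are not disclosed in the summary. As a purely illustrative sketch of scoring "how different the patient's physiologic measures are from normality," one could accumulate deviations from reference intervals; the ranges, names, and scoring rule below are hypothetical and are not the device's parameters:

```python
# Hypothetical reference ranges for illustration only; the real engine's
# inputs and parameters are not disclosed in the 510(k) summary.
NORMAL_RANGES = {
    "SpO2": (94.0, 100.0),        # %
    "lactate": (0.5, 2.2),        # mmol/L
    "heart_rate": (100.0, 160.0), # bpm (neonate)
}

def deviation_from_normal(measures: dict) -> float:
    """Toy 'distance from normality' score: 0 when every measure is in
    range, growing as measures fall outside their reference intervals."""
    total = 0.0
    for name, value in measures.items():
        lo, hi = NORMAL_RANGES[name]
        if value < lo:
            total += (lo - value) / (hi - lo)   # normalized shortfall
        elif value > hi:
            total += (value - hi) / (hi - lo)   # normalized excess
    return total
```

The point of the sketch is only the shape of such an index: it is zero for in-range physiology and increases monotonically with deviation, which is the qualitative behavior the summary attributes to an elevated IDO2.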
The provided text describes the T3 Software, version 2.0.1, which includes a Patient Risk Analytics Engine calculating an Inadequate Oxygen Delivery Index (IDO2). The document is a 510(k) summary, making it a submission to the FDA for market clearance, stating that the new version is substantially equivalent to previous versions and other predicate devices.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based only on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present specific, quantitative acceptance criteria for the IDO2 Index. Instead, it relies on demonstrating that the device's software functions as intended and that the IDO2 Index correlates with changes in physical status, similar to a predicate device.
| Acceptance Criteria (Implicit from document) | Reported Device Performance |
|---|---|
| Software Functionality: The software records and displays multiple physiological parameters accurately from supported bedside devices. | "Software verification and validation testing was conducted for the subject device... The results of this testing demonstrate the safety and effectiveness of the subject T3 software product (Ver. 2.0.1) is comparable to that of the predicate T3 software products (Ver. 1.9)." (Implies successful operation and comparable accuracy to predicate for data display and recording) |
| IDO2 Index Calculation & Interpretation: The Patient Risk Analytics Engine calculates the IDO2 Index correctly based on mathematical manipulations, and when elevated, indicates an "increased risk of inadequate oxygen delivery." (Accuracy of calculation and qualitative interpretation) | "The Inadequate Oxygen Delivery Index is derived by mathematical manipulations of the physiologic data and laboratory measurements received by T3. When the index is elevated, it means that there is increased risk of inadequate oxygen delivery..." (The document states how it's derived and what an elevated index means, implying it performs this function as designed.) |
| Clinical Correlation of IDO2 Index: The IDO2 Index correlates with changes in the patient's physical status. | "Additionally, validation study results using clinical data gathered in the intended patient population demonstrate the IDO2 Index included in the subject device correlates with changes in the patient's physical status, as does the Visensia Index." (Directly states correlation observed in a validation study.) |
| Safety and Effectiveness: The device is safe and effective and raises no new questions of safety or effectiveness compared to predicate devices. | "The results of this testing demonstrate the safety and effectiveness of the subject T3 software product (Ver. 2.0.1) is comparable to that of the predicate T3 software products (Ver. 1.9) and the Visensia device." and "No new questions of safety or effectiveness are raised as a result of the differences when compared to the predicate device and the data provided in the submission show that the subject device is substantially equivalent to the legally-marketed predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "Additionally, validation study results using clinical data gathered in the intended patient population demonstrate the IDO2 Index included in the subject device correlates with changes in the patient's physical status, as does the Visensia Index."
- Sample Size: The sample size for the "validation study" is not specified in the provided text.
- Data Provenance: The country of origin of the data is not specified. The text only mentions "clinical data gathered in the intended patient population." It is also not specified if the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide any information regarding the number of experts used or their qualifications for establishing ground truth in the validation study. The ground truth refers to "changes in the patient's physical status," but how this was determined by experts is not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not specify any adjudication method used for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. The study focused on the correlation of the IDO2 Index with patient physical status. There is no mention of human readers or their improvement with or without AI assistance. The device is intended to aid clinical decisions and provide quantitative information, but not in a comparative effectiveness study involving human interpretation.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, the validation study appears to be a standalone performance study of the algorithm. The statement "validation study results... demonstrate the IDO2 Index... correlates with changes in the patient's physical status" indicates an evaluation of the algorithm's output (the IDO2 Index) against an independent measure of patient status, without involving human-in-the-loop performance measurement.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used is "changes in the patient's physical status." The specific method for determining "changes in the patient's physical status" (e.g., expert consensus, specific clinical criteria, or other outcomes data) is not detailed in the provided text.
8. The sample size for the training set
The document does not specify a separate "training set" or its sample size. The description focuses on the validation of the IDO2 Index, which implies the algorithm's parameters were already established.
9. How the ground truth for the training set was established
Since a "training set" is not mentioned, the document does not provide information on how ground truth for any training set was established. The IDO2 Index is described as being "derived by mathematical manipulations of the physiologic data and laboratory measurements," suggesting a formulaic or rule-based derivation rather than a machine learning model trained on labeled data in the context usually implied by "training set ground truth."
(276 days)
The Personalized Physiology Analytics Engine (PPA Engine) is intended to be used with data from already cleared sensors measuring physiological parameters, including heart rate, respiratory rate, and activity in ambulatory patients being monitored in a healthcare facility or at home. The device provides a time series Multivariate Change Index (MCI) which indicates whether the relationships among the patient's monitored vital signs change from those measured at baseline, which has been derived from measurements previously obtained during routine activities of daily living. The MCI is based on an integrated computation evaluating changes in the parameters and their relationships to each other.
The PPA Engine is an adjunct to and is not intended to replace vital signs monitoring. The MCI is intended for daily intermittent, retrospective review by a qualified practitioner. The PPA Engine is intended to provide additional information for use during routine patient monitoring. The MCI is not intended for making clinical decisions regarding patient treatment or for diagnostic purposes.
The PhysIQ Personalized Physiology Analytics Engine ("PPA Engine") is a computerized analysis software program that is designed for detecting change in the relationships among the patient's vital signs throughout dynamic physical activity, based on data input from multi-parameter vital sign monitoring devices. The PPA Engine first "learns" a patient's personalized baseline, defined by the relationship among the vital signs derived from measurements obtained during routine activities of daily living. Once the baseline vital sign relationships are established, it analyzes the subsequent data to assess how the relationships among the vital signs incoming during the monitoring period compare to the established baseline. The PPA Engine can analyze data collected wherever the patient is monitored, reflecting a patient's activities of daily living. The device is intended for monitoring ambulatory patients.
The PPA Engine requires vital sign inputs of Heart Rate (HR), Respiration Rate (RR) and Activity (ACT) (body motion). The PPA Engine can accept input from commercial vital sign monitors or combinations of monitors that can provide multivariate observations of these vital signs.
The PPA Engine calculates the Multivariate Change Index (MCI), a scalar index between 0 and 1, which represents the likelihood that the relationships among the patient's vital signs are different from those at baseline, which was established during routine activities of daily living. An MCI value closer to zero (0) indicates that the monitored relationships among the vital signs are similar to the learned baseline. An MCI value closer to one (1) indicates that the patient's monitored relationships among the vital signs are likely to be different from the learned baseline.
The MCI is also presented as a time series (MCI over time) and is intended for retrospective review by the clinician. The MCI is not intended to replace standard patient monitoring; rather, it was designed to supplement standard monitoring of ambulatory patients.
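The MCI computation itself is proprietary. A minimal sketch of the idea described above (compare the correlation structure of a monitoring window against the learned baseline, then squash the difference into [0, 1]) might look like the following, with the function names and the exponential squashing chosen purely for illustration:

```python
import numpy as np

def baseline_correlation(window: np.ndarray) -> np.ndarray:
    """Correlation matrix among HR, RR, ACT over a baseline period.

    window: (n_samples, 3) array of simultaneous HR/RR/ACT readings.
    """
    return np.corrcoef(window, rowvar=False)

def mci(window: np.ndarray, baseline_corr: np.ndarray, scale: float = 1.0) -> float:
    """0-1 index of how far the current correlation structure is from baseline."""
    diff = np.corrcoef(window, rowvar=False) - baseline_corr
    d = np.linalg.norm(diff)            # Frobenius norm of the change
    return float(1.0 - np.exp(-d / scale))  # maps [0, inf) into [0, 1)
```

A window whose vital-sign relationships match the learned baseline scores near 0, and a window whose relationships have decoupled scores closer to 1, matching the direction of the MCI scale in the summary.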
The provided 510(k) summary for the Personalized Physiology Analytics Engine (PPA Engine) indicates a substantial equivalence determination based on various testing. However, it does not explicitly provide a table of acceptance criteria with corresponding performance metrics in the format requested. The document describes the types of testing performed and the conclusions drawn from those tests, but not specific quantifiable targets or results.
Here's a breakdown of the available information based on your request:
1. A table of acceptance criteria and the reported device performance
The provided document does not contain a table with specific acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) and their corresponding reported device performance metrics. Instead, it states that the device "performs as intended per its specifications" and that its output "correlates with changes in the relationships among vital signs compared with baseline."
Summary of Device Performance (as reported implicitly):
- Bench Testing:
- Verification testing confirmed the device meets its specifications.
- Validation testing showed correlation of MCI with changes in vital sign relationships compared to baseline.
- Clinical Testing (Healthy Volunteers):
- Demonstrated that the MCI correlates with changes in monitored vital sign relationships compared to the subject's baseline, fulfilling its intended use.
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not explicitly stated. The document refers to "human physiological data collected" from a "perturbed clinical data study" and a "simulator data study" for bench testing validation, and "healthy volunteer studies" for clinical testing. The number of participants in these studies is not provided.
- Data Provenance:
- Bench Testing: "Perturbed clinical data study" (unspecified origin) and "Simulator data study."
- Clinical Testing: "Healthy volunteer studies were conducted under an IRB-approved non-significant risk protocol" (suggests prospective data collection). The country of origin is not specified but given the FDA submission, it's likely within the US or compliant with US standards.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Not provided. The document describes "changes in the relationships among vital signs compared with baseline" as the ground truth concept, often influenced by physiological events rather than expert interpretation of an image or signal.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable. The ground truth for this device appears to be based on physiological changes, not expert interpretation requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned. The device is described as providing a "Multivariate Change Index (MCI)" for "daily intermittent, retrospective review by a qualified practitioner" as an adjunct to vital signs monitoring. It explicitly states it is not intended for making clinical decisions regarding patient treatment or for diagnostic purposes, which suggests it's not a primary diagnostic tool to be compared in an MRMC study.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, the testing described appears to be a standalone performance evaluation of the algorithm. The "Perturbed clinical data study" and "Simulator data study" for bench testing and the "healthy volunteer studies" for clinical testing assessed how the MCI output correlated with physiological changes, thus evaluating the algorithm's performance in generating the MCI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used appears to be objective physiological changes or perturbations.
- For bench testing, this involved validating correlation of MCI "with changes in the relationships among vital signs compared to baseline."
- For clinical testing, "healthy volunteer studies... during a trip with a substantial altitude change (causing natural perturbation in relationships among vital signs)" were used, and results demonstrated the MCI correlates with "change in the monitored relationships among the vital signs compared to the subject's baseline."
8. The sample size for the training set
Not provided. The document mentions the PPA Engine "learns" a patient's personalized baseline from "measurements previously obtained during routine activities of daily living," but it does not specify the sample size or duration of this "learning" phase for general model development, nor does it distinguish between training and testing sets in a conventional machine learning sense for the submitted evidence. The learning described is patient-specific baseline establishment.
9. How the ground truth for the training set was established
The document describes a personalized baseline for each patient, established from "measurements previously obtained during routine activities of daily living." This implies that the system identifies a "normal" or baseline state for an individual based on their own physiological data during typical activities. There is no mention of external expert labeling or a separate "ground truth" for a training set in the typical sense of a supervised learning model, as the device's normality is patient-specific and learned from their own data.