Search Results
Found 14 results
510(k) Data Aggregation
(118 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia M3/M4 (M3000A/M3046A) Patient Monitoring System, Rel.B is intended for monitoring, recording, and alarming of multiple physiological parameters of adults, pediatrics, and neonates in the hospital and medical transport environments.
The Hewlett-Packard Viridia Patient Monitor M3000A/M3046A with M3015A (Viridia M3/M4, Rel. B). The modification is a firmware- and software-based change that adds the M3015A Module to the portable Viridia M3/M4 Patient Monitor System to allow sidestream CO2, a second invasive blood pressure, and temperature measurements with the unit.
The provided text is a 510(k) summary for the Hewlett-Packard Viridia Patient Monitor M3000A/M3046A with M3015A. It describes a modification to an existing device: the addition of the M3015A Module to allow sidestream CO2, a second invasive blood pressure, and temperature measurements.
However, the summary does not contain the detailed information required to fill out your request, which typically applies to AI/ML or diagnostic devices with specific performance metrics. This document describes a hardware and firmware modification to a patient monitor, and its testing focuses on compliance with standards and equivalence to predicate devices rather than specific performance metrics like accuracy, sensitivity, or specificity for a diagnostic algorithm.
Therefore, many of the requested fields cannot be extracted from this document.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: The text states, "Pass/Fail criteria were based on standards, where applicable, and on the specifications cleared for the predicate devices." However, it does not detail these specific criteria (e.g., specific accuracy ranges for CO2, blood pressure, or temperature).
- Reported Device Performance: The text generally states, "The test results showed substantial equivalence." It does not provide quantitative performance metrics for the added functionalities (CO2, second invasive BP, temperature).
Acceptance Criteria | Reported Device Performance |
---|---|
Based on standards and specifications of predicate devices (details not provided) | Test results showed substantial equivalence. (Specific quantitative results not provided) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Sample Size for Test Set: Not specified. The document mentions "simulated systems" but gives no numbers.
- Data Provenance: Not specified. It only mentions "simulated systems."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not applicable/Not specified. This type of device (patient monitor) does not typically involve expert review for ground truth in the same way a diagnostic imaging AI would. Testing would involve calibrated reference instruments.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable/Not specified. Adjudication methods are typically relevant for human-interpretable results, which is not the primary focus of this device's validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. This is not an AI/ML diagnostic device and such studies would not be relevant.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable. This is a hardware/firmware modification to a patient monitor, not an AI algorithm. Its performance is inherent in its measurement capabilities.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Likely calibrated reference instruments and physical standards. The text mentions "simulated systems," which implies controlled inputs with known, precise values against which the device's measurements would be compared.
8. The sample size for the training set:
- Not applicable/Not specified. This device does not use machine learning in a way that would typically involve a "training set" for an algorithm. Its operation is based on established physiological measurement principles and programmed logic.
9. How the ground truth for the training set was established:
- Not applicable. No training set as described for an AI/ML model.
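The bench-style verification described in point 7 can be pictured as a tolerance check of device readings against a simulator's known reference values. The following sketch is purely illustrative; the parameter, values, and tolerance are invented, not taken from the 510(k):

```python
# Hypothetical bench-test check: compare device readings captured from a
# patient simulator against the simulator's known reference value.
# All numbers here are invented for illustration, not from the 510(k).

def within_tolerance(readings, reference, tolerance):
    """True if every reading falls within +/- tolerance of the reference."""
    return all(abs(r - reference) <= tolerance for r in readings)

# e.g. sidestream CO2: simulator set to 38 mmHg, hypothetical +/-2 mmHg spec
co2_readings = [37.2, 38.5, 38.9, 37.8]
print(within_tolerance(co2_readings, reference=38.0, tolerance=2.0))  # True
```

A real verification protocol would repeat such checks per parameter, per range, per the applicable standard; the summary itself reports only pass/fail outcomes.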
(205 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia M3/M4 Patient Monitoring System, Rel.B is intended for monitoring, recording, and alarming of multiple physiological parameters of adults, pediatrics, and neonates in the hospital and medical transport environments.
- List of supported measurements: (a) ECG, (b) Respiration, (c) Invasive blood pressure, (d) Non-invasive blood pressure, (e) SpO2 and Pleth, (f) Temperature, (g) CO2
The modification is a firmware- and software-based change that adds the M3016A Module to the portable Viridia M3/M4 Patient Monitor System to allow CO2, Pressure, and Temperature measurements with the unit.
The provided text describes a 510(k) summary for the Hewlett-Packard Viridia Patient Monitor M3000A/M3046A with M3016A, which is primarily a patient monitoring system. The information focuses on the substantial equivalence of the new device (with the M3016A module) to previously cleared HP devices.
However, the provided text does not contain the detailed information required to fill out all the sections of your request regarding acceptance criteria and a specific study proving the device meets those criteria, especially in the context of an AI/algorithm-driven device. The document is for a patient monitor and its new module, not an AI diagnostic tool as implied by many of your questions (e.g., ground truth, MRMC study, standalone algorithm performance).
Based on the available text, here's what can be extracted:
1. A table of acceptance criteria and the reported device performance
The document broadly states: "Pass/Fail criteria were based on standards, where applicable, and on the specifications cleared for the predicate devices. The test results showed substantial equivalence." This is a very high-level statement and does not provide specific numerical acceptance criteria or performance metrics for parameters like sensitivity, specificity, accuracy, or other typical AI performance measures.
Acceptance Criteria (General) | Reported Device Performance |
---|---|
Based on standards, where applicable. | Test results showed substantial equivalence to predicate devices. |
Based on specifications cleared for predicate devices. | Test results showed substantial equivalence to predicate devices. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The text states: "Verification, validation, and testing activities were conducted to establish the performance and reliability characteristics of the new module using simulated systems."
This indicates that a real-world patient test set was not used. No sample size for a test set is mentioned, nor is data provenance or whether it was retrospective/prospective, as it was based on simulated systems.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. Ground truth from experts would be relevant for diagnostic devices interpreting findings, not for a patient monitor measuring physiological parameters with simulated systems.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. This is relevant for expert-adjudicated ground truth in diagnostic studies, which was not performed here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. This device is a patient monitor, not an AI-assisted diagnostic tool that would involve human readers interpreting images or data with and without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is a patient monitor. Its "performance" refers to its ability to accurately measure physiological parameters. The text suggests testing was done on the system and its components. While it functions without human "interpretation" of its measurements in the diagnostic sense, it's not an algorithm in the sense of a standalone diagnostic AI. Its tests were against established specifications.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the verification and validation, the "ground truth" or reference was likely derived from the specifications and standards used for the simulated systems, ensuring the device accurately measures the intended physiological parameters within acceptable tolerances. This is based on technical specifications rather than clinical ground truth like pathology or expert consensus.
8. The sample size for the training set
Not applicable. This device is a patient monitor, not an AI model that requires a training set in the machine learning sense. The "simulated systems" were used for testing, not training.
9. How the ground truth for the training set was established
Not applicable, as there was no training set in the AI/machine learning context.
(127 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia Component Monitoring System, (M1175A/76A/77A), and Viridia 24/26 (M1205A) Rev.K with EASI™ ST segment Monitoring is indicated for: Assessment of real time ST segment analysis in adult patients. Assessment is indicated for the
The new device consists of a software enhancement enabling the CMS system to accommodate an electrode placement pattern allowing signals for deriving the 12-lead electrocardiogram from the 5-lead EASI™ electrode system. The EASI™ capability is fully compatible with the existing HP ECG frontend modules M1001A/B or M1002A/B.
The provided text is a 510(k) summary for the Hewlett-Packard Viridia Component Monitoring System with EASI™ ST Segment measurement. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study proving performance against specific acceptance criteria for a novel device. As such, much of the requested information regarding detailed study design, ground truth establishment, expert involvement, and sample sizes for training/test sets is not explicitly present in the provided document.
However, I can extract what is available and indicate where information is missing based on the document.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or detailed performance metrics from a specific study comparing the new device against such criteria. Instead, it relies on the concept of "substantial equivalence" to a predicate device, implying that its performance is comparable.
Acceptance Criteria (Not Explicitly Stated for this 510(k)) | Reported Device Performance (Implied from Substantial Equivalence) |
---|---|
(Specific quantitative thresholds for ST segment measurement accuracy, sensitivity, specificity, etc., are not provided in the document) | The device's measurement technology and transmission of ECG signals are "similar" and "essentially the same" as legally marketed predicate devices. This implies that the performance is considered equivalent to those predicate devices. |
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided in the document. The 510(k) focuses on the technological equivalence and intended use rather than a detailed clinical validation study with a specified test set.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This information is not provided in the document. Since a detailed test set and ground truth establishment methodology are not described, the involvement of experts for this purpose is also absent from the summary.
4. Adjudication Method for the Test Set
This information is not provided in the document. Without a described test set and ground truth establishment, an adjudication method is not mentioned.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
This information is not provided in the document. The submission is about a software enhancement to a monitoring system, not a diagnostic aid that would typically involve human readers. Therefore, an MRMC study is not relevant to this type of device and is not mentioned.
6. If a Standalone (Algorithm Only) Performance Study Was Done
The document describes a "software enhancement enabling the CMS system to accommodate an electrode placement pattern allowing signals for deriving the 12-lead electrocardiogram from the 5-lead EASI™ electrode system." It also states that "monitoring technology identical to that used in the predicates." This implies that the core ST segment measurement algorithm is either the same as or very similar to the predicate and that the focus is on the EASI™ electrode system's ability to provide signals for this existing algorithm. However, a standalone performance study with specific metrics for the algorithm itself (isolated from the entire monitoring system) is not explicitly detailed in this summary. The "study" here is more about demonstrating that the EASI™ technology can provide equivalent input for the existing ST segment measurement.
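For context, deriving a 12-lead ECG from the 5-lead EASI placement is conventionally a fixed linear transform: each conventional lead is reconstructed as a weighted sum of the three EASI signal vectors (ES, AS, AI). The sketch below uses invented placeholder coefficients; the actual published EASI coefficients are not given in this document:

```python
# Sketch of deriving conventional ECG leads as linear combinations of the
# three EASI signal vectors (ES, AS, AI). The coefficients below are
# placeholders for illustration, NOT the published EASI transform.

# hypothetical per-lead weights: lead name -> (w_ES, w_AS, w_AI)
COEFFS = {
    "I":  (0.70, -0.20, 0.10),
    "II": (0.30,  0.60, 0.25),
    "V2": (-0.10, 0.90, 0.40),
}

def derive_lead(lead, es, as_, ai):
    """Linearly combine one sample of the EASI vectors into a derived lead."""
    w_es, w_as, w_ai = COEFFS[lead]
    return w_es * es + w_as * as_ + w_ai * ai

# one sample of each EASI vector (millivolts, illustrative)
print(round(derive_lead("I", es=0.5, as_=0.1, ai=-0.2), 4))  # 0.31
```

Because the transform is a fixed linear mapping rather than a learned model, validating it amounts to comparing derived leads against simultaneously acquired standard leads, not to training or testing an algorithm in the ML sense.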
7. The Type of Ground Truth Used
Given the nature of the device (ST segment monitoring via ECG), the ground truth for such a device, if a study were conducted, would typically be a clinically accepted standard for ECG interpretation (e.g., expert cardiologists' readings of standard 12-lead ECGs obtained simultaneously, or sometimes correlational studies with other diagnostic tests). However, the document does not explicitly state the type of ground truth used for any performance evaluation, as it relies on substantial equivalence.
8. The Sample Size for the Training Set
This information is not provided in the document. As the device is an enhancement to an existing system, and the technology is described as "identical to that used in the predicates," it's possible that no "training set" in the context of a new machine learning algorithm was used. If a training set was used for the original development of the ST segment algorithm, that information is not part of this 510(k) summary.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the document, for the reasons outlined in point 8.
(90 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia CMS Patient Monitoring System, Rel.K, with M1027A EEG Measurement Module is intended for measurement and display of the electroencephalogram of adults, pediatrics, and neonates in the Operating Room and intermediate/critical care environments.
The modification is the addition of new applications software and firmware that involves the addition of the M1027A EEG Module to the HP M1175A/76A/77A Component Monitoring System to allow the measurement of electroencephalographic signals.
The provided text describes a 510(k) summary for the Hewlett-Packard Viridia M1175A/76A/77A Component Monitoring System with M1027A EEG module. However, it does not contain detailed acceptance criteria or a study design structured in the way requested by the prompt for a device performance evaluation. The document primarily focuses on the device's substantial equivalence to predicate devices and describes general validation and testing activities.
Here's an attempt to extract and infer information based on the provided text, highlighting the missing details:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
System level tests pass | Test results showed substantial equivalence. |
Integration tests pass | Test results showed substantial equivalence. |
Environmental tests pass | Test results showed substantial equivalence. |
Safety testing from hazard analysis passes | Test results showed substantial equivalence. |
Interference testing passes | Test results showed substantial equivalence. |
Hardware testing passes | Test results showed substantial equivalence. |
Standards compliance | Pass/Fail criteria based on standards, where applicable. |
Specifications cleared for predicate devices met | Pass/Fail criteria based on specifications cleared for predicate devices. |
Acceptable applicability, usability, and efficiency during clinical performance evaluation | More than 90% of users found applicability, usability, and efficiency acceptable or better. |
No adverse events (beyond minor skin irritation) | Only one instance of minor skin irritation reported. |
Missing Information: Specific quantitative thresholds for hardware, software, and clinical performance are not detailed. For example, what constitutes "acceptable" usability or specific signal-to-noise ratios for EEG acquisition are not provided.
2. Sample size used for the test set and the data provenance
The text mentions "Clinical performance evaluations were conducted with the EEG module to validate two channel functionality under conditions existing in the indicated hospital environments."
- Sample Size: Not specified. It only refers to "users."
- Data Provenance: The studies were conducted "under conditions existing in the indicated hospital environments" for the specified patient populations (adult, pediatric, and neonatal). No country of origin is explicitly mentioned, but the submitter is from Germany and the notification is to the US FDA. It's implied to be prospective clinical observations, but not explicitly stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The clinical evaluation focused on "applicability, usability, and efficiency" as perceived by "users," not on diagnostic accuracy requiring expert ground truth establishment.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided as the study did not focus on diagnostic accuracy requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC study was done. This device is a measurement module for EEG signals, not an AI-assisted diagnostic tool for human readers. The clinical evaluation focused on system usability and functionality.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The document implies that the device's core functionality (measuring EEG signals) was evaluated as a standalone component within the larger monitoring system during "system level tests, integration tests, environmental tests, safety testing from hazard analysis, interference testing, and hardware testing." These tests would assess the algorithm's performance in signal acquisition and processing. However, a separate "algorithm only" performance study in the context of diagnostic accuracy (e.g., classifying EEG patterns) is not described.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the clinical performance evaluation mentioned, the "ground truth" was user perception of "applicability, usability, and efficiency," rather than a clinical ground truth for diagnostic accuracy (like expert consensus on EEG abnormalities or pathology). For the technical tests, the "ground truth" would be established by engineering specifications, relevant standards, and the performance of predicate devices.
8. The sample size for the training set
This information is not provided. The document describes validation and testing activities, but not the development or training of any machine learning algorithms. The device's functionality seems to be based on established signal processing and measurement principles rather than a learning-based approach requiring a "training set."
9. How the ground truth for the training set was established
This information is not applicable/not provided as there is no mention of a training set or machine learning components requiring a "ground truth" for training.
(156 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia Component Monitoring System, (M1175A/76A/77A), and Viridia 24/26 (M1205A) Rev.K with EASI-ECG Option is indicated for:
- Assessment of symptoms that may be related to rhythm disturbances of the heart: patients with palpitations, the evaluation of arrhythmias in adult and pediatric patients.
- Assessment of risk in patients with or without symptoms of arrhythmia.
- Assessment of efficacy of antiarrhythmic therapy.
- Assessment of pacemaker function.
- Assessment of symptomatic or asymptomatic patients to evaluate for ischemic heart disease.
- Assessment is indicated for single-hospital environment.
The new device consists of a software enhancement enabling the CMS system to accommodate an electrode placement pattern allowing signals for deriving the 12-lead electrocardiogram from the 5-lead EASI electrode system. The EASI option is fully compatible with the existing HP ECG frontend modules M1001A/B or M1002A/B.
This 510(k) summary (K990476) describes a software enhancement to the Hewlett-Packard Viridia Component Monitoring System (CMS) and Viridia 24/26 Rev. K, adding an EASI-ECG Option. This option allows the system to derive a 12-lead electrocardiogram from a 5-lead EASI electrode system. The submission focuses on substantial equivalence to predicate devices rather than presenting a performance study with acceptance criteria for the EASI-ECG option itself.
Here's a breakdown of the requested information based on the provided text, highlighting what is present and what is not:
1. Table of Acceptance Criteria and Reported Device Performance
No specific acceptance criteria for the EASI-ECG option's performance (e.g., accuracy of derived 12-lead ECG compared to standard 12-lead ECG) are mentioned. The submission focuses on equivalence to predicate devices for the overall system's function and intended use. Therefore, a table for this specific information cannot be generated from the provided text.
2. Sample Size Used for the Test Set and Data Provenance
Not applicable. The submission does not describe a clinical study of the EASI-ECG option proving its performance against specific acceptance criteria. The focus is on substantial equivalence.
3. Number of Experts Used to Establish Ground Truth and Qualifications
Not applicable, as no performance study is detailed.
4. Adjudication Method
Not applicable, as no performance study is detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study is not mentioned in the provided text. The submission focuses on the functionality of the EASI-ECG option and its equivalence to predicate devices.
6. Standalone (Algorithm Only) Performance Study
No, a standalone performance study for the EASI-ECG algorithm is not described. The document discusses a "software enhancement enabling the CMS system to accommodate an electrode placement pattern," implying a feature integration rather than a standalone algorithm performance evaluation.
7. Type of Ground Truth Used
Not applicable, as no performance study is detailed with a defined ground truth.
8. Sample Size for the Training Set
Not applicable. The submission describes a software enhancement, not a machine learning algorithm requiring a training set in the typical sense.
9. How Ground Truth for the Training Set Was Established
Not applicable for the same reason as above.
Summary of Device Acceptance and Study Information (Based on Provided Text):
Feature/Criterion | Description (Based on K990476) |
---|---|
Acceptance Criteria | Not explicitly stated for the EASI-ECG option's performance. The basis for clearance is Substantial Equivalence to legally marketed predicate devices (Totemite EASI Lead System Cable K872781B and Zymed T8010 Telemetry Central Station Monitor K951370). The device's "technological characteristics are essentially the same as those of the legally marketed predicate devices" in terms of measurement technology and ECG signal transmission. |
Device Performance Reported | No specific performance metrics (e.g., accuracy, sensitivity, specificity for derived 12-lead ECG) are reported in the provided summary. The device "accommodate[s] an electrode placement pattern allowing signals for deriving the 12-lead electrocardiogram from the 5-lead EASI electrode system." |
Sample Size (Test Set) | Not applicable. No performance study with a test set is detailed. The submission focused on demonstrating substantial equivalence. |
Data Provenance (Test Set) | Not applicable. |
Number & Qualifications of Experts (Ground Truth - Test Set) | Not applicable. |
Adjudication Method (Test Set) | Not applicable. |
MRMC Comparative Effectiveness Study | No. |
Effect Size (MRMC) | Not applicable. |
Standalone Performance Study | No. The submission describes a software enhancement integrated into an existing system, cleared based on substantial equivalence. |
Type of Ground Truth Used | Not applicable, as no performance study is detailed with specific ground truth data for the EASI-ECG derived leads. The substantial equivalence argument relies on the predicate devices' established safety and effectiveness. |
Sample Size (Training Set) | Not applicable. This summary describes a software enhancement, not an AI/ML algorithm requiring a training set. |
Ground Truth Establishment (Training Set) | Not applicable. |
Conclusion from the Provided Text:
The K990476 submission for the Hewlett-Packard Viridia Component Monitoring System Rev.K with EASI-ECG Option does not present a performance study with explicit acceptance criteria, sample sizes, expert adjudication, or ground truth details for the EASI-ECG feature itself. Instead, it relies on demonstrating substantial equivalence to existing predicate devices (Totemite EASI Lead System Cable K872781B and Zymed T8010 Telemetry Central Station Monitor K951370) by asserting that the new device shares the same intended use and essentially similar technological characteristics (measurement technology and ECG signal transmission). The clearance is based on the premise that the EASI-ECG option is a software enhancement that allows an existing, cleared system to utilize a different electrode placement pattern to derive 12-lead ECG signals, implicitly assuming the derivation method's performance is acceptable within the context of the overall system's substantial equivalence to the predicate.
(13 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard family of patient monitor products is intended for monitoring, recording, and alarming of multiple physiological parameters. The devices are indicated for use in health care facilities by health care professionals whenever there is a need for monitoring the physiological parameters of adult, neonatal, and pediatric patients.
The Hewlett-Packard family of Viridia Patient Monitors individually known as the M3000A/M3046A (Viridia M3/4). The common name is patient monitor. The modification consists of the addition of software that involves only the arrhythmia and ST measurement algorithm of the measurement computer processing unit of each device.
The provided text does not contain detailed acceptance criteria or a specific study proving the device meets them. It primarily describes a 510(k) submission for a patient monitor, focusing on its substantial equivalence to previously cleared devices.
Here's an attempt to extract and infer information based on the limited text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Maintain performance and reliability characteristics of the STAR algorithm. | Substantial equivalence to predicate device specifications (implied). |
Pass system-level tests. | Tests conducted and passed (implied). |
Pass integration tests. | Tests conducted and passed (implied). |
Pass safety testing from hazard analysis. | Tests conducted and passed (implied). |
Pass interference testing. | Tests conducted and passed (implied). |
Pass hardware testing. | Tests conducted and passed (implied). |
Note: The document states that "Pass/Fail criteria were based on the specifications cleared for the predicate device and test results showed substantial equivalence." This implies that the acceptance criteria for the new device were the same as those established for the predicate device, and the new device met them. However, the specific metrics or values for these criteria are not detailed.
2. Sample size used for the test set and the data provenance
The document mentions "bench studies" for verification, validation, and testing activities. However, it does not specify the sample size used for the test set or the data provenance (e.g., country of origin of the data, retrospective or prospective). There's no indication of patient data being used for these bench studies; rather, it suggests laboratory or simulated environments.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document. The testing described focuses on the algorithm's performance against predicate device specifications, not on expert-adjudicated ground truth from patient data.
4. Adjudication method for the test set
This information is not provided in the document. Given the nature of "bench studies" and evaluation against predicate specifications, an adjudication method in the context of human expert review is unlikely to have been relevant.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not mentioned or implied. The testing described is focused on the device's adherence to its own specifications and equivalence to predicate devices, not on human reader performance with or without AI assistance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, the testing described appears to be a standalone algorithm evaluation. The document states, "Verification, validation, and testing activities were conducted to establish the performance and reliability characteristics of the STAR algorithm using bench studies." This implies evaluating the algorithm's performance independent of human interaction.
7. The type of ground truth used
The "ground truth" for the testing described seems to be the "specifications cleared for the predicate device." The device's performance was compared against these established specifications, and "test results showed substantial equivalence." There is no mention of expert consensus, pathology, or outcomes data being used as ground truth for this particular submission's testing.
8. The sample size for the training set
The document does not provide information regarding a training set or its sample size. The focus is on the performance and reliability characteristics of an existing algorithm (STAR software) that has been modified, not on the development or training of a new AI model with new data.
9. How the ground truth for the training set was established
Since no training set is mentioned or implied for the modifications described in this 510(k) summary, how its ground truth was established is not applicable or provided. The document is about a modification to an already existing and presumably validated algorithm (STAR software).
(27 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard family of patient monitor products is intended for monitoring, recording, and alarming of multiple physiological parameters. The devices are indicated for use in health care facilities by health care professionals whenever there is a need for monitoring the physiological parameters of adult, neonatal, and pediatric patients.
The name of this device is the Hewlett-Packard family of Viridia Patient Monitors individually known as the M1175A/76A/77A (Viridia CMS), the M1205A (Viridia 24/26), and the M3000A/M3046A (Viridia M3/4). The common name is patient monitor. The modification is a software based change that involves only the SpO2 algorithm of the measurement computer processing unit of each device.
The provided text describes a 510(k) submission for a software-based change to the SpO2 algorithm within the Hewlett-Packard family of Viridia Patient Monitors. Here's a breakdown of the acceptance criteria and study details based on the input:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Specifications cleared for predicate devices (SpO2 algorithm performance) | All tested sensor-monitor combinations passed test criteria. Test results showed substantial equivalence to the predicate devices. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size:
- Neonatal patient data: Not explicitly stated, but clinical performance evaluations were conducted with "ICU neonates."
- Adult patient data: Not explicitly stated, but a "desaturation study" was conducted with "adults."
- Data Provenance: Not explicitly stated, but the studies were clinical performance evaluations. It's not specified whether the data was retrospective or prospective, or the country of origin, though the submitter is based in Germany.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not explicitly stated. The text mentions "co-oximeters as a reference" for the adult desaturation study, implying a quantitative, objective ground truth rather than expert consensus for that specific measurement.
4. Adjudication Method for the Test Set
Not applicable/mentioned. The ground truth for the adult study was established using co-oximeters. The method for the neonatal data ground truth is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. The study focused on the performance of the updated SpO2 algorithm and its equivalence to predicate devices, not on the improvement of human readers with AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the studies described appear to be standalone performance evaluations of the SpO2 algorithm. The text states: "Clinical performance evaluations using the new algorithm were conducted..." and describes validating the measurement of oxygen saturation.
7. The Type of Ground Truth Used
- Adult Study: Co-oximeters were used as a reference to validate the accuracy of the SpO2 measurement during a desaturation study. Co-oximetry is a gold standard for blood oxygen saturation measurement.
- Neonatal Study: The type of ground truth used to validate the measurement of oxygen saturation in ICU neonates is not explicitly stated.
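Accuracy of a pulse-oximeter SpO2 measurement against a co-oximeter reference is conventionally summarized as the root-mean-square difference (Arms) over paired samples. A minimal sketch of that statistic, with hypothetical sample values (the submission does not report its data):

```python
import math

def a_rms(spo2_readings, sao2_reference):
    """Root-mean-square difference (Arms) between pulse-oximeter SpO2
    readings and paired co-oximeter SaO2 reference values, the usual
    summary statistic for SpO2 accuracy studies."""
    if len(spo2_readings) != len(sao2_reference):
        raise ValueError("readings and reference values must be paired")
    squared = [(s - r) ** 2 for s, r in zip(spo2_readings, sao2_reference)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical paired samples from a desaturation study (percent saturation).
spo2 = [97.0, 91.0, 85.0, 78.0]
sao2 = [96.0, 92.0, 84.0, 80.0]
print(round(a_rms(spo2, sao2), 2))
```

Because Arms pools errors over the whole tested saturation range, a desaturation study deliberately spans low saturations, which is why adults were desaturated under controlled conditions with co-oximetry as the reference.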
8. The Sample Size for the Training Set
Not applicable/mentioned. The text describes verification, validation, and testing activities for a software change to an existing algorithm. There's no indication of a new algorithm being "trained" on a specific dataset in the context of this 510(k) summary. These are performance evaluations of an updated algorithm.
9. How the Ground Truth for the Training Set Was Established
Not applicable/mentioned, as a training set for a new algorithm is not described. The evaluation focused on the performance of a modified SpO2 algorithm against established predicate device specifications and clinical references.
(16 days)
HEWLETT-PACKARD GMBH
The Hewlett-Packard Viridia Component Monitoring System is intended for use in monitoring, recording, and alarming of multiple physiological parameters. The devices are indicated for use in health care facilities by health care professionals whenever there is a need for monitoring the physiological parameters of adult, neonatal, and pediatric patients.
The modification is a software based change that combines software in the HP Clinical Monitoring System product line (M1175A, M1176A and M1177A, M100B/M1002B ECG/RESP) with the software in the HP Component Transport System Viridia, (M1205A), the devices are to be known collectively as the HP Viridia Component Monitoring System. The combination system will allow for shared future functionality capabilities in HP patient monitoring systems.
The provided text is a 510(k) summary for the Hewlett-Packard Viridia Component Monitoring System. It describes a software-based change that combines existing software within HP's clinical monitoring product lines. The document primarily focuses on establishing substantial equivalence to previously cleared predicate devices rather than directly detailing a study that proves the device meets specific acceptance criteria.
Therefore, much of the requested information regarding acceptance criteria, study design, sample sizes, expert involvement, and ground truth establishment cannot be found in the provided text. The document does not describe a clinical performance study with specific acceptance criteria that the device was tested against.
Here's a breakdown of what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Not available in the provided text. The document focuses on showing substantial equivalence based on intended use, technological characteristics, and software changes rather than a new performance study with benchmarked acceptance criteria.
2. Sample Size Used for the Test Set and Data Provenance
Not available in the provided text. No information is given about a test set or its data provenance, as a dedicated performance study is not described.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Not available in the provided text. Since no test set or ground truth establishment is described, this information is absent.
4. Adjudication Method for the Test Set
Not available in the provided text. No adjudication method is mentioned as a specific test set is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Not available in the provided text. The document does not mention any MRMC study or the effect size of human readers improving with AI assistance. The device in question is a patient monitoring system, not primarily an AI-driven diagnostic tool in the sense of image interpretation.
6. Standalone (Algorithm Only) Performance Study
Not available in the provided text. While the change is software-based, the document doesn't detail a standalone performance study of the algorithm in isolation. It describes the integration of existing software components.
7. Type of Ground Truth Used
Not available in the provided text. Since no specific performance study is detailed, no information on the type of ground truth used is provided.
8. Sample Size for the Training Set
Not available in the provided text. No training set is mentioned, as the submission describes combining existing software from legally marketed predicate devices, implying prior validation rather than de-novo algorithm training for this specific submission.
9. How the Ground Truth for the Training Set Was Established
Not available in the provided text. As no training set is discussed, information on ground truth establishment for it is also absent.
In summary, the provided 510(k) notification focuses on the regulatory pathway of substantial equivalence for a software-based combination of existing monitoring systems. It does not contain the detailed clinical study information typically found when a new device's performance against specific acceptance criteria is being proven.
(74 days)
HEWLETT-PACKARD GMBH
The HP M3000A/M3046A Compact Portable Patient Monitor is indicated for use in health care facilities by health care professionals when the patient's clinician deems it appropriate to use a device that:
- a) Can measure and display multiple physiological parameters and waves of one patient, and can generate alarms and printouts based on those measurements.
- b) Can be used on adult, pediatric, and neonatal patients as specified in the Technical Data Sheets.
- List of supported measurements:
- (a) ECG
- (b) Respiration
- (c) Invasive blood pressure
- (d) Non-invasive blood pressure
- (e) SpO2 and Pleth
- (f) Temperature
The M3000A/M3046A Compact Portable Patient Monitor is intended for monitoring, and alarming of ECG, respiration, oxygen saturation, invasive and non-invasive pressure, and temperature. This [510(k)] modification adds a new feature (patient population) to the M3000A/M3046A Compact Portable Patient Monitor: the capability of monitoring neonatal patients is added to the noninvasive blood pressure measurement. The M3000A/M3046A Compact Portable Patient Monitor contains software.
Based on the provided text, the document is a 510(k) summary and an FDA letter regarding a patient monitor, not a study describing acceptance criteria and device performance. Therefore, I cannot extract the requested information about acceptance criteria or a study proving the device meets them from this document.
The document states:
- "The M3000A/M3046A Compact Portable Patient Monitor was fully validated."
- "The comparison of intended use and technological characteristics of this device to other legally marketed devices taken together with the validation results and other information in this submission indicate that this device is substantially equivalent to legally marketed predicate devices in safety, effectiveness, and intended use."
However, it does not provide:
- A table of acceptance criteria and reported device performance.
- Sample sizes, data provenance, or details of a test set.
- Information on experts, ground truth establishment, or adjudication methods.
- Details of a multi-reader multi-case (MRMC) comparative effectiveness study, effect sizes, or human reader improvement.
- A standalone algorithm performance study.
- The type of ground truth used (e.g., pathology, outcomes data).
- Training set sample size or how its ground truth was established.
This document is a regulatory approval notice, not the technical validation report itself.
(304 days)
HEWLETT-PACKARD GMBH
The HP M3000A/M3046A Compact Portable Patient Monitor is indicated for use in health care facilities by health care professionals when the patient's clinician deems it appropriate to use a device that:
- a) Can measure and display multiple physiological parameters and waves of one patient, and can generate alarms and printouts based on those measurements.
- b) Can be used on adult, pediatric, and neonatal patients as specified in the Technical Data Sheets. The non-invasive pressure can be used for adult and pediatric patients.
- List of supported measurements:
- (a) ECG
- (b) Respiration
- (c) Invasive blood pressure
- (d) Non-invasive blood pressure
- (e) SpO2 and Pleth
- (f) Temperature
M3000A/M3046A Compact Portable Patient Monitor
Here's an analysis of the provided text regarding the acceptance criteria and study information for the Hewlett-Packard Models M3000A Multi-Measurement Server and M3046A Compact Portable Patient Monitor.
Based on the provided document, which is an FDA 510(k) clearance letter, there is no detailed information about the acceptance criteria or a specific study that proves the device meets these criteria in the manner expected for a typical AI/ML device submission.
This document is a 510(k) clearance letter from 1998 for a patient monitor, not an AI/ML diagnostic device. Therefore, the questions related to AI performance metrics, ground truth establishment, training sets, and expert adjudication are mostly not applicable to this type of device and submission.
The document primarily focuses on establishing "substantial equivalence" to a predicate device marketed before May 28, 1976. This is a regulatory pathway that doesn't typically require the detailed performance study data that would be relevant for an AI/ML device.
However, I will extract any relevant information that can be inferred from the text regarding the device's intended performance and "acceptance" in a regulatory context, and explicitly state what information is not present.
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Inferred) | Reported Device Performance | Comments |
|---|---|---|
| Regulatory substantial equivalence | Device deemed "substantially equivalent" to predicate devices. | This is the primary "acceptance criterion" for a 510(k) submission. It means the device is as safe and effective as a legally marketed device. |
| Indications for use | Monitor physiological parameters and waves (ECG, Respiration, Invasive BP, Non-invasive BP, SpO2/Pleth, Temperature) for adult, pediatric, and neonatal patients. | The device's ability to measure these parameters as specified in Technical Data Sheets is implied. Specific performance metrics (e.g., accuracy, precision) are not detailed in this clearance letter. |
| Alarm generation | Device can generate alarms based on measurements. | Functionality is stated. No specific alarm sensitivity or specificity performance data is provided. |
| Printout capability | Device can generate printouts based on measurements. | Functionality is stated. |
| Patient population | Adult, pediatric, and neonatal patients (with non-invasive pressure for adult and pediatric only). | The device's applicability to these populations is cleared. Specific performance across these populations is not detailed here. |
| Clinical environment | Used in health care facilities by health care professionals. | This defines the intended use environment and user. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not Applicable/Not Provided. The 510(k) clearance letter does not describe a clinical performance test set, its sample size, or the provenance of data for specific performance evaluation in the way it would for an AI/ML device. The "test" for a traditional 510(k) is primarily a demonstration of equivalence to a predicate device, often through bench testing and review of existing predicate data, rather than a de novo clinical study with a dedicated test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Applicable/Not Provided. No mention of a test set requiring expert-established ground truth. This is not relevant for a patient monitor's 510(k) clearance.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Applicable/Not Provided. No test set or adjudication method is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not Applicable/Not Provided. This is a clearance for a patient monitor, not an AI-assisted diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not Applicable/Not Provided. The device is a "Multi-Measurement Server and Compact Portable Patient Monitor," implying human users. It is not an algorithm-only device.
7. The type of ground truth used (e.g., expert consensus, pathology, outcomes data)
- Not Applicable/Not Provided. The concept of "ground truth" for diagnostic accuracy (as would be relevant for AI/ML) is not present in this document. The "ground truth" for a patient monitor would be the true physiological state, measured by gold-standard instruments, but such comparative validation details are not in this clearance letter.
8. The sample size for the training set
- Not Applicable/Not Provided. This device is a hardware patient monitor, not an AI/ML system that undergoes "training."
9. How the ground truth for the training set was established
- Not Applicable/Not Provided. Not an AI/ML device.