Search Results
Found 9 results
510(k) Data Aggregation
(90 days)
MARQUETTE MEDICAL SYSTEMS, INC.
The Quantitative Sentinel (QS) System is intended for automatic patient data management. It does this by:
- (a) providing the user the ability to create and use electronic forms for entering, viewing and storing patient and facility related data (e.g. charts, forms, graphs, chalkboards, care plans, user reference manual).
- (b) interfacing with other hospital information systems and medical devices for automatic data acquisition, viewing and storage with the electronic patient record.
- (c) providing visual notification when acquired fetal monitor heart rate values exceed the user-defined limits for high and low fetal heart rate and poor signal quality.
- (d) providing Spectra Alerts capabilities for fetal monitoring (surveillance).
- (e) providing automatic computations of physiologic indexes (e.g. nutrition).
- (f) providing calculations from user-defined formulas (i.e. index calculator).
- (g) providing the ability to record, with the patient record, fluid input and output information that is defined by the user.
- (h) providing the ability to export patient data to relational databases for research and Quality Assurance purposes.
- (i) providing the ability to archive files to a secondary or tertiary storage medium (i.e. optical disk).
- (j) providing the ability to print (locally or remotely) patient records and QS database definitions (e.g. item names).
- (k) providing the ability to review fetal monitor data (OB-Link) remotely over the internet.
The Quantitative Sentinel (QS) System is a software application that is intended for use as a clinical data management system (also referred to as a clinical information system - CIS). The primary function of the system is the management of clinical data (whether manually or automatically acquired) for the purpose of providing integrated, ready and organized access to patient and/or clinical data that would normally be provided on paper records and/or separate clinical systems/devices. The QS System serves as a decision support tool as well as an electronic medical record. The QS System operates on off-the-shelf software and hardware. The device is intended for use in a hospital/clinical environment.
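The fetal heart rate limit notification in item (c) of the indications is a threshold check against user-defined limits. A minimal illustrative sketch of that behavior, assuming a 0.0-1.0 signal-quality score and hypothetical function and parameter names (this is not the QS implementation):

```python
def fhr_alerts(fhr_bpm, signal_quality, low_limit=110, high_limit=160, min_quality=0.5):
    """Return visual-notification flags for one acquired fetal heart rate sample.

    fhr_bpm: acquired fetal heart rate in beats per minute
    signal_quality: 0.0-1.0 quality score (hypothetical scale)
    low_limit / high_limit: user-defined FHR limits in bpm
    """
    alerts = []
    if signal_quality < min_quality:
        # A rate reading from a poor signal is not trusted, so only the
        # quality notification is raised.
        alerts.append("POOR SIGNAL QUALITY")
    elif fhr_bpm < low_limit:
        alerts.append("FHR LOW")
    elif fhr_bpm > high_limit:
        alerts.append("FHR HIGH")
    return alerts
```

For example, `fhr_alerts(100, 0.9)` returns `["FHR LOW"]`, while a good-quality sample inside the limits returns an empty list.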
The Quantitative Sentinel System 7.0 is a software application intended for use as a clinical data management system. According to the provided 510(k) summary, "No clinical testing was necessary to demonstrate conformity to performance requirements." Therefore, there is no study described that proves the device meets specific acceptance criteria related to its clinical performance.
The submission focuses on demonstrating substantial equivalence to a predicate device based on technological characteristics and extensive software testing to meet its requirements and design.
However, based on the provided text, we can infer some "acceptance criteria" related to functionality and system capabilities, and the "reported device performance" is the statement that these functions are met or enabled.
Here's an attempt to structure the information based on your request, acknowledging the limitations due to the absence of clinical study data:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria (Implied from Intended Use/Features) | Reported Device Performance (as stated in 510(k) Summary) |
---|---|
Ability to create and use electronic forms for entering, viewing, and storing patient and facility related data. | Yes (stated in Indications for Use) |
Interface with other hospital information systems and medical devices for automatic data acquisition, viewing, and storage. | Yes (stated in Indications for Use & comparison table) |
Provide visual notification when acquired fetal monitor heart rate values exceed user-defined limits and for poor signal quality. | Yes (stated in Indications for Use - "Spectra Alerts capabilities for fetal monitoring (surveillance)" and comparison table) |
Provide automatic computations of physiologic indexes. | Yes (stated in Indications for Use) |
Provide calculations from user-defined formulas. | Yes (stated in Indications for Use) |
Provide ability to record fluid input and output information with the patient record. | Yes (stated in Indications for Use) |
Provide ability to export patient data to relational databases for research and Quality Assurance purposes. | Yes (stated in Indications for Use) |
Provide ability to archive files to a secondary or tertiary storage medium. | Yes (stated in Indications for Use) |
Provide ability to print (locally or remotely) patient records and QS database definitions. | Yes (stated in Indications for Use) |
Provide ability to review fetal monitor data remotely over the internet. | Yes (stated in Indications for Use & comparison table) |
Operate on off-the-shelf software and hardware. | Yes (stated in Device Description & comparison table) |
Utilize Network architecture (Ethernet, Token Ring, IBM Wireless LAN). | Yes (stated in comparison table) |
2. Sample size used for the test set and the data provenance:
The document explicitly states, "No clinical testing was necessary to demonstrate conformity to performance requirements." Therefore, there is no information provided regarding a clinical "test set" in the context of patient data. The testing mentioned refers to software and environmental testing. The provenance of any data used for internal software testing is not detailed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
Not applicable, as no clinical test set with expert-established ground truth was reported.
4. Adjudication method for the test set:
Not applicable, as no clinical test set was reported.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No MRMC study was reported. This device is a data management system, not an AI-assisted diagnostic tool for human readers.
6. If a standalone study (i.e. algorithm-only, without human-in-the-loop performance) was done:
The device is a software application intended as a clinical data management system and decision support tool. It is inherently designed to manage and present data for human use. The "performance" described refers to its functional capabilities (e.g. data acquisition, display, calculation, remote access) rather than a standalone diagnostic or interpretive algorithm. The document states "The QS software and its environment have been extensively tested to meet its requirements and design," which implies standalone software functionality testing, but not standalone clinical performance without human involvement in interpretation or decision-making.
7. The type of ground truth used:
Not applicable. For software and environmental testing, the "ground truth" would be the predefined functional and performance requirements of the software, verified through testing scenarios. No clinical ground truth (e.g., pathology, outcomes data, expert consensus on patient conditions) was used or reported in the context of clinical performance evaluation.
8. The sample size for the training set:
Not applicable, as no clinical training set was reported. This device does not appear to involve machine learning models that require a training set in the conventional sense.
9. How the ground truth for the training set was established:
Not applicable.
(201 days)
MARQUETTE MEDICAL SYSTEMS, INC.
QT Guard Analysis System is intended to be used in a hospital or clinic environment by competent health professionals.
QT Guard Analysis System is intended to perform the analysis of simultaneously acquired 12-lead ECG for obtaining the measurements of QT interval dispersion and T wave complexity.
QT-Guard Analysis Program is intended to provide only the measurements of the QT dispersion and T wave complexity and is not intended to produce any interpretation of those measurements or diagnosis.
The QT dispersion and T wave complexity measurements produced by QT Guard Analysis System are intended to be used by qualified personnel in evaluating the patient in conjunction with patient's clinical history, symptoms, other diagnostic tests, as well as the professional's clinical judgment.
QT Guard Analysis System is intended for adult patient populations.
QT Guard Analysis System is a software program for measuring the QT interval dispersion and T wave complexity from simultaneously acquired 12-lead ECG.
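QT dispersion is conventionally defined as the difference between the longest and shortest QT interval measured across the simultaneously acquired leads. The sketch below implements that textbook definition only; it is not Marquette's actual measurement algorithm, and the lead names and values are illustrative:

```python
def qt_dispersion(qt_ms_by_lead):
    """QT dispersion: max minus min QT interval (ms) across measurable leads.

    qt_ms_by_lead: mapping of lead name -> measured QT interval in milliseconds.
    Leads where QT could not be measured should be omitted from the mapping.
    """
    values = list(qt_ms_by_lead.values())
    if len(values) < 2:
        raise ValueError("need QT measurements from at least two leads")
    return max(values) - min(values)

# T-wave complexity, the device's other output, is commonly derived from a
# principal-component decomposition of the 12-lead T waves (e.g. the ratio of
# the second to the first eigenvalue); that step is omitted here.
```

For instance, `qt_dispersion({"I": 380, "II": 400, "V2": 420, "V5": 360})` returns 60 ms.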
The provided text is a 510(k) summary for the "QT Dispersion and T wave Analysis System (QT Guard Analysis System)". It states that the device is substantially equivalent to a predicate device, the Marquette 12SL Analysis Program, based on quality assurance measures and laboratory tests. However, it does not provide detailed acceptance criteria or a comprehensive study report with specific performance metrics as requested.
Based on the information available:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not explicitly stated in the provided text. The submission focuses on substantial equivalence to a predicate device rather than specific quantitative performance thresholds. | The device performs "as well as the predicate device, Marquette 12SL analysis program" in terms of safety and effectiveness, based on quality assurance measures, software testing, and laboratory tests. |
2. Sample size used for the test set and the data provenance
The document does not specify a sample size for a test set or the data provenance. It mentions "laboratory tests" but does not elaborate on their methodology or the data used.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided. The comparison is made against a predicate device, and it's not clear how "ground truth" was established for any internal testing, if any such concept was even applied in a formal way beyond comparing outputs to the predicate.
4. Adjudication method for the test set
This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or implied. The device is for analysis of ECGs to provide measurements, not for assisting human readers in interpretation or diagnosis.
6. If a standalone study (i.e. algorithm-only, without human-in-the-loop performance) was done:
Yes, the testing described appears to be a standalone assessment of the algorithm's performance. The statement "The results of these measurements demonstrated that QT Guard analysis is as safe, as effective, and performs as well as the predicate device" refers to the system itself, without human intervention in its primary function of generating measurements.
7. The type of ground truth used
The concept of "ground truth" in the typical sense (e.g., pathology, outcomes data, or expert consensus on clinical diagnoses) is not directly applicable or explicitly stated in the context of this submission. The device provides measurements of QT interval dispersion and T wave complexity. The "truth" in this context would likely be the correctness and consistency of these measurements when compared to the predicate device, or possibly to known physiological standards, though the latter is not detailed. The submission relies on comparison to a predicate device (Marquette 12SL Analysis Program) as the basis for establishing substantial equivalence for its performance.
8. The sample size for the training set
The document does not specify a training set or its sample size. As a 510(k) submission for substantial equivalence, the focus is on demonstrating similar performance to an existing device, rather than detailing the development and training of a new AI model in the contemporary sense.
9. How the ground truth for the training set was established
This information is not provided, as a training set is not discussed.
(87 days)
MARQUETTE MEDICAL SYSTEMS, INC.
The MUSE CV is a large capacity client server based computer system that accesses, stores and manages cardiovascular information. The information can consist of measurements, text, and digitized waveforms. MUSE CV is intended to be used in a hospital environment by trained operators. MUSE CV is designed for network compatibility and interfaces with other hospital information systems through various communication protocols. MUSE CV provides the ability to serially compare/trend cardiovascular information. Use of MUSE CV is intended for accessing, storage and management of both adult and pediatric cardiovascular information.
MUSE CV is a large capacity client server based computer system that accesses, stores and manages cardiovascular information. The information can consist of measurements, text, and digitized waveforms.
This document describes a 510(k) premarket notification for the "MUSE Cardiovascular Information System". It's important to note that this submission does not contain acceptance criteria or performance study results in the typical sense of evaluating an AI/ML device.
The document states that the MUSE CV system employs the same functional technology as predicate devices, with improvements in "speed, performance and reliability." It also claims compliance with "voluntary standards as detailed in Section 9 of this submission," but Section 9 is not provided in the given text.
The "performance" section mentions quality assurance measures applied during development, but these are general development practices and not specific study results or acceptance criteria for a device performance claim like accuracy, sensitivity, or specificity.
Therefore, many of the requested fields cannot be filled from the provided text because the submission focuses on substantial equivalence to predicate devices based on functional technology and general quality assurance, rather than detailed performance metrics of a novel AI/ML algorithm.
Here's a breakdown based on the information available:
- A table of acceptance criteria and the reported device performance:
  - Acceptance Criteria: Not explicitly stated as quantifiable performance metrics (e.g. accuracy, sensitivity, specificity). The criteria appear to be compliance with voluntary standards and demonstrating that the device is "as safe, as effective, and performs as well as the predicate devices."
  - Reported Device Performance: The document only states that "The results of these measurements demonstrated that MUSE CV is as safe, as effective, and performs as well as the predicate devices." No specific quantitative performance values are provided.
- Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
  - Not specified. There is no mention of a traditional "test set" or clinical study data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
  - Not applicable. No ground truth establishment process is described, as there is no clinical performance study detailed.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
  - Not applicable. No test set or expert adjudication is described.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No. This type of study is not mentioned. The device's primary function is described as storing, managing, and facilitating serial comparison/trending of cardiovascular information, not as an AI-assisted diagnostic tool for human readers.
- If a standalone study (i.e. algorithm-only, without human-in-the-loop performance) was done:
  - Not applicable in the typical AI sense. The device is a "Cardiovascular Information System" for data management, not a standalone diagnostic algorithm. While it performs "serial comparison" and "serial trending," its performance is evaluated against predicate devices based on broader system functionality, not specific diagnostic accuracy.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - Not applicable. No ground truth for performance evaluation is described.
- The sample size for the training set:
  - Not applicable. This is not an AI/ML device that requires a training set in the typical sense.
- How the ground truth for the training set was established:
  - Not applicable. As above, no training set is mentioned.
Summary based on available information:
The provided document is a 510(k) summary focused on demonstrating substantial equivalence of the "MUSE Cardiovascular Information System" to existing predicate devices. It emphasizes functional and technological similarity, as well as adherence to general quality assurance and voluntary standards. It does not detail specific performance studies with quantitative metrics, test sets, or ground truth establishment typically associated with the evaluation of AI/ML diagnostic or assistive devices.
(87 days)
MARQUETTE MEDICAL SYSTEMS, INC.
This device is viewed as a component of a system. The APEX Oximeter adds modularity to Marquette's CD Telemetry System product line. The Apex Oximeter is intended for portable patient monitoring of an ambulating patient's oxygen saturation and pulse rate. This device is intended to be used by personnel trained in the use of the equipment. It is intended to be used within the hospital/facility environment.
This device is viewed as a component of a system. The APEX OXIMETER adds modularity to Marquette's telemetry product line. It is intended for portable patient monitoring of an ambulating patient's oxygen saturation and pulse rate. The oxygen saturation calculations for the Apex Oximeter are performed identically to the method used in the Nonin 8500 series hand-held pulse oximeter. The oximeter uses serial communications with a custom protocol to communicate with the CD Telemetry System.
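Pulse oximeters of this kind generally estimate SpO2 from the "ratio of ratios" of the pulsatile (AC) to baseline (DC) absorbance at red and infrared wavelengths, mapped through an empirically derived calibration curve. The sketch below shows the textbook principle with an illustrative linear calibration; the Nonin 8500 series calibration is proprietary and not described in the submission:

```python
def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate SpO2 (%) from red/infrared photoplethysmography components.

    R = (AC_red / DC_red) / (AC_ir / DC_ir). The linear map below
    (SpO2 ~= 110 - 25 * R) is a commonly quoted textbook approximation,
    NOT the device's calibration curve.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    spo2 = 110.0 - 25.0 * r
    # Clamp to the physically meaningful range.
    return max(0.0, min(100.0, spo2))
```

With equal DC levels and a red/infrared AC ratio of 0.5, this illustrative calibration yields an SpO2 of 97.5%.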
Here's a breakdown of the acceptance criteria and study information for the KC180299 APEX OXIMETER, based on the provided text:
Important Note: The provided document is a 510(k) Summary of Safety and Effectiveness, not a detailed study report. As such, it does not contain the granular level of detail typically found in a comprehensive clinical or performance study. The information below is extracted directly from the text; many common elements of a detailed study will be marked as "Not provided" because the document does not include them.
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria in a table format. Instead, it relies on substantial equivalence to predicate devices (Nonin Model 8500 and 9500 pulse oximeters) and general statements about meeting requirements. The "reported device performance" is a high-level conclusion rather than specific metrics.
Acceptance Criteria (Explicitly Stated) | Reported Device Performance |
---|---|
Overall Performance: Device is safe and effective and performs substantially equivalent to predicate devices. | "Various reliability and software testing was performed on the product, and the results indicated that the APEX OXIMETER met the requirements of its intended use." |
Oxygen Saturation Calculation: Identical to the method used in the Nonin 8500 series hand-held pulse oximeter. | The device uses the identical calculation method to the Nonin 8500 series. |
Intended Use: Portable patient monitoring of an ambulating patient's oxygen saturation and pulse rate. | "met the requirements of its intended use." |
Study Details
- Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
  - Sample Size: Not provided. The document mentions "Various reliability and software testing" but does not specify sample sizes for these tests or for any patient data.
  - Data Provenance: Not provided. Country of origin and whether the data was retrospective or prospective are not mentioned.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
  - Not provided. The document does not describe the establishment of a ground truth in the context of expert review for a test set. The primary method of demonstrating performance is substantial equivalence to predicate devices.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
  - Not applicable/Not provided. No expert-based adjudication method is described.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
  - No. This is a pulse oximeter, not an AI-assisted diagnostic device; an MRMC study comparing human readers with and without AI assistance is not relevant and was not performed.
- If a standalone study (i.e. algorithm-only, without human-in-the-loop performance) was done:
  - Yes, implicitly. The "various reliability and software testing" would assess the device's (algorithm's) performance in a standalone context to ensure it met its functional requirements and performed identically to the predicate device in its oxygen saturation calculations. The device itself is a standalone measurement tool, although it communicates with a telemetry system.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
  - The "ground truth" for the oxygen saturation calculation is based on the identical method used by the predicate device, the Nonin 8500 series pulse oximeter. For other functional aspects (reliability, software performance), the ground truth would be defined by the technical specifications and requirements the device was designed to meet. No independent expert consensus, pathology, or outcomes data is explicitly mentioned as being used to establish ground truth beyond equivalence to the predicate.
- The sample size for the training set:
  - Not applicable/Not provided. This device is a pulse oximeter; its core function is based on established biophysical principles and algorithms, not machine learning that requires a "training set" in the modern sense of AI/ML development.
- How the ground truth for the training set was established:
  - Not applicable/Not provided, as there is no "training set" in the context of AI/ML. The device's foundational "ground truth" for oxygen saturation measurement is derived from the established and validated performance of its predicate devices, whose methods it duplicates.
(27 days)
MARQUETTE MEDICAL SYSTEMS, INC.
Multi-Link Cable and Lead Wire Systems are electrode cable systems used to transmit signals from patient surface electrodes to various electrocardiograph recorders/monitors for both diagnostic and monitoring purposes. Use is limited by the indications for use of the connected monitoring or diagnostic equipment. Such equipment is commonly located in hospitals, doctors offices, emergency vehicles, as well as in home use.
Multi-Link Cable and Lead Wire Systems are a reusable electrode cable systems used to transmit signals from patient electrodes to various electrocardiograph recorders / monitors for both diagnostic and monitoring purposes. This type of device is common to both the industry and to most medical establishments.
The provided K980582 submission describes a medical device, the "Multi-Link Cable and Lead Wire Systems," which are reusable electrode cable systems used to transmit ECG signals. The study described is a comparison to a predicate device, not a standalone AI device study. Therefore, many of the requested categories are not applicable.
Here's the analysis based on the provided text:
Acceptance Criteria and Reported Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
21CFR Part 898: Performance Standard for Electrode Lead Wire and Patient Cables (effective May 11, 1998) | Meets 21CFR Part 898: Performance Standard for Electrode Lead Wire and Patient Cables |
AAMI ECGC-5/83 (Electrical and Mechanical Testing) | Meets equivalent electrical and mechanical testing to AAMI ECGC-5/83 |
Substantial Equivalence to ConMed Corp. ECG Patient Cables and Leadwires (K933649) | Demonstrated substantial equivalence to the predicate device |
Since this is a submission for an ECG lead wire and cable system, and not an AI-powered diagnostic device, many of the requested fields related to AI performance, ground truth, and expert evaluation are not applicable.
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified. The study involved a "comparison of the predicate device's production and both electrical and mechanical testing to AAMI ECGC-5/83 was made to the equivalent production, electrical and mechanical testing of the Multi-Link Cable and Lead Wire Systems." This suggests product testing rather than a clinical dataset in the traditional sense for an AI device.
- Data Provenance: Not applicable in the context of clinical data for AI. The testing would have been performed on the device itself.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable. This is not a study requiring expert-established clinical ground truth for diagnostic accuracy.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable. This is not a study requiring adjudication of diagnostic findings.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This device is an ECG cable system, not an AI diagnostic tool.
6. If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done:
- Not applicable. This is not an algorithm or AI device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For electrical and mechanical testing, the "ground truth" would be the specifications and requirements of the AAMI ECGC-5/83 standard and 21CFR Part 898. For substantial equivalence, it would be a direct comparison of the device's characteristics and performance to the predicate device.
8. The sample size for the training set:
- Not applicable. This device does not involve a training set as it's not an AI/machine learning device.
9. How the ground truth for the training set was established:
- Not applicable.
(184 days)
MARQUETTE MEDICAL SYSTEMS, INC.
CardioSmart ST is intended to be used for resting and stress test ECG and for recording ECG in real-time without arrhythmia detection.
CardioSmart ST is intended to be used by trained operators when ECG records are required in the judgment of a physician.
CardioSmart ST is not intended for use as a vital signs physiological monitor.
CardioSmart ST offers no diagnostic opinion to the user. Instead, it provides interpretive statements for which the physician renders his/her own medical opinion.
CardioSmart ST is not designed for intracardial use.
CardioSmart ST is not intended for home use.
CardioSmart ST is a portable ECG data acquisition and recording system designed and manufactured by Marquette Hellige GmbH.
CardioSmart ST allows the user to:

- record resting ECGs,
- measure and interpret the ECGs,
- perform ECG stress tests.
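An "interpretive statement" of the kind the summary describes (offered without a diagnostic opinion, for the physician's own judgment) can be illustrated with a deliberately simplified rate rule. The thresholds and wording below are textbook conventions, not the Marquette analysis program:

```python
def rate_statement(rr_intervals_ms):
    """Return a heart-rate interpretive statement from R-R intervals (ms).

    Uses the conventional adult resting thresholds (<60 bpm bradycardia,
    >100 bpm tachycardia); real interpretive programs apply far richer
    morphology and rhythm criteria.
    """
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    bpm = 60000.0 / mean_rr  # convert mean R-R interval to beats per minute
    if bpm < 60:
        label = "bradycardia"
    elif bpm > 100:
        label = "tachycardia"
    else:
        label = "rate within normal limits"
    return f"Ventricular rate {bpm:.0f} bpm: {label}"
```

For example, three 1000 ms R-R intervals yield "Ventricular rate 60 bpm: rate within normal limits"; the statement reports a measurement plus a descriptive label, leaving the medical opinion to the physician.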
Here's an analysis of the provided text regarding the acceptance criteria and study for the CardioSmart ST device:
The provided 510(k) summary for the CardioSmart ST device, K973403, focuses heavily on demonstrating substantial equivalence to predicate devices rather than presenting specific quantitative acceptance criteria or a detailed clinical study for the CardioSmart ST itself.
The document emphasizes that:
- "CardioSmart ST hardware architecture is identical to the Technology predicate device CardioSmart."
- "All parts of the software which determine the medical functionality of the CardioSmart ST were re-used from the predicate devices."
- "The results of these measures demonstrated that CardioSmart ST is as safe, as effective, and performs as well as the predicate devices."
This indicates that the primary "proof" of meeting acceptance criteria for CardioSmart ST relies on its direct equivalence to legally marketed predicate devices (Marquette Hellige CardioSmart K950989, Marquette Hellige CardioSys K951130, and Marquette Responder 1500 K903644) and compliance with relevant industry standards.
Therefore, many of the requested details about a specific study may not be explicitly present for this particular device in this type of submission. The performance of the predicate devices would have undergone such studies.
Here's the information as extractable from the provided text, with explanations for missing points:
Acceptance Criteria and Reported Device Performance
Given that the submission relies on substantial equivalence and standard compliance rather than a novel clinical performance study for the CardioSmart ST, the "acceptance criteria" are implied to be compliance with the listed standards and equivalent performance to the predicate devices. No specific numerical performance metrics are provided for the CardioSmart ST itself in this document.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Compliance with ANSI/AAMI EC11-1991 (ECG Diagnostic Devices) | CardioSmart ST complies with this standard. |
Compliance with ANSI/AAMI ES1-1993 (Cardiac Defibrillator Safety Series) | CardioSmart ST complies with this standard. |
Compliance with IEC 601-1 (Medical Electrical Equipment - Part 1: General Requirements for Safety) | CardioSmart ST complies with this standard. |
Compliance with IEC 601-1-2 (Medical Electrical Equipment - Part 1-2: General Requirements for Safety - Collateral Standard: Electromagnetic Compatibility - Requirements and Tests) | CardioSmart ST complies with this standard. |
Compliance with IEC 601-2-25 (Medical Electrical Equipment - Part 2-25: Particular Requirements for the Safety of Electrocardiographs) | CardioSmart ST complies with this standard. |
Performance "as safe, as effective, and performs as well as" predicate devices (CardioSmart, CardioSys, Responder 1500) | "The results of these measures demonstrated that CardioSmart ST is as safe, as effective, and performs as well as the predicate devices." |
Passed EC type-examination | CardioSmart ST passed the EC type-examination and bears the CE mark. |
Study Details for CardioSmart ST
-
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Not explicitly stated for the CardioSmart ST. The document describes "software and hardware testing, safety testing, environmental testing, final validation testing by an independent test group" as quality assurance measures, but does not provide details on the test set's size, origin, or nature (prospective/retrospective). This type of submission relies on the predicate having sufficient data, and the new device being substantially equivalent through engineering and design verification.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not explicitly stated for the CardioSmart ST. No mention of expert review for a specific ground truth test set for this device's submission.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable / Not stated. No clinical study involving expert adjudication of a test set is described for the CardioSmart ST.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:
- No, an MRMC study was not done/described. The device is an ECG acquisition and interpretation system, but the submission does not detail a study on human reader improvement with its "interpretive statements." It explicitly states, "CardioSmart ST offers no diagnostic opinion to the user. Instead, it provides interpretive statements for which the physician renders his/her own medical opinion."
- Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done:
- Not explicitly stated. While the device includes an "optionally available analysis program" for interpretation, the submission does not detail a standalone performance study of this algorithm. The focus is on the device's overall safety and effectiveness as an ECG system, largely by comparison with predicates and compliance with standards.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable / Not explicitly stated for a clinical ground truth. For the validation and testing mentioned, the "ground truth" would likely be based on engineering specifications, reference ECG data (potentially from databases or simulations), and established clinical standards for ECG waveform characteristics, rather than a specific disease outcome or pathology in a clinical study context for a new algorithm.
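Verification against engineering specifications of the kind mentioned above typically reduces to tolerance checks of measured values against required limits. A minimal generic sketch follows; the parameter names and limits are purely illustrative and are not taken from EC11 or from this submission:

```python
def check_tolerance(name, measured, nominal, tolerance):
    """Pass/fail check of a measured value against nominal +/- tolerance."""
    ok = abs(measured - nominal) <= tolerance
    return f"{name}: {'PASS' if ok else 'FAIL'} (measured {measured}, limit {nominal} +/- {tolerance})"

# Illustrative checks -- the limits here are hypothetical, not actual standard requirements.
print(check_tolerance("amplitude gain error (%)", 2.1, 0.0, 5.0))
print(check_tolerance("displayed heart rate (bpm) at 60 bpm input", 61.0, 60.0, 2.0))
```

In practice such checks would be run across the full matrix of test signals (amplitudes, rates, lead configurations) called out by the applicable standard, with each result recorded in the verification report.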
- The sample size for the training set:
- Not applicable / Not stated for the CardioSmart ST. As the software was "re-used from the predicate devices" and no specific new algorithm development for CardioSmart ST is described, a training set for this specific submission is not relevant or mentioned. The training would have occurred for the predicate devices' algorithms.
- How the ground truth for the training set was established:
- Not applicable / Not stated for the CardioSmart ST. Similar to the point above, this information would pertain to the development of the algorithms in the predicate devices, not specifically for the CardioSmart ST submission itself.
Summary of Approach:
The K973403 submission for CardioSmart ST represents a common pathway for medical device clearance, particularly for devices with significant component re-use from established predicate devices. Instead of extensive new clinical trials with detailed performance metrics, the manufacturer demonstrated safety and effectiveness through:
- Substantial equivalence: Showing the device is essentially the same as already cleared devices.
- Compliance with recognized standards: Adhering to national and international standards for medical electrical equipment and ECG devices.
- Quality assurance measures: Implementing standard development and testing practices (requirements specification, design reviews, code inspections, software/hardware testing, safety/environmental testing, independent validation).
This approach means that the detailed clinical study information often sought for novel AI/ML devices is not typically present in such a 510(k) submission focused on equivalence.
(88 days)
MARQUETTE MEDICAL SYSTEMS, INC.
Acute Cardiac Ischemia Time-Insensitive Predictive Instrument (ACI-TIPI ) Option is intended to be used in a hospital or clinic environment by competent health professionals utilizing recorded ECG data to produce a numerical score which is the predicted probability of acute cardiac ischemia (which includes unstable angina pectoris and acute myocardial infarction). Like any computer-assisted ECG interpretation program, the Marquette ACI-TIPI evaluation and probability score is intended to supplement, not substitute for the physician's decision process. It should be used in conjunction with knowledge of the patient's history, the results of a physical examination, the ECG tracing, and other clinical findings. ACI-TIPI is intended for adult patient populations.
Acute Cardiac Ischemia Time-Insensitive Predictive Instrument (ACI-TIPI ) Option is a software option for Marquette MAC-series electrocardiographs to aid the physician's decision-making process in a chest pain setting by using patient age, gender, chest pain status and ECG features to provide the predicted probability of acute cardiac ischemia (which includes unstable angina pectoris and acute myocardial infarction).
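A predictive instrument of this kind, combining patient age, gender, chest pain status, and ECG features into a single predicted probability, is conventionally a logistic regression model. The sketch below shows the general shape of such a model; the variable names and coefficients are hypothetical placeholders, not the actual validated ACI-TIPI weights:

```python
import math

def predicted_probability(features, coefficients, intercept):
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum(coef_i * x_i))))."""
    z = intercept + sum(coefficients[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only -- NOT the published ACI-TIPI model.
coefs = {"age_over_50": 0.8, "male": 0.3, "chest_pain_present": 1.2, "st_elevation": 1.5}
patient = {"age_over_50": 1, "male": 1, "chest_pain_present": 1, "st_elevation": 0}
p = predicted_probability(patient, coefs, intercept=-3.0)
print(f"Predicted probability of acute cardiac ischemia: {p:.0%}")
```

The output is a probability rather than a diagnosis, which matches the stated intent that the score supplement, not substitute for, the physician's decision process.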
The provided text does not contain detailed acceptance criteria or a specific study proving the device meets them. Instead, it focuses on the device's substantial equivalence to a predicate device and its intended use.
However, based on the information provided, we can infer some details:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state numerical acceptance criteria. The performance claim is a qualitative comparison to the predicate device.
Acceptance Criteria | Reported Device Performance |
---|---|
Not explicitly stated | "ACI-TIPI analysis is as safe, as effective, and performs as well as the predicate device, Hewlett-Packard Model 1791A ACI-TIPI." |
2. Sample size used for the test set and the data provenance
The document does not specify the sample size used for any test set or the data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided in the document.
4. Adjudication method for the test set
This information is not provided in the document.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance
There is no mention of an MRMC comparative effectiveness study or the effect size of human reader improvement with AI assistance. The device is intended to supplement a physician's decision, not replace it, which suggests it might be used in conjunction with human readers, but no study details are provided.
6. Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
The document mentions "software testing and field tests of the ACI-TIPI analysis," but it doesn't explicitly state whether a standalone performance study was conducted. The nature of the device as a "predictive instrument" producing a numerical score implies an algorithm-only component.
7. The type of ground truth used
The document does not specify the type of ground truth used for performance evaluation. It notes the device calculates "the predicted probability of acute cardiac ischemia (which includes unstable angina pectoris and acute myocardial infarction)." The "ground truth" would presumably relate to the actual occurrence of these conditions.
8. The sample size for the training set
This information is not provided in the document.
9. How the ground truth for the training set was established
This information is not provided in the document.
Summary of Study Information Provided in the Document:
The provided text primarily focuses on the regulatory submission for substantial equivalence. The "Performance" section briefly states that "The following quality assurance measures were applied to the development of ACI-TIPI: Requirements specification review, software testing and field tests of the ACI-TIPI analysis." It then concludes, "The results of these measurements demonstrated that ACI-TIPI analysis is as safe, as effective, and performs as well as the predicate device, Hewlett-Packard Model 1791A ACI-TIPI."
This statement indicates that some form of internal testing was conducted to demonstrate equivalence to the predicate device, but the details of these tests (sample sizes, ground truth establishment, expert involvement, specific metrics) are not disclosed in this summary. The regulatory approval is based on substantial equivalence, implying that the predicate device already had established performance.
(88 days)
MARQUETTE MEDICAL SYSTEMS, INC.
The RAC 2A adds modularity to the Marquette Eagle configured product line. It is used to house the module [SAM (Smart Anesthesia Multi-gas) module] and provides an interface to the Eagle display. The module's patient information may be displayed on the Marquette Eagle monitor. This device is intended to be used by personnel trained in the use of the equipment. It is intended to be used within the hospital/facility environment.
This device is viewed as a component of a system. The RAC 2A adds modularity to Marquette's Eagle configured product line. It is used to house the module [SAM (Smart Anesthesia Multi-gas) module] and provides an interface to the Eagle display. The RAC 2A is intended to allow the module's patient information to be displayed on the Eagle monitor display.
The provided 510(k) summary (K973984) is for a medical device called the "RAC 2A," which is a module housing designed to interface a Smart Anesthesia Multi-gas (SAM) module with Marquette's Eagle display system. The device acts as a component within a larger system.
After reviewing the document, it is not possible to provide a detailed description of acceptance criteria and a study that proves the device meets them in the format requested. The document does not contain the level of detail necessary for such a response.
Here's why and what information is available:
1. A table of acceptance criteria and the reported device performance:
- Not provided in the document. The summary states, "Various reliability and software testing was performed on the product, and the results indicated that the RAC 2A met the requirements of its intended use." However, it does not specify what those requirements (acceptance criteria) were or present the detailed performance results in any measurable format.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Not provided in the document. There is no mention of sample sizes for any testing, nor information about data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- Not applicable/Not provided. This device is a module housing, not a diagnostic or AI-driven system that requires expert interpretation to establish ground truth for a test set. The document focuses on the device's functional and safety performance as an interface.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable/Not provided. As above, this type of adjudication is typically for diagnostic interpretation studies, which is not relevant to this device.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance:
- Not applicable/Not provided. This is not an AI-assisted diagnostic device, so an MRMC comparative effectiveness study would not be performed.
6. Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done:
- Not applicable/Not provided. This device is a hardware component. There is no "algorithm only" performance to evaluate in this context.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable/Not provided. Ground truth, in the sense of a definitive diagnostic label, is not relevant for a module housing component. Testing would likely involve functional verification against engineering specifications and safety standards.
8. The sample size for the training set:
- Not applicable/Not provided. There is no indication that this device uses a training set, as it is a hardware component and not an AI/ML algorithm.
9. How the ground truth for the training set was established:
- Not applicable/Not provided. As above, there is no training set mentioned or implied.
Summary of available relevant information from the document:
- Device Name: RAC 2A (module housing)
- Intended Use: To house a SAM (Smart Anesthesia Multi-gas) module and provide an interface to the Eagle display, allowing the module's patient information to be displayed on the monitor.
- Testing Performed: "Various reliability and software testing was performed."
- Conclusion of Testing: "The results indicated that the RAC 2A met the requirements of its intended use. Marquette Medical Systems has demonstrated that use of the RAC 2A is as safe and effective, and performs substantially equivalent [to] its predicate devices."
- Predicate Devices: Eagle patient monitors (K960272, K960418, K961139), Tram-rac (K900598), Tram-rac / SAM module (K943977, K950581).
- Classification: Class II (Product Code: 73 CCK)
The document is a 510(k) summary for a hardware component where the primary focus is on demonstrating substantial equivalence to predicate devices through functional and reliability testing, rather than an AI/ML or diagnostic device that would require detailed clinical study results with ground truth establishment.
(85 days)
MARQUETTE MEDICAL SYSTEMS, INC.
The Marquette Eagle 1000 (DASH 1000) Patient Monitor is a patient monitor that is designed to be used to monitor a patient's basic physiological parameters including: electrocardiography (ECG), invasive blood pressure, non-invasive blood pressure, oxygen saturation, and temperature. Optionally, the printing of information by a paper recorder may be added to the basic monitor configuration. Use of this device is intended for patient populations including: adult, pediatric, and/or neonatal.
The Marquette Eagle 1000 Patient Monitor is a patient monitoring system that is designed to be used to monitor a patient's basic physiological parameters including: electrocardiography (ECG), invasive blood pressure, non-invasive blood pressure, oxygen saturation, and temperature. The Eagle 1000 Patient Monitor is a microprocessor-based, software-driven device. The signal-acquisition and -processing technologies and the basic parts of the device software were re-used from former devices.
The provided 510(k) summary for the Marquette Eagle 1000 Patient Monitor primarily focuses on demonstrating substantial equivalence to a predicate device rather than presenting detailed performance studies with acceptance criteria for specific alarm detection or diagnostic functions. The device is a patient monitoring system, and the submission emphasizes its ability to monitor basic physiological parameters and raise alarms.
Here's an analysis of the requested information based on the provided text:
Acceptance Criteria and Device Performance
The document does not explicitly state numerical acceptance criteria for each monitored parameter (like ECG accuracy, blood pressure accuracy, SpO2 accuracy, etc.) nor does it report specific device performance against such criteria. Instead, it makes a general statement:
General Statement on Performance:
"Testing was performed on the Eagle 1000 Patient Monitor and its predicate devices. Precision, accuracy, as well as safety testing was performed. Test results indicate that the Eagle 1000 Patient Monitor provides an equivalent level or better in performance, when compared to the legally marketed predicate device(s) when tested to the accuracy requirements as specified in the contents of the premarket notification submission."
Since no specific acceptance criteria or quantitative performance data are given, a table of acceptance criteria and reported device performance cannot be generated from the provided text.
Additional Requested Information:
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document does not provide details on the sample size used for the test set or the data provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not provided in the document. Given that this is a patient monitor for basic physiological parameters, ground truth would typically be established by validated reference devices, not human experts in the diagnostic sense (e.g., a highly accurate ECG machine as a reference for ECG, or an arterial line for invasive blood pressure).
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable and not provided. Adjudication methods are typically used when subjective interpretations are involved, which is not the primary function of a physiological monitor measuring objective parameters.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with versus without AI assistance
Not applicable. This device is a monitor, not an AI-assisted diagnostic tool that would involve human readers interpreting images or complex data in an MRMC study.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This is a standalone device in the sense that it collects and displays physiological data. Its performance would inherently be "standalone" in that it performs its functions without direct human intervention in the signal processing. However, the document does not detail specific "algorithm only" performance metrics separate from the integrated device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used. For a physiological monitor, ground truth is typically established by validated reference devices known for their high accuracy in measuring the specific physiological parameter (e.g., a highly accurate reference thermometer for temperature, an arterial line for invasive blood pressure, or a gold-standard oximeter for SpO2).
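Validation against a reference device of this kind is conventionally summarized with bias (mean error) and root-mean-square error over paired readings. A generic sketch, with made-up example readings (not data from the submission):

```python
import math

def accuracy_summary(device_readings, reference_readings):
    """Bias (mean error) and root-mean-square error of a device vs. a reference instrument."""
    errors = [d - r for d, r in zip(device_readings, reference_readings)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Hypothetical paired SpO2 readings (%): monitor under test vs. reference co-oximeter.
device = [97, 95, 92, 98, 90]
reference = [96, 95, 93, 98, 91]
bias, rmse = accuracy_summary(device, reference)
print(f"bias = {bias:+.2f}, RMSE = {rmse:.2f}")
```

A submission would then compare these statistics against the accuracy limits claimed in the device specification for each parameter.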
8. The sample size for the training set
The document does not mention a "training set" as this device predates the common application of machine learning with distinct training and test sets in medical device submissions. The device is described as "microprocessor-based, software-driven" and that "signal-acquisition and -processing technologies and the basic parts of the device software were re-used from former devices." This implies that the design and performance were likely based on established engineering principles and validation against known physiological signals, rather than iterative machine learning training.
9. How the ground truth for the training set was established
Not applicable, as a distinct "training set" in the modern machine learning sense is not indicated. The software and signal processing were re-used from former devices, suggesting that their development and validation would have followed standard engineering practices for medical devices at the time, likely involving comparisons to established reference measurements.