Search Results
Found 6 results
510(k) Data Aggregation
(290 days)
EMMA® Capnograph measures, displays and monitors carbon dioxide partial pressure and respiratory rate during anesthesia, recovery and respiratory care. It may be used in the operating suite, intensive care unit, patient room, clinic, emergency medicine and emergency transport settings for adult, pediatric and infant patients.
The subject device, EMMA® Capnograph (EMMA), the same as the predicate, is a portable medical device capable of measuring, displaying, and monitoring carbon dioxide and respiratory rates from exhaled air. The difference between the subject device and the predicate device is the addition of the wireless capability to allow for the wireless transmission of data. The intended use and measurement functions have not changed from the previous clearance.
Masimo Corporation sought 510(k) clearance for their EMMA Capnograph with added wireless capabilities. The device measures and monitors carbon dioxide partial pressure and respiratory rates for adult, pediatric, and infant patients across various clinical settings.
Here's an analysis of the acceptance criteria and supporting studies:
- Table of Acceptance Criteria and Reported Device Performance:
| Feature | Acceptance Criteria (EMMA Specification) | Reported Device Performance (Subject Device) |
|---|---|---|
| CO2 Accuracy | 0-40 mmHg: ± 2 mmHg | 0-40 mmHg: ± 2 mmHg (Same as predicate) |
| | 41-99 mmHg: 6% of reading | 41-99 mmHg: 6% of reading (Same as predicate) |
| Respiration Rate Accuracy | 3-150 breaths/min: ± 1 breath/min | 3-150 breaths/min: ± 1 breath/min (Same as predicate) |
| Total System Response Time | < 0.7 s | < 0.7 s (Specification met) |
| Operating Temperature | -5 to 50 °C (23 to 122 °F) | -5 to 50 °C (23 to 122 °F) (Same as predicate) |
| Storage/Transport Temperature | -40 to 70 °C (-40 to 158 °F) | -40 to 70 °C (-40 to 158 °F) (Extended from predicate: -30 to 70 °C, and verified. Test supports acceptability of lower temperature specification.) |
| Operating Humidity | 10 - 95%, non-condensing | 10 - 95%, non-condensing (Same as predicate) |
| Storage/Transport Humidity | 10 - 95%, non-condensing | 10 - 95%, non-condensing (Narrowed from predicate: 5 - 100%, and verified.) |
| Operating Atmospheric Pressure | 60 - 120 kPa | 60 - 120 kPa (Extended from predicate: 70 - 120 kPa, and verified. Test supports acceptability of lower atmospheric pressure specification.) |
| Electrical Safety/EMC | IEC 60601 compliant | IEC 60601-1-2:2014 compliant (Testing supports that the addition of wireless radio capabilities does not impact essential performance.) |
| Wireless Communication | Supports Bluetooth wireless communication | Bluetooth GFSK, 2402-2480 MHz, Max Peak Output Power -1 dBm, Antenna Peak Gain -7 dBi, Recommended Range ~10 feet (~3 meters) line-of-sight (Added capability, tested for radio co-existence). |
| Radio Co-existence | Quality of service of wireless connection maintained under normal and anticipated abnormal conditions | Testing supports that the quality of service of the wireless connection is maintained under normal and anticipated abnormal conditions. |
| Cybersecurity | Acceptable cybersecurity risks | Testing supports the acceptability of the cybersecurity risks. |
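The piecewise CO2 accuracy specification in the table can be expressed as a small check function of the kind bench verification against calibrated gas mixtures would use. A minimal sketch in Python: the function names are our own, the "6% of reading" figure is treated as a symmetric error band, and the handling of the 40/41 mmHg boundary is an assumption, not a detail from the submission.

```python
def co2_tolerance_mmhg(reading_mmhg: float) -> float:
    """Allowed absolute error for a CO2 reading, per the spec table above.

    0-40 mmHg: +/- 2 mmHg; 41-99 mmHg: 6% of reading (treated here as a
    symmetric band -- an assumption). Boundary handling at 40/41 mmHg is
    also an assumption.
    """
    if not 0 <= reading_mmhg <= 99:
        raise ValueError("reading outside the specified 0-99 mmHg range")
    if reading_mmhg <= 40:
        return 2.0
    return 0.06 * reading_mmhg


def within_spec(reference_mmhg: float, measured_mmhg: float) -> bool:
    """True when the measured value falls inside the allowed error band."""
    return abs(measured_mmhg - reference_mmhg) <= co2_tolerance_mmhg(reference_mmhg)
```

For example, a 50 mmHg reference allows ±3 mmHg (6% of reading), while a 30 mmHg reference allows ±2 mmHg.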
- Sample Size Used for the Test Set and Data Provenance:
The document does not explicitly state the sample sizes for the test sets used in the engineering verification and validation. The studies mentioned (EMC, radio co-existence, cybersecurity, operational verification for temperature and atmospheric pressure) are typically engineering validation tests performed on the device itself, rather than clinical studies using human subjects or large datasets. As such, the data provenance is from laboratory testing of the manufactured device.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
Not applicable. The ground truth for the engineering tests would be established by reference standards or highly calibrated equipment used for performance verification (e.g., precise gas mixtures for CO2 accuracy, controlled temperature/pressure chambers, standardized EMC/wireless testing setups).
- Adjudication Method for the Test Set:
Not applicable. These are engineering performance tests, not studies requiring expert adjudication of clinical outcomes.
- Multi Reader Multi Case (MRMC) Comparative Effectiveness Study:
No MRMC comparative effectiveness study was done. The submission focuses on substantial equivalence based on engineering changes (addition of wireless capabilities) to an already cleared predicate device, rather than a new clinical comparison.
- Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance):
While the EMMA Capnograph functions as a standalone device, the document does not detail a "standalone algorithm only" study in the context of AI without human interaction. The performance criteria listed (e.g., CO2 accuracy, respiration rate accuracy) inherently represent the standalone performance of the device's measurement functions. The core measurement algorithm itself has not changed from the predicate.
- Type of Ground Truth Used:
For the CO2 and respiration rate accuracy, the ground truth would be established by controlled gas mixtures with known CO2 concentrations and controlled respiration rates simulated in a laboratory setting, using highly accurate reference measurement systems. For environmental and electrical tests, ground truth is defined by established international standards (e.g., IEC 60601-1-2) and highly calibrated testing equipment.
- Sample Size for the Training Set:
Not applicable. This device is not an AI/Machine Learning device that requires a training set in the typical sense. It relies on established physical principles of infrared light absorption.
- How the Ground Truth for the Training Set Was Established:
Not applicable, as there is no training set for this type of device.
Ask a specific question about this device
(58 days)
The EMMA 1.5T MR System is a magnetic resonance diagnostic device (MRDD) that produces transverse, sagittal, coronal, and oblique cross-sectional images displaying the internal structure and/or function of the head, body, or extremities. Depending on the region of interest, contrast agents may be used. These images, when interpreted by a trained physician, yield information that may assist medical diagnosis.
The EMMA 1.5T MRI System is a 1.5T superconducting-magnet MRI system that produces transverse, sagittal, coronal, and oblique cross-sectional images displaying the internal structure and/or function of the head, body, or extremities. It is composed of the Magnet, Magnet Enclosure, Patient Table, Gradient Coil, Transmission Coil, Receiver Coil, Client PC, and Imaging Cabinet. The system software, Prodiva, is a Windows-based interactive program with a user-friendly interface. The device conforms to IEC and DICOM standards.
The provided text is a 510(k) summary for the EMMA 1.5T MRI System. It details the device's characteristics and its substantial equivalence to a predicate device, focusing on non-clinical performance data and a comparison of clinical images.
Here's an analysis based on your request, highlighting the information available in the text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the EMMA 1.5T MRI System are largely based on compliance with various NEMA MS and IEC standards, demonstrating that its performance is equivalent to the predicate device. The text does not provide specific numerical acceptance criteria alongside numerical reported performance for each metric; instead, it states compliance with the standards.
| Acceptance Criteria Category | Standard/Requirement | Reported Device Performance |
|---|---|---|
| Biocompatibility | ISO 10993-1 | Complies; evaluation conducted. |
| Electrical Safety & EMC | AAMI/ANSI ES60601-1, IEC 60601-2-33, IEC 60601-1-2:2014 Edition 4 | Complies; testing conducted. |
| Surface Heating of RF Receive Coils | AAMI/ANSI ES60601-1 (max 41°C) | Measured temperature never exceeds 41°C in either coil-plugged or coil-unplugged configurations. |
| Software Verification & Validation | FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." | Documentation provided; testing conducted. |
| Acoustic Testing | NEMA MS 4-2010 | Complies; testing conducted. |
| Performance Testing (Bench) | NEMA MS-1-2008 (R2014) (SNR) | Complies; demonstrates safety and performance as expected. |
| | NEMA MS 2-2008 (R2014) (2D Geometric Distortion) | Complies; demonstrates safety and performance as expected. |
| | NEMA MS 3-2008 (R2014) (Image Uniformity) | Complies; demonstrates safety and performance as expected. |
| | NEMA MS 5-2010 (Slice Thickness) | Complies; demonstrates safety and performance as expected. |
| | NEMA MS 6-2008 (R2014) (SNR & Image Uniformity for Single-Channel Non-Volume Coils) | Complies; demonstrates safety and performance as expected. |
| | NEMA MS 8-2016 (SAR) | Complies; demonstrates safety and performance as expected. |
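For the bench metrics above, NEMA MS 1 governs how SNR is measured on phantom images; one widely used variant is the two-acquisition difference-image method. The sketch below assumes that method and is illustrative only; it omits the ROI selection and scan-condition requirements spelled out in the standard.

```python
import math

def snr_two_acquisition(roi_a, roi_b):
    """Estimate SNR from two repeated acquisitions of the same phantom ROI.

    Signal: mean pixel value over both acquisitions.
    Noise:  standard deviation of the difference image, divided by sqrt(2)
            (subtracting near-identical acquisitions cancels the phantom
            signal and doubles the noise variance).
    Simplified sketch of the difference-image approach associated with
    NEMA MS 1, not a compliant implementation of the standard.
    """
    if len(roi_a) != len(roi_b) or len(roi_a) < 2:
        raise ValueError("ROIs must be the same size, with at least two pixels")
    n = len(roi_a)
    signal = (sum(roi_a) + sum(roi_b)) / (2 * n)
    diff = [a - b for a, b in zip(roi_a, roi_b)]
    mean_d = sum(diff) / n
    noise = math.sqrt(sum((d - mean_d) ** 2 for d in diff) / (n - 1)) / math.sqrt(2)
    return signal / noise
```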
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "Sample clinical images are provided to verify the claim of filing device's capability in generating images for diagnostic purposes." and "Sample clinical image sets from filing device and predicate device on same pulse sequences are provided to demonstrate the substantial equivalence."
- Sample size used for the test set: Not explicitly stated. The term "sample clinical images" suggests a limited set, but no number is given.
- Data provenance: Not explicitly stated. It is implied these are clinical images, but information on country of origin or whether they are retrospective or prospective is not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. The text states that the resulting images, "when interpreted by trained physician yield information that may assist medical diagnosis," but it does not detail an expert review process for a specific test set or the qualifications of any such experts.
4. Adjudication Method for the Test Set
This information is not provided in the document. There is no mention of an adjudication method like 2+1 or 3+1 for establishing ground truth on a test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A MRMC comparative effectiveness study is not mentioned in the document. The study performed focuses on demonstrating substantial equivalence through non-clinical performance and a visual comparison of sample clinical images, not on quantifying human reader improvement with or without AI assistance.
6. Standalone (Algorithm Only) Performance Study
This is an MRI system, not an AI algorithm. Therefore, a standalone (algorithm only) performance study in the context of AI is not applicable/not performed as described for diagnostic algorithms. The performance studies are for the imaging system itself. The document mentions "Software Verification and Validation Testing," which is focused on the software's functionality and safety within the device, not its standalone diagnostic performance.
7. Type of Ground Truth Used
For the clinical image comparison, the ground truth is implicitly based on the visual interpretability of the images by a "trained physician" for diagnostic assistance. However, a formal "ground truth" (e.g., pathology, clinical follow-up) for a specific diagnostic outcome for these sample images is not explicitly stated or detailed. The comparison is about the quality and diagnostic utility of the images produced by the new device versus the predicate device.
8. Sample Size for the Training Set
This information is not applicable as the document describes an MRI system, not an AI algorithm that requires a training set in that conventional sense. The "software" in this context refers to the operating and image reconstruction software, which is traditionally developed and validated through engineering processes, not trained on radiological data like an AI model.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable for the same reason as point 8.
Ask a specific question about this device
(175 days)
The EMMa system is intended for diagnostic evaluation of patients who experience transient symptomatic events that may suggest non-lethal cardiac arrhythmia, to support and possibly improve ongoing treatment by the patient's physician. Data received from battery-powered ambulatory monitoring devices, triggered by an arrhythmia detection algorithm or manually by the patient, are stored and forwarded to a licensed physician for review.
EMMa (Electronic Monitoring Management) is the server part of a telemedical system that receives data from the ambulatory ECG monitors Kate Loop (Event Monitor) and Kate MCT (Mobile Cardiac Telemetry; similar to the Loop but with additional functions such as trend data and streaming). Physiological data recorded by the ECG monitors are transmitted via their GSM module (cellular telephone network) to the EMMa server. The detection of arrhythmias and other cardiac conditions is done on the ECG monitoring device, not on EMMa, and is not within the scope of this 510(k). EMMa is provided to be installed in a Telemonitoring Service Centre (TSC). ECG technicians/agents working there generate patient reports from physiological data, with the aim of sending the reports to the patients' physicians. No interpretation of data is performed by the server software. The generated reports support physicians in adapting therapy. EMMa is not designed for, and is not compatible with, iOS and Android.
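The store-and-forward architecture described above (monitors transmit, the server stores, technicians file reports, physicians review) can be sketched as a toy model. All class and field names here are hypothetical illustrations; nothing reflects EMMa's actual schema or code, and, as the summary states, no interpretation happens on the server.

```python
from dataclasses import dataclass, field

@dataclass
class EcgTransmission:
    """One store-and-forward record. Field names are illustrative only."""
    patient_id: str
    monitor_type: str   # e.g. "Kate Loop" or "Kate MCT"
    trigger: str        # "algorithm" (on-device detection) or "manual"
    waveform: list      # raw physiological data; never analyzed server-side

@dataclass
class EmmaServerSketch:
    """Minimal sketch: store incoming data, forward technician-built
    reports; the server itself performs no interpretation."""
    inbox: list = field(default_factory=list)
    reports: list = field(default_factory=list)

    def receive(self, tx: EcgTransmission) -> None:
        self.inbox.append(tx)          # store only; no analysis here

    def file_report(self, tx: EcgTransmission, technician_notes: str) -> dict:
        """A technician/agent drafts the report; it is forwarded for
        physician review, who decides on any therapy adaptation."""
        report = {"patient_id": tx.patient_id, "notes": technician_notes}
        self.reports.append(report)
        return report
```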
The provided text does not contain detailed acceptance criteria for the EMMa device's performance, nor does it describe a study specifically designed to "prove" the device meets such criteria in terms of clinical accuracy or effectiveness.
The document is a 510(k) premarket notification letter and summary, primarily focusing on establishing substantial equivalence to a predicate device. It highlights software verification and validation activities and other non-clinical performance data, but clinical performance data was not required for the substantial equivalence determination.
Therefore, many of the requested details, such as specific performance metrics, sample sizes for test sets, expert qualifications, adjudication methods, MRMC studies, or standalone algorithm performance, are not available in the provided text.
Here's what can be extracted and what information is missing:
Missing Information:
- A table of acceptance criteria and reported device performance (specifically for clinical accuracy)
- Sample size used for the test set
- Data provenance for clinical testing
- Number of experts used to establish ground truth
- Qualifications of experts establishing ground truth
- Adjudication method for the test set
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, or the effect size of human reader improvement with AI assistance
- Whether a standalone (algorithm only) performance study was done (the document states arrhythmia detection is done on the ECG monitoring device, not the EMMa software)
- The type of ground truth used for clinical effectiveness (as clinical data was not required)
- Sample size for the training set (no mention of a training set for clinical performance)
- How ground truth for the training set was established (no mention of a training set for clinical performance)
Available Information (related to non-clinical performance and design):
- A table of acceptance criteria and the reported device performance:
- The document implies that the device "meets all the stated requirements and passed all the testing noted above." However, specific numerical performance metrics (e.g., sensitivity, specificity for arrhythmia detection) are not provided. The "acceptance criteria" appear to be compliance with relevant standards (IEC 62304, IEC 62366-1) and software verification/validation.
| Acceptance Criterion (Implied) | Reported Device Performance |
|---|---|
| Compliance with IEC 62304 (Medical Device Software life cycle) | Passed; documentation provided |
| Compliance with FDA Guidance for Software Contained in Medical Devices | Software verification and validation conducted; documentation provided |
| Software Level of Concern (Moderate) satisfied | Yes |
| Compliance with IEC 62366-1 (Usability engineering) | Passed |
| Meeting "all stated requirements" (General design/functionality) | EMMa meets all stated requirements |
Sample size used for the test set and the data provenance:
- Not provided for clinical performance. The "test set" mentioned refers to software verification and validation, not a clinical dataset for diagnostic accuracy.
- Sample size used for the test set and the data provenance:
- Not applicable/Not provided for clinical performance. Clinical ground truth establishment was not part of the required testing for the 510(k) pathway for this device.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not provided for clinical performance.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. The document explicitly states "Clinical data were not required to support the safety and effectiveness of the device EMMa." Furthermore, the EMMa is a server system designed to receive data and present it for human review by ECG-technicians and physicians; it does not perform final diagnostic interpretation or AI-driven assistance that would typically be evaluated in an MRMC study comparing human performance with and without AI. The arrhythmia detection algorithm itself resides on the ambulatory ECG monitoring devices (e.g., Kate Loop, Kate MCT), not on the EMMa server, and "is not scope of this 510k."
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No. The document clarifies that the arrhythmia detection is done on the ECG monitoring device, not on the EMMa software. EMMa is a server system for managing, storing, and forwarding data for human review.
- The type of ground truth used:
- For the non-clinical software testing, the "ground truth" would be the expected functional behavior and output based on design specifications and regulatory standards. Clinical ground truth (e.g., pathology, outcomes data) was not used as clinical data was not required.
- The sample size for the training set:
- Not provided. Clinical training data is not discussed as clinical studies were not performed for this 510(k).
- How the ground truth for the training set was established:
- Not provided. (See point 8).
(31 days)
The Inrange Remote Medication Management System is intended for use as an aid to medical providers in managing therapeutic regimens for patients in the home or clinic. The system provides a means: for the patient's prescribed medications to be stored in a delivery unit; for a medical provider to remotely schedule the patient's prescribed medications; to provide notification to the patient when the prescribed medications are due to be taken; to release the prescribed medications to a tray of the delivery unit accessible to the patient's command; and to provide to the medical provider a history of the event.
Not Found
The provided document is a 510(k) premarket notification letter from the FDA to Inrange Systems, Incorporated for their Inrange Remote Medication Management System (EMMA). It confirms that the device is substantially equivalent to legally marketed predicate devices.
However, this document does not contain any information about acceptance criteria, device performance studies, sample sizes, expert qualifications, or ground truth establishment relevant to the performance of an AI/ML device. This is a regulatory clearance letter, not a performance study report.
Therefore, I cannot fulfill your request for the specific information regarding acceptance criteria and performance studies based on the provided text. The document only outlines the device's indications for use:
Indications for Use:
The Inrange Remote Medication Management System is intended for use as an aid to medical providers in managing therapeutic regimens for patients in the home or clinic. The system provides a means:
- for the patient's prescribed medications to be stored in a delivery unit;
- for a medical provider to remotely schedule the patient's prescribed medications;
- to provide notification to the patient when the prescribed medications are due to be taken;
- to release the prescribed medications to a tray of the delivery unit accessible to the patient's command; and
- to provide to the medical provider a history of the event.
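The five functions listed in the indications (store, remotely schedule, notify, release on command, report history) sketch naturally as a small state machine. The following is a hypothetical illustration only; none of these names or behaviors come from Inrange's design documentation.

```python
from datetime import datetime

class MedicationDeliveryUnit:
    """Toy model of the flow in the indications for use:
    store -> remotely schedule -> notify -> release to tray -> event history.
    All names and the event format are illustrative, not Inrange's design.
    """

    def __init__(self):
        self.stored = {}    # medication name -> doses held in the unit
        self.schedule = []  # (due_time, medication), set remotely by the provider
        self.history = []   # event log made available to the provider

    def store(self, medication, count):
        self.stored[medication] = self.stored.get(medication, 0) + count
        self.history.append(("stored", medication, count))

    def schedule_dose(self, due, medication):
        self.schedule.append((due, medication))
        self.history.append(("scheduled", medication, due.isoformat()))

    def due_now(self, now):
        """Medications whose scheduled time has passed (the notification hook)."""
        return [med for due, med in self.schedule if due <= now]

    def release(self, medication):
        """Release one dose to the tray on the patient's command."""
        if self.stored.get(medication, 0) < 1:
            self.history.append(("release_failed", medication))
            return False
        self.stored[medication] -= 1
        self.history.append(("released", medication))
        return True
```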
To answer your request, a different type of document, such as a clinical study report or a technical performance assessment, would be required.
(88 days)
The EMMA Emergency Capnometer Monitor measures, displays and monitors carbon dioxide concentration and respiratory rate during anesthesia, recovery and respiratory care. It may be used in the operating suite, intensive care unit, patient room, clinic, emergency medicine and emergency transport settings for adult, pediatric and infant patients.
The EMMA Emergency Capnometer Analyzer measures and displays carbon dioxide concentration and respiratory rate during anesthesia, recovery and respiratory care. It may be used in the operating suite, intensive care unit, patient room, clinic, emergency medicine and emergency transport settings for adult, pediatric and infant patients.
The EMMA Emergency Capnometer is a miniature mainstream infrared gas analysis bench with an integrated user interface. The complete carbon dioxide analyzer is contained within a transducer that is attached to the breathing circuit via the EMMA Airway Adapter.
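Mainstream capnometry of this kind rests on infrared absorption: CO2 attenuates IR light in proportion to its concentration along the optical path (Beer-Lambert law). A textbook-level sketch of that inversion follows; the absorptivity-pathlength constant is an arbitrary illustrative value, not a device parameter, and real analyzers add temperature, pressure, and collision-broadening compensation.

```python
import math

def co2_concentration(i_measured, i_reference, epsilon_l=0.012):
    """Invert the Beer-Lambert law to recover gas concentration from
    IR attenuation: I = I0 * exp(-epsilon * L * c)  =>  c = ln(I0/I) / (epsilon*L).

    epsilon_l (the absorptivity-pathlength product) is an arbitrary
    illustrative value; this is a sketch of the physical principle only,
    not the device's algorithm.
    """
    if not 0 < i_measured <= i_reference:
        raise ValueError("measured intensity must be in (0, I0]")
    return math.log(i_reference / i_measured) / epsilon_l
```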
The provided text describes the EMMA Emergency Capnometer and states that testing was done in direct comparison to predicates throughout the operating range using calibrated gas samples and legally marketed anesthesia and ventilation devices. The conclusion was that the device demonstrated performance, safety, and effectiveness equivalent or superior to its predicates in all characteristics. However, the document does not explicitly detail specific acceptance criteria or provide a table of reported device performance against those criteria.
Given the information provided, I can only address some of your questions.
1. A table of acceptance criteria and the reported device performance
The provided text does not contain a specific table of acceptance criteria with corresponding reported device performance values. It only generally states that the device "demonstrated performance, safety and effectiveness equivalent or superior to its predicates in all characteristics."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The text mentions "calibrated gas samples" but does not specify the sample size used for the test set. It also does not specify the data provenance (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable as the study involved "calibrated gas samples and legally marketed anesthesia and ventilation devices" rather than human-interpreted data requiring expert consensus for ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable, as there's no indication of human adjudication for the device performance testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not conducted. This device is a capnometer, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The testing described ("Testing in direct comparison to predicates throughout the operating range was conducted using calibrated gas samples and legally marketed anesthesia and ventilation devices") inherently represents a standalone performance evaluation of the EMMA Emergency Capnometer. There is no mention of a human-in-the-loop component for the performance assessment itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth was established by "calibrated gas samples" and the performance of "legally marketed anesthesia and ventilation devices" (presumably as a reference standard for comparison with the device's measurements).
8. The sample size for the training set
The device is a hardware capnometer, not based on a machine learning algorithm that requires a training set. Therefore, a training set is not applicable or mentioned.
9. How the ground truth for the training set was established
As there is no training set for this type of device, this question is not applicable.
(111 days)
The EMMA Emergency Capnometer Monitor measures, displays and monitors carbon dioxide concentration and respiratory rate during anesthesia, recovery and respiratory care. It may be used in the operating suite, intensive care unit, patient room, clinic, emergency medicine and emergency transport settings for adult and pediatric patients.
The EMMA Emergency Capnometer Analyzer measures and displays carbon dioxide concentration and respiratory rate during anesthesia, recovery and respiratory care. It may be used in the operating suite, intensive care unit, patient room, clinic, emergency medicine and emergency transport settings for adult and pediatric patients.
The EMMA Emergency Capnometer is a miniature mainstream infrared gas analysis bench with an integrated user interface. The complete carbon dioxide analyzer is contained within a transducer that is attached to the breathing circuit via the EMMA Airway Adapter.
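Respiratory rate is derived from the CO2 waveform itself, for example by counting breath cycles. Below is a deliberately naive threshold-crossing sketch; the device's actual breath-detection logic (thresholds, hysteresis, artifact rejection) is not disclosed in the summary, so everything here is an assumption for illustration.

```python
def respiratory_rate_bpm(co2_mmhg, sample_hz, threshold=10.0):
    """Count breaths as rising crossings of a CO2 threshold and convert
    to breaths/min. A toy illustration of deriving rate from a capnogram;
    the threshold value and the absence of hysteresis are our simplifications.
    """
    if len(co2_mmhg) < 2 or sample_hz <= 0:
        raise ValueError("need at least two samples and a positive sample rate")
    crossings = sum(
        1 for prev, cur in zip(co2_mmhg, co2_mmhg[1:])
        if prev < threshold <= cur
    )
    duration_min = (len(co2_mmhg) - 1) / sample_hz / 60.0
    return crossings / duration_min
```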
The provided 510(k) summary for the EMMA Emergency Capnometer focuses on establishing substantial equivalence to predicate devices rather than providing detailed acceptance criteria and a standalone study with specific performance metrics. Therefore, many of the requested details are not explicitly present in the document.
Here's an analysis based on the available information:
Acceptance Criteria and Device Performance
The documentation does not provide specific quantitative acceptance criteria (e.g., a target accuracy range for CO2 concentration or respiratory rate) with corresponding reported device performance. Instead, it states a general conclusion about equivalence or superiority.
The key statement is from section 12: "The EMMA Emergency Capnometer demonstrated performance, safety and effectiveness equivalent or superior to its predicates in all characteristics."
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly stated as quantitative targets. The implicit acceptance criteria are that the device's performance, safety, and effectiveness are at least equivalent to or superior to the predicate devices. | "Demonstrated performance, safety and effectiveness equivalent or superior to its predicates in all characteristics." |
Study Information
- Sample size used for the test set and the data provenance:
- Sample Size: Not specified.
- Data Provenance: The study involved "calibrated gas samples and legally marketed anesthesia and ventilation devices." This suggests a laboratory or bench testing environment. There is no information regarding country of origin or whether the data was retrospective or prospective in a clinical setting.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not mentioned.
- Qualifications of Experts: Not mentioned. The ground truth was likely established by the "calibrated gas samples" (known concentrations) and possibly measurements from the "legally marketed anesthesia and ventilation devices" (which would have their own validated measurement capabilities).
- Adjudication method for the test set:
- Not applicable/Not mentioned. The study appears to be a direct comparison against a known standard (calibrated gas) and established medical devices, rather than a subjective assessment requiring expert adjudication.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Was an MRMC study done? No. This device is an emergency capnometer, a medical measurement device, not an AI-powered diagnostic tool requiring human reader interpretation. The study described is a device-to-device performance comparison.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, implicitly. The testing described ("Testing in direct comparison to predicates throughout the operating range was conducted using calibrated gas samples and legally marketed anesthesia and ventilation devices") is a standalone performance test of the device itself, without human interpretation of its outputs being part of the primary evaluation.
- The type of ground truth used:
- Ground Truth Type: A combination of "calibrated gas samples" (representing a known, accurate CO2 concentration) and measurements from "legally marketed anesthesia and ventilation devices" (which are themselves considered accurate and reliable benchmarks). This is akin to a reference standard or benchmarking against established, validated measurement systems.
- The sample size for the training set:
- Not applicable/Not mentioned. This device does not appear to be an AI/machine learning product requiring a training set in the conventional sense. Its development and validation are based on established engineering principles for gas analysis.
- How the ground truth for the training set was established:
- Not applicable/Not mentioned, as there is no mention of a training set for an AI/ML model.
Summary of Device Performance and Equivalence Claim:
The 510(k) submission for the EMMA Emergency Capnometer establishes substantial equivalence primarily through direct comparative testing against its predicate devices (Tidal Wave Model 610, Novametrix Medical Systems Inc. and VEO Multigas Monitor for Pocket PC, Phasein AB). The "testing vs. predicates" section (11) indicates that the device was compared using "calibrated gas samples and legally marketed anesthesia and ventilation devices" across its operating range. The conclusion (12) states that the EMMA Emergency Capnometer "demonstrated performance, safety and effectiveness equivalent or superior to its predicates in all characteristics." This implies that the device's accuracy in measuring CO2 concentration and respiratory rate was found to be at least as good as, if not better than, the predicate devices when tested under controlled conditions.