The MUSE Cardiology Information System is intended to store, access and manage cardiovascular information on adult and pediatric patients. The information consists of measurements, text, and digitized waveforms. The MUSE Cardiology Information System provides the ability to review and edit electrocardiographic procedures on screen, through the use of reviewing, measuring, and editing tools including ECG serial comparison and interpretive 12-lead analysis. The MUSE Cardiology Information System is intended to be used under the direct supervision of a licensed healthcare practitioner, by trained operators in a hospital or facility providing patient care. The MUSE Cardiology Information System is not intended for real-time patient monitoring. The MUSE Cardiology Information System is not intended for pediatric serial comparison.
The MUSE Cardiology Information System is a networked, PC-based system comprised of a client workstation/server configuration that manages adult and pediatric diagnostic cardiology data by providing centralized storage and ready access to a wide range of data/reports (e.g. Resting ECG, Stress, Holter, HiRes) from GE and non-GE diagnostic and monitoring equipment. The device provides the ability to:
- Review and edit stored data consisting of measurements, text, and digitized waveforms on screen, through the use of reviewing, measuring, and editing tools including ECG serial comparison and interpretive 12-lead analysis.
- Generate formatted management reports, ad-hoc database search reports, and clinical patient reports on selected stored data.
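As a purely illustrative aid (not part of the submission), the following minimal Python sketch shows one way a stored diagnostic cardiology record combining measurements, interpretation text, and digitized waveforms might be represented, along with a trivial ad-hoc search over such records. All class names, field names, and the `find_records` helper are hypothetical assumptions and do not reflect GE's actual data model.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Hypothetical illustration only: structure and field names are assumptions,
# not taken from the MUSE submission or any GE data format.

@dataclass
class WaveformLead:
    lead_name: str            # e.g. "II" or "V5"
    sampling_rate_hz: int     # samples per second of the digitized trace
    samples: List[float]      # digitized amplitude values

@dataclass
class CardiologyRecord:
    patient_id: str
    procedure_type: str       # e.g. "Resting ECG", "Stress", "Holter", "HiRes"
    acquired_at: datetime
    measurements: Dict[str, float] = field(default_factory=dict)  # e.g. {"HR_bpm": 72}
    interpretation_text: str = ""                                 # editable interpretive statements
    waveforms: List[WaveformLead] = field(default_factory=list)

def find_records(records: List[CardiologyRecord],
                 patient_id: str,
                 procedure_type: str) -> List[CardiologyRecord]:
    """Trivial ad-hoc search: all stored procedures of one type for one patient."""
    return [r for r in records
            if r.patient_id == patient_id and r.procedure_type == procedure_type]
```

In a centralized store of this kind, formatted management and clinical reports would be generated from the results of such searches; the sketch is only meant to make the "measurements, text, and digitized waveforms" description concrete.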
The provided text is a 510(k) Premarket Notification Submission for the GE Healthcare MUSE Cardiology Information System. This document focuses on establishing substantial equivalence to a predicate device and does not contain detailed information on specific acceptance criteria and a study proving device performance in the way typically found for novel AI/ML devices.
However, based on the provided text, here's what can be extracted and inferred regarding the "acceptance criteria" for this specific submission:
1. A table of acceptance criteria and the reported device performance:
Since this is a 510(k) submission for a system that leverages existing technology and claims substantial equivalence, the "acceptance criteria" are primarily related to general safety, effectiveness, and functional comparability to its predicate device, rather than specific performance metrics (like sensitivity, specificity, or AUC) for a new algorithm.
| Acceptance Criteria Category | Reported Device Performance (from text) |
|---|---|
| Safety and Effectiveness | "GE Healthcare considers the MUSE Cardiology Information System to be as safe, as effective, and performance is substantially equivalent to the predicate device." |
| Voluntary Standards Compliance | Complies with IEC 60601-1-1:2001 (Medical Electrical Equipment - General Requirements for Safety); IEC 60601-1-2:2007 (Electromagnetic Compatibility); IEC 60601-1-4:2000 (Programmable Electrical Medical Systems); ISO 14971:2009 (Application of Risk Management to Medical Devices) |
| Quality Assurance Measures | Applied: Risk Analysis; Requirements Reviews; Design Reviews; Code Inspection; Testing on unit level (module verification); Integration testing (system verification); Performance testing (verification); Safety testing (verification); Simulated use testing (validation) |
| Functional Equivalence to Predicate | "The proposed MUSE Cardiology Information System employs the same functional scientific technology as the predicate device MUSE Cardiovascular Information System (K110132)." |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
The document explicitly states: "The subject of this premarket submission, MUSE Cardiology Information System, did not require clinical studies to support substantial equivalence." Therefore, no specific test set sample size or data provenance is mentioned as part of a formal clinical study for this submission. The testing mentioned in the Quality Assurance Measures (e.g., performance testing, simulated use testing) would have involved internal validation, but details about sample size or data provenance for these internal tests are not provided in this public summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
Since no clinical studies were required and no specific "test set" with a delineated ground truth is mentioned in the context of proving performance against clinical endpoints, this information is not available in the provided text.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
No adjudication method is mentioned, as no clinical test set requiring this was conducted for supporting substantial equivalence.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
No MRMC comparative effectiveness study was performed or cited in this submission. The device is an "ECG Analysis Computer" and "Cardiology Information System" which provides tools for review and editing, including interpretive 12-lead analysis. The submission focuses on its system capabilities and equivalence to its predicate, not on a human-AI collaborative performance study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
The device description mentions "interpretive 12-lead analysis," which implies an algorithm that performs analysis. However, the submission does not detail any standalone performance study of this algorithm. The overall device is a system that allows reviewing and editing by trained operators under the supervision of a licensed healthcare practitioner. The focus of the submission is on the system's ability to store, access, and manage cardiovascular information, leveraging the same functional scientific technology as its predicate device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
This information is not provided. Given that no formal clinical study was required for this submission, there is no mention of how "ground truth" would have been established for performance evaluation (e.g., for the interpretive 12-lead analysis component).
8. The sample size for the training set:
Not applicable or not provided. This submission is for a system that uses "the same functional scientific technology" as its predicate. It does not describe a new AI/ML algorithm that would undergo a separate training process requiring a specific training set size to be detailed in this type of submission.
9. How the ground truth for the training set was established:
Not applicable or not provided, for the same reasons as in point 8.
§ 870.1425 Programmable diagnostic computer.
(a) Identification. A programmable diagnostic computer is a device that can be programmed to compute various physiologic or blood flow parameters based on the output from one or more electrodes, transducers, or measuring devices; this device includes any associated commercially supplied programs.

(b) Classification. Class II (performance standards).