The MedSet Cardiolight system is intended to be used by trained ECG technicians with analysis and interpretation by cardiologists for the measurement and analysis of electrocardiogram signals on ambulatory patients as an aid to the diagnosis of heart disease.
The MedSet Cardiolight System is a real-time ambulatory ECG recording and analysis system for recording periods of up to 24 hours. The stored data is transferred to a PC for graphical display of the results as an aid to the medical diagnosis of heart disease by a trained cardiologist.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Sensitivity (Normal) | N/A (compared to AHA data) | 99% |
| Specificity (Normal) | N/A (compared to AHA data) | 99% |
| Pos. Predictive Acc. (Normal) | N/A (compared to AHA data) | 92% |
| Sensitivity (VES) | N/A (compared to AHA data) | 91% |
| Specificity (VES) | N/A (compared to AHA data) | 95% |
| Pos. Predictive Acc. (VES) | N/A (compared to AHA data) | 99% |
| Overall Quality | Comparable to other commercial long-term ECG devices | "the quality of the automatic analysis of the Cardiolight system corresponds to those of other commercial long term ECG devices." |
Note on Acceptance Criteria: The document does not state numerical acceptance criteria for sensitivity, specificity, or positive predictive accuracy. Instead, it reports the device's performance against "standard data of the American Heart Association (AHA)" as a benchmark. The overall qualitative acceptance criterion appears to be that performance is equivalent to, or better than, that of other commercial long-term ECG devices, as stated in the conclusion of the validation study.
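The three reported metrics all derive from a binary confusion matrix. A minimal sketch of how they relate (the counts below are illustrative only; the 510(k) summary reports percentages, not the underlying event counts):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the three metrics reported for the Cardiolight validation.

    Sensitivity: fraction of true events the algorithm detected.
    Specificity: fraction of non-events correctly rejected.
    Positive predictive accuracy (PPA): fraction of detections that
    were in fact true events.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppa = tp / (tp + fp)
    return sensitivity, specificity, ppa

# Hypothetical counts chosen to reproduce the reported VES percentages;
# the actual confusion matrix is not given in the document.
sens, spec, ppa = classification_metrics(tp=910, fp=9, tn=175, fn=90)
print(f"Sensitivity {sens:.0%}, Specificity {spec:.0%}, PPA {ppa:.0%}")
# Sensitivity 91%, Specificity 95%, PPA 99%
```

Note that sensitivity and specificity are properties of the algorithm alone, while PPA also depends on the prevalence of the event class in the test data, which is one reason a fixed event mix (such as the AHA standard data) matters for comparability.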
2. Sample Size Used for the Test Set and Data Provenance
- Test Set: "150 000 events" for the automatic ECG analysis (derived from "many recorded and manually edited ECG data sets," including the AHA standard data).
- Clinical Validation (Patients): 150 patients.
- Data Provenance:
- Automatic ECG Analysis: "standard data of the American Heart Association (AHA)" and "many recorded and manually edited ECG data sets." The provenance of the manually edited data sets is not specified, but given the company's location (Germany) and the use of AHA data, it is likely a mix. This appears to be retrospective data.
- Clinical Validation: Performed at the University of Ulm Medical Clinic, which is in Germany. This was likely a prospective study on 150 patients.
- Commercial Use: "in commercial use in Europe for several years" implies retrospective real-world data collection.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Automatic ECG Analysis: The ground truth for the "150 000 events" was established using "AHA-annotations" and "manually edited ECG data sets."
- For the AHA data, the ground truth would have been established by experts associated with the American Heart Association. Their specific number and qualifications are not detailed in this document but are generally understood to be highly qualified cardiologists/electrophysiologists.
- For the "manually edited ECG data sets," the number and specific qualifications of the experts who performed the manual editing are not specified, but the context implies trained professionals.
- Clinical Validation: The analysis and interpretation are intended "by cardiologists." For the clinical validation protocol, it's implied that cardiologists assessed the results to make the comparison, but the exact number or qualifications of those involved in establishing the ground truth for this specific study are not given.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). For the "manually edited ECG data sets" and "AHA-annotations," it's generally understood that a consensus or expert review process would have been used to establish the ground truth, but the specifics are not provided.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers with and without AI assistance was not reported. The study focused on the performance of the algorithm itself (standalone) and then a clinical validation where the system served as an "aid to the medical diagnosis," implying a human-in-the-loop, but not a direct comparative effectiveness study on reader improvement.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, a standalone performance evaluation of the algorithm was done. The "Test Results" section directly compares the "automatic analysis" results (Sensitivity, Specificity, Positive Predictive Accuracy for Normal and VES events) to "AHA-annotations" and manually edited data.
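A standalone evaluation of this kind reduces to comparing the algorithm's per-event labels against reference annotations. A toy sketch with hypothetical labels (real Holter scoring first aligns beats by timestamp, e.g. per ANSI/AAMI EC57, which this sketch assumes is already done):

```python
from collections import Counter

def confusion_counts(reference, predicted, positive="VES"):
    """Tally a binary confusion matrix for one event class.

    `reference` and `predicted` are parallel lists of per-event labels;
    beat alignment between the recording and the annotation file is
    assumed to have been done beforehand.
    """
    counts = Counter()
    for ref, pred in zip(reference, predicted):
        actual = ref == positive
        detected = pred == positive
        if actual and detected:
            counts["tp"] += 1
        elif actual:
            counts["fn"] += 1
        elif detected:
            counts["fp"] += 1
        else:
            counts["tn"] += 1
    return counts

# Hypothetical label sequences, not data from the study.
ref = ["N", "N", "VES", "N", "VES", "N"]
pred = ["N", "N", "VES", "VES", "VES", "N"]
c = confusion_counts(ref, pred)
```

From these counts, the sensitivity, specificity, and PPA figures in the table above follow directly, computed once per event class ("Normal" and "VES").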
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth used was:
- Expert Consensus/Annotation: "AHA-annotations" and "manually edited ECG data sets." This implies a form of expert consensus or highly trained expert review of ECG waveforms.
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. The reported performance metrics (sensitivity, specificity, positive predictive accuracy) relate only to the test results of the algorithm.
9. How the Ground Truth for the Training Set was Established
The document does not specify how the ground truth for the training set was established, as it doesn't describe the training process or data.
§ 870.2800 Medical magnetic tape recorder.
(a) Identification. A medical magnetic tape recorder is a device used to record and play back signals from, for example, physiological amplifiers, signal conditioners, or computers.

(b) Classification. Class II (performance standards).