510(k) Data Aggregation
(254 days)
The E-FAX Pacemaker Services software is designed to provide an additional program directory in the E-FAX System to support scheduling, receiving, and annotating transtelephonic pacemaker magnet testing and ECG strips using a pacemaker telephonic transmitter provided to the patient by his/her physician.
The E-FAX Pacemaker Services software is an add-on to the present E-FAX System software (K932859) that provides an additional program directory in the E-FAX System to support scheduling, receiving, and annotating transtelephonic pacemaker magnet testing using a pacemaker transmitter provided to the patient by his/her physician.
The provided text describes the E-FAX System with Pacemaker Follow-Up Services and its substantial equivalence to predicate devices, focusing on its ability to transmit ECGs and pacemaker data over telephone lines. It does not contain specific acceptance criteria for performance metrics (like sensitivity, specificity, or accuracy) in a quantitative sense, nor does it detail a study designed to prove the device precisely meets such criteria. Instead, the "performance testing" section describes a clinical study that assesses the equivalence of ECG quality to released devices.
Based on the provided information, here's an attempt to answer your request, highlighting what is present and what is not:
1. A table of acceptance criteria and the reported device performance
Since explicit quantitative acceptance criteria (e.g., "The device shall achieve a sensitivity of X%") are not stated, the acceptance criterion for the clinical test was based on the qualitative assessment of ECG quality.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Quality of electrocardiograms | "The quality of the electrocardiograms in every case is equivalent." (to released devices) |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample size: 53 volunteers
- Data provenance: Not explicitly stated. The use of "volunteers" transmitting "over telephone lines in parallel with released devices" suggests prospective data collection, likely in the US (the company is located in Baytown, TX), but no country of origin is specified in the text.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not provided in the given text. It states that "The quality of the electrocardiograms in every case is equivalent," but it does not specify who made this assessment or their qualifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided in the given text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC study: No, a multi-reader multi-case comparative effectiveness study as typically understood for AI evaluation was not performed. The study described compares the ECG quality of the E-FAX system to released predicate devices.
- Effect size of human reader improvement: Not applicable, as this was not an AI-assisted diagnostic device study in that context.
6. If a standalone performance evaluation (i.e., algorithm only, without human-in-the-loop) was done
This device, the E-FAX System with Pacemaker Follow-Up Services, is described as an "add on to the present E-FAX System software" that supports "scheduling, receiving, and annotating transtelephonic pacemaker magnet testing and ECG strips." It is designed to transmit and receive ECG data, not to perform diagnostic interpretation autonomously. The performance test focused on the quality of transmitted ECGs, indicating that this is a data capture and transmission system rather than an "algorithm-only" diagnostic tool. A standalone diagnostic performance evaluation of an algorithm would therefore not be relevant for this device based on the provided description.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the clinical study was established by comparing the "quality of the electrocardiograms" from the E-FAX system "in parallel with released devices." This implies that the accepted standard for ECG quality from the predicate/released devices served as the implicit ground truth benchmark. The method of assessing this quality is not detailed (e.g., expert review, automated metrics).
8. The sample size for the training set
The provided text describes a clinical study (n=53 volunteers) to assess the quality of ECG recordings. It does not mention a "training set" in the context of an algorithm or machine learning model. This appears to be a traditional medical device, not an AI/ML-driven device.
9. How the ground truth for the training set was established
Not applicable, as no training set for an AI/ML model is described.