The Fitbit ECG App is a software-only mobile medical application intended for use with Fitbit wrist wearable devices to create, record, store, transfer, and display a single channel electrocardiogram (ECG) qualitatively similar to a Lead I ECG. The Fitbit ECG App determines the presence of atrial fibrillation (AFib) or sinus rhythm on a classifiable waveform. The AFib detection feature is not recommended for users with other known arrhythmias.
The Fitbit ECG App is intended for over-the-counter (OTC) use. The ECG data displayed by the Fitbit ECG App is intended for informational use only. The user is not intended to interpret or take clinical action based on the device output without consultation with a qualified healthcare professional. The ECG waveform is meant to supplement rhythm classification for the purposes of discriminating AFib from normal sinus rhythm and is not intended to replace traditional methods of diagnosis or treatment. The Fitbit ECG App is not intended for use by people under 22 years old.
The Fitbit ECG App is a software-only medical device used to create, record, display, store, and analyze a single channel ECG. The Fitbit ECG App consists of a Device application ("Device app") on a consumer Fitbit wrist-worn product and a mobile application tile ("mobile app") on Fitbit's consumer mobile application. The Device app uses data from electrical sensors on a consumer Fitbit wrist-worn product to create and record an ECG. The algorithm on the Device app analyzes a 30-second recording of the ECG and provides results to the user. On the mobile app, users can view their past results as well as a PDF report of the waveform, which is qualitatively similar to a Lead I ECG.
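To make the data flow concrete, the following is a minimal, purely illustrative Python sketch of the pipeline described above. It is not Fitbit's algorithm or API: the class names, the assumed sampling rate, and the placeholder classifier logic are hypothetical; only the 30-second recording length and the three-way outcome (AFib, sinus rhythm, or an unclassifiable recording) come from the summary.

```python
# Illustrative sketch only: NOT Fitbit's algorithm or API. It models the data flow
# described above (a 30-second single-channel recording captured on the wearable,
# classified on-device, result surfaced to the user). Sampling rate, class names,
# and the placeholder logic are assumptions.

from dataclasses import dataclass
from enum import Enum
from typing import Sequence


class RhythmResult(Enum):
    SINUS_RHYTHM = "sinus_rhythm"
    ATRIAL_FIBRILLATION = "atrial_fibrillation"
    INCONCLUSIVE = "inconclusive"   # recording could not be classified


@dataclass
class EcgRecording:
    samples: Sequence[float]   # single-channel voltage samples from the wrist sensors
    sampling_rate_hz: int      # assumed value for illustration, e.g. 250 Hz
    duration_s: float = 30.0   # the summary specifies a 30-second recording


def classify_recording(recording: EcgRecording) -> RhythmResult:
    """Placeholder standing in for the on-device detection algorithm."""
    expected_samples = int(recording.sampling_rate_hz * recording.duration_s)
    if len(recording.samples) < expected_samples:
        # Too little data to analyze: report an unclassifiable recording.
        return RhythmResult.INCONCLUSIVE
    # A real detector would analyze beat-to-beat (RR-interval) irregularity and
    # waveform morphology; here we simply return a fixed result as a stand-in.
    return RhythmResult.SINUS_RHYTHM
```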
Below is the information regarding the Fitbit ECG App's acceptance criteria and the study that supports them, based on the provided document:
1. Table of acceptance criteria and the reported device performance
| Category | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| AFib Detection (Sensitivity) | Not explicitly stated in the provided text as a numerical criterion, but implicitly expected to be high for AFib detection; the predicate device's performance often forms the basis for substantial equivalence. | 98.7% sensitivity for AFib detection |
| AFib Detection (Specificity) | Not explicitly stated in the provided text as a numerical criterion, but implicitly expected to be high for ruling out AFib; the predicate device's performance often forms the basis for substantial equivalence. | 100% specificity for AFib detection |
| ECG Waveform Morphological Equivalence to Lead I | ECG waveform "qualitatively similar to a Lead I ECG" and expected to meet specific morphological equivalence criteria. | 95.0% of AF and SR tracings deemed morphologically equivalent to Lead I of a 12-lead ECG waveform |
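For reference, sensitivity and specificity reduce to simple confusion-matrix arithmetic. The sketch below uses hypothetical counts (the document reports only the resulting percentages, not per-subject tallies) to show how figures such as 98.7% and 100% are computed.

```python
# Hypothetical counts for illustration only; the study's per-subject tallies are
# not given in the document, only the resulting percentages.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of true AFib recordings the algorithm classified as AFib."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of sinus-rhythm recordings the algorithm classified as not AFib."""
    return true_neg / (true_neg + false_pos)

# Example: 148 of 150 AFib recordings detected; no false positives among 200 sinus recordings.
print(f"sensitivity = {sensitivity(true_pos=148, false_neg=2):.1%}")   # ~98.7%
print(f"specificity = {specificity(true_neg=200, false_pos=0):.1%}")   # 100.0%
```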
2. Sample size used for the test set and the data provenance
- Sample Size: 475 subjects.
- Data Provenance: Subjects were recruited across 9 US sites. This indicates prospective data collection from the United States.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: For subjects with a known history of AFib, a "single qualified physician" performed the screening and assigned them to the AFib cohort. Beyond that single physician for AFib cohort screening, the document does not specify how many experts interpreted the 12-lead ECGs used as ground truth for AFib versus normal sinus rhythm (NSR) across all 475 subjects. For the overall study, the 12-lead ECG served as the reference, which would typically be interpreted by qualified cardiologists or electrophysiologists.
- Qualifications of Experts: For AFib screening, the expert was referred to only as a "single qualified physician." Specific qualifications (e.g., specialty board certification or years of experience) are not provided.
4. Adjudication method for the test set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It mentions that subjects with a known history of AFib were screened by a "single qualified physician." For the simultaneous 12-lead ECG, it implies a clinical standard interpretation which often involves adjudicated reads, but this is not detailed in the provided text.
5. If a Multi-Reader, Multi-Case (MRMC) comparative effectiveness study was done
No, a Multi-Reader, Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not reported in this document. The study focuses on evaluating the standalone performance of the Fitbit ECG App against a clinical standard (12-lead ECG).
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, a standalone performance study was done. The document states: "The Fitbit ECG App software algorithm was able to detect AF with the sensitivity and specificity of 98.7% and 100%, respectively." This indicates a direct evaluation of the algorithm's performance.
7. The type of ground truth used
The ground truth was established using a simultaneous 30-second 12-lead ECG. This is a clinical gold standard for rhythm analysis.
8. The sample size for the training set
The document does not provide the sample size for the training set. It only details the clinical testing conducted for validation/evaluation of the device.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for the training set was established, as it focuses on the validation study.
§ 870.2345 Electrocardiograph software for over-the-counter use.
(a) Identification. An electrocardiograph software device for over-the-counter use creates, analyzes, and displays electrocardiograph data and can provide information for identifying cardiac arrhythmias. This device is not intended to provide a diagnosis.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Clinical performance testing under anticipated conditions of use must demonstrate the following:
(i) The ability to obtain an electrocardiograph of sufficient quality for display and analysis; and
(ii) The performance characteristics of the detection algorithm as reported by sensitivity and either specificity or positive predictive value.
(2) Software verification, validation, and hazard analysis must be performed. Documentation must include a characterization of the technical specifications of the software, including the detection algorithm and its inputs and outputs.
(3) Non-clinical performance testing must validate detection algorithm performance using a previously adjudicated data set.
(4) Human factors and usability testing must demonstrate the following:
(i) The user can correctly use the device based solely on reading the device labeling; and
(ii) The user can correctly interpret the device output and understand when to seek medical care.
(5) Labeling must include:
(i) Hardware platform and operating system requirements;
(ii) Situations in which the device may not operate at an expected performance level;
(iii) A summary of the clinical performance testing conducted with the device;
(iv) A description of what the device measures and outputs to the user; and
(v) Guidance on interpretation of any results.