Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K230292


    Date Cleared
    2023-05-02 (89 days)

    Product Code
    Regulation Number
    870.2345

    Reference & Predicate Devices
    Reference Devices: K201168, DEN180042

    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    The Samsung ECG Monitor Application with Irregular Heart Rhythm Notification is an over-the-counter (OTC), software-only mobile medical application operating on a compatible Samsung Galaxy Watch and phone, for informational use only in adults 22 years and older. The app analyzes pulse rate data to identify episodes of irregular heart rhythms suggestive of atrial fibrillation (AFib) and provides a notification suggesting the user record an ECG to analyze the heart rhythm. The Irregular Heart Rhythm Notification Feature is not intended to provide a notification on every episode of irregular rhythm suggestive of AFib, and the absence of a notification is not intended to indicate that no disease process is present; rather, the feature is intended to opportunistically acquire pulse rate data and surface a notification when the data are determined sufficient.

    Following this prompt, or based on the user's own initiative, the app is intended to create, record, store, transfer, and display a single-channel ECG similar to a Lead I ECG. Classifiable traces are labeled by the app as either AFib or sinus rhythm with the intention of aiding heart rhythm identification.

    The app is not intended for users with other known arrhythmias, and it is not intended to replace traditional methods of diagnosis or treatment. Users should not interpret or take clinical action based on the device output without consultation with a qualified healthcare professional.

    Device Description

    The Samsung ECG Monitor App with Irregular Heart Rhythm Notification (IHRN) Feature is a software as a medical device (SaMD) that consists of a pair of mobile medical apps: one app on a compatible Samsung wearable and the other on a compatible Samsung phone, both general-purpose computing platforms.

    When enabled, the wearable application of the SaMD uses the wearable's photoplethysmography (PPG) sensor to monitor bio-photonic signals from the user in the background. The application examines beat-to-beat intervals and generates an irregular rhythm notification indicative of atrial fibrillation (AFib). Upon receiving an irregular rhythm notification, or at their own discretion, the user can record a single-lead ECG using the same wearable. The wearable application then calculates the average heart rate from the ECG recording and produces a rhythm classification. The wearable application also securely transmits the data to the ECG phone application on the paired phone. The phone application displays a time-stamped irregular rhythm notification history with heart rate information and an ECG measurement history, and generates a PDF file of the ECG signal, which the user can share with their healthcare provider.
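    The detection pipeline described above can be illustrated with a short sketch. The code below is not Samsung's proprietary algorithm; it is a generic illustration that flags a tachogram (a sequence of beat-to-beat intervals) as irregular using two common variability measures (RMSSD and coefficient of variation). All thresholds, the minimum beat count, and the consecutive-tachogram rule are assumptions made for the example.

```python
# Generic sketch of tachogram-based irregular-rhythm screening.
# NOT Samsung's algorithm: thresholds and heuristics are illustrative.
from statistics import mean, stdev

def rmssd(intervals_ms):
    """Root mean square of successive differences between beat intervals."""
    diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def is_irregular(tachogram_ms, rmssd_threshold=100.0, cv_threshold=0.10):
    """Flag a tachogram (beat-to-beat intervals in ms) as suggestive of an
    irregular rhythm when both variability measures exceed their
    (hypothetical) thresholds."""
    if len(tachogram_ms) < 30:      # require enough beats to judge
        return False
    cv = stdev(tachogram_ms) / mean(tachogram_ms)  # coefficient of variation
    return rmssd(tachogram_ms) > rmssd_threshold and cv > cv_threshold

def notify_if_irregular(tachograms):
    """Surface a notification only after several consecutive irregular
    tachograms, mirroring the opportunistic, multi-reading behavior
    described above (the 3-in-a-row rule is an assumption)."""
    streak = 0
    for t in tachograms:
        streak = streak + 1 if is_irregular(t) else 0
        if streak >= 3:
            return True
    return False
```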

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Samsung ECG Monitor Application with Irregular Heart Rhythm Notification Feature, based on the provided text:

    Acceptance Criteria and Device Performance

    Reported device performance (Samsung IHRN Feature) against the targeted acceptance criteria:

    Subject Level:
    • Sensitivity (for irregular rhythm notification): 68.0% (CI 60.5 - 75.5)
    • Specificity (for irregular rhythm notification): 98.8% (CI 98.0 - 99.6)

    Tachogram Level:
    • Positive Predictive Value (PPV): 95.7% (CI 94.7 - 96.7)

    ECG Function (inherited from K201168):
    • Atrial Fibrillation Sensitivity: 98.1%
    • Sinus Rhythm Specificity: 100%

    The document states that Samsung's algorithm performance for the IHRN function is substantially equivalent to the predicate device (Apple IRN Feature DEN180042) at both subject and tachogram levels, indicating these reported values met the acceptance criteria. For the ECG function, the device inherited the performance from the previously cleared Samsung ECG Monitor App (K201168) and thus the reported values were assumed to meet their prior acceptance criteria.
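    For readers less familiar with these metrics, the sketch below shows how sensitivity, specificity, and PPV derive from confusion-matrix counts, together with 95% confidence intervals of the kind quoted above (assuming a normal approximation; the submission does not state its CI method). The counts are hypothetical stand-ins, not the study's data.

```python
# How the reported subject-level metrics relate to confusion-matrix
# counts. The counts below are hypothetical; only the formulas (and the
# normal-approximation 95% CI) are standard.
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, (p - half, p + half)

# Hypothetical counts for illustration only:
tp, fn = 102, 48   # subjects with AFib: notified vs. missed
tn, fp = 655, 5    # subjects without AFib: quiet vs. falsely notified

sens, sens_ci = proportion_ci(tp, tp + fn)  # sensitivity
spec, spec_ci = proportion_ci(tn, tn + fp)  # specificity
ppv,  ppv_ci  = proportion_ci(tp, tp + fp)  # positive predictive value

print(f"Sensitivity {sens:.1%} (CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"Specificity {spec:.1%} (CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
print(f"PPV {ppv:.1%} (CI {ppv_ci[0]:.1%}-{ppv_ci[1]:.1%})")
```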


    Study Details

    1. Sample size used for the test set and the data provenance:

    • IHRN Clinical Validation (PPG-based notification):

      • Analyzable Dataset for primary and secondary endpoints: 810 subjects (from 888 enrolled).
      • Tachogram-level assessment: 98 subjects with over an hour of AFib episodes and 101 subjects with less than an hour of AFib or no AFib were randomly selected from the cardiologist-reviewed subjects. Up to 25 positive tachograms with reference ECG data were randomly selected from these subjects.
      • Data Provenance: The document does not explicitly state the country of origin, but it is a clinical study. The phrasing "All recruited subjects were at risk for AFib and had experienced symptoms..." suggests prospective data collection.
    • ECG Function (on-demand):

      • No new clinical, human factors, or ECG database tests were conducted as the function was unchanged from the K201168 clearance. Therefore, a new test set was not used for this specific clearance.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • IHRN Clinical Validation:
      • Subject-level ground truth: "clinician-adjudicated and cardiologist-reviewed patch ECG data." The exact number of clinicians/cardiologists for this overarching adjudication is not specified, but it implies multiple experts.
      • Tachogram-level ground truth: "Two board-certified cardiologists reviewed each reference ECG for annotation with a third cardiologist serving as tie-breaker."
      • Qualifications: "Board-certified cardiologists."

    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Tachogram-level ground truth: 2+1 (two board-certified cardiologists reviewed each reference ECG, with a third serving as tie-breaker); a minimal sketch of this rule follows this list.
    • Subject-level ground truth: Not explicitly stated as a specific numerical method (e.g., 2+1), but referred to as "clinician-adjudicated and cardiologist-reviewed," implying a consensus or expert-driven process.
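    A minimal sketch of the 2+1 rule, assuming one string label per reader (the encoding is an assumption, not taken from the submission):

```python
# 2+1 adjudication as described above: two primary readers annotate each
# reference ECG; a third reader breaks ties. Labels are illustrative.
def adjudicate_2_plus_1(reader1, reader2, tiebreaker):
    """Return the adjudicated label for one reference ECG."""
    if reader1 == reader2:
        return reader1      # the two primary cardiologists agree
    return tiebreaker       # disagreement: the third cardiologist decides

# Example: primary readers split on one strip; the tie-breaker decides.
print(adjudicate_2_plus_1("AFib", "Sinus", "AFib"))  # -> AFib
```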

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned or conducted. The study evaluated the device's performance (IHRN feature) against a clinical ground truth, not the improvement of human readers using the device.

    5. If a standalone (i.e., algorithm-only without human-in-the-loop) performance assessment was done:

    • Yes, the clinical validation study for the Irregular Heart Rhythm Notification (IHRN) feature primarily assesses the standalone performance of the PPG-based algorithm in identifying irregular rhythms and generating notifications. The "subject-level irregular rhythm notification accuracy" and "tachogram-level positive predictive value" are metrics of the algorithm's performance without direct human interpretation being part of the primary output.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • IHRN Clinical Validation: Expert consensus using reference ECG patch data reviewed and adjudicated by clinicians and board-certified cardiologists.

    7. The sample size for the training set:

    • The document does not specify the sample size for the training set. It focuses on the validation study.

    8. How the ground truth for the training set was established:

    • The document does not specify how the ground truth for the training set (if any) was established. It only details the ground truth establishment for the test/validation set.

    K Number
    K213519


    Manufacturer
    Rune Labs

    Date Cleared
    2022-06-10 (219 days)

    Product Code
    Regulation Number
    882.1950

    Reference & Predicate Devices
    Reference Devices: K161717, DEN180044, DEN180042

    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    The Rune Labs Kinematic System is intended to quantify kinematics of movement disorder symptoms including tremor and dyskinesia, in adults (45 years of age or older) with mild to moderate Parkinson's disease.

    Device Description

    The Rune Labs Kinematic System collects derived tremor and dyskinesia probability scores using processes running on the Apple Watch, then processes and uploads these data to Rune's cloud platform, where they are available for display to clinicians.

    The Rune Labs Kinematic System uses software that runs on the Apple Watch to measure patient wrist movements. These movements are used to determine how likely dyskinesias or tremors are to have occurred. The times with symptoms are then sent to the Rune Labs Cloud Platform using the Apple Watch's internet connection and displayed for clinician use.

    The Apple Watch contains accelerometers and gyroscopes which provide measurements of wrist movement. The Motor Fluctuations Monitor for Parkinson's Disease (MM4PD) is a toolkit developed by Apple for the Apple Watch that assesses the likely presence of tremor and dyskinesia as a function of time. Specifically, every minute, the Apple Watch calculates the percentage of time that tremor and dyskinesia were likely to be occurring. The movement disorder data output from Apple's MM4PD toolkit have been validated in a clinical study (Powers et al., 2021).
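    The per-minute calculation described above can be made concrete with a short sketch. This is not Apple's MM4PD implementation; the one-score-per-second sampling, the 0.5 probability threshold, and the function name are all assumptions made for illustration.

```python
# Sketch of per-minute symptom aggregation: for each minute, report the
# percentage of samples whose classifier score exceeded a threshold.
# Sampling rate and threshold are assumptions, not Apple's values.
from typing import List

def minute_percentages(probabilities: List[float],
                       samples_per_minute: int = 60,
                       threshold: float = 0.5) -> List[float]:
    """probabilities: one classifier score per second (assumed).
    Returns, for each minute, the percentage of samples above threshold."""
    out = []
    for start in range(0, len(probabilities), samples_per_minute):
        window = probabilities[start:start + samples_per_minute]
        if window:
            out.append(100.0 * sum(p > threshold for p in window) / len(window))
    return out

# Example: two minutes of scores; tremor likely ~30% of the first minute.
scores = [0.8] * 18 + [0.1] * 42 + [0.2] * 60
print(minute_percentages(scores))  # -> [30.0, 0.0]
```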

    The Rune Labs Kinematic System is software that receives, stores, and transfers the Apple Watch MM4PD classification data to the Rune Labs Cloud Platform, where it is available for visualization by clinicians. The device consists of custom software that runs on the user's smartwatch and in web browsers.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the correlation and differentiation shown by the device's measurements against established clinical ratings and conditions. The study highlights the performance in terms of correlation coefficients and statistical significance.

    Acceptance Criteria (Implicit) and Reported Device Performance:

    • Tremor detection correlation: strong correlation between daily tremor detection rate and the clinician's overall tremor rating (MDS-UPDRS tremor constancy score).
      Reported: Spearman's rank correlation coefficient of 0.72 in both the design set (n=95) and the hold-out set (n=43) for mean daily tremor percentage vs. MDS-UPDRS tremor constancy score.

    • Tremor false positive rate (non-PD): low false positive rate for tremor detection in elderly, non-PD controls.
      Reported: false positives occurred 0.25% of the time in 171 elderly, non-PD longitudinal control subjects (43,300+ hours of data).

    • Dyskinesia differentiation: significant difference in detected dyskinesia between subjects with and without chorea.
      Reported: detected dyskinesia differed significantly (p < 0.001) between subjects with chorea (10.7 ± 9.9% of the day) and those without (2.7 ± 2.2% of the day) in the design set (n=125 without, n=32 with chorea); a similar significant difference (p = 0.027) held in the hold-out set (n=47 without, n=10 with chorea).

    • Dyskinesia false positive rate (non-PD): low false positive rate for dyskinesia detection in elderly, non-PD controls.
      Reported: median false-positive rate of 2.0% in all-day data from elderly, non-PD controls (171 subjects, 59,000+ hours of data).

    • Correlation with motion capture (watch functionality): strong correlation between watch movement measurements and a professional motion tracking system.
      Reported: Pearson correlation coefficient of 0.98 between displacement measured by motion capture and the watch estimate, with a mean signed error of -0.04 ± 0.17 cm.
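    As a companion to the list above, the sketch below computes the two correlation statistics used in the validation (Spearman's rank correlation for tremor percentage vs. MDS-UPDRS constancy score, and Pearson's correlation for watch displacement vs. motion capture) using SciPy. The input arrays are hypothetical stand-ins, not the study's data.

```python
# The two correlation metrics cited in the validation, computed with
# SciPy on hypothetical data.
from scipy.stats import spearmanr, pearsonr

# Hypothetical per-subject values:
mean_daily_tremor_pct = [2.0, 5.5, 11.0, 18.5, 30.0]
updrs_constancy_score = [0, 1, 2, 3, 4]   # ordinal clinical rating

rho, p_rank = spearmanr(mean_daily_tremor_pct, updrs_constancy_score)
print(f"Spearman rho = {rho:.2f} (p = {p_rank:.3f})")

# Hypothetical displacement pairs (cm): motion capture vs. watch estimate.
mocap = [1.0, 2.1, 3.0, 4.2, 5.1]
watch = [0.9, 2.0, 3.1, 4.0, 5.2]
r, p_lin = pearsonr(mocap, watch)
print(f"Pearson r = {r:.2f} (p = {p_lin:.3f})")
```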

    Study Details (Powers et al., 2021)

    1. Sample sizes used for the test set and the data provenance:

      • Motion Measurement Correlation (initial validation step): A single healthy control subject (likely a very small test set to validate the sensor itself, not the clinical algorithm performance).

      • Tremor Validation:

        • Design Set: n = 95 patients (from longitudinal patient study)
        • Hold-out Set: n = 43 patients (from longitudinal patient study)
        • False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
      • Dyskinesia Validation:

        • Choreiform Movement Score (CMS) differentiation:
          • 65 subjects with confirmed absence of in-session dyskinesia (89 tasks)
          • 69 subjects with discordant dyskinesia ratings (109 tasks)
          • 19 subjects with confirmed dyskinesia across all three raters (22 tasks)
        • Longitudinal Dyskinesia Detection:
          • Design Set: 125 patients with no known dyskinesia, 32 patients with chorea.
          • Hold-out Set: 47 subjects with no reported dyskinesia, 10 subjects with chorea.
        • False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
      • Data Provenance: The study was conducted by Apple, implying a global or multi-center approach, but specific country of origin is not mentioned. The studies were likely prospective observational studies where data was collected over time from participants wearing the Apple Watch. Some initial development data may have been retrospective, but the validation steps appear prospective.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

      • For the Dyskinesia validation (specifically the "Choreiform Movement Score" differentiation), three MDS-certified experts were used to provide dyskinesia ratings during multiple MDS-UPDRS assessments. Their specific experience level (e.g., "10 years of experience") is not detailed, but MDS certification implies a high level of specialized expertise in movement disorders.
      • For the Tremor validation, the "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" were used. While it mentions "clinician's," it doesn't specify if this was a consensus or single reading, nor the number of clinicians. Given the use of MDS-UPDRS, it implies assessment by trained medical professionals (neurologists or movement disorder specialists).
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • For Dyskinesia validation, the ratings from the three MDS-certified experts were categorized as:
        • "confirmed absence" (all three agreed absence)
        • "discordant" (raters disagreed)
        • "confirmed dyskinesia" (all three agreed presence).
          This implicitly suggests a form of consensus-based adjudication (3/3 agreement for "confirmed," disagreement acknowledged for "discordant"); a minimal sketch of this binning appears after this list.
      • For Tremor validation, the adjudication method for the "clinician's overall tremor rating" or "MDS-UPDRS tremor constancy score" is not explicitly stated. It likely refers to standard clinical assessment practices using the UPDRS scale, which can be done by a single trained rater or with multiple raters for research purposes (though not explicitly detailed here as an adjudication).
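    A minimal sketch of the three-rater binning described above (the boolean encoding of "dyskinesia present" is an assumption):

```python
# Three-rater consensus binning into "confirmed absence", "confirmed
# dyskinesia", or "discordant", per the categories described above.
def categorize(ratings):
    """ratings: booleans (dyskinesia present?) from exactly three raters."""
    assert len(ratings) == 3
    if all(ratings):
        return "confirmed dyskinesia"
    if not any(ratings):
        return "confirmed absence"
    return "discordant"

print(categorize([True, True, True]))     # confirmed dyskinesia
print(categorize([False, False, False]))  # confirmed absence
print(categorize([True, False, True]))    # discordant
```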
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, a multi-reader, multi-case (MRMC) comparative effectiveness study evaluating human readers with vs. without AI assistance was not described. The study focused on validating the device's standalone ability to quantify movements against clinical ground truth (UPDRS scores, expert ratings of dyskinesia). The device is described as quantifying kinematics for clinicians to display, implying it's an assessment tool rather than an AI-assisted diagnostic aid for interpretation by human readers.
    5. If a standalone (i.e., algorithm-only without human-in-the-loop) performance assessment was done:

      • Yes, the core validation steps for tremor and dyskinesia detection described in the Powers et al. (2021) paper are standalone algorithm-only performance evaluations. The Apple Watch's MM4PD toolkit calculates the percentage of time tremor and dyskinesia were likely to occur, and this algorithm's output is directly compared to clinical ground truth. The Rune Labs Kinematics System then receives, stores, and transfers this classification data for display.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus/Clinical Ratings:
        • For Tremor: "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" (a widely accepted clinical rating scale for Parkinson's disease).
        • For Dyskinesia: Ratings from "three MDS-certified experts" during MDS-UPDRS assessments, leading to classifications like "confirmed absence," "discordant," and "confirmed dyskinesia." Clinical history (e.g., "known chorea") was also used.
      • Objective Measurement Reference: For the fundamental sensor accuracy, a commercially available motion tracking system (Vicon) was used as a reference to compare against the watch's displacement measurements.
    7. The sample size for the training set:

      • The document implies that the MM4PD algorithms were developed using data from various studies.

        • Tremor Algorithm Development:
          • Pilot study: N=69 subjects
          • Longitudinal patient study: first 143 subjects enrolled (used for the "design set" and hold-out set, so the training set would be a subset of these or distinct, but not explicitly broken out).
          • Longitudinal control study: 236 subjects (for false positive rates, likely also contributed to defining normal movement).
        • Dyskinesia Algorithm Development:
          • Pilot study: N=10 subjects (divided evenly between dyskinetic and non-dyskinetic)
          • Longitudinal patient study: N=97 subjects (first 143 enrolled; 22 with choreiform dyskinesia, 75 without)
          • Longitudinal control study: N=171 subjects.
      • The term "design set" is used for both tremor and dyskinesia validation, which often implies the data used for training/tuning the algorithm. So, the explicit "training set" size for each specific algorithm (tremor vs. dyskinesia) isn't given as a distinct number separate from the "design set," but the various datasets described contributed to algorithm development. For tremor, the "design set" was effectively the training/tuning set (n=95), with n=43 being the hold-out test set. For dyskinesia, a "design set" of n=97 (or n=157 total from longitudinal study) was used for development, and subsets of this were then characterized.

    8. How the ground truth for the training set was established:

      • The ground truth for the training/design sets mirrored how it was established for the test sets:
        • Clinical Ratings: For tremor, clinicians' overall tremor ratings and MDS-UPDRS tremor constancy scores were collected. For dyskinesia, ratings from MDS-certified experts during MDS-UPDRS assessments were used to label data within the training/design sets.
        • Self-Reported History: "Self-reported history" was also mentioned for certain conditions (e.g., history of tremor, dyskinesia) in the demographics, which likely informed initial subject stratification.
        • Observed Behavior within Tasks: For dyskinesia, observations during specific tasks (e.g., in-clinic cognitive distraction tasks) provided context for the expert ratings.