510(k) Data Aggregation

    K Number: K213519
    Manufacturer: Rune Labs
    Date Cleared: 2022-06-10 (219 days)
    Product Code:
    Regulation Number: 882.1950
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    The Rune Labs Kinematic System is intended to quantify kinematics of movement disorder symptoms, including tremor and dyskinesia, in adults (45 years of age or older) with mild to moderate Parkinson's disease.

    Device Description

    The Rune Labs Kinematic System collects derived tremor and dyskinesia probability scores using processes running on the Apple Watch, then processes and uploads these data to Rune's cloud platform, where they are available for display to clinicians.

    The Rune Labs Kinematic System uses software that runs on the Apple Watch to measure patient wrist movements. These movements are used to determine how likely it is that dyskinesia or tremor occurred. The periods with symptoms are then sent to the Rune Labs Cloud Platform over the Apple Watch's internet connection and displayed for clinician use.

    The Apple Watch contains accelerometers and gyroscopes that provide measurements of wrist movement. The Motor Fluctuations Monitor for Parkinson's Disease (MM4PD) is a toolkit developed by Apple for the Apple Watch that assesses the likely presence of tremor and dyskinesia as a function of time. Specifically, every minute, the Apple Watch calculates the percentage of that minute during which tremor and dyskinesia were likely occurring. The movement disorder data output by Apple's MM4PD toolkit have been validated in a clinical study (Powers et al., 2021).
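    To make the per-minute aggregation concrete, here is a minimal sketch. It is a hypothetical reconstruction, not Apple's MM4PD implementation (which is not public): it assumes the classifier emits a boolean "symptom likely" flag for each short analysis window and reports, per minute, the percentage of windows flagged.

```python
from collections import defaultdict

def per_minute_percentages(window_flags):
    """Aggregate per-window symptom flags into per-minute percentages.

    window_flags: iterable of (timestamp_seconds, symptom_likely) pairs,
    where symptom_likely is True if the classifier judged tremor (or
    dyskinesia) likely during that analysis window. Returns a dict
    mapping each minute index to the percentage of its windows flagged.
    """
    counts = defaultdict(lambda: [0, 0])  # minute -> [flagged, total]
    for t, likely in window_flags:
        minute = int(t // 60)
        counts[minute][1] += 1
        if likely:
            counts[minute][0] += 1
    return {m: 100.0 * flagged / total for m, (flagged, total) in counts.items()}

# Example: four 15-second windows in minute 0, one flagged -> 25%.
flags = [(0, False), (15, True), (30, False), (45, False)]
print(per_minute_percentages(flags))  # {0: 25.0}
```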

    The Rune Labs Kinematic System is software that receives, stores, and transfers the Apple Watch MM4PD classification data to the Rune Labs Cloud Platform, where it is available for visualization by clinicians. The device consists of custom software that runs on the user's smartwatch and in web browsers.
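    As a rough illustration of that receive-store-transfer role, the sketch below defines a minimal per-minute classification record and posts a batch to a cloud endpoint. The schema, field names, and URL are placeholders invented for illustration; the actual Rune Labs API is not described in the source.

```python
import json
from dataclasses import dataclass, asdict
from urllib import request

@dataclass
class MinuteClassification:
    """One minute of MM4PD output (hypothetical schema)."""
    patient_id: str
    minute_start_utc: str  # ISO 8601 timestamp
    tremor_pct: float      # % of the minute tremor was likely
    dyskinesia_pct: float  # % of the minute dyskinesia was likely

def upload(records, url="https://example.invalid/api/v1/kinematics"):
    """Serialize records as JSON and POST them to the (placeholder) endpoint."""
    body = json.dumps([asdict(r) for r in records]).encode()
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status
```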

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not stated explicitly; they are implied by the correlation and differentiation that the device's measurements must show against established clinical ratings and conditions. The study reports performance in terms of correlation coefficients and statistical significance.

    | Acceptance Criteria (Implicit) | Reported Device Performance |
    |---|---|
    | Tremor Detection Correlation: strong correlation between daily tremor detection rate and clinician's overall tremor rating (MDS-UPDRS tremor constancy score). | Spearman's rank correlation coefficient of 0.72 in both the design set (n=95) and hold-out set (n=43) for mean daily tremor percentage vs. MDS-UPDRS tremor constancy score. |
    | Tremor False Positive Rate (Non-PD): low false positive rate for tremor detection in elderly, non-PD controls. | False positives occurred 0.25% of the time in 171 elderly, non-PD longitudinal control subjects (43,300+ hours of data). |
    | Dyskinesia Differentiation: significant difference in detected dyskinesia between subjects with and without chorea. | Detected dyskinesia differed significantly (p < 0.001) between subjects with chorea (10.7 ± 9.9% of day) and those without (2.7 ± 2.2% of day) in the design set (n=125 without, n=32 with chorea); a similar significant difference (p = 0.027) held in the hold-out set (n=47 without, n=10 with chorea). |
    | Dyskinesia False Positive Rate (Non-PD): low false positive rate for dyskinesia detection in elderly, non-PD controls. | Median false-positive rate of 2.0% in all-day data from elderly, non-PD controls (171 subjects, 59,000+ hours of data). |
    | Correlation with Motion Capture (Watch Functionality): strong correlation between watch movement measurements and a professional motion tracking system. | Pearson correlation coefficient of 0.98 between displacement measured by motion capture and the watch estimate, with a mean signed error of -0.04 ± 0.17 cm. |
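    The summary statistics in this table are standard and can be reproduced from paired data with SciPy. Below is a minimal sketch using synthetic stand-in data (the subject-level data are not provided in the source); the group-comparison test used by Powers et al. is not named in the text, so the Mann-Whitney U test appears here only as one plausible nonparametric choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Tremor: mean daily tremor percentage vs. MDS-UPDRS tremor constancy score.
tremor_pct = rng.uniform(0, 40, size=95)              # stand-in for the n=95 design set
constancy = np.clip(np.round(tremor_pct / 10), 0, 4)  # stand-in ordinal ratings (0-4)
rho, _ = stats.spearmanr(tremor_pct, constancy)       # reported: rho = 0.72

# Dyskinesia: % of day detected, chorea vs. no-chorea groups.
with_chorea = rng.normal(10.7, 9.9, size=32).clip(0)
without_chorea = rng.normal(2.7, 2.2, size=125).clip(0)
_, p_group = stats.mannwhitneyu(with_chorea, without_chorea)  # reported: p < 0.001

# Sensor accuracy: watch displacement estimate vs. motion-capture reference (cm).
mocap = rng.uniform(0, 30, size=200)
watch = mocap + rng.normal(-0.04, 0.17, size=200)
r, _ = stats.pearsonr(mocap, watch)                   # reported: r = 0.98
signed_err = watch - mocap                            # reported: -0.04 ± 0.17 cm

print(f"Spearman rho={rho:.2f}, group p={p_group:.3g}, Pearson r={r:.2f}, "
      f"mean signed error={signed_err.mean():.2f} ± {signed_err.std():.2f} cm")
```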

    Study Details (Powers et al., 2021)

    1. Sample sizes used for the test set and the data provenance:

      • Motion Measurement Correlation (initial validation step): A single healthy control subject (likely a sensor-level validation rather than a test of clinical algorithm performance).

      • Tremor Validation:

        • Design Set: n = 95 patients (from longitudinal patient study)
        • Hold-out Set: n = 43 patients (from longitudinal patient study)
        • False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
      • Dyskinesia Validation:

        • Choreiform Movement Score (CMS) differentiation:
          • 65 subjects with confirmed absence of in-session dyskinesia (89 tasks)
          • 69 subjects with discordant dyskinesia ratings (109 tasks)
          • 19 subjects with confirmed dyskinesia across all three raters (22 tasks)
        • Longitudinal Dyskinesia Detection:
          • Design Set: 125 patients with no known dyskinesia, 32 patients with chorea.
          • Hold-out Set: 47 subjects with no reported dyskinesia, 10 subjects with chorea.
        • False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
      • Data Provenance: The study was conducted by Apple, implying a global or multi-center approach, but the specific country of origin is not mentioned. The studies were likely prospective observational studies in which data were collected over time from participants wearing the Apple Watch. Some initial development data may have been retrospective, but the validation steps appear prospective.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):

      • For the Dyskinesia validation (specifically the "Choreiform Movement Score" differentiation), three MDS-certified experts were used to provide dyskinesia ratings during multiple MDS-UPDRS assessments. Their specific experience level (e.g., "10 years of experience") is not detailed, but MDS certification implies a high level of specialized expertise in movement disorders.
      • For the Tremor validation, the "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" were used. The text does not specify whether this was a consensus or a single reading, nor the number of clinicians. Given the use of the MDS-UPDRS, it implies assessment by trained medical professionals (neurologists or movement disorder specialists).
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • For Dyskinesia validation, the ratings from the three MDS-certified experts were categorized as:
        • "confirmed absence" (all three agreed absence)
        • "discordant" (raters disagreed)
        • "confirmed dyskinesia" (all three agreed presence).
          This implicitly suggests a form of consensus-based adjudication (3/3 agreement for "confirmed," disagreement acknowledged for "discordant"); see the sketch after this list.
      • For Tremor validation, the adjudication method for the "clinician's overall tremor rating" or "MDS-UPDRS tremor constancy score" is not explicitly stated. It likely refers to standard clinical assessment practices using the UPDRS scale, which can be done by a single trained rater or with multiple raters for research purposes (though not explicitly detailed here as an adjudication).
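    The three-rater categorization maps directly onto a small rule. The sketch below assumes each task carries three boolean expert ratings (present/absent); it illustrates the consensus rule described above, not the study's actual data handling.

```python
def adjudicate(ratings):
    """Categorize one task's three expert ratings (True = dyskinesia present).

    All three agree absent  -> "confirmed absence"
    All three agree present -> "confirmed dyskinesia"
    Any disagreement        -> "discordant"
    """
    assert len(ratings) == 3, "expects exactly three MDS-certified raters"
    if all(ratings):
        return "confirmed dyskinesia"
    if not any(ratings):
        return "confirmed absence"
    return "discordant"

print(adjudicate([False, False, False]))  # confirmed absence
print(adjudicate([True, False, True]))    # discordant
print(adjudicate([True, True, True]))     # confirmed dyskinesia
```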
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, a multi-reader, multi-case (MRMC) comparative effectiveness study evaluating human readers with vs. without AI assistance was not described. The study focused on validating the device's standalone ability to quantify movements against clinical ground truth (UPDRS scores, expert ratings of dyskinesia). The device is described as quantifying kinematics for display to clinicians, implying it is an assessment tool rather than an AI-assisted diagnostic aid for interpretation by human readers.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, the core validation steps for tremor and dyskinesia detection described in the Powers et al. (2021) paper are standalone, algorithm-only performance evaluations. The Apple Watch's MM4PD toolkit calculates the percentage of time tremor and dyskinesia were likely to occur, and this algorithm's output is directly compared to clinical ground truth. The Rune Labs Kinematic System then receives, stores, and transfers this classification data for display.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus/Clinical Ratings:
        • For Tremor: "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" (a widely accepted clinical rating scale for Parkinson's disease).
        • For Dyskinesia: Ratings from "three MDS-certified experts" during MDS-UPDRS assessments, leading to classifications like "confirmed absence," "discordant," and "confirmed dyskinesia." Clinical history (e.g., "known chorea") was also used.
      • Objective Measurement Reference: For the fundamental sensor accuracy, a commercially available motion tracking system (Vicon) was used as a reference to compare against the watch's displacement measurements.
    7. The sample size for the training set:

      • The document implies that the MM4PD algorithms were developed using data from various studies.

        • Tremor Algorithm Development:
          • Pilot study: N=69 subjects
          • Longitudinal patient study: first 143 subjects enrolled (used for the "design set" and hold-out set, so the training set would be a subset of these or distinct, but not explicitly broken out).
          • Longitudinal control study: 236 subjects (for false positive rates, likely also contributed to defining normal movement).
        • Dyskinesia Algorithm Development:
          • Pilot study: N=10 subjects (divided evenly between dyskinetic and non-dyskinetic)
          • Longitudinal patient study: N=97 subjects (first 143 enrolled; 22 with choreiform dyskinesia, 75 without)
          • Longitudinal control study: N=171 subjects.
      • The term "design set" is used for both tremor and dyskinesia validation, which often implies the data used for training/tuning the algorithm. So, the explicit "training set" size for each specific algorithm (tremor vs. dyskinesia) isn't given as a distinct number separate from the "design set," but the various datasets described contributed to algorithm development. For tremor, the "design set" was effectively the training/tuning set (n=95), with n=43 being the hold-out test set. For dyskinesia, a "design set" of n=97 (or n=157 total from longitudinal study) was used for development, and subsets of this were then characterized.

    8. How the ground truth for the training set was established:

      • The ground truth for the training/design sets mirrored how it was established for the test sets:
        • Clinical Ratings: For tremor, clinicians' overall tremor ratings and MDS-UPDRS tremor constancy scores were collected. For dyskinesia, ratings from MDS-certified experts during MDS-UPDRS assessments were used to label data within the training/design sets.
        • Self-Reported History: "Self-reported history" was also mentioned for certain conditions (e.g., history of tremor, dyskinesia) in the demographics, which likely informed initial subject stratification.
        • Observed Behavior within Tasks: For dyskinesia, observations during specific tasks (e.g., in-clinic cognitive distraction tasks) provided context for the expert ratings.