The Parky App is intended to quantify kinematics of movement disorder symptoms, including tremor and dyskinesia, in adults (45 years of age or older) with mild to moderate Parkinson's disease.
The Parky App is a symptom-tracking mobile app for patients with Parkinson's disease. It continuously collects motion data through an Apple Watch and quantifies tremor and dyskinesia episodes using the clinically validated MM4PD algorithm. Tracked symptoms are reported in daily, weekly, and monthly summaries, and each report is shared with the prescribing healthcare professional by email. The app includes a medication reminder module in which patients can manually enter their medication schedule, receive on-time reminder notifications on the Apple Watch and iPhone, and respond to them as "taken" or "not yet taken". Parky also reports daily step counts provided by Apple HealthKit.
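The summary does not disclose how MM4PD processes the Apple Watch motion signals. As a purely illustrative sketch, not the actual algorithm, the snippet below shows one generic way to screen wrist accelerometer data for spectral power in the 3-7 Hz band where parkinsonian rest tremor typically falls; the sampling rate, band edges, and sample data are all assumptions.

```python
import numpy as np
from scipy.signal import welch

def tremor_band_power_fraction(accel_magnitude, fs=50.0, band=(3.0, 7.0)):
    """Fraction of total spectral power in the tremor band.

    accel_magnitude: 1-D array of accelerometer magnitude samples.
    fs: sampling rate in Hz (50 Hz is an assumption, not the device's spec).
    band: frequency band (Hz) typical of parkinsonian rest tremor.
    """
    freqs, psd = welch(accel_magnitude, fs=fs, nperseg=int(fs * 4))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Hypothetical example: a 5 Hz oscillation buried in noise.
fs = 50.0
t = np.arange(0, 60, 1 / fs)                      # one minute of data
signal = 0.3 * np.sin(2 * np.pi * 5.0 * t)        # simulated 5 Hz tremor
noise = 0.1 * np.random.default_rng(0).standard_normal(t.size)
frac = tremor_band_power_fraction(signal + noise, fs=fs)
print(f"tremor-band power fraction: {frac:.2f}")  # high fraction flags possible tremor
```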
Acceptance Criteria and Device Performance Study for Parky App
The Parky App utilizes the MM4PD (Motor fluctuations Monitor for Parkinson's Disease) algorithm to quantify movement disorder symptoms in adults with mild to moderate Parkinson's disease. The following outlines the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) summary.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria / Performance Metric | Reported Device Performance |
|---|---|
| Correlation with clinical evaluations of tremor severity (MDS-UPDRS tremor constancy) | Rank Correlation Coefficient (ρ) = 0.72 |
| Differentiation of dyskinesia presence (from no dyskinesia) | Statistically significant difference (P = 0.027) with Wilcoxon rank sum test between "No DK" and "Chorea" groups |
| Smartwatch captured symptom changes matching clinician expectations | 94% of cases with full patient history (blinded: 87.5% correct classifications by 3 experts) |
| Likelihood of dyskinesia mapped to expert ratings | P < 0.001 during in-clinic tasks |
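The table's two headline statistics are a Spearman rank correlation and a Wilcoxon rank-sum P value. As a hedged illustration of how such metrics are computed, the sketch below uses SciPy on invented paired data; the arrays are hypothetical, not the study's data.

```python
from scipy.stats import spearmanr, ranksums

# Hypothetical clinician MDS-UPDRS tremor-constancy ratings and
# device-derived tremor measurements for the same patients.
clinician_scores = [0, 1, 1, 2, 2, 3, 3, 4, 4, 4]
device_measures  = [0.02, 0.10, 0.08, 0.25, 0.31, 0.45, 0.52, 0.70, 0.66, 0.81]

rho, p_rho = spearmanr(clinician_scores, device_measures)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")

# Hypothetical dyskinesia-likelihood outputs for "No DK" vs. "Chorea" groups,
# compared with the Wilcoxon rank-sum test, as in the summary.
no_dk  = [0.05, 0.10, 0.12, 0.08, 0.20, 0.15, 0.11]
chorea = [0.40, 0.55, 0.38, 0.62, 0.47]
stat, p = ranksums(no_dk, chorea)
print(f"Wilcoxon rank-sum p = {p:.3f}")
```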
2. Sample Size and Data Provenance
- Test Set Sample Size (a sketch of hold-out splitting follows this list):
  - Tremor Algorithm Test (hold-out data): n = 43 (patients from the longitudinal patient study)
  - Dyskinesia Algorithm Test (hold-out data): n = 57 (from the longitudinal patient study): n = 47 in the "No DK" group and n = 10 in the "Chorea" group
  - Clinician Evaluation (full patient history): 112 subjects (from the longitudinal patient study)
  - Blinded Clinician Classification: 10 sets of profiles (cases)
- Data Provenance: The studies are reported in Powers et al. (2021), which the 510(k) summary cites repeatedly. The country of origin is not explicitly stated, though the use of the MDS-UPDRS (Movement Disorder Society-Unified Parkinson's Disease Rating Scale) reflects a globally recognized clinical standard. The work includes both retrospective elements (designing the algorithms with existing in-clinic and all-day data) and prospective elements (longitudinal studies and evaluation of symptom changes in response to treatment).
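Hold-out sets like those above are typically constructed at the patient level, so that no patient contributes data to both algorithm design and testing. A minimal sketch, assuming hypothetical patient IDs and a 25% hold-out fraction (the actual split procedure and ratio are not stated in the summary):

```python
import numpy as np

def patient_level_split(patient_ids, holdout_frac=0.25, seed=0):
    """Partition unique patients (not individual recordings) into
    design (training) and hold-out sets, so no patient appears in both."""
    rng = np.random.default_rng(seed)
    unique = np.array(sorted(set(patient_ids)))
    rng.shuffle(unique)
    n_holdout = int(round(holdout_frac * unique.size))
    return set(unique[n_holdout:]), set(unique[:n_holdout])

# Hypothetical: 225 patients from a longitudinal study.
ids = [f"PD{i:03d}" for i in range(225)]
design, holdout = patient_level_split(ids)
print(len(design), len(holdout))  # 169 design, 56 hold-out
```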
3. Number of Experts and Qualifications
- Number of Experts: 3 expert raters were used for the blinded classification task.
- Qualifications of Experts: They are described as "blinded movement disorder specialists." Specific years of experience or board certifications are not provided.
4. Adjudication Method
- Blinded Clinician Classification: For the 10 cases evaluated by three blinded clinicians, "87.5% of classifications were correct." This suggests a consensus or majority-vote approach, but the exact adjudication method (e.g., 2+1, 3+1) is not explicitly detailed. The summary notes that "three misclassifications occurred because raters presumed that an alternate medication had a dominant effect. Six cases were deemed inconclusive and were excluded," implying a form of expert review and case selection; a tally consistent with these figures is sketched below.
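One arithmetic reading consistent with the reported 87.5% is 10 cases × 3 raters = 30 ratings, minus 6 inconclusive ratings = 24, with 3 misclassifications: 21/24 = 87.5%. The summary's wording ("six cases") leaves this ambiguous, so treat the tally below as a hypothetical reconstruction.

```python
# Hypothetical tally reproducing the reported 87.5%: each rating is
# "correct", "wrong", or "inconclusive" (excluded from the denominator).
ratings = ["correct"] * 21 + ["wrong"] * 3 + ["inconclusive"] * 6

conclusive = [r for r in ratings if r != "inconclusive"]
accuracy = conclusive.count("correct") / len(conclusive)
print(f"{accuracy:.1%}")  # 87.5%
```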
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- A form of MRMC study was performed, though it was not framed as an "AI vs. human with AI assistance" comparison. The study assessed clinician performance with and without full patient history, effectively comparing clinical judgment aided by the smartwatch symptom profiles.
- Effect Size:
- When clinicians had full patient history and reviewed smartwatch symptom profiles, "symptom changes matched the clinician's expectation of the prescribed medication change in 94% of cases."
- When 3 blinded movement disorder specialists classified symptom profiles (without full patient history, but with medication schedule and MDS-UPDRS tremor/dyskinesia ratings from intake), "87.5% of classifications were correct."
- This indicates that the smartwatch-generated symptom profiles aided clinicians in affirming or understanding treatment effects, achieving high agreement rates even when raters were blinded. However, the improvement "with AI vs. without AI" is not quantified as a traditional comparative effectiveness study with reader performance metrics; the data instead demonstrate the utility of the AI-generated profiles in supporting clinical assessment.
6. Standalone (Algorithm Only) Performance
- Yes, standalone performance testing was conducted for the core algorithms.
- "MM4PD measurements correlated to clinical evaluations of tremor severity (Rank Correlation Coefficient=0.80) and mapped to expert ratings of dyskinesia presence (P<0.001) during in-clinic tasks." (This refers to the algorithm's direct measurement and correlation).
- "The ability of MM4PD to identify tremors and the likelihood of dyskinesia was tested with the final algorithm in holdout sets."
- Specifically, Fig. 3E (tremor algorithm test with hold-out data) and Fig. 4E (dyskinesia algorithm test with hold-out data) demonstrate standalone algorithm performance against clinical ground truth; one illustrative way to summarize such hold-out correlations is sketched below.
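As an illustrative addition (not part of the summary), a percentile bootstrap is one common way to characterize the stability of a hold-out correlation such as ρ = 0.72 with n = 43. The sketch below uses synthetic data, since the real hold-out data are not public.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic stand-in for a hold-out set of 43 patients: device output
# loosely tracking the clinician score.
clinician = rng.integers(0, 5, size=43)
device = clinician * 0.2 + rng.normal(0.0, 0.15, size=43)

def bootstrap_rho_ci(x, y, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for Spearman's rho."""
    x, y = np.asarray(x), np.asarray(y)
    rhos = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, x.size, size=x.size)  # resample patient pairs
        r, _ = spearmanr(x[idx], y[idx])
        rhos[i] = r
    return np.quantile(rhos, [alpha / 2, 1 - alpha / 2])

rho, _ = spearmanr(clinician, device)
lo, hi = bootstrap_rho_ci(clinician, device)
print(f"rho = {rho:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```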
7. Type of Ground Truth Used
- Expert Consensus / Clinical Evaluations:
- MDS-UPDRS ratings: Used for tremor severity correlation.
- Expert ratings of dyskinesia presence: Used for mapping dyskinesia likelihood.
- Clinician's expectations: Used as ground truth for evaluating how well the symptom changes matched expected treatment responses.
- Movement disorder specialists' classifications: In the blinded task, the specialists' classifications were scored as correct or incorrect, implying that the actual prescribed medication change served as the reference standard.
8. Sample Size for Training Set
The training set sample sizes are implicitly provided through the "MM4PD development and validation" overview (Figure S1) and "Study demographics" (Table S1).
- Pilot study (PD patients in-clinic + 1 week live-on): 118 patients
- Longitudinal patient study (PD patients long-term live-on): 225 patients
- Longitudinal control study (Elderly controls): 171 individuals
- This totals 514 individuals participating in the development and validation studies, from which data was used for algorithm design (training) and testing (hold-out sets).
9. How Ground Truth for Training Set was Established
The ground truth for the training set (algorithm design phase) was established through:
- In-clinic tasks: Patients performed specific tasks during clinic visits while their movement was captured by the Apple Watch. These in-clinic observations would have been correlated with contemporaneous clinical assessments such as MDS-UPDRS ratings by clinicians.
- All-day data: Continuous data collected by the Apple Watch over longer periods, which would have been analyzed and perhaps retrospectively correlated with patient diaries, medication logs, and clinical assessments at follow-up visits.
- The MM4PD algorithm was designed to match MDS-UPDRS tremor constancy and its outputs were mapped to expert ratings of dyskinesia. This indicates that clinical scores and expert consensus from neurologists or movement disorder specialists were the primary ground truth for algorithm development.
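The summary states that MM4PD outputs were designed to match MDS-UPDRS tremor constancy, which the MDS-UPDRS rates on a 0-4 scale by the fraction of time tremor is present. A hedged sketch of such a mapping follows; the cut points are modeled on the MDS-UPDRS constancy-of-rest-tremor item (3.18), and the device's actual mapping is not disclosed.

```python
def constancy_score(fraction_with_tremor):
    """Map the fraction of observed time with tremor (0..1) to a 0-4
    constancy-style score. Cut points are modeled on MDS-UPDRS item 3.18
    (constancy of rest tremor); the device's actual mapping is not public."""
    if fraction_with_tremor <= 0.0:
        return 0
    if fraction_with_tremor <= 0.25:
        return 1
    if fraction_with_tremor <= 0.50:
        return 2
    if fraction_with_tremor <= 0.75:
        return 3
    return 4

# Example: tremor detected in 31% of monitored epochs -> score 2.
print(constancy_score(0.31))
```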