Search Results
Found 8 results
510(k) Data Aggregation
(94 days)
Re: K250153
Trade/Device Name: Neu Platform
Regulation Number: 21 CFR 882.1950
| Predicate Manufacturer | Predicate Device | Regulation Number |
|---|---|---|
| H2O Therapeutics | Parky App | 882.1950 |
The Neu Platform is intended to quantify the kinematics of movement disorder symptoms, including tremor in adults (45 years and older) with mild to moderate Parkinson's disease.
The Neu Platform is a platform designed to capture digital motor and patient-reported data in patients with mild to moderate Parkinson's Disease. The platform has two key components:
- A smartphone application for the remote capture of symptoms - Patients utilise the smartphone application to perform the motor tremor measurement activities. The app is also used by the patient to self-report data, including onboarding information and subjective symptom information.
- A dashboard for the clinical team to view the captured data - The clinician dashboard presents the captured data for review by the clinical staff responsible for managing the patient's condition.
The Platform comprises a patient app, supporting backend infrastructure, and a clinician web-based dashboard. The Neu Platform refers specifically to the components of the platform used for data capture from patients with Parkinson's disease.
Here's a breakdown of the acceptance criteria and study details for the Neu Platform, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Correlation with Clinical Standard (Rest Tremor) | 0.92 (p < 0.01) between the Neu Platform's tremor measurement and the clinical standard (MDS-UPDRS). This is an improvement compared to the predicate device's reported correlation of r=0.72. |
| Correlation with Clinical Standard (Postural Tremor) | 0.85 (p = 0.002) between the Neu Platform's tremor measurement and the clinical standard (MDS-UPDRS). This also appears to be an improvement compared to the predicate device's reported general correlation of r=0.72. |
| Agreement with Clinically Meaningful Differences in Tremor Severity | The performance data demonstrates that the Neu Health tremor measurements are in agreement with clinically meaningful differences in tremor severity. |
| Safety and Effectiveness | The device is deemed safe and effective for its intended use, based on non-clinical performance data and substantial equivalence to the predicate. No new or different questions of safety or effectiveness are raised by the device, despite not measuring dyskinesia like the predicate (as dyskinesia can still be clinically assessed or patient-reported). The device has undergone software validation per FDA guidance and international standards (IEC 62304, ISO 14971) and cybersecurity threat analysis and mitigation (per "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices"), including data encryption (TLS 1.2+ and AES-256) and MFA for access. |
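The correlation rows above compare device-derived tremor measures against MDS-UPDRS ratings. As a minimal illustration of this kind of analysis (the submission does not publish its computation), the sketch below applies Spearman's rank correlation to hypothetical paired scores; the arrays and the choice of rank correlation are assumptions.

```python
# Minimal sketch: correlating device tremor measurements with
# clinician MDS-UPDRS item scores. All values are illustrative,
# not data from the submission.
from scipy.stats import spearmanr

device_scores = [0.1, 0.4, 0.2, 0.8, 0.6, 0.3, 0.9, 0.5]  # hypothetical
updrs_ratings = [0, 1, 0, 3, 2, 1, 4, 2]                   # hypothetical

rho, p_value = spearmanr(device_scores, updrs_ratings)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```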
2. Sample Size Used for the Test Set and Data Provenance
The document describes "bench testing" but does not explicitly state the sample size (number of patients or measurements) used for this test set.
- Data Provenance: The data were "obtained in a controlled setting." The document does not specify the country of origin of the data.
- Retrospective or Prospective: Not specified, but "bench testing" often implies a controlled, possibly retrospective analysis of collected data or a prospective collection in a controlled environment.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states that the device's measurements were compared to "the clinical standard for the rest and postural tremor measurements respectively" and "the accepted clinical standard (MDS-UPDRS)."
- It does not specify the number of experts used to establish this ground truth for the test set.
- It does not specify the qualifications of these experts beyond referring to "clinical standard" and "UPDRS-III assessment." Typically, MDS-UPDRS assessments are performed by trained neurologists or movement disorder specialists, but this is not explicitly stated as the method for ground truth establishment for the test set.
4. Adjudication Method for the Test Set
The document does not describe an explicit adjudication method (e.g., 2+1, 3+1) for the comparison of the device's measurements against the clinical standard. The "ground truth" seems to be derived directly from the application of a clinical rating scale (MDS-UPDRS-III).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was one done? No. The document explicitly states: "Substantial equivalence is based on an assessment of non-clinical performance data and no animal or clinical performance data is included." Therefore, a traditional MRMC study comparing human readers with and without AI assistance was not performed.
- Effect Size: Not applicable, as no MRMC study was conducted. The study focused on the device's correlation with the clinical standard.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Performance
Yes, the performance data presented is for the standalone device's ability to measure tremor. The "bench testing" evaluated the device's tremor measurements directly against the clinical standard, implying an algorithm-only evaluation for the core tremor measurement functionality. The device itself is designed to quantify the kinematics, not to issue a diagnosis or treatment recommendation, making its core function a standalone measurement tool.
7. Type of Ground Truth Used
The ground truth used was based on the "clinical standard" as defined by the MDS-UPDRS-III assessment (Movement Disorder Society - Unified Parkinson's Disease Rating Scale, Part III), which is a clinical rating scale for motor symptoms in Parkinson's disease.
8. Sample Size for the Training Set
The document does not provide any information about the sample size used for the training set for the Neu Platform's algorithms.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established.
(269 days)
NeuroRPM is intended to quantify movement disorder symptoms during wake periods in adult patients 46 to 85 years of age with Parkinson's disease. These symptoms include tremor, bradykinesia, and dyskinesia. NeuroRPM is intended for clinic and home environments.
NeuroRPM is a software application for the Apple Watch that is prescribed by a health professional to quantify motor symptoms of Parkinson's disease including bradykinesia, dyskinesia, and tremor. NeuroRPM collects accelerometer and gyroscope data from the Apple Watch. The motion data are transmitted to cloud servers and analyzed using machine learning models developed to generate binary symptom classifications. Binary symptom classification output is generated every 15 minutes.
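As a rough sketch of the pipeline this paragraph describes (continuous watch motion data, a trained model, one binary output per 15-minute window), the code below segments a 6-axis stream into 15-minute epochs and classifies each. The sampling rate, the energy threshold, and the `classify_window` stub are illustrative assumptions; the actual NeuroRPM model is not public.

```python
import numpy as np

FS_HZ = 50                      # assumed watch sampling rate (Hz)
WINDOW_S = 15 * 60              # one 15-minute epoch, per the summary
SAMPLES_PER_WINDOW = FS_HZ * WINDOW_S

def classify_window(window: np.ndarray) -> bool:
    """Stand-in for the cleared ML model (its details are not public).
    Here: flag any window whose mean motion energy exceeds a threshold."""
    return float(np.mean(np.square(window))) > 0.5   # illustrative rule

def binary_symptom_series(motion: np.ndarray) -> list[bool]:
    """Split an (n_samples, 6) accel+gyro stream into 15-minute windows
    and emit one binary symptom classification per window."""
    n_windows = len(motion) // SAMPLES_PER_WINDOW
    return [classify_window(motion[i * SAMPLES_PER_WINDOW:
                                   (i + 1) * SAMPLES_PER_WINDOW])
            for i in range(n_windows)]

# One hour of simulated 6-axis data -> four 15-minute classifications
stream = np.random.randn(FS_HZ * 3600, 6)
print(binary_symptom_series(stream))
```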
Here's a breakdown of the acceptance criteria and study details for the NeuroRPM device, based on the provided text:
1. Table of Acceptance Criteria & Reported Device Performance
The acceptance criteria for NeuroRPM were based on achieving specific sensitivity and specificity thresholds for detecting tremor, bradykinesia, and dyskinesia. While explicit "acceptance criteria" values (e.g., "must meet X sensitivity") aren't directly stated as minimum required values, the study design aimed to demonstrate adequate performance to support its intended use and substantial equivalence to the predicate device. The reported performance is presented with 95% confidence intervals.
| NeuroRPM Output | Acceptance Criteria (Implied) | Reported Sensitivity [95% CI] | Reported Specificity [95% CI] |
|---|---|---|---|
| Tremor | (Adequate performance for intended use) | 0.7176 [0.6081, 0.8172] | 0.9508 [0.9119, 0.9802] |
| Bradykinesia | (Adequate performance for intended use) | 0.7143 [0.5894, 0.8332] | 0.7740 [0.6787, 0.8597] |
| Dyskinesia | (Adequate performance for intended use) | 0.7123 [0.5323, 0.8652] | 0.9466 [0.9069, 0.9741] |
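The intervals above can be illustrated with standard binomial confidence-interval machinery. In the sketch below, the true-positive and true-negative counts are back-calculated from the reported rates and event totals (an assumption), and Clopper-Pearson intervals are used; the submission does not name its interval method, which may account for within-subject clustering and therefore differ from this simple calculation.

```python
# Sketch: sensitivity/specificity with 95% Clopper-Pearson intervals.
# Counts are back-calculated from the reported rates (an assumption).
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(successes: int, total: int):
    rate = successes / total
    lo, hi = proportion_confint(successes, total, alpha=0.05, method="beta")
    return rate, lo, hi

# Tremor: ~122/170 symptom events detected, ~309/325 non-events rejected
print("sensitivity: %.4f [%.4f, %.4f]" % rate_with_ci(122, 170))
print("specificity: %.4f [%.4f, %.4f]" % rate_with_ci(309, 325))
```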
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The study was conducted with 36 subjects. However, the analysis was based on "events" rather than subjects.
- Total number of events (from ground truth) for sensitivity evaluation:
- Tremor: 170
- Bradykinesia: 203
- Dyskinesia: 73
- Total number of events (from ground truth) for specificity evaluation:
- Tremor: 325
- Bradykinesia: 292
- Dyskinesia: 422
- Data Provenance: The study was an "observational, non-intervention study." The subject demographics indicate the study took place at a single site, with 95.5% Caucasian subjects. The study appears to have been designed prospectively to collect data for validation. The country of origin is not explicitly stated beyond "a single site," but given the FDA submission, it implicitly aligns with U.S. regulatory standards.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: 3
- Qualifications of Experts: Board-certified movement disorder specialists.
4. Adjudication Method for the Test Set
- Adjudication Method: The ground truth for each sample was derived based on the majority score of the expert rater panel. This implies a 2-out-of-3 or 3-out-of-3 consensus approach.
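A majority-score rule of this kind is simple to make concrete. The sketch below derives a binary ground-truth label from three expert scores via an assumed score threshold and a 2-of-3 vote; the threshold is purely illustrative, since the document does not publish the actual score-to-classification mapping.

```python
# Sketch: deriving binary ground truth from a 3-expert rater panel.
# The score threshold below is illustrative; the submission does not
# publish the actual score ranges used for the binary mapping.
def rater_to_binary(item_score: int, threshold: int = 1) -> bool:
    return item_score >= threshold          # assumed mapping

def majority_ground_truth(scores: list[int]) -> bool:
    votes = [rater_to_binary(s) for s in scores]
    return sum(votes) >= (len(votes) // 2 + 1)   # 2-of-3 for 3 raters

print(majority_ground_truth([0, 1, 2]))  # True: two raters scored >= 1
print(majority_ground_truth([0, 0, 3]))  # False: only one positive vote
```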
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done. The study evaluated the standalone performance of the NeuroRPM device in quantifying symptoms against expert-derived ground truth. It did not assess human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the loop performance) was done
- Yes, a standalone performance study was done. The described clinical performance testing evaluates "NeuroRPM's ability to quantify Parkinson's symptom presence or absence" directly, without human intervention in the device's output interpretation or decision-making.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert Consensus based on clinical scales. The experts provided scores using the Unified Parkinson Disease Rating Scale (UPDRS) and the Abnormal Involuntary Movement Scale (AIMS). The device's binary classifications (e.g., "Tremor detected" vs. "No tremor detected") were then mapped to specific score ranges on these validated clinical scales, and the majority score from the expert panel served as the ground truth.
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It mentions a "machine learning model developed to generate binary symptom classifications" but does not detail the dataset used for training. The clinical performance data provided (n=36 subjects) is specifically for the validation or test set.
9. How the Ground Truth for the Training Set was Established
- The document does not explicitly state how the ground truth for the training set was established. While it mentions machine learning models were used, it does not provide details on the data used for training these models or how their ground truth was determined.
(241 days)
Ankara 06510 Turkey
Re: K220820
Trade/Device Name: Parky App
Regulation Number: 21 CFR 882.1950
Common Name: Movement Disorder Monitoring System
Classification Name: Tremor Transducer (21 CFR 882.1950)
The Parky App is intended to quantify kinematics of movement disorder symptoms including tremor and dyskinesia, in adults (45 years of age or older) with mild to moderate Parkinson's disease.
Parky App is a symptom-tracker mobile app for Parkinson's Disease patients. It collects motion data continuously through an Apple Watch and quantifies tremor and dyskinesia episodes based on the clinically validated MM4PD algorithm. Tracked symptoms are reported daily, weekly, and monthly, and each report is shared with the prescribing healthcare professional through email. The mobile app has a medication reminder module in which patients can manually enter their medication schedule, receive on-time reminder notifications on Apple Watch and iPhone, and respond to them as "taken" or "not yet taken". Parky also reports daily step counts provided by Apple Services - HealthKit.
Acceptance Criteria and Device Performance Study for Parky App
The Parky App utilizes the MM4PD (Mobile Movement for Parkinson's Disease) algorithm to quantify movement disorder symptoms in adults with mild to moderate Parkinson's disease. The following details outline the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria / Performance Metric | Reported Device Performance |
|---|---|
| Correlation with clinical evaluations of tremor severity (MDS-UPDRS tremor constancy) | Rank Correlation Coefficient (ρ) = 0.72 |
| Differentiation of dyskinesia presence (from no dyskinesia) | Statistically significant difference (P = 0.027) with Wilcoxon rank sum test between "No DK" and "Chorea" groups |
| Smartwatch captured symptom changes matching clinician expectations | 94% of cases with full patient history (blinded: 87.5% correct classifications by 3 experts) |
| Likelihood of dyskinesia mapped to expert ratings | P < 0.001 during in-clinic tasks |
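The dyskinesia row above relies on a Wilcoxon rank-sum comparison between the "No DK" and "Chorea" groups. The sketch below shows the test procedure with fabricated daily dyskinesia percentages; only the statistical method mirrors the summary.

```python
# Sketch: Wilcoxon rank-sum test between "No DK" and "Chorea" groups.
# The daily dyskinesia percentages below are illustrative only.
from scipy.stats import ranksums

no_dk  = [1.5, 2.0, 2.8, 3.1, 2.2, 1.9, 2.5]   # % of day, no dyskinesia
chorea = [8.0, 12.5, 9.7, 15.2, 7.4]            # % of day, chorea group

stat, p_value = ranksums(no_dk, chorea)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")
```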
2. Sample Size and Data Provenance
- Test Set Sample Size:
- Tremor Algorithm Test (Hold-out data): n = 43 (patients from the longitudinal patient study)
- Dyskinesia Algorithm Test (Hold-out dataset): n = 57 (from the longitudinal patient study), specifically n = 47 for "No DK" group and n = 10 for "Chorea" group.
- Clinician Evaluation (full patient history): 112 subjects (from the longitudinal patient study)
- Blinded Clinician Classification: 10 sets of profiles (cases)
- Data Provenance: The studies are described in Powers et al. (2021), a publication referenced multiple times in the submission. While the specific country of origin is not explicitly stated in the provided text, the use of the MDS-UPDRS (Movement Disorder Society-Unified Parkinson's Disease Rating Scale) suggests a globally recognized clinical standard. The studies are described as both retrospective (designing algorithms with existing in-clinic and all-day data) and prospective (longitudinal studies, evaluation of symptom changes in response to treatment).
3. Number of Experts and Qualifications
- Number of Experts: 3 expert raters were used for the blinded classification task.
- Qualifications of Experts: They are described as "blinded movement disorder specialists." Specific years of experience or board certifications are not provided.
4. Adjudication Method
- Blinded Clinician Classification: For the 10 cases evaluated by 3 blinded clinicians, "87.5% of classifications were correct." This suggests a consensus or majority-vote approach, but the exact adjudication method (e.g., 2+1, 3+1) is not explicitly detailed. It is mentioned that "three misclassifications occurred because raters presumed that an alternate medication had a dominant effect. Six cases were deemed inconclusive and were excluded." This implies a form of expert review and selection of cases for evaluation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a form of MRMC study was done, but not explicitly framed as "AI vs. Human with AI assistance." The study assessed clinician performance with and without full patient history, effectively comparing clinical judgment with the aid of the smartwatch symptom profiles.
- Effect Size:
- When clinicians had full patient history and reviewed smartwatch symptom profiles, "symptom changes matched the clinician's expectation of the prescribed medication change in 94% of cases."
- When 3 blinded movement disorder specialists classified symptom profiles (without full patient history, but with medication schedule and MDS-UPDRS tremor/dyskinesia ratings from intake), "87.5% of classifications were correct."
- This indicates that the smartwatch-generated symptom profiles (AI-generated data) significantly aided clinicians in affirming or understanding treatment effects, achieving high agreement rates, even when blinded. The direct "improvement with AI vs. without AI" is not quantified as a direct comparative effectiveness study in the traditional sense of reader performance metrics but rather as the utility of the AI-generated profiles in supporting clinical assessment.
6. Standalone (Algorithm Only) Performance
- Yes, standalone performance was done for the core algorithms.
- "MM4PD measurements correlated to clinical evaluations of tremor severity (Rank Correlation Coefficient=0.80) and mapped to expert ratings of dyskinesia presence (P<0.001) during in-clinic tasks." (This refers to the algorithm's direct measurement and correlation).
- "The ability of MM4PD to identify tremors and the likelihood of dyskinesia was tested with the final algorithm in holdout sets."
- Specifically, Fig. 3E (Tremor algorithm test with hold-out data) and Fig. 4E (Dyskinesia algorithm test with hold-out data) demonstrate stand-alone algorithm performance against clinical ground truth.
7. Type of Ground Truth Used
- Expert Consensus / Clinical Evaluations:
- MDS-UPDRS ratings: Used for tremor severity correlation.
- Expert ratings of dyskinesia presence: Used for mapping dyskinesia likelihood.
- Clinician's expectations: Used as ground truth for evaluating how well the symptom changes matched expected treatment responses.
- Movement disorder specialists' classifications: Used as ground truth for the blinded classification task.
8. Sample Size for Training Set
The training set sample sizes are implicitly provided through the "MM4PD development and validation" overview (Figure S1) and "Study demographics" (Table S1).
- Pilot study (PD patients in-clinic + 1 week live-on): 118 patients
- Longitudinal patient study (PD patients long-term live-on): 225 patients
- Longitudinal control study (Elderly controls): 171 individuals
- This totals 514 individuals participating in the development and validation studies, from which data was used for algorithm design (training) and testing (hold-out sets).
9. How Ground Truth for Training Set was Established
The ground truth for the training set (algorithm design phase) was established through:
- In-clinic tasks: Patients performed specific tasks during clinic visits, and their movement was captured by the Apple Watch. These in-clinic observations would have been correlated with simultaneous or contemporaneous clinical assessments like MDS-UPDRS ratings by clinicians.
- All-day data: Continuous data collected by the Apple Watch over longer periods, which would have been analyzed and perhaps retrospectively correlated with patient diaries, medication logs, and clinical assessments at follow-up visits.
- The MM4PD algorithm was designed to match MDS-UPDRS tremor constancy and its outputs were mapped to expert ratings of dyskinesia. This indicates that clinical scores and expert consensus from neurologists or movement disorder specialists were the primary ground truth for algorithm development.
(219 days)
Re: K213519
Trade/Device Name: Rune Labs Kinematics System
Regulation Number: 21 CFR 882.1950
Common Name: Tremor transducer
Classification Name: Transducer, Tremor (21 CFR 882.1950)
The Rune Labs Kinematic System is intended to quantify kinematics of movement disorder symptoms including tremor and dyskinesia, in adults (45 years of age or older) with mild to moderate Parkinson's disease.
The Rune Labs Kinematic System collects derived tremor and dyskinesia probability scores using processes running on the Apple Watch, and then processes and uploads this data to Rune's cloud platform where it is available for display for clinicians.
The Rune Labs Kinematic System uses software that runs on the Apple Watch to measure patient wrist movements. These movements are used to determine how likely dyskinesias or tremors are to have occurred. The times with symptoms are then sent to the Rune Labs Cloud Platform using the Apple Watch's internet connection, which is then displayed for clinician use.
The Apple Watch contains accelerometers and gyroscopes which provide measurements of wrist movement. The Motor Fluctuations Monitor for Parkinson's Disease (MM4PD) is a toolkit developed by Apple for the Apple Watch that assesses the likely presence of tremor and dyskinesia as a function of time. Specifically, every minute, the Apple Watch calculates the percentage of time that tremor and dyskinesia were likely to occur. The movement disorder data output from Apple's MM4PD toolkit have been validated in a clinical study (Powers et al., 2021).
The Rune Labs Kinematic System is software that receives, stores, and transfers the Apple Watch MM4PD classification data to the Rune Labs Cloud Platform where it is available for visualization by clinicians. The device consists of custom software that runs on the users' smart watch and web browsers.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the correlation and differentiation shown by the device's measurements against established clinical ratings and conditions. The study highlights the performance in terms of correlation coefficients and statistical significance.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Tremor Detection Correlation: Strong correlation between daily tremor detection rate and clinician's overall tremor rating (MDS-UPDRS tremor constancy score). | Spearman's rank correlation coefficient of 0.72 in both the design set (n=95) and hold-out set (n=43) for mean daily tremor percentage vs. MDS-UPDRS tremor constancy score. |
| Tremor False Positive Rate (Non-PD): Low false positive rate for tremor detection in elderly, non-PD controls. | False positives occurred 0.25% of the time in 171 elderly, non-PD longitudinal control subjects (43,300+ hours of data). |
| Dyskinesia Differentiation: Significant difference in detected dyskinesia between subjects with and without chorea. | Dyskinesia detected significantly differed (p < 0.001) between subjects with chorea (10.7 ± 9.9% of day) and those without (2.7 ± 2.2% of day) in the design set (n=125 without, n=32 with chorea). Similar significant difference (P = 0.027) in hold-out set (n=47 without, n=10 with chorea). |
| Dyskinesia False Positive Rate (Non-PD): Low false positive rate for dyskinesia detection in elderly, non-PD controls. | Median false-positive rate of 2.0% in all-day data from elderly, non-PD controls (171 subjects, 59,000+ hours of data). |
| Correlation with Motion Capture (Watch Functionality): Strong correlation between watch movement measurements and a professional motion tracking system. | Pearson correlation coefficient of 0.98 between displacement measured by motion capture and watch estimate, with a mean signed error of -0.04 ± 0.17 cm. |
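The motion-capture row above pairs a Pearson correlation with a mean signed error. The sketch below computes both from paired displacement measurements; the arrays are illustrative stand-ins for the Vicon reference and the watch estimates.

```python
# Sketch: validating watch displacement against a motion-capture
# reference. Values are illustrative, not study data.
import numpy as np
from scipy.stats import pearsonr

mocap = np.array([1.2, 3.4, 2.2, 5.1, 4.0, 2.8])   # cm, reference
watch = np.array([1.1, 3.5, 2.1, 5.0, 3.9, 2.7])   # cm, device estimate

r, _ = pearsonr(mocap, watch)
signed_error = watch - mocap
print(f"Pearson r = {r:.2f}")
print(f"mean signed error = {signed_error.mean():.2f} "
      f"± {signed_error.std(ddof=1):.2f} cm")
```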
Study Details (Powers et al., 2021)
2. Sample sizes used for the test set and the data provenance:
- Motion Measurement Correlation (initial validation step): A single healthy control subject (likely a very small test set to validate the sensor itself, not the clinical algorithm performance).
- Tremor Validation:
- Design Set: n = 95 patients (from longitudinal patient study)
- Hold-out Set: n = 43 patients (from longitudinal patient study)
- False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
- Dyskinesia Validation:
- Choreiform Movement Score (CMS) differentiation:
- 65 subjects with confirmed absence of in-session dyskinesia (89 tasks)
- 69 subjects with discordant dyskinesia ratings (109 tasks)
- 19 subjects with confirmed dyskinesia across all three raters (22 tasks)
- Longitudinal Dyskinesia Detection:
- Design Set: 125 patients with no known dyskinesia, 32 patients with chorea.
- Hold-out Set: 47 subjects with no reported dyskinesia, 10 subjects with chorea.
- False Positive Testing: 171 elderly, non-PD longitudinal control subjects.
- Data Provenance: The study was conducted by Apple, implying a global or multi-center approach, but the specific country of origin is not mentioned. The studies were likely prospective observational studies where data was collected over time from participants wearing the Apple Watch. Some initial development data may have been retrospective, but the validation steps appear prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
- For the Dyskinesia validation (specifically the "Choreiform Movement Score" differentiation), three MDS-certified experts were used to provide dyskinesia ratings during multiple MDS-UPDRS assessments. Their specific experience level (e.g., "10 years of experience") is not detailed, but MDS certification implies a high level of specialized expertise in movement disorders.
- For the Tremor validation, the "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" were used. While it mentions "clinician's," it doesn't specify if this was a consensus or single reading, nor the number of clinicians. Given the use of MDS-UPDRS, it implies assessment by trained medical professionals (neurologists or movement disorder specialists).
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- For Dyskinesia validation, the ratings from the three MDS-certified experts were categorized as:
- "confirmed absence" (all three agreed absence)
- "discordant" (raters disagreed)
- "confirmed dyskinesia" (all three agreed presence).
This implicitly suggests a form of consensus-based adjudication (3/3 agreement for "confirmed," disagreement acknowledged for "discordant").
- For Tremor validation, the adjudication method for the "clinician's overall tremor rating" or "MDS-UPDRS tremor constancy score" is not explicitly stated. It likely refers to standard clinical assessment practices using the UPDRS scale, which can be done by a single trained rater or with multiple raters for research purposes (though not explicitly detailed here as an adjudication).
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a multi-reader, multi-case (MRMC) comparative effectiveness study evaluating human readers with vs. without AI assistance was not described. The study focused on validating the device's standalone ability to quantify movements against clinical ground truth (UPDRS scores, expert ratings of dyskinesia). The device is described as quantifying kinematics for clinicians to display, implying it's an assessment tool rather than an AI-assisted diagnostic aid for interpretation by human readers.
6. If a standalone (i.e., algorithm-only without human-in-the-loop) performance study was done:
- Yes, the core validation steps for tremor and dyskinesia detection described in the Powers et al. (2021) paper are standalone algorithm-only performance evaluations. The Apple Watch's MM4PD toolkit calculates the percentage of time tremor and dyskinesia were likely to occur, and this algorithm's output is directly compared to clinical ground truth. The Rune Labs Kinematics System then receives, stores, and transfers this classification data for display.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus/Clinical Ratings:
- For Tremor: "clinician's overall tremor rating" and "MDS-UPDRS tremor constancy score" (a widely accepted clinical rating scale for Parkinson's disease).
- For Dyskinesia: Ratings from "three MDS-certified experts" during MDS-UPDRS assessments, leading to classifications like "confirmed absence," "discordant," and "confirmed dyskinesia." Clinical history (e.g., "known chorea") was also used.
- Objective Measurement Reference: For the fundamental sensor accuracy, a commercially available motion tracking system (Vicon) was used as a reference to compare against the watch's displacement measurements.
8. The sample size for the training set:
- The document implies that the MM4PD algorithms were developed using data from various studies.
- Tremor Algorithm Development:
- Pilot study: N=69 subjects
- Longitudinal patient study: first 143 subjects enrolled (used for the "design set" and hold-out set, so the training set would be a subset of these or distinct, but not explicitly broken out).
- Longitudinal control study: 236 subjects (for false positive rates, likely also contributed to defining normal movement).
- Dyskinesia Algorithm Development:
- Pilot study: N=10 subjects (divided evenly between dyskinetic and non-dyskinetic)
- Longitudinal patient study: N=97 subjects (first 143 enrolled; 22 with choreiform dyskinesia, 75 without)
- Longitudinal control study: N=171 subjects.
- The term "design set" is used for both tremor and dyskinesia validation, which often implies the data used for training/tuning the algorithm. So, the explicit "training set" size for each specific algorithm (tremor vs. dyskinesia) isn't given as a distinct number separate from the "design set," but the various datasets described contributed to algorithm development. For tremor, the "design set" was effectively the training/tuning set (n=95), with n=43 being the hold-out test set. For dyskinesia, a "design set" of n=97 (or n=157 total from the longitudinal study) was used for development, and subsets of this were then characterized.
9. How the ground truth for the training set was established:
- The ground truth for the training/design sets mirrored how it was established for the test sets:
- Clinical Ratings: For tremor, clinicians' overall tremor ratings and MDS-UPDRS tremor constancy scores were collected. For dyskinesia, ratings from MDS-certified experts during MDS-UPDRS assessments were used to label data within the training/design sets.
- Self-Reported History: "Self-reported history" was also mentioned for certain conditions (e.g., history of tremor, dyskinesia) in the demographics, which likely informed initial subject stratification.
- Observed Behavior within Tasks: For dyskinesia, observations during specific tasks (e.g., in-clinic cognitive distraction tasks) provided context for the expert ratings.
(263 days)
Trade/Device Name: Personal Kinetigraph (PKG) System Gen 2 Plus
Regulation Number: 21 CFR 882.1950
Common Name: Movement Disorder Monitoring System
Classification Name: Transducer, Tremor (21 CFR 882.1950)
The Personal Kinetigraph (PKG) is intended to quantify kinematics of movement disorder symptoms in conditions such as Parkinson's disease, including tremor, bradykinesia, and dyskinesia. It includes a medication reminder, an event marker and is intended to monitor activity associated with movement during sleep. The device is indicated for use in individuals 46 to 83 years of age.
The Personal Kinetigraph (PKG) Gen 2 Plus utilizes a PKG Watch (movement data logger) worn by the patient on their wrist over 6-to-10 day recording cycles. The PKG Watch continuously records and quantifies the kinematics of movement disorder symptoms such as bradykinesia (BK), dyskinesia (DK), tremor, immobility, and dyskinesia fluctuations, in movement disorder conditions such as Parkinson's disease. Proprietary PKG Analysis Algorithms are used to analyze the movement data and generate a PKG-2A Report, which provides the clinical provider with a summary of these movement disorder symptoms, plotted over the full recording period. The PKG-2A Report includes an additional feature that allows the plots to be annotated by a qualified PKG Reporter. The PKG Watch includes a medication reminder to notify the patient when it is time to take their medication, and an event marker for the patient to record when they have taken their prescribed medication.

The Personal Kinetigraph (PKG) Gen 2 Plus System includes the GKCM Cloud Platform (PKG Clinic Server), a cloud-based service for receiving and processing the movement data files and generating PKG-2A Reports. The Personal Kinetigraph (PKG) Gen 2 Plus is a modified version of the predicate Personal Kinetigraph (PKG) System Model GKC-2000 (Gen 2) cleared under K161717, incorporating several new or enhanced features, including a Docking Station for charging the PKG Watch and uploading movement data files to the GKCM Cloud Platform, and a Clinic Portal (housed in the GKCM Cloud Platform), providing customer-facing functions such as creating and editing patient details, scheduling medication reminders, raising PKG orders and viewing the PKG-2A Report.

The Personal Kinetigraph (PKG) Gen 2 Plus also includes a PKG Tablet and PKG Dock Cable, cleared previously under K161717. The PKG Tablet is an off-the-shelf Android based tablet that runs a custom software application to configure the PKG Watch before a recording session, extract recorded data after a recording session, and upload this data to the GKCM Cloud Platform. The PKG Dock Cable connects the PKG Watch to the PKG Tablet for configuration before a recording session, and allows for uploading of the movement data to the PKG Clinic Server after the recording session. The PKG Tablet and PKG Dock Cable are not required when using the Docking Station and Clinic Portal.

The Personal Kinetigraph (PKG) Gen 2 Plus system consists of the following key components: PKG Watch (movement data logger) including wrist bands; PKG Docking Station; PKG Clinic Portal; GKCM Cloud Platform (PKG Clinic Server); PKG Analysis Algorithms; PKG-2A Report; PKG Tablet; PKG Dock Cable; and 5-Bay charger.
This document describes the regulatory clearance of the Personal Kinetigraph (PKG) System Gen 2 Plus. Based on the provided text, the device is a modified version of a previously cleared device (K161717) and primarily focuses on hardware and software enhancements rather than changes to core diagnostic algorithms or indications for use. Therefore, the performance data provided focuses heavily on engineering validations (electrical safety, mechanical safety, software V&V, cybersecurity, biocompatibility, human factors) and explicitly states that clinical data was not required for this submission.
Consequently, a multi-reader multi-case (MRMC) comparative effectiveness study, standalone algorithm performance, or a detailed description of ground truth establishment for a diagnostic test set, which are typical for AI/ML-based diagnostic devices, are not applicable in this context. The device's primary function remains to "quantify kinematics of movement disorder symptoms" using "Proprietary PKG Analysis Algorithms" which appear to have been validated in the prior submission or are considered substantially equivalent in their underlying function.
Given this, the requested information will be presented as per available details, with explicit notes about what is not applicable based on the provided FDA document.
Device: Personal Kinetigraph (PKG) System Gen 2 Plus (K211887)
Purpose: To quantify kinematics of movement disorder symptoms in conditions such as Parkinson's disease, including tremor, bradykinesia, and dyskinesia.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes engineering verification and validation testing for the modified device components rather than specific diagnostic accuracy metrics. The "acceptance criteria" are implied by compliance with the listed standards and successful completion of the tests.
| Acceptance Criteria (Implied by Standards/Testing) | Reported Device Performance (Summary from Document) |
|---|---|
| Electrical Safety & EMC Compliance | Underwent electrical safety and EMC evaluation and testing according to IEC 60601-1:2005+AMD1:2012, IEC 60601-1-2:2014, and IEC 60601-1-11:2015. |
| Mechanical Safety Compliance | Underwent mechanical safety evaluation and testing in accordance with IEC 60601-1:2005+A1:2012 and IEC 60601-1-11:2015. Testing included: Shock & Vibration, Continuous Operation (Thermal Cycling), Transport and Storage, Impact Testing, Ingress Protection (IP21), Drop testing, Push testing, and Molding Stress Relief. |
| Software Verification & Validation (V&V) | Conducted in accordance with IEC 62304:2006 + AMD1:2015 and the FDA guidance "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (Guidance for Industry and FDA Staff, May 11, 2005). |
| Cybersecurity Compliance | Designed and developed in accordance with applicable requirements outlined in "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices - Draft Guidance for Industry and Food and Drug Administration Staff OCTOBER 2018". |
| Biocompatibility | Evaluation conducted according to FDA Guidance "Use of International Standard ISO-10993, 'Biological Evaluation of Medical Devices Part 1: Evaluation and testing within a risk management process' September 4, 2020", specifically for modified Wrist Strap materials. Testing performed: ISO10993-5:2009 Cytotoxicity, ISO10993-10:2010 Sensitization, and ISO10993-10:2010 Irritation. Testing performed in an FDA recognized GLP testing facility. |
| Human Factors Engineering / Usability | Performed during design and development in accordance with IEC 62366-1:2015 and FDA Guidance "Applying Human Factors and Usability Engineering to Medical Devices. Guidance for Industry and Food and Drug Administration Staff. February 3, 2016". |
Note: The document explicitly states: "Clinical data was not required for this submission as the changes to the Personal Kinetigraph (PKG) - Gen 2 Plus did not introduce any significant new risks, or changes to known risks, that would require clinical evaluation." This means there was no new clinical study specifically for this 510(k) submission to demonstrate the performance of the core "PKG Analysis Algorithms" in quantifying movement disorder symptoms. The substantial equivalence argument relies on the predicate device's prior clearance and the fact that the modifications are not considered to significantly alter the device's fundamental diagnostic function or safety profile.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not applicable in the context of a clinical performance study. The "test sets" referenced for performance data (e.g., electrical, mechanical, software, biocompatibility) are material or system test articles, not patient data for diagnostic accuracy.
- Data Provenance: Not applicable for a clinical performance study for this specific submission as no new clinical data was collected.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Not applicable as no new clinical data was collected or analyzed for this submission where expert ground truth would be established. The device's "Proprietary PKG Analysis Algorithms" rely on quantitative kinematics, and their prior validation/equivalence is assumed.
4. Adjudication Method for the Test Set
- Not applicable as no new clinical data was collected requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not done. The submission states that "Clinical data was not required for this submission." Therefore, no assessment of human readers improving with or without AI assistance was performed. The device quantifies kinematics using algorithms, it's not described as an AI-assistance tool for human readers interpreting images.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not explicitly described as a new standalone performance study. While the device contains "Proprietary PKG Analysis Algorithms" which would inherently operate in a standalone manner to quantify kinematics, the document does not report new standalone performance metrics or studies in this submission. Its functional equivalence is tied to the predicate device whose algorithms would have been assessed previously.
7. The Type of Ground Truth Used
- For the engineering validation studies (e.g., electrical, mechanical, software, biocompatibility, cybersecurity, human factors), the "ground truth" is defined by the technical specifications, standards (e.g., IEC 60601, ISO 10993, IEC 62304), and regulatory guidance documents listed. For instance, for electrical safety, the ground truth is compliance with the limits set by IEC 60601-1.
- For the device's intended clinical function (quantifying movement disorders), the "ground truth" for the original validation of the PKG Analysis Algorithms (presumably done for the predicate device K161717) would typically involve comparison to clinical assessments by movement disorder specialists, standardized motor evaluations, or other objective measures (though this is not detailed in the current submission).
8. The Sample Size for the Training Set
- Not applicable for this 510(k) submission, as it focuses on modifications and substantial equivalence to a predicate device, rather than the initial development and training of new AI/ML algorithms requiring a "training set." The proprietary algorithms were developed previously.
9. How the Ground Truth for the Training Set was Established
- Not applicable for this 510(k) submission. Information on how the ground truth was established for the original development of the "Proprietary PKG Analysis Algorithms" is not provided in this document.
(90 days)
Trade/Device Name: Personal Kinetigraph (PKG) System Model GKC-2000
Regulation Number: 21 CFR 882.1950
Classification Name: Transducer, Tremor (21 CFR 882.1950)
Regulatory Class: II
Primary Product Code
The Personal Kinetigraph (PKG) is intended to quantify kinematics of movement disorder symptoms in conditions such as Parkinson's disease, including tremor, bradykinesia and dyskinesia. It includes a medication reminder, an event marker and is intended to monitor activity associated with movement during sleep. The device is indicated for use in individuals 46 to 83 years of age.
The new Personal Kinetigraph (PKG) System, Model GKC-2000 (Gen 2), utilizes a small, wrist-worn data logging activity monitor (the PKG Watch) that continuously records and quantifies the kinematics of movement disorder symptoms over a 6 to 10 day period in movement disorder conditions such as Parkinson's disease. At the end of the recording period, the movement recording data is uploaded via a Tablet application at the supervising clinic, to a cloud-based server. A report is produced using the recorded data that objectively distinguishes the movement patterns consistent with tremor, bradykinesia, dyskinesia and immobility. This information can be used by the clinician to assess the extent and severity of movement disorder symptoms, and how they vary throughout the day, and from day to day. The PKG Watch has a medication reminder to the patient that it is time to take their medication, and an event marker for the patient to record when they have taken their prescribed medication.
The provided text describes the 510(k) premarket notification for the Personal Kinetigraph (PKG) System, Model GKC-2000 (Gen 2). The submission aims to demonstrate substantial equivalence to a predicate device (K140086 - Global Kinetics Corporation's Personal Kinetigraph (PKG) System).
Here's an analysis of the acceptance criteria and the study that indicates the device meets them, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in a table format for the device's diagnostic performance (e.g., sensitivity, specificity for detecting tremor or bradykinesia). Instead, the performance evaluation primarily focuses on demonstrating functional equivalence of the new Gen 2 system to its predicate device.
The main "acceptance criterion" for the system validation relevant to its intended use appears to be: "functional performance of the PKG Gen 2 System is identical to the predicate system (within the specified acceptance criteria of system variability) for all measures."
| Acceptance Criteria (Inferred from Study) | Reported Device Performance (PKG Gen 2 System) |
|---|---|
| Functional Equivalence: The Gen 2 system's functional performance (movement recording, medication reminders, medication acknowledgment, PKG Analysis, PKG PDF report) should be identical to the predicate system's performance within specified system variability. | Met: "the functional performance of the PKG Gen 2 System is identical to the predicate system (within the specified acceptance criteria of system variability) for all measures, with the exception of accidental 'medication taken' acknowledgements, where the PKG Gen 2 System is improved over that of the predicate." |
| Biocompatibility: Device components in contact with skin must pose a low risk for cytotoxicity, irritation, and sensitization. | Met: All results demonstrated no adverse or unexplained events, no indications for cytotoxicity, irritation, and sensitization. Materials pose a low risk for surface application on intact skin for less than 30 days. |
| Electrical Safety and EMC: Compliance with relevant medical electrical equipment standards (IEC 60601 series, IEC 62133). | Met: The system complies with IEC 60601-1:2005, IEC 60601-1-11:2010, IEC 60601-1-2:2007, IEC 60601-1-2:2014 (selected tests), and IEC 62133:2012. |
| Mechanical Safety: Compliance with mechanical safety requirements of IEC 60601-1:2005. | Met: The PKG Watch complied with all tested mechanical safety parameters (push, drop, mold stress relief, altitude, thermal cycle, shock, broad-band random vibration, ingress protection). |
| Software Verification and Validation: Software should function as intended without introducing minor injury in case of failure. | Met: Verification and validation testing were conducted, and documentation provided. The software was deemed "moderate" level of concern. |
2. Sample size used for the test set and the data provenance
The document states: "The test was conducted over several days, with each of the test subjects wearing two recording loggers simultaneously on the same arm (PKG Watch and/or the predicate Data logger), performing predetermined actions and maintaining a diary."
The exact sample size (number of test subjects) is not specified in the provided text.
Data Provenance: The study was conducted as a "side-by-side validation of the new system with the predicate, when worn by test subjects." It appears to be a prospective study for this particular validation, conducted by the manufacturer. The country of origin of the data is not explicitly stated, but the manufacturer is based in Melbourne, Victoria, Australia.
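One plausible way to operationalize "identical within the specified acceptance criteria of system variability" is a paired-difference check against a pre-specified tolerance, as sketched below; the tolerance, scores, and pass rule are assumptions, since the submission does not publish the actual acceptance computation.

```python
# Sketch: side-by-side equivalence check between two wrist loggers
# worn on the same arm. Tolerance and scores are illustrative.
import numpy as np

def equivalent(gen2: np.ndarray, predicate: np.ndarray,
               tolerance: float) -> bool:
    """Pass if every paired measure differs by no more than the
    pre-specified system-variability tolerance."""
    return bool(np.all(np.abs(gen2 - predicate) <= tolerance))

gen2_scores      = np.array([22.1, 18.4, 25.0, 30.2])  # e.g., BK scores
predicate_scores = np.array([22.3, 18.1, 25.4, 30.0])
print(equivalent(gen2_scores, predicate_scores, tolerance=0.5))
```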
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not mention the involvement of experts for establishing ground truth in this specific comparative performance study. The comparison was primarily between the data generated by the Gen 2 device and the predicate device. The output reports are designed for use by a clinician, but the validation itself doesn't involve expert scoring of the device's output against a clinical ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
There is no mention of an adjudication method in the context of expert review for this comparative performance study, as there were no experts establishing ground truth for the device's output. The comparison was statistical and functional between the two devices.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed or described. The study focused on the equivalence of the device's output (PKG PDF reports) between the new Gen 2 system and the predicate, not on how human readers (clinicians) improve with or without AI assistance from the device. The device quantifies movement disorder symptoms; it's not described as an AI diagnostic aid for human readers in the traditional sense of an MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the functional performance comparison described is essentially a standalone (algorithm only) comparison. The "side-by-side comparative testing" focused on comparing the "PKG PDF Reports that resulted from each device" (Gen 2 vs. predicate) against each other, without involving a human clinician's interpretation as part of the primary equivalence assessment. The algorithm processes raw movement data to produce these reports.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the "System Validation testing" comparing the Gen 2 and predicate systems, the "ground truth" was the performance of the legally marketed predicate device (K140086). The goal was to prove "performance equivalence" between the two systems. The PKG PDF reports generated by both devices were compared. The overall system's ability to quantify kinematics of movement disorder symptoms was previously established and cleared with the predicate device.
8. The sample size for the training set
The document does not mention a training set or any machine learning model training. The device quantifies movement based on kinematics, and the validation described focuses on comparing the output of the new hardware/software iteration with its predecessor. Therefore, the concept of a "training set" as it pertains to machine learning is not applicable here based on the provided text.
9. How the ground truth for the training set was established
As no training set is mentioned, this information is not applicable based on the provided document.
(220 days)
Re: K140086
Trade/Device Name: Personal Kinetigraph (PKG) System
Regulation Number: 21 CFR 882.1950
Classification Name: Tremor Transducer (21 CFR 882.1950)
The Personal Kinetigraph (PKG) System is intended to quantify kinematics of movement disorder symptoms in conditions such as Parkinson's disease, including tremor and bradykinesia. It includes a medication reminder, an event marker and is intended to monitor activity associated with movement during sleep. The device is indicated for use in individuals 46 to 83 years of age.
The Personal Kinetigraph (PKG) System is a small, wrist-worn activity monitor that continuously records and quantifies the kinematics of movement disorder symptoms over a 6 to 10 day period in movement disorder conditions such as Parkinson's disease. A report is produced using the recorded data that objectively distinguishes the movement patterns consistent with tremor, bradykinesia, and immobility. This information can be used by the clinician to assess the extent and severity of movement disorder symptoms, and how they vary throughout the day, and from day to day. The PKG Data Logger has a medication reminder to indicate to the patient that it is time to take their medication, and an event marker for the patient to record when they have taken their prescribed medication.
The provided text describes a 510(k) premarket notification for the "Personal Kinetigraph (PKG) System." While it outlines the device's indications for use, its comparison to predicate devices, and general information required for FDA submission, it does not contain information about specific acceptance criteria or a detailed study proving the device meets said criteria.
Therefore, I cannot provide a table of acceptance criteria and reported device performance, nor can I elaborate on sample sizes, ground truth establishment, expert qualifications, or MRMC studies for this specific submission based on the provided text.
The document primarily focuses on establishing substantial equivalence to previously cleared predicate devices by comparing technological characteristics and indications for use. It asserts that "The technological characteristics of bradykinesia are the not the same as the predicates but do not raise new types of safety or effectiveness questions and the method used correlates with accepted scientific methods, such as the UPDRS III." This statement suggests that an internal validation or correlation study might have been performed for bradykinesia, but the details of such a study are not included in this document.
In summary, the provided document does not contain the specific information requested regarding acceptance criteria and the detailed study that proves the device meets those criteria.
(98 days)
Cleveland, OH 44103
April 9, 2012
Re: K063872
Trade/Device Name: Kinesia
Regulation Number: 21 CFR 882.1950
Kinesia is intended to monitor physical motion and muscle activity to quantify kinematics of movement disorder symptoms such as tremor and assess activity in any instance where quantifiable analysis of motion and muscle activity is desired.
Kinesia™ is designed to monitor and record motion and electrical activity of muscle to quantify kinematics of movement disorders such as tremor for research and diagnostic purposes. The patient unit consists of a wrist module and ring sensor. Motion sensors including accelerometers and gyroscopes are integrated into a finger worn unit to capture three dimensional motions. The finger worn sensor unit is worn on a finger band and is connected to a wrist worn module by a thin flexible wire. The wrist module provides an input for two channels of electromyography, battery power, on board memory, and an embedded radio for real-time wireless transmission of the collected signals. The wrist module is worn on a comfortable, adjustable wristband.
The signals are communicated between the patient module and the computer unit using wireless technology based on 2.4-2.484 GHz frequencies. Kinesia will consist of four major components:
- Patient Module (consists of ring and wrist module)
- Computer Unit
- Electromyography Leads
- Interface Software
1. The Patient Module includes a user-worn ring and wrist module connected by a thin cable. The patient module monitors eight channels of data including three channels of accelerometers (linear acceleration sensors), three channels of gyroscopes (angular velocity sensors), and two channels of electrical muscle activity (EMG). The data can be transmitted in real-time over a wireless telemetry link to a computer or be stored in onboard memory. The wireless link can either transmit only (one-way) or transmit and receive (two-way). The basic functional feature of the component is to acquire signals from the subject, perform analog-to-digital conversion (when appropriate), encode, format, and transmit the signals to the Computer Unit or store data in onboard memory. The Patient Module will operate on DC power from either rechargeable or replaceable batteries. The Patient Module includes a push-button patient diary so the patient can indicate when they have taken their medication and when their symptoms are severe.
2. The Computer Unit will have the ability to only receive (one-way) or receive and transmit (two-way) data from the Patient Module. The basic functional feature of this component is to receive data packets from the patient unit, perform error detection and correction, and then send the data to the PC operator interface where the data can be monitored in real time or stored and analyzed at a later time.
3. The Electromyography Leads provide an input for two channels of EMG recordings. The leads provide five standard snap connector inputs including two differential channels of EMG and a patient ground. The leads are connected to a LEMO connector. The LEMO connector attaches to the LEMO input on the Patient Unit wrist module.
4. The Interface Software program consists of several software modules that allow the user to acquire, store, and review data as acquired by the hardware.
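The Computer Unit's packet handling ("receive data packets ... perform error detection and correction") implies a checksummed packet format. The sketch below packs one 8-channel sample (three accelerometer, three gyroscope, two EMG values) with a CRC32 and verifies it on receipt; the field layout and checksum choice are illustrative assumptions, not the actual Kinesia protocol.

```python
# Sketch: a checksummed 8-channel sample packet (3 accel, 3 gyro,
# 2 EMG). Field layout is illustrative, not the real Kinesia format.
import struct
import zlib

PACKET_FMT = "<H8h"   # little-endian: sequence number + 8 int16 channels

def pack_sample(seq: int, channels: list[int]) -> bytes:
    payload = struct.pack(PACKET_FMT, seq, *channels)
    return payload + struct.pack("<I", zlib.crc32(payload))

def unpack_sample(packet: bytes):
    payload, (crc,) = packet[:-4], struct.unpack("<I", packet[-4:])
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupt packet")   # error detection step
    seq, *channels = struct.unpack(PACKET_FMT, payload)
    return seq, channels

pkt = pack_sample(1, [10, -5, 3, 100, -80, 60, 512, -400])
print(unpack_sample(pkt))
```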
The provided text describes the Kinesia™ device, its intended use, and its performance testing against various voluntary standards. However, it does not contain specific acceptance criteria, a detailed study proving device performance against such criteria, or information on sample size, data provenance, ground truth establishment, or expert involvement as requested.
The "Performance Testing" section states that Kinesia™ will be tested to certain voluntary standards (e.g., FCC Part 15.109, IEC60601-1 series). This indicates that the device intended to comply with these standards, but the document does not provide the results of these tests or specific acceptance criteria met.
Therefore, based solely on the provided text, a table of acceptance criteria and reported device performance, and details of a study meeting these criteria, cannot be fully generated.
Here's a breakdown of what can be extracted and what is missing:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Anticipated based on standards listed) | Reported Device Performance (Not provided in text) |
|---|---|
| Compliance with FCC Part 15.109 Radiated emissions limits - Unintentional radiators. Class B digital device. | Results are not detailed. |
| Compliance with IEC60601-1, 10.1 Environmental Conditions, Transport and Storage | Results are not detailed. |
| Compliance with IEC60601-1, 10.2 Environmental Conditions, Operation | Results are not detailed. |
| Compliance with IEC60601-1, 19.3 Leakage currents, allowable values | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.3 Radiated RF electromagnetic fields | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.4 Electrical fast transient and bursts | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.7 Voltage dips, short interruptions, and voltage variations | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.6 Conducted Disturbances, Induced by RF fields | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.8 Magnetic Fields | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.202.2 Electrostatic Discharge | Results are not detailed. |
| Compliance with IEC60601-1-2, 36.201 Emissions | Results are not detailed. |
Missing Information (Not found in the provided text):
- Specific quantitative acceptance criteria: The document lists standards but does not specify the pass/fail thresholds or performance metrics (e.g., accuracy, precision, sensitivity, specificity) for quantifying movement disorder symptoms or activity.
- Actual test results: The document states the device will be tested to these standards but does not provide any reported device performance data against these or any other criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test set sample size: Not provided.
- Data provenance: Not provided. The document outlines regulatory compliance testing (e.g., electrical safety, EMC), but not a clinical or performance study with a "test set" in the context of, for example, classifying tremor.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/Not provided. The listed "Performance Testing" refers to compliance with voluntary engineering and safety standards, not a study involving expert-adjudicated ground truth for clinical performance.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable/Not provided.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable/Not provided. The Kinesia™ device is described as a monitor and recorder of motion and electrical activity, not an AI-assisted diagnostic tool for human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable/Not provided in the context of clinical performance algorithms. The device itself is a standalone measurement instrument.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable/Not provided in the clinical sense. The "ground truth" for the listed performance testing would be the specifications and requirements of the voluntary standards themselves (e.g., measured leakage current must be below X mA).
8. The sample size for the training set
- Not applicable/Not provided. This document does not describe the development of a machine learning algorithm; it describes a medical device with sensors and a software interface.
9. How the ground truth for the training set was established
- Not applicable/Not provided.
Conclusion based on provided text:
The document focuses on the technical description, intended use, and intent to comply with general safety and electromagnetic compatibility (EMC) standards. It does not provide details of a clinical performance study with specific acceptance criteria related to its quantitative analysis capabilities, nor does it include information about data sets (training or test), ground truth establishment, or expert involvement in such a study. The information provided is consistent with a 510(k) summary for a monitoring device primarily demonstrating substantial equivalence through technological characteristics and compliance with recognized safety standards.