K Number
K141865
Device Name
DANA
Manufacturer
Date Cleared
2014-10-15

(97 days)

Product Code
Regulation Number
N/A
Panel
NE
Reference & Predicate Devices
Intended Use

DANA provides clinicians with objective measurements of reaction time (speed and accuracy) to aid in the assessment of an individual's medical or psychological state. Factors that may affect the measurement of reaction time include, but are not limited to, concussion, head injury, insomnia, post-traumatic stress disorder (PTSD), depression, attention deficit hyperactivity disorder (ADHD), memory impairment, delirium, prescription and non-prescription medication, some nutritional supplements, and a variety of psychological states (e.g., fatigue and stress).

DANA also delivers and scores standardized psychological questionnaires. DANA results should be interpreted only by qualified professionals.

Device Description

DANA is a mobile application indicated to provide clinicians with objective measurements of reaction time (speed and accuracy) and standardized health assessments to aid in the assessment of an individual's medical or psychological state. DANA results should be interpreted only by qualified professionals.

DANA was developed on a mobile platform to improve the access and availability of reaction time tests and standardized health assessments through (1) custom configuration of the system by clinicians based on their needs and discretion; and (2) support for objective health assessments in both in-clinic and out-of-clinic settings.

AI/ML Overview

The provided document is a 510(k) summary for the DANA device, an unclassified mobile-based task performance recorder. It compares DANA to a predicate device (QbTest) and mentions software testing, but it does not contain the detailed information necessary to fully address all parts of your request regarding acceptance criteria and the comprehensive study that proves the device meets those criteria.

Specifically, the document does not include:

  • A table of acceptance criteria or reported device performance metrics (e.g., sensitivity, specificity, accuracy, precision, recall for diagnostic devices, or specific reaction time performance metrics like mean, standard deviation, error rate, etc.).
  • Details on sample size for a test set, data provenance, ground truth establishment for a test set, or adjudication methods.
  • Information on Multi-Reader Multi-Case (MRMC) studies or standalone algorithm performance studies.
  • Details on sample size for the training set or how ground truth for the training set was established.

The document primarily focuses on demonstrating substantial equivalence to the predicate device based on intended use and technological characteristics, and mentions general software testing.

Given the limitations of the provided text, I will answer the questions to the best of my ability, indicating where information is not present.


Acceptance Criteria and Study Details for DANA Device

1. A table of acceptance criteria and the reported device performance

The provided document does not contain a table of acceptance criteria or specific reported device performance metrics for the DANA device (e.g., accuracy of reaction time measurement, consistency, precision, etc.). The 510(k) summary states that "Differences in the design and performance of DANA from the QbTest do not affect either the safety or effectiveness of DANA for its intended use," which is a high-level statement for substantial equivalence, but it does not quantify performance.

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

The document does not specify a sample size for a "test set" or provide details on data provenance (country of origin, retrospective/prospective study design). The summary mentions "Software testing was conducted in accordance with FDA's May 2005 guidance document," which relates to software validation rather than clinical performance testing with a specific test set.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

The document does not mention the use of experts to establish ground truth for a test set, or their qualifications. DANA is described as providing "objective measurements of reaction time (speed and accuracy)" and scoring "standardized psychological questionnaires." Such measurements typically rely on predefined algorithms and the user's direct interaction with the device, rather than subjective expert interpretation, for ground truth. The interpretation of DANA results is specified to be done by "qualified professionals," but this concerns how results are used, not how ground truth was established for performance validation of the device itself.
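As a concrete illustration of what scoring by "predefined algorithms" can look like for a standardized questionnaire, here is a minimal, hypothetical sketch of Likert-style item scoring. The function name, item scale, and reverse-scoring convention are illustrative assumptions and are not taken from the DANA submission.

```python
# Hypothetical sketch: scoring a standardized questionnaire with
# predefined rules (item responses summed, with reverse-scored items).
# This is NOT DANA's actual scoring algorithm.

def score_questionnaire(responses, reverse_items=(), max_score=4):
    """Sum Likert-style item responses (0..max_score), reverse-scoring
    the items whose indices appear in reverse_items."""
    total = 0
    for idx, value in enumerate(responses):
        if idx in reverse_items:
            value = max_score - value  # reverse-scored item
        total += value
    return total

# Example: five items on a 0-4 scale, with item index 2 reverse-scored
print(score_questionnaire([3, 2, 1, 4, 0], reverse_items={2}))
```

Because the scoring rules are fixed in advance, the "ground truth" for such a function is simply whether it applies the published rules correctly, which is verifiable by software testing rather than expert adjudication.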

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

The document does not describe any adjudication method as no specific test set requiring such expert adjudication is detailed.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The DANA device provides objective measurements and scores questionnaires; it is not described as an AI-driven assistive tool for human readers in a diagnostic context of the kind that would typically warrant an MRMC study to measure improvement with AI assistance.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

The document describes DANA as a "mobile application indicated to provide clinicians with objective measurements of reaction time (speed and accuracy) and standardized health assessments." This implies standalone (algorithm-only) performance in generating the raw measurements and scores: the device takes inputs (user interactions) and produces outputs (reaction time data, questionnaire scores) without a human-in-the-loop component in the measurement process itself, though human interpretation of the results is required. However, no specific standalone study with detailed metrics is described.
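To make the input-to-output flow concrete, the following is a hypothetical sketch of how "speed and accuracy" summary metrics could be derived from recorded trials. The trial record format and function name are illustrative assumptions, not DANA's documented measurement pipeline.

```python
# Hypothetical sketch: deriving reaction-time "speed and accuracy"
# metrics from trial records, where each trial captures the stimulus
# timestamp, response timestamp, and response correctness.
from statistics import mean, stdev

def summarize_trials(trials):
    """trials: list of (stimulus_ms, response_ms, correct) tuples."""
    rts = [resp - stim for stim, resp, _ in trials]
    accuracy = sum(1 for *_, ok in trials if ok) / len(trials)
    return {
        "mean_rt_ms": mean(rts),
        "sd_rt_ms": stdev(rts) if len(rts) > 1 else 0.0,
        "accuracy": accuracy,
    }

trials = [(0, 310, True), (1000, 1295, True), (2000, 2420, False)]
print(summarize_trials(trials))
```

The point of the sketch is that the outputs are fully determined by the recorded inputs: no human judgment enters between a user's tap and the reported metrics.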

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

The document does not explicitly state the type of ground truth used to validate the device's performance. Given the nature of the device (measuring reaction time and scoring questionnaires), its "ground truth" relates to the accuracy and reliability of its internal clock, input detection, and scoring algorithms, rather than a diagnostic truth such as pathology. For reaction time, the ground truth would be the actual time elapsed between stimulus and response, as measured by a highly accurate timing mechanism; for questionnaires, it would be the correct application of the scoring rules. No validation study details are provided to elaborate on how these internal ground truths were confirmed.
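As an illustration of the kind of internal timing check described above, the sketch below compares a nominal interval against a high-resolution monotonic clock. This is a generic bench-test idea under assumed names and tolerances, not a validation procedure documented for DANA.

```python
# Hypothetical bench-test sketch: checking a software timer's nominal
# interval against Python's monotonic high-resolution clock. Function
# name and tolerance are illustrative assumptions.
import time

def timer_error_ms(nominal_sleep_ms=50, tolerance_ms=20):
    """Sleep for a nominal interval, measure the actual elapsed time
    with a monotonic clock, and report the absolute error and whether
    it falls within the given tolerance."""
    start = time.monotonic_ns()
    time.sleep(nominal_sleep_ms / 1000.0)
    elapsed_ms = (time.monotonic_ns() - start) / 1e6
    error = abs(elapsed_ms - nominal_sleep_ms)
    return error, error <= tolerance_ms
```

A monotonic clock is the right reference here because, unlike wall-clock time, it cannot jump backward or be adjusted mid-measurement.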

8. The sample size for the training set

The document does not mention a training set sample size. AI/ML software often uses training sets, but the description of DANA focuses on objective measurement and standardized questionnaire scoring, which may not heavily rely on complex supervised machine learning models requiring large labeled training sets in the same way an image recognition AI would. The software testing mentioned is more likely related to functional and performance testing against specifications rather than AI model training.

9. How the ground truth for the training set was established

Since no training set is mentioned (see point 8), the document does not provide information on how ground truth for a training set was established.
