K Number: K170209
Device Name: ImPACT
Date Cleared: 2017-02-23 (30 days)
Product Code:
Regulation Number: 882.1471
Panel: NE
Reference & Predicate Devices: N/A
Intended Use

ImPACT is intended for use as a computer-based neurocognitive test battery to aid in the assessment and management of concussion.

ImPACT is a neurocognitive test battery that provides healthcare professionals with objective measures of neurocognitive functioning as an aid in the assessment and management of concussion in individuals ages 12-59.

Device Description

ImPACT® (Immediate Post-Concussion Assessment and Cognitive Testing) is a computer-based neurocognitive test battery.

ImPACT is a software-based tool that allows healthcare professionals to conduct a series of neurocognitive tests on individuals to gather basic data related to the neurocognitive functioning of the test subject. This computerized cognitive test battery evaluates various neurocognitive functions and provides a healthcare professional with measures of an individual's reaction time, memory, attention, spatial processing speed, and symptoms.

ImPACT provides healthcare professionals with a set of well-developed and researched neurocognitive tasks that have been medically accepted as state-of-the-art best practice, and it is intended to be used as part of a multidisciplinary approach to making return-to-activity decisions.

AI/ML Overview

The ImPACT device is a computer-based neurocognitive test battery intended to aid in the assessment and management of concussion. The submission focuses on a device modification – a rewrite of the software from Adobe Flash to HTML5/CSS/JavaScript, while preserving all functions of the original version. The performance testing aims to demonstrate that this software change does not affect the device's functionality or performance.
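
As context for the kind of functionality such a port must preserve, the sketch below shows how a browser-based test might capture stimulus-to-response reaction time using the standard `performance.now()` timer. This is an illustrative assumption about an HTML5/TypeScript implementation, not ImPACT's actual code, which is not described in the submission.

```typescript
// Illustrative sketch only: a minimal reaction-time trial in an HTML5/TypeScript
// test battery. ImPACT's actual implementation is not described in the summary.

interface TrialResult {
  reactionTimeMs: number; // time from stimulus onset to key press
  correct: boolean;       // whether the expected key was pressed
}

function runReactionTrial(stimulus: HTMLElement, expectedKey: string): Promise<TrialResult> {
  return new Promise((resolve) => {
    stimulus.style.visibility = "visible";   // show the stimulus
    const onset = performance.now();         // high-resolution onset timestamp

    const onKeyDown = (event: KeyboardEvent) => {
      const reactionTimeMs = performance.now() - onset;
      window.removeEventListener("keydown", onKeyDown);
      stimulus.style.visibility = "hidden";
      resolve({ reactionTimeMs, correct: event.key === expectedKey });
    };

    window.addEventListener("keydown", onKeyDown);
  });
}
```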

Here's a breakdown of the acceptance criteria and the study proving the device meets them:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly present a formal table of acceptance criteria with specific quantifiable metrics such as sensitivity, specificity, or AUC, as might be seen for a diagnostic AI device. Instead, the acceptance criteria are implicitly tied to demonstrating that differences between the new HTML5 version and the predicate Flash version are statistically and clinically insignificant for the primary performance metric assessed, reaction time.

| Acceptance Criteria (Implicit) | Reported Device Performance |
| --- | --- |
| Differences in reaction time between the new HTML5 version and the predicate Flash version are statistically and clinically insignificant. | Two laboratory studies (one bench, one in volunteers age 19-22) showed that the differences in reaction-time performance were statistically and clinically insignificant. |
| All software verification and validation tests are met. | All tests met the required acceptance criteria. |
| Risk management activities assure all risks are appropriately mitigated. | Risk management activities conducted in accordance with ISO 14971 assure that all risks related to the use of a computerized neurocognitive test, including use-related risks, are appropriately mitigated. |
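
The summary does not state which statistical method was used to judge the version-to-version differences. As one plausible approach, the hypothetical sketch below computes the mean paired difference in reaction time between the predicate (Flash) and new (HTML5) measurements with a normal-approximation 95% confidence interval and checks whether it stays within an assumed clinical equivalence margin; the margin and the method itself are assumptions for illustration only.

```typescript
// Hypothetical equivalence check between paired reaction-time measurements
// (predicate Flash version vs. new HTML5 version). The margin and method are
// assumptions for illustration; the 510(k) summary does not specify them.

function equivalenceCheck(flashMs: number[], html5Ms: number[], marginMs: number): boolean {
  if (flashMs.length !== html5Ms.length || flashMs.length < 2) {
    throw new Error("Expect paired samples of equal length (n >= 2)");
  }
  const diffs = flashMs.map((v, i) => html5Ms[i] - v);   // per-subject differences
  const n = diffs.length;
  const mean = diffs.reduce((a, b) => a + b, 0) / n;
  const variance = diffs.reduce((a, d) => a + (d - mean) ** 2, 0) / (n - 1);
  const se = Math.sqrt(variance / n);

  // Normal-approximation 95% CI for the mean paired difference.
  const lower = mean - 1.96 * se;
  const upper = mean + 1.96 * se;

  // Equivalent (for this sketch) only if the whole CI lies inside the clinical margin.
  return lower > -marginMs && upper < marginMs;
}

// Example: deem the versions equivalent if the mean difference stays within
// an assumed margin of +/- 10 ms.
// const ok = equivalenceCheck(flashTimes, html5Times, 10);
```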

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: Not specified. One of the two laboratory studies was conducted "in volunteers age 19-22," indicating a human user study, but the number of participants is not provided. The bench study would not involve a "sample size" in the same way, relying instead on controlled test conditions.
  • Data Provenance: The document does not specify the country of origin of the data. It appears to be a prospective study conducted for the purpose of this submission, comparing the new and old software versions.

3. Number of Experts Used to Establish Ground Truth and Qualifications

This information is not provided in the document. Given the nature of the device (a neurocognitive test battery for concussion assessment) and the type of performance testing described (comparison of reaction time between software versions), it's unlikely that external medical experts were used to establish "ground truth" in the typical sense (e.g., image annotation for disease presence). The "ground truth" here is the actual reaction time measurement as recorded by the device.

4. Adjudication Method for the Test Set

This information is not provided and is likely not applicable. Since the study is comparing measurements (reaction time) between two software versions rather than subjective interpretation of data, an adjudication method for reconciling expert opinions would generally not be needed.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

An MRMC comparative effectiveness study was not performed. The study described focused on a direct comparison of output (reaction time) between two software versions rather than evaluating the impact of AI assistance on human readers' diagnostic performance. The device itself is an assessment aid, not an AI for image reading or interpretation that augments a human reader.

6. Standalone (Algorithm Only) Performance Study

The study described is essentially a standalone performance study in the context of the device modification. It evaluates the performance of the new HTML5 algorithm directly against the predicate Flash algorithm, without involving human interpretation or human-in-the-loop assistance in the performance assessment itself. The "volunteers" in the study are the subjects on whom the reaction time is measured by the device itself.

7. Type of Ground Truth Used

The "ground truth" for this performance study was the objective measurement of reaction time. The study aimed to show that the new software version produced reaction time measurements statistically and clinically equivalent to those from the predicate software version. This is not a ground truth derived from expert consensus, pathology, or outcomes data in the usual sense of a diagnostic device.

8. Sample Size for the Training Set

This information is not provided. As this submission relates to a software re-write of an existing device and not the development of a new AI/ML model from scratch, there might not be a separate "training set" in the conventional machine learning sense. If the underlying neurocognitive test battery involved trained algorithms, that training would have occurred for the original predicate device, and the re-write aimed to replicate its behavior.

9. How the Ground Truth for the Training Set Was Established

This information is not provided. Similar to point 8, if there was initial algorithm training for the predicate device, the method for establishing its ground truth is not detailed in this document. Given that it is a neurocognitive test battery, the initial "ground truth" would likely have been established through extensive psychometric validation, normative data collection, and clinical trials for the original ImPACT device to ensure its measurements accurately reflect neurocognitive function in the context of concussion.

§ 882.1471 Computerized cognitive assessment aid for concussion.

(a)
Identification. The computerized cognitive assessment aid for concussion is a prescription device that uses an individual's score(s) on a battery of cognitive tasks to provide an indication of the current level of cognitive function in response to concussion. The computerized cognitive assessment aid for concussion is used only as an assessment aid in the management of concussion to determine cognitive function for patients after a potential concussive event where other diagnostic tools are available and does not identify the presence or absence of concussion. It is not intended as a stand-alone diagnostic device.
(b)
Classification. Class II (special controls). The special controls for this device are:(1) Software, including any proprietary algorithm(s) used by the device to arrive at its interpretation of the patient's cognitive function, must be described in detail in the software requirements specification (SRS) and software design specification (SDS). Software verification, validation, and hazard analysis must be performed.
(2) Clinical performance data must be provided that demonstrates how the device functions as an interpretation of the current level of cognitive function in an individual that has recently received an injury that causes concern about a possible concussion. The testing must:
(i) Evaluate device output and clinical interpretation.
(ii) Evaluate device test-retest reliability of the device output.
(iii) Evaluate construct validity of the device cognitive assessments.
(iv) Describe the construction of the normative database, which includes the following:
(A) How the clinical workup was completed to establish a “normal” population, including the establishment of inclusion and exclusion criteria.
(B) Statistical methods and model assumptions used.
(3) The labeling must include:
(i) A summary of any clinical testing conducted to demonstrate how the device functions as an interpretation of the current level of cognitive function in a patient that has recently received an injury that causes concern about a possible concussion. The summary of testing must include the following:
(A) Device output and clinical interpretation.
(B) Device test-retest reliability of the device output.
(C) Construct validity of the device cognitive assessments.
(D) A description of the normative database, which includes the following:
(1) How the clinical workup was completed to establish a “normal” population, including the establishment of inclusion and exclusion criteria.
(2) How normal values will be reported to the user.
(3) Representative screen shots and reports that will be generated to provide the user results and normative data.
(4) Statistical methods and model assumptions used.
(5) Whether or not the normative database was adjusted due to differences in age and gender.
(ii) A warning that the device should only be used by health care professionals who are trained in concussion management.
(iii) A warning that the device does not identify the presence or absence of concussion or other clinical diagnoses.
(iv) A warning that the device is not a stand-alone diagnostic.
(v) Any instructions technicians must convey to patients regarding the administration of the test and collection of cognitive test data.