K Number
K181223
Device Name
ImPACT
Date Cleared
2018-10-20

(165 days)

Product Code
Regulation Number
882.1471
Panel
NE
Reference & Predicate Devices
Intended Use

ImPACT is intended for use as a computer-based neurocognitive test battery to aid in the assessment and management of concussion.

ImPACT is a neurocognitive test battery that provides healthcare professionals with objective measures of neurocognitive functioning as an aid in the assessment and management of concussion in individuals ages 12-59.

Device Description

ImPACT® (Immediate Post-Concussion Assessment and Cognitive Testing) is a computer-based neurocognitive test battery.

ImPACT is a software-based tool that allows healthcare professionals to conduct a series of neurocognitive tests on individuals to gather basic data related to the neurocognitive functioning of the test subject. This computerized cognitive test battery evaluates and provides a healthcare professional with measures of various neurocognitive functions, including reaction time, memory, attention, spatial processing speed, and symptoms of an individual.

ImPACT provides healthcare professionals with a set of well-developed and researched neurocognitive tasks that have been medically accepted as state-of-the-art best practices, and it is intended to be used as part of a multidisciplinary approach to making return-to-activity decisions.
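As a rough illustration of how a battery like this might report a measure against a normative database (as required for this device type), the sketch below converts a hypothetical reaction-time composite into a percentile using made-up age-banded norms. The age bands, means, and standard deviations here are assumptions for illustration, not ImPACT's actual normative data.

```python
from statistics import NormalDist

# Hypothetical normative parameters (mean, sd) for a reaction-time
# composite, by age band. Real norms come from the device's database.
NORMS = {"12-18": (0.62, 0.08), "19-59": (0.58, 0.07)}

def percentile(score: float, age_band: str) -> float:
    """Return the percentile of `score` within the age band's norms.

    Lower reaction time is better, so the z-score is inverted before
    looking up the normal CDF.
    """
    mean, sd = NORMS[age_band]
    z = (mean - score) / sd
    return NormalDist().cdf(z) * 100
```

A score at the normative mean lands at the 50th percentile; a faster (lower) reaction time maps to a higher percentile.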

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study details for the ImPACT device, based on the provided FDA 510(k) summary:

Acceptance Criteria and Reported Device Performance

The document does not explicitly present a table of acceptance criteria with numerical targets. Instead, it describes general claims of conformance and successful verification/validation. The core acceptance criterion for the device modification (allowing unsupervised baseline testing) appears to be that this change does not affect the safety or effectiveness of ImPACT for its intended use; in particular, that the rate of invalid self-administered tests does not differ from the rate seen in the supervised environment.

| Acceptance Criteria (Inferred) | Reported Device Performance |
| --- | --- |
| Software verification and validation activities demonstrate device performance and functionality according to IEC 62304 and other software standards. | "Software verification and validation activities including code reviews, evaluations, analyses, traceability assessment, and manual testing were performed in accordance with IEC 62304 and other software standards to demonstrate device performance and functionality. All tests met the required acceptance criteria." |
| Risk management activities assure all risks (including use-related and security risks) are appropriately mitigated per ISO 14971. | "Risk Management activities conducted in accordance with ISO 14971 assure all risk related to use of a computerized neurocognitive test, including use related risks and security risks, are appropriately mitigated." |
| Unsupervised baseline testing does not affect the validity of test results compared to supervised environments (i.e., invalid test rates are comparable). | "5.8% of subjects reported invalid results. The results of the studies indicate that the number of invalid self-administered tests are not different when compared to the supervised environment reported in the literature." |
| Unsupervised baseline testing does not affect the test-retest reliability of ImPACT. | Pearson correlations between baseline assessments ranged from .43 to .78. ICCs reflected higher reliability than Pearson's r across all measures: Visual Motor Speed, mean ICC = .91; Reaction Time, ICC = .78; Visual Memory, ICC = .62; Verbal Memory, ICC = .55. Mean ImPACT composite and symptom scores showed no significant improvement between the two assessments, and there were no significant practice effects across the two assessments (mean interval of 80 days). Scores reflected considerable stability as reflected in ICCs and UERs. All participants were able to complete the test independently, with no "Invalid Baselines" obtained in the reliability study. |
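The reliability figures above (Pearson r and ICC) can be reproduced from raw paired scores. Below is a minimal sketch of ICC(2,1) — the two-way random-effects, absolute-agreement, single-measure form commonly used for test-retest data; whether ImPACT's analysis used exactly this ICC form is an assumption, and the subject scores are fabricated for illustration.

```python
import numpy as np
from scipy import stats

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `scores` has shape (n_subjects, k_sessions), e.g. baseline vs. retest.
    """
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between sessions
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # mean square, rows (subjects)
    msc = ss_cols / (k - 1)              # mean square, columns (sessions)
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Tiny worked example: fabricated composite scores, 5 subjects, 2 sessions.
baseline = np.array([95.0, 88.0, 76.0, 91.0, 83.0])
retest = np.array([93.0, 90.0, 78.0, 89.0, 85.0])
r, _ = stats.pearsonr(baseline, retest)
icc = icc_2_1(np.column_stack([baseline, retest]))
```

As the summary notes, ICC can exceed Pearson's r because it also penalizes systematic shifts between sessions only through the agreement terms, rather than requiring a perfectly linear relationship.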

Study Details

This submission describes changes to an already cleared device, primarily regarding the use environment for baseline testing (allowing unsupervised baseline testing). Therefore, the "study" is focused on demonstrating that this modification does not negatively impact the device's performance or safety.

1. A table of acceptance criteria and the reported device performance: (See table above).

2. Sample size used for the test set and the data provenance:

  • Usability/Validity Study: 162 subjects.
    • Composition: 74 college students, 44 Middle and High School students, and 44 adults.
    • Provenance: Not explicitly stated, but implied to be from the US given the FDA submission. This was a prospective study conducted specifically for this device modification.
  • Clinical Data (Test-Retest Reliability Study): 50 participants.
    • Provenance: Not explicitly stated, but implied to be from the US. This was a prospective study.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • The document does not mention the use of experts to establish ground truth for this specific validation study related to the device modification.
  • The clinical study focused on test-retest reliability within individuals performing the ImPACT test. The "truth" here is the individual's own performance consistency, not an external expert's diagnosis.
  • The overall ImPACT device, as a "computerized cognitive assessment aid," is designed to provide "objective measures of neurocognitive functioning" that healthcare professionals interpret. The "ground truth" for concussion diagnosis itself relies on a multidisciplinary approach by healthcare professionals, which the device aids in, rather than determines independently.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

  • No adjudication method is described for the test sets. The studies are evaluating the device's output (invalidity rates, reliability scores) directly from participant interactions with the modified device.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

  • No MRMC comparative effectiveness study was done or described. This submission is for enabling unsupervised baseline testing, not for evaluating human reader performance with or without AI assistance. The device is the assessment aid for healthcare professionals.

6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

  • Yes, both "studies" implicitly evaluated the standalone performance of the ImPACT device in the context of the proposed unsupervised use environment.
    • The usability assessment determined the rate of invalid tests when self-administered.
    • The clinical data study evaluated the test-retest reliability of the ImPACT scores themselves, independent of human interpretation during the testing phase.
    • The device's output (scores, invalidity indicators) is generated by the algorithm, which is then provided to healthcare professionals.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

  • For the usability/validity study (unsupervised testing): The "ground truth" was internal to the ImPACT system – the device's own "Invalidity Indicator" function was used to determine if results were "invalid." This is based on pre-established criteria within the device, not external expert consensus for each individual test. The comparison was against reported invalidity rates in supervised environments from literature.
  • For the test-retest reliability study: The "ground truth" was the consistency of an individual's performance on the test over time. This is assessed statistically using measures like Pearson correlations and Intraclass Correlation Coefficients (ICCs). There is no "ground truth diagnosis" being established here, but rather the reliability of the measurement.
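The invalid-rate comparison in item 7 amounts to checking a single observed proportion (5.8% of 162 self-administered tests) against supervised-setting rates from the literature. One simple way to frame that check is an exact binomial test; the supervised-setting rate of 6% used below is a hypothetical placeholder, since the summary does not give the literature figure.

```python
from scipy.stats import binomtest

n_subjects = 162      # unsupervised usability/validity study
n_invalid = 9         # ~5.8% reported invalid (0.058 * 162 ≈ 9)
p_supervised = 0.06   # hypothetical literature rate; not from the summary

# Two-sided exact test of H0: unsupervised invalid rate == supervised rate.
res = binomtest(n_invalid, n_subjects, p_supervised)
# A large p-value is consistent with "no difference" between settings.
```

This sketch only illustrates the shape of the comparison; the submission itself reports a qualitative comparison against literature values rather than a named statistical test.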

8. The sample size for the training set:

  • The document does not provide information on the training set or how the core ImPACT algorithm was initially developed/trained. This submission is for a device modification (change in use environment, minor UI adjustment) of an already cleared predicate device (K170209). The fundamental neurocognitive test battery design and algorithms are stated to be "identical to the original version."

9. How the ground truth for the training set was established:

  • As mentioned above, this information is not provided in this 510(k) summary for the modified device. The document states that the neurocognitive tasks "have been medically accepted as state-of-the-art best practices," implying that the underlying methodology for generating the neurocognitive measures is well-established, but details on the initial training data and ground truth establishment for the original algorithm are absent.

§ 882.1471 Computerized cognitive assessment aid for concussion.

(a) Identification. The computerized cognitive assessment aid for concussion is a prescription device that uses an individual's score(s) on a battery of cognitive tasks to provide an indication of the current level of cognitive function in response to concussion. The computerized cognitive assessment aid for concussion is used only as an assessment aid in the management of concussion to determine cognitive function for patients after a potential concussive event where other diagnostic tools are available and does not identify the presence or absence of concussion. It is not intended as a stand-alone diagnostic device.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Software, including any proprietary algorithm(s) used by the device to arrive at its interpretation of the patient's cognitive function, must be described in detail in the software requirements specification (SRS) and software design specification (SDS). Software verification, validation, and hazard analysis must be performed.
(2) Clinical performance data must be provided that demonstrates how the device functions as an interpretation of the current level of cognitive function in an individual that has recently received an injury that causes concern about a possible concussion. The testing must:
(i) Evaluate device output and clinical interpretation.
(ii) Evaluate device test-retest reliability of the device output.
(iii) Evaluate construct validity of the device cognitive assessments.
(iv) Describe the construction of the normative database, which includes the following:
(A) How the clinical workup was completed to establish a “normal” population, including the establishment of inclusion and exclusion criteria.
(B) Statistical methods and model assumptions used.
(3) The labeling must include:
(i) A summary of any clinical testing conducted to demonstrate how the device functions as an interpretation of the current level of cognitive function in a patient that has recently received an injury that causes concern about a possible concussion. The summary of testing must include the following:
(A) Device output and clinical interpretation.
(B) Device test-retest reliability of the device output.
(C) Construct validity of the device cognitive assessments.
(D) A description of the normative database, which includes the following:
(1) How the clinical workup was completed to establish a “normal” population, including the establishment of inclusion and exclusion criteria.
(2) How normal values will be reported to the user.
(3) Representative screen shots and reports that will be generated to provide the user results and normative data.
(4) Statistical methods and model assumptions used.
(5) Whether or not the normative database was adjusted due to differences in age and gender.
(ii) A warning that the device should only be used by health care professionals who are trained in concussion management.
(iii) A warning that the device does not identify the presence or absence of concussion or other clinical diagnoses.
(iv) A warning that the device is not a stand-alone diagnostic.
(v) Any instructions technicians must convey to patients regarding the administration of the test and collection of cognitive test data.