510(k) Data Aggregation

    K Number: DEN150037
    Date Cleared: 2016-08-22 (377 days)
    Product Code:
    Regulation Number: 882.1471
    Type: Direct
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    ImPACT is intended for use as a computer-based neurocognitive test battery to aid in the assessment and management of concussion.

    ImPACT is a neurocognitive test battery that provides healthcare professionals with objective measures of neurocognitive functioning as an aid in the assessment and management of concussion in individuals ages 12-59.

    ImPACT Pediatric is intended for use as a computer-based neurocognitive test battery to aid in the assessment and management of concussion.

    ImPACT Pediatric is a neurocognitive test battery that provides healthcare professionals with objective measures of neurocognitive functioning as an aid in the assessment and management of concussion in individuals ages 5-11.

    Device Description

    ImPACT® (Immediate Post-Concussion Assessment and Cognitive Testing) and ImPACT Pediatric are computer-based neurocognitive test batteries for use as an assessment aid in the management of concussion.

    ImPACT and ImPACT Pediatric are software-based tools that allow healthcare professionals to conduct a series of neurocognitive tests providing data on the neurocognitive functioning of the test taker. The computerized test batteries measure various aspects of neurocognitive functioning, including reaction time, memory, attention, and spatial processing speed, and also record the test taker's concussion symptoms.

    ImPACT and ImPACT Pediatric provide healthcare professionals with a set of well-developed and researched neurocognitive tasks that have been medically accepted as state-of-the-art best practices. The devices are intended to be used as part of a multidisciplinary approach to concussion assessment and patient management.

    AI/ML Overview

    This overview summarizes the acceptance criteria and supporting studies for the ImPACT and ImPACT Pediatric devices, as detailed in the source document.

    Acceptance Criteria and Device Performance

    Acceptance for ImPACT and ImPACT Pediatric was demonstrated primarily by establishing their psychometric properties: construct validity, test-retest reliability, and a robust normative database. Device performance is reported through peer-reviewed studies and clinical data supporting these properties.

    Table 1: Acceptance Criteria and Reported Device Performance

    • Construct Validity
      • Criterion: Demonstrate that the device measures what it purports to measure (e.g., specific cognitive functions).
      • ImPACT: Significant correlations with traditional neuropsychological measures (SDMT, WAIS-R, NFL Battery components) for processing speed, reaction time, and memory; good convergent and discriminant validity demonstrated.
      • ImPACT Pediatric: Significant correlations (20 of 24 potential comparisons) with WRAML-2 subtests (e.g., Story Memory, Design Recall), indicating it measures important aspects of memory.
    • Test-Retest Reliability
      • Criterion: Demonstrate consistent results across repeated administrations.
      • ImPACT: Robust test-retest reliability, with ICCs generally ranging from 0.46 to 0.88 across intervals of 30 days to 2 years for Verbal Memory, Visual Memory, Visual Motor Speed, and Reaction Time.
      • ImPACT Pediatric: ICCs ranging from 0.46 to 0.89 across test modules (e.g., Word List, Memory Touch, Stop & Go) over one-week intervals, indicating adequate to excellent stability for most modules.
    • Normative Database
      • Criterion: Establish a representative database against which patient performance can be compared.
      • ImPACT: Standardization sample of 17,013 individuals (ages 10-59 years, diverse gender/age breakdown); data collected by trained professionals in supervised settings.
      • ImPACT Pediatric: Normative database of 915 children (ages 5-12 years) from multiple clinical sites (Atlanta, Annapolis, Marquette, Pittsburgh, Guelph, ON); age-stratified, with gender differences considered.
    • Reliable Change Index (RCI)
      • Criterion: Provide a statistical calculation to determine whether a change in score is clinically meaningful rather than due to measurement error or practice effects.
      • ImPACT: RCI calculation provided to indicate clinically significant improvement, reducing the adverse impact of measurement error.
      • ImPACT Pediatric: RCI calculated to highlight score changes not attributable to practice effects or measurement error, displayed in red on reports.
    • Validity Index
      • Criterion: Provide an index to identify invalid baseline examinations.
      • ImPACT: An algorithm-based index identifies invalid tests based on sub-optimal performance on specific subtests (e.g., X's and O's Total Incorrect > 30, Word Memory Learning Percent Correct < 69%), with automated flagging in reports.
      • ImPACT Pediatric: Not explicitly detailed, though the importance of valid tests is implied.
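
    The validity-index logic cited above is essentially rule-based, so it can be illustrated with a short sketch. The example below is an illustration only, not the actual ImPACT algorithm: the field names are invented, and only the two thresholds quoted in this summary (X's and O's Total Incorrect > 30, Word Memory Learning Percent Correct < 69%) are encoded; the real index uses additional criteria not listed here.

```python
# Illustrative rule-based baseline validity check.
# Field names and the limited rule set are assumptions; only the two
# thresholds quoted in the summary are encoded here.

def flag_invalid_baseline(scores: dict) -> list[str]:
    """Return the list of validity rules that a baseline test violates."""
    rules = {
        "xs_and_os_total_incorrect": lambda v: v > 30,            # too many errors
        "word_memory_learning_pct_correct": lambda v: v < 69.0,   # sub-optimal learning
    }
    flags = []
    for field, is_invalid in rules.items():
        value = scores.get(field)
        if value is not None and is_invalid(value):
            flags.append(field)
    return flags

baseline = {"xs_and_os_total_incorrect": 34, "word_memory_learning_pct_correct": 72.0}
print(flag_invalid_baseline(baseline))  # -> ['xs_and_os_total_incorrect']; report would flag the test as invalid
```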

    Study Details Proving Device Meets Acceptance Criteria

    The device relies on a collection of previously published peer-reviewed studies and additional clinical data submitted with the de novo request. The FDA's determination is based on the adequacy of this compiled evidence rather than a single, large prospective clinical trial specifically for this de novo submission.

    1. Sample Sizes Used for the Test Set and Data Provenance:

    • ImPACT:
      • Construct Validity Studies: Sample sizes for individual studies ranged from N=30 to N=100.
        • Iverson et al. (2005): N=72 athletes from PA high schools.
        • Maerlender et al. (2010): N=54 varsity athletes from Dartmouth College.
        • Schatz & Putz (2006): N=30 college students from St. Joseph's University, Philadelphia.
        • Allen & Gfeller (2011): N=100 psychology students from a private university (Midwestern).
      • Reliability Studies: Sample sizes for individual studies ranged from N=25 to N=369.
        • Schatz (2013): N=25
        • Cole (2013): N=44 (active duty military population)
        • Nakayama (2014): N=85
        • Elbin (2011): N=369 high school athletes
        • Schatz (2010): N=95
      • Normative Database: N=17,013 individuals. Data collected from high schools and colleges across the US, and adult athlete populations/coaches/administrators.
      • Data Provenance: Predominantly retrospective analysis of published literature and existing clinical data. Studies were conducted at various US sites (Pennsylvania high schools, Dartmouth College, St. Joseph's University in Philadelphia, a Midwestern private university), with some multinational data for the normative database (a South African sample is mentioned for cross-cultural norming). The military population study (Cole, 2013) reflects a distinct cohort. Data were gathered from both concussed and concussion-free individuals, depending on the study purpose (validity vs. norming).
    • ImPACT Pediatric:
      • Normative Database: N=915 children. Data collected from clinical sites in Atlanta, GA; Annapolis, MD; Marquette, MI; Pittsburgh, PA; and Guelph, Ontario, Canada.
      • Reliability Study: N=100 children (ages 5-12 years) participating in youth soccer and hockey leagues. (Unpublished study reported).
      • Construct Validity Study: N=83 participants (ages 5-12 years).
      • Data Provenance: A mix of clinical sites across the US and Canada. Primarily retrospective analysis of previously collected data.

    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • The document does not specify a fixed number of experts for adjudicating individual cases within a test set, as this is not a diagnostic device relying on expert consensus on image interpretation.
    • Instead, the "ground truth" for these performance studies is established by:
      • Clinical Standards: For construct validity, ImPACT was compared to traditional neuropsychological measures (e.g., Symbol Digit Modalities Test, WAIS-R, NFL Battery components). These traditional tests are themselves well-established and interpreted by qualified neuropsychologists or clinicians.
      • Normative Data Collection: "Professionals who were specifically trained to administer the tests," including "Neuropsychologists, Psychologists and Neuropsychology/Psychology graduate students, Certified Athletic Trainers and Athletic Training Graduate Students and Nurses." Their expertise ensures correct test administration and data collection for establishing norms.
      • Reference Standards: For the ImPACT construct validity studies, diagnoses of concussion were made using "then-applicable clinical standards including grading of concussion using AAN guidelines." For ImPACT Pediatric, construct validity was assessed against the "Wide Range Assessment of Memory and Learning-2 (WRAML-2)," a recognized neuropsychological battery.

    3. Adjudication Method for the Test Set:

    • Not applicable in the typical sense of a diagnostic medical device relying on image interpretation and expert consensus for ground truth. The "adjudication" is inherent in the established methodologies of traditional neuropsychological testing and statistical correlations.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC study was performed in the traditional sense of comparing human readers with and without AI assistance for interpretation.
    • This device is a Computerized Cognitive Assessment Aid, not an AI for image interpretation. Its function is to administer a neurocognitive test battery and provide objective measures. The "AI" component is more akin to sophisticated algorithms for scoring, norming, and identifying reliable change or invalid tests, rather than a system that assists human interpretation of complex medical imagery.
    • The benefit is the "Ability to have access to a non-invasive cognitive assessment battery that can be used to compare pre-injury (baseline cognitive performance) to post-injury cognitive performance" and "Ability to compare cognitive test performance to a large normative database in the absence of baseline testing," replacing or augmenting less standardized manual methods.
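
    To make the baseline-versus-post-injury comparison concrete, one common way to formalize "change beyond measurement error" is a Reliable Change Index of the Jacobson-Truax type. The sketch below is a generic illustration rather than ImPACT's specific calculation: the standard deviation, test-retest reliability, and practice-effect values are placeholders, and published RCI variants differ in how they estimate the error term.

```python
import math

def reliable_change_index(baseline: float, follow_up: float,
                          sd: float, test_retest_r: float,
                          practice_effect: float = 0.0) -> float:
    """Generic RCI: observed change (minus the expected practice effect)
    divided by the standard error of the difference score."""
    sem = sd * math.sqrt(1.0 - test_retest_r)   # standard error of measurement
    se_diff = math.sqrt(2.0) * sem               # standard error of a difference score
    return (follow_up - baseline - practice_effect) / se_diff

# Placeholder numbers for illustration only.
rci = reliable_change_index(baseline=42.0, follow_up=35.0, sd=8.0, test_retest_r=0.80)
print(round(rci, 2))  # values beyond roughly +/-1.96 suggest change beyond measurement error
```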

    5. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance):

    • The device inherently operates as an "algorithm only" in generating scores and reports. However, it is explicitly not a standalone diagnostic device.
    • The "performance" of the algorithm is demonstrated through its ability to accurately measure cognitive functions (construct validity), consistently produce results (reliability), and correctly apply the normative and change index calculations.
    • The output (scores, RCIs, validity flags) is then interpreted by a healthcare professional. The device itself does not make a diagnosis. Therefore, its "standalone performance" is specifically in generating the objective data, which is then used by a human clinician as an aid.

    6. Type of Ground Truth Used:

    • Neuropsychological Test Scores: For construct validity, the device's scores were correlated with scores from established, traditional paper-and-pencil neuropsychological assessment batteries. These batteries serve as the "ground truth" for what cognitive functions are being measured.
    • Clinical Standards/Diagnosis: For some ImPACT studies, comparison was made to individuals diagnosed with concussion based on "then-applicable clinical standards."
    • Normative Data: The "normal" population for the normative database was established through "clinical work-up... including the establishment of inclusion and exclusion criteria" by trained professionals, ensuring subjects did not have medical conditions affecting test performance.

    7. Sample Size for the Training Set:

    • ImPACT: The standardization sample (normative database) served as the primary basis for the algorithm's "training" or establishment of "normal" values and internal psychometric properties. This sample consisted of 17,013 individuals.
    • ImPACT Pediatric: The normative database for ImPACT Pediatric comprised 915 children.
    • These are not "training sets" in the modern machine learning sense (where a model learns iteratively), but rather the large datasets used to establish the statistical properties and normative values from which the device's algorithms derive meaning (e.g., percentile rankings, Reliable Change Index).
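
    As a rough illustration of how such a normative dataset is used at report time, the sketch below converts a raw composite score into a z-score and percentile within an age band. The norm values, age bands, and composite name are invented placeholders; the actual ImPACT norms come from the 17,013-person standardization sample and also stratify by factors such as gender.

```python
from statistics import NormalDist

# Invented placeholder norms (mean, sd) by age band; the real normative
# tables are derived from the standardization sample described above.
NORMS = {
    (12, 15): {"visual_motor_speed": (34.0, 6.5)},
    (16, 18): {"visual_motor_speed": (38.0, 6.0)},
    (19, 59): {"visual_motor_speed": (40.0, 6.0)},
}

def percentile_rank(age: int, composite: str, score: float) -> float:
    """Map a raw composite score to a percentile within the matching age band."""
    for (lo, hi), comps in NORMS.items():
        if lo <= age <= hi:
            mean, sd = comps[composite]
            z = (score - mean) / sd
            return 100.0 * NormalDist().cdf(z)
    raise ValueError("age outside normed range")

print(round(percentile_rank(age=17, composite="visual_motor_speed", score=31.0), 1))
```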

    8. How the Ground Truth for the Training Set (Normative Database) Was Established:

    • ImPACT:
      • Data collected from high schools and colleges nationwide, ensuring representation of the intended use population.
      • Older adults included coaches, school administrators, nurses, and adult athletes.
      • Testing administered by specifically trained professionals: Neuropsychologists, Psychologists, Neuropsychology/Psychology graduate students, Certified Athletic Trainers, Athletic Training Graduate Students, and Nurses.
      • Tests completed in a supervised setting.
      • Data uploaded to a secure HIPAA-compliant server and de-identified.
      • Participants were English speakers, not reported to have underlying intellectual/developmental disabilities, and not currently concussed or suffering from medical conditions that might affect performance.
    • ImPACT Pediatric:
      • Large, age-stratified sample of children (5-12 years) from multiple clinical sites.
      • Tests administered by a researcher, clinician, or educational professional trained in the use of ImPACT Pediatric.
      • Tests taken on an iPad in a one-on-one basis (no group testing).
      • Children instructed to respond by touching the screen.
      • "Ground truth" for "normal" performance was derived from statistical analysis (means, standard deviations, t-tests for gender differences) of this large, carefully collected, and screened normative dataset. Factor analysis was conducted on a subset to derive relevant score clusters.