510(k) Data Aggregation

    K Number: K170551
    Date Cleared: 2017-06-21 (117 days)
    Product Code:
    Regulation Number: 882.1471
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    ImPACT Quick Test is intended for use as a computerized cognitive test aid in the assessment and management of concussion in individuals ages 12-70.

    Device Description

    ImPACT Quick Test (ImPACT QT) is a brief computerized neurocognitive test designed to assist trained healthcare professionals in determining a patient's status after a suspected concussion. ImPACT QT provides basic data related to neurocognitive functioning, including working memory, processing speed, reaction time, and symptom recording.

    ImPACT QT is designed to be a brief 5-7 minute iPad-based test to aid sideline personnel and first responders in determining if an athlete/individual is in need of further evaluation or is able to immediately return to activity. ImPACT QT is not a substitute for a full neuropsychological evaluation or a more comprehensive computerized neurocognitive test (such as ImPACT).

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the studies demonstrating that the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The document does not explicitly state pre-defined acceptance criteria for the ImPACT Quick Test in the form of specific performance thresholds (e.g., a minimum correlation coefficient). Instead, the studies aim to demonstrate construct validity, concurrent validity, and test-retest reliability as evidence of the device's performance, with the goal of showing substantial equivalence to the predicate device.

    The reported device performance aligns with these goals:

    Acceptance Criteria (Implied) and Reported Device Performance:

    • Concurrent Validity: Correlations between ImPACT Quick Test and the predicate device (ImPACT) were in the moderate to high range (0.32-0.63, all p<.001). The two instruments measure similar constructs, but ImPACT Quick Test contains a subset of ImPACT tests as well as unique content, which explains the moderate relationship between them.
    • Construct Validity: ImPACT Quick Test measures (Attention Tracker and Motor Speed) correlated more highly with established neuropsychological tests (BVMT-R, CTT) that assess similar constructs (attention and motor speed). Correlations ranged from 0.28 to 0.61 (many significant at p<.001), demonstrating the expected relationships with external measures. The lower correlation of the Memory scale with the BVMT-R was attributed to significant differences in format and task demands.
    • Test-Retest Reliability: Test-retest correlations for composite scores were Memory (r=0.18), Attention Tracker (r=0.73), and Motor Speed (r=0.82), all significant at p<.001 or beyond. The majority of correlations were in the 0.6 to 0.8 range, reflecting "considerable stability" across the re-test period. A Reliable Change Index (RCI) analysis also reported the percentage of cases falling outside confidence intervals (a worked sketch of these statistics follows this list).
    • Clinical Acceptability: The device provides a "reliable measure of cognitive function to aid in assessment of concussion" and is "substantially equivalent to the Predicate Device."
    • Software Validation & Risk Management: ImPACT QT software was developed, validated, and documented according to IEC 62304 and FDA guidance. Risk management (ISO 14971) was conducted, with all risks appropriately mitigated.
    • Normative Data: A normative database was developed from 772 subjects, representative of ages 12-70 based on the 2010 U.S. Census for age, gender, and race.
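
    The validity and reliability evidence above rests on two statistics that are easy to make concrete: the Pearson correlation (used in the concurrent validity, construct validity, and test-retest analyses) and the Reliable Change Index. Below is a minimal Python sketch of both, using the standard Jacobson-Truax RCI formulation; all score values are hypothetical placeholders, not data from the submission.

    ```python
    # A minimal sketch (not the manufacturer's code) of the two statistics the
    # summary relies on: the Pearson correlation used in the validity and
    # reliability analyses, and the Jacobson-Truax Reliable Change Index (RCI).
    # All score values below are hypothetical placeholders.

    import math
    from statistics import mean, stdev

    def pearson_r(x: list[float], y: list[float]) -> float:
        """Pearson product-moment correlation between paired scores."""
        mx, my = mean(x), mean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                        sum((b - my) ** 2 for b in y))
        return num / den

    def reliable_change_index(t1: float, t2: float,
                              sd_baseline: float, r_xx: float) -> float:
        """RCI = (retest - test) / SE_diff, where SEM = SD * sqrt(1 - r_xx)
        and SE_diff = sqrt(2) * SEM (Jacobson & Truax, 1991)."""
        sem = sd_baseline * math.sqrt(1.0 - r_xx)
        se_diff = math.sqrt(2.0) * sem
        return (t2 - t1) / se_diff

    # Hypothetical Motor Speed composites for five subjects at test and retest.
    test = [38.1, 42.5, 35.0, 40.2, 44.8]
    retest = [39.0, 41.8, 36.2, 40.9, 45.1]
    r_xx = pearson_r(test, retest)           # test-retest reliability estimate
    rci = reliable_change_index(40.2, 33.5, stdev(test), r_xx)
    print(f"r = {r_xx:.2f}, RCI = {rci:.2f}")
    ```

    An |RCI| greater than 1.96 corresponds to a score change falling outside the 95% confidence interval, which is the sense in which the summary reports the percentage of cases falling outside confidence intervals.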

    Study Details

    The provided document describes several studies supporting the performance of the ImPACT Quick Test and its equivalence to the predicate device.

    1. Sample sizes used for the test set and the data provenance:

      • Concurrent Validity Study: 92 subjects (41 males, 51 females; average age 36.5 years, range 12-76 years).
      • Construct Validity Study: 118 subjects (73 females, 45 males; average age 32.5 years, range 18-79 years).
      • Test-Retest Reliability Study: 76 individuals.
      • Normative Database Development: 772 subjects.
      • Data Provenance: All subjects were recruited from 11 sites across the United States. All subjects completed an IRB-approved consent form and met eligibility criteria. The studies appear to be prospective, collecting new data for these specific analyses.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The concept of "ground truth" for the test set in this context applies differently. For the concurrent and construct validity studies, the "ground truth" or reference standards were the predicate device (ImPACT) and established traditional neuropsychological tests (BVMT-R, CTT, SDMT). The performance of these reference tests themselves is considered the standard.
      • For the administration of the tests and data collection, the document states: "All testing was completed by professionals who were specifically trained to administer the test. These professionals consisted of neuropsychologists, physicians, psychology graduate students, certified athletic trainers and athletic training graduate students. All testing was completed in a supervised setting." While this describes the administrators, it doesn't specify how many "experts" established a singular ground truth for any given case, since the outputs are quantitative cognitive scores.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • The document does not describe an adjudication method for the test set as would typically be seen in diagnostic studies where expert consensus determines a disease state. The outputs of the device and the comparison tests are quantitative scores, not subjective interpretations requiring adjudication.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study focusing on human reader improvement with AI assistance was not done or reported in this document. The device is a "computerized cognitive assessment aid" providing quantitative scores, not an AI that directly assists human interpretation in the MRMC sense. It's an aid for trained healthcare professionals, but the study doesn't quantify their improvement with the aid versus without it.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, the studies evaluating concurrent validity, construct validity, and test-retest reliability are essentially standalone performance evaluations. The ImPACT Quick Test (algorithm/device) generated cognitive scores which were then compared to other tests or re-administered. While trained professionals administered the test, their role was in test administration, not in altering the device's output or performing an "in-the-loop" interpretation that influenced the device's score generation. The "performance" being measured is the scores produced by the device itself.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The "ground truth" was established by comparison to established, validated neuropsychological tests and a predicate device (ImPACT).
        • For concurrent validity, the predicate device (ImPACT) was the reference standard.
        • For construct validity, traditional neuropsychological tests (Brief Visuospatial Motor Test (BVMT-R), Color Trails Test (CTT), Symbol Digit Modalities Test (SDMT)) served as the reference standards.
        • For test-retest reliability, the device's own repeated measurements served as the basis for evaluation, with consistency across measurements being the goal.
    7. The sample size for the training set:

      • The normative database used to establish percentiles for the ImPACT Quick Test was developed using 772 subjects. This serves as a "training set" in the sense that it provides the reference data against which an individual patient's scores are compared. It is not a machine learning training set in the typical sense, but rather a reference population.
      • The document states that the new device "reports symptoms and displays test results in a form of composites score percentiles based on normative data" and that "The standardization sample was developed to be representative of the population of individuals ages 12-70 years".
    8. How the ground truth for the training set was established:

      • For the normative database (which serves a similar function to a training set here), the "ground truth" was simply the measured performance of a large, representative sample of healthy individuals on the ImPACT Quick Test itself. These subjects had not sustained a concussion within the previous year, had no neurological disorders, and were not taking psychoactive medication. Their scores define the "normal" range for different age, gender, and race stratifications, against which subsequent patient scores are compared (a sketch of this kind of percentile lookup follows below). All subjects completed an IRB-approved consent form and met eligibility criteria. Testing was completed by trained professionals in a supervised setting to ensure standardized administration.
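
    As a concrete illustration of how a normative database of this kind is typically used, the Python sketch below converts a raw composite score into a percentile within an age-stratified reference sample. The scale name, age bands, and scores are hypothetical placeholders; the actual ImPACT QT normative tables and stratification scheme are not disclosed in this document.

    ```python
    # A minimal sketch of a normative percentile lookup, assuming hypothetical
    # strata and scores. The real device also stratifies by gender and race,
    # per the 2010 U.S. Census; only age bands are shown here for brevity.

    from bisect import bisect_right

    # Hypothetical normative samples keyed by (scale, age band), kept sorted.
    NORMS: dict[tuple[str, str], list[float]] = {
        ("motor_speed", "12-17"): sorted([31.5, 34.0, 36.2, 38.8, 40.1, 41.7, 43.0]),
        ("motor_speed", "18-29"): sorted([33.0, 35.5, 37.9, 39.4, 41.2, 43.6, 45.0]),
    }

    def percentile(scale: str, age_band: str, score: float) -> float:
        """Percent of the normative sample scoring at or below `score`."""
        sample = NORMS[(scale, age_band)]
        return 100.0 * bisect_right(sample, score) / len(sample)

    print(f"{percentile('motor_speed', '18-29', 40.0):.0f}th percentile")
    ```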