Search Results
Found 2 results
510(k) Data Aggregation
(303 days)
The ANAM® Test System: Military, Military Expanded, Core, and Sports Batteries are intended for use as computer-based neurocognitive test batteries to aid in the assessment of mild traumatic brain injury. The ANAM Test System: Military, Military Expanded, Core, and Sports Batteries are neurocognitive test batteries that provide healthcare professionals with objective measures of neurocognitive functioning, as assessment aids and in the management of mild traumatic brain injury, in individuals ages 13-65.
The ANAM Test System should only be used as an adjunctive tool for evaluating cognitive function.
The ANAM Test System is a software-only device that provides clinicians with objective measurements of cognitive performance to aid in the assessment and management of concussion. ANAM measures various aspects of neurocognitive functioning, including reaction time, memory, attention, and spatial processing speed. It also records symptoms of concussion in the test taker.
The software is downloaded from the Vista LifeSciences website and is for use on a Dell Inspiron 15 3000 Series or similar Windows PC, or a Samsung Galaxy tablet or similar Android device. The hardware is not provided as part of the device but is purchased separately by the user. Each ANAM battery consists of a collection of pre-selected modules administered sequentially.
Specific modules included in the ANAM Test System:

Questionnaires

- Demographics
- Mood Scale
- Neurobehavioral Symptom Inventory (NSI)
- PTSD Checklist (PCL)
- Sleepiness Scale
- Symptoms Checklist
- TBI Questionnaire

Performance Tests

- Code Substitution Learning
- Code Substitution Delayed
- Go/No-Go*
- Matching to Sample*
- Mathematical Processing
- Memory Search
- Procedural Reaction Time
- Simple Reaction Time*
- Spatial Processing*

*Available for tablet platform.
The tests and questionnaires can be combined into custom batteries, or users can choose from pre-configured standardized batteries. The standardized batteries include ANAM Core, ANAM Sports, ANAM Military, and ANAM Military-Expanded. These standardized batteries have fixed test settings and parameters to ensure standardized presentation and enable comparison to normative data.
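As a rough illustration of what a pre-configured battery with fixed settings might look like, here is a purely hypothetical sketch; the module names come from the list above, but every parameter name and value is invented for illustration and does not reflect ANAM's actual configuration.

```python
# Hypothetical sketch of a standardized battery definition.
# Module names come from the ANAM module list above; every
# parameter name and value here is invented for illustration.
ANAM_SPORTS_BATTERY = {
    "name": "ANAM Sports",
    "modules": [
        {"module": "Demographics"},
        {"module": "Symptoms Checklist"},
        {"module": "Simple Reaction Time", "trials": 40, "timeout_ms": 3000},
        {"module": "Code Substitution Learning", "trials": 72},
        {"module": "Procedural Reaction Time", "trials": 32},
        {"module": "Code Substitution Delayed", "trials": 72},
    ],
    # Fixed settings ensure every examinee sees the same
    # presentation, so scores can be compared to normative data.
    "editable": False,
}
```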
Here's a breakdown of the requested information regarding the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for the ANAM Test System.
Please note: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than detailing specific acceptance criteria and the full study methodology in the way a clinical study report would. Therefore, some information, particularly regarding specific acceptance criteria values, sample sizes for test and training sets, number and qualification of experts, adjudication methods, and MRMC study specifics, is not explicitly stated in this document. The document primarily highlights the "numerous studies" that have examined concurrent validity.
Acceptance Criteria and Reported Device Performance
Given the nature of the 510(k) submission, the "acceptance criteria" are implicitly tied to demonstrating substantially equivalent performance to the predicate device, ImPACT, in providing an objective measure of neurocognitive functioning to aid in the assessment and management of mild traumatic brain injury (mTBI).
| Acceptance Criteria (Inferred from Substantial Equivalence) | Reported Device Performance (Summary from Document) |
|---|---|
| Intended Use Equivalence: Aid in assessment and management of mTBI by providing objective measures of neurocognitive functioning. | The ANAM Test System's intended use is identical to the predicate device: "intended for use as computer-based neurocognitive test batteries to aid in the assessment and management of mild traumatic brain injury." (p. 5) The ANAM Test System measures various aspects of neurocognitive functioning, including reaction time, memory, attention, and spatial processing speed, similar to the predicate's measurement of verbal and visual memory, visual motor speed, impulse control, and reaction time. |
| Safety and Effectiveness Equivalence: Reliable measure of cognitive function comparable to the predicate device. | The 510(k) submission states, "The 510(k) submission includes the results of numerous studies that have examined the concurrent validity of ANAM as a clinical tool by documenting correlations with traditional neuropsychological tests with both normal and concussed populations. The results of these studies demonstrate that ANAM provides a reliable measure of cognitive function for use as an assessment aid and in the management of concussion and is therefore substantially equivalent to the predicate device." (p. 5) |
| Patient Population Equivalence: Individuals aged 13-65. | The ANAM Test System is indicated for individuals aged 13-65, which is largely similar to the predicate device's age range of 12-59 years. |
| Fundamental Neurocognitive Functions Measured: Assess core cognitive domains relevant to mTBI. | ANAM measures "response speed, attention/concentration, immediate and delayed memory, spatial processing, inhibition, and decision processing speed and efficiency." This aligns with the types of functions measured by the predicate device (verbal and visual memory, visual motor speed, impulse control, and reaction time). (p. 5) |
| Reporting and Interpretation Features: Provide meaningful data for clinical interpretation, including comparison to normative data and reliable change indices. | ANAM provides raw scores, standard scores (from a normative database), and Reliable Change Indices (RCI) for individual tests and an ANAM Composite Score (ACS). This is comparable to the predicate device's provision of composite scores, percentile scores, and RCIs. (p. 5) |
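The summary names standard scores and Reliable Change Indices but does not define them. For reference, a common (not ANAM-confirmed) formulation expresses the standard score as a deviation from a normative mean and the RCI in the Jacobson-Truax style:

$$z = \frac{x - \mu_{\text{norm}}}{\sigma_{\text{norm}}}, \qquad T = 50 + 10\,z$$

$$\mathrm{RCI} = \frac{x_2 - x_1}{S_{\text{diff}}}, \qquad S_{\text{diff}} = \sqrt{2}\,\sigma_{\text{norm}}\sqrt{1 - r_{xx}}$$

Here $x_1$ and $x_2$ are baseline and retest scores and $r_{xx}$ is the test-retest reliability; $|\mathrm{RCI}| > 1.96$ is conventionally interpreted as change beyond measurement error at the 95% level.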
Study Proving Device Meets Acceptance Criteria
The document states that the substantial equivalence determination is based on "numerous studies" that have examined the concurrent validity of ANAM.
1. Sample sizes used for the test set and the data provenance:
   - Sample size: Not explicitly stated for specific test sets within this summary, which mentions only "numerous studies" with "normal and concussed populations."
   - Data provenance: Not specified (e.g., country of origin). The document implies the studies are part of the broader clinical evidence for the device. The studies examined "concurrent validity" by correlating ANAM with "traditional neuropsychological tests," suggesting retrospective or prospective clinical data involving real patient populations.

2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
   - Not specified within the 510(k) summary. The ground truth for "concurrent validity" studies would typically be the results of the "traditional neuropsychological tests" administered by qualified neuropsychologists or clinicians.

3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
   - Not specified. The studies focused on "concurrent validity," which implies comparison to established, validated neuropsychological tests rather than an adjudication process between human readers for diagnostic consensus.

4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:
   - Not applicable/not mentioned. The ANAM Test System is described as a "computer-based neurocognitive test battery" and a "software only device" that functions as an adjunctive tool to aid healthcare professionals in assessment and management. Such a device provides quantitative measures of cognitive function directly, rather than assisting human "readers" in interpreting medical images or complex data in an MRMC study setup. The studies focus on the validity of the test itself in measuring cognitive function.

5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance was assessed:
   - Yes, implicitly. The "performance data" section (p. 5) discusses studies examining the "concurrent validity of ANAM as a clinical tool by documenting correlations with traditional neuropsychological tests." This refers to the standalone performance of the ANAM test in measuring cognitive function and its correlation with established measures, without human-in-the-loop assistance in interpreting the ANAM results, though a clinician ultimately uses those results in their assessment.

6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
   - The ground truth relies on the scores and results of "traditional neuropsychological tests" administered to "normal and concussed populations." Expert diagnosis of concussion and the established measurements from these traditional gold-standard tests served as the reference for determining concurrent validity.

7. The sample size for the training set:
   - Not specified in the 510(k) summary. Because the device is a "computer-based neurocognitive test battery" rather than a typical AI/ML algorithm requiring a discrete training dataset, any "training" would refer to the iterative development and validation of the test components and norming data based on large population studies, not a separate training set for an explicit AI model. The document mentions "normative data" and a "normative database."

8. How the ground truth for the training set was established:
   - For the "normative data" (which serves a function similar to a training set for a traditional AI model), the ground truth would be established by collecting ANAM test performance data from large, healthy populations across different demographics, age groups, and potentially educational levels, then using statistical analysis to define "normal" ranges and a reliable baseline against which individual performance can be compared (a minimal scoring sketch follows this list). The summary mentions "standard scores (calculated with the normative database)" and "the summed T-score of a normative control group."
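To make the norming idea concrete, here is a minimal sketch of norm-referenced scoring, assuming a hypothetical table of means and standard deviations per age band; none of the numbers or names are ANAM's.

```python
# Minimal sketch of norm-referenced scoring, assuming a normative
# table of means and standard deviations per age band. All numbers
# and names here are hypothetical illustrations, not ANAM's data.
NORMS = {
    # age band: (mean raw score, standard deviation)
    (13, 17): (142.0, 21.5),
    (18, 35): (155.0, 19.0),
    (36, 65): (147.0, 22.0),
}

def standard_score(raw: float, age: int) -> float:
    """Convert a raw test score to a T-score (mean 50, SD 10)."""
    for (lo, hi), (mu, sd) in NORMS.items():
        if lo <= age <= hi:
            return 50.0 + 10.0 * (raw - mu) / sd
    raise ValueError(f"no normative band for age {age}")

print(standard_score(raw=130.0, age=16))  # below the 13-17 norm mean
```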
(117 days)
ImPACT Quick Test is intended for use as a computerized cognitive test aid in the assessment and management of concussion in individuals ages 12-70.
ImPACT Quick Test (ImPACT QT) is a brief computerized neurocognitive test designed to assist trained healthcare professionals in determining a patient's status after a suspected concussion. ImPACT QT provides basic data related to neurocognitive functioning, including working memory, processing speed, reaction time, and symptom recording.
ImPACT QT is designed to be a brief 5-7 minute iPad-based test to aid sideline personnel and first responders in determining if an athlete/individual is in need of further evaluation or is able to immediately return to activity. ImPACT QT is not a substitute for a full neuropsychological evaluation or a more comprehensive computerized neurocognitive test (such as ImPACT).
Here's an analysis of the acceptance criteria and the study that proves the device meets the acceptance criteria, based on the provided text:
Acceptance Criteria and Device Performance
The document does not explicitly state a table of pre-defined acceptance criteria for the ImPACT Quick Test itself, in terms of specific performance thresholds (e.g., a minimum correlation coefficient). Instead, the studies aim to demonstrate construct validity, concurrent validity, and test-retest reliability as evidence of the device's performance, aiming to show substantial equivalence to the predicate device.
The reported device performance aligns with these goals:
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Concurrent Validity | Correlations between ImPACT Quick Test and the predicate device (ImPACT) were in the moderate to high range (0.32-0.63, all p<.001). The two instruments measure similar constructs; the moderate (rather than uniformly high) relationship is explained by ImPACT Quick Test containing a subset of ImPACT tests alongside unique content. |
| Construct Validity | ImPACT Quick Test measures (Attention Tracker and Motor Speed) correlated more highly with established neuropsychological tests (BVMT-R, CTT) that assess similar constructs (attention and motor speed). Correlations ranged from 0.28 to 0.61 (many significant at p<.001), demonstrating the expected relationships with external measures. The lower correlation between the Memory scale and the BVMT-R was attributed to significant differences in format and task demands. |
| Test-Retest Reliability | Test-retest correlations for composite scores were: Memory (r=0.18), Attention Tracker (r=0.73), and Motor Speed (r=0.82), all significant at p<.001 or beyond. The majority of correlations were in the 0.6 to 0.8 range, reflecting "considerable stability" across the re-test period. A Reliable Change Index (RCI) analysis also reported the percentage of cases falling outside confidence intervals (see the sketch after this table). |
| Clinical Acceptability | The device provides a "reliable measure of cognitive function to aid in assessment of concussion" and is "substantially equivalent to the Predicate Device." |
| Software Validation & Risk Management | ImPACT QT software was developed, validated, and documented according to IEC 62304 and FDA Guidance. Risk Management (ISO 14971) was conducted, with all risks appropriately mitigated. |
| Normative Data | A normative database was developed from 772 subjects, representative of ages 12-70 based on 2010 U.S. Census for age, gender, and race. |
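To make the test-retest and RCI reporting above concrete, here is a generic sketch of how such statistics are computed; the data are fabricated placeholders, and the Jacobson-Truax style RCI is an assumption, not ImPACT's documented method.

```python
# Generic sketch of a test-retest reliability analysis, assuming
# paired baseline/retest composite scores. Data are fabricated
# placeholders; this is not ImPACT's actual analysis.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
baseline = rng.normal(100, 15, size=76)            # time-1 scores
retest = 0.8 * baseline + rng.normal(20, 9, 76)    # correlated time-2 scores

r, p = pearsonr(baseline, retest)
print(f"test-retest r = {r:.2f} (p = {p:.3g})")

# Jacobson-Truax style RCI: flag cases whose change exceeds
# what measurement error alone would predict at the 95% level.
sem = baseline.std(ddof=1) * np.sqrt(1 - r)
s_diff = np.sqrt(2) * sem
rci = (retest - baseline) / s_diff
print(f"{np.mean(np.abs(rci) > 1.96):.1%} of cases outside the 95% CI")
```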
Study Details
The provided document describes several studies supporting substantial equivalence to the predicate device and the performance of the ImPACT Quick Test.
1. Sample sizes used for the test set and the data provenance:
   - Concurrent validity study: 92 subjects (41 males, 51 females; average age 36.5 years, range 12-76 years).
   - Construct validity study: 118 subjects (73 females, 45 males; average age 32.5 years, range 18-79 years).
   - Test-retest reliability study: 76 individuals.
   - Normative database development: 772 subjects.
   - Data provenance: All subjects were recruited from 11 sites across the United States, completed an IRB-approved consent form, and met eligibility criteria. The studies appear to be prospective, collecting new data for these specific analyses.

2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
   - The concept of "ground truth" applies differently in this context. For the concurrent and construct validity studies, the reference standards were the predicate device (ImPACT) and established traditional neuropsychological tests (BVMT-R, CTT, SDMT); the performance of these reference tests is itself the standard.
   - Regarding administration, the document states: "All testing was completed by professionals who were specifically trained to administer the test." These professionals consisted of neuropsychologists, physicians, psychology graduate students, certified athletic trainers, and athletic training graduate students, and all testing was completed in a supervised setting. This describes the administrators but does not specify how many "experts" established a singular ground truth for any given case, since the outputs are quantitative cognitive scores.

3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
   - The document does not describe an adjudication method of the kind seen in diagnostic studies where expert consensus determines a disease state. The outputs of the device and the comparison tests are quantitative scores, not subjective interpretations requiring adjudication.

4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:
   - No. An MRMC comparative effectiveness study focusing on human reader improvement with AI assistance was not done or reported in this document. The device is a "computerized cognitive assessment aid" providing quantitative scores, not an AI that directly assists human interpretation in the MRMC sense. It is an aid for trained healthcare professionals, but the studies do not quantify their improvement with versus without it.

5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance was assessed:
   - Yes. The studies evaluating concurrent validity, construct validity, and test-retest reliability are essentially standalone performance evaluations: the ImPACT Quick Test generated cognitive scores that were then compared to other tests or re-administered. Trained professionals administered the test, but their role was test administration, not altering the device's output or performing an in-the-loop interpretation that influenced score generation. The "performance" being measured is the scores produced by the device itself.

6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
   - The "ground truth" was established by comparison to established, validated neuropsychological tests and the predicate device (ImPACT).
   - For concurrent validity, the predicate device (ImPACT) was the reference standard.
   - For construct validity, traditional neuropsychological tests (Brief Visuospatial Memory Test - Revised (BVMT-R), Color Trails Test (CTT), and Symbol Digit Modalities Test (SDMT)) served as the reference standards.
   - For test-retest reliability, the device's own repeated measurements served as the basis for evaluation, with consistency across measurements being the goal.

7. The sample size for the training set:
   - The normative database used to establish percentiles for the ImPACT Quick Test was developed from 772 subjects. It serves as a "training set" only in the sense that it provides the reference data against which an individual patient's scores are compared; it is a reference population, not a machine-learning training set in the typical sense.
   - The document states that the device "reports symptoms and displays test results in a form of composites score percentiles based on normative data" and that "The standardization sample was developed to be representative of the population of individuals ages 12-70 years".

8. How the ground truth for the training set was established:
   - For the normative database (which serves a function similar to a training set here), the "ground truth" was simply the measured performance of a large, representative sample of healthy individuals on the ImPACT Quick Test itself. Subjects were free from concussion within one year, neurological disorders, and psychoactive medication. Their scores define the "normal" range for different age, gender, and race stratifications, against which subsequent patient scores are compared (see the sketch below). All subjects completed an IRB-approved consent form and met eligibility criteria, and testing was completed by trained professionals in a supervised setting to ensure standardized administration.
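As an illustration of how a normative reference population yields the percentile scores described above, here is a minimal sketch; the normative values are fabricated, and real norms would be stratified by age, gender, and race rather than pooled.

```python
# Minimal sketch of percentile scoring against a normative sample.
# Normative values are fabricated; real norms would be stratified
# by age, gender, and race as described above.
import numpy as np
from scipy.stats import percentileofscore

rng = np.random.default_rng(1)
# Hypothetical composite scores from a healthy reference group
norm_sample = rng.normal(100, 15, size=772)

patient_score = 82.0
pct = percentileofscore(norm_sample, patient_score, kind="rank")
print(f"patient scores at the {pct:.0f}th percentile of the norms")
```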