Search Results
Found 3 results
510(k) Data Aggregation
(246 days)
PKQ
The CANTAB Mobile is intended to be used as an adjunctive tool to assess memory by testing visuospatial associative learning in patients aged 50 to 90 years.
This is a premarket notification for the Cantab Mobile device, a computerized cognitive assessment aid. The provided text, however, focuses on the regulatory correspondence from the FDA and the device's indications for use. It does not contain the detailed technical information about acceptance criteria or the study used to prove the device meets them.
Therefore, I cannot fulfill your request for the specific information regarding acceptance criteria, study details, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, or ground truth establishment based only on the provided text.
To answer your questions, I would need access to the actual 510(k) submission summary or a separate performance study report for the Cantab Mobile device.
(217 days)
PKQ
The ANAM Test System: Military Battery provides clinicians with objective measurements of cognitive performance in military populations ages 18 to 65 years, to aid in the assessment of an individual's level of cognitive functioning. The ANAM Test System should only be used as an adjunctive tool for evaluating cognitive function.
ANAM Test System: Military Battery is a software only device that provides clinicians with objective measurements of cognitive performance in military populations, to aid in the assessment of an individual's level of cognitive function. The software is downloaded from the Vista LifeSciences website and is for use on a Dell Latitude E6440 Laptop. The laptop is not provided as part of the device, but is purchased separately by the user. Each ANAM battery consists of a collection of pre-selected modules that are administered in a sequential manner.
The ANAM Test System: Military Battery is a software-only device designed to provide objective measurements of cognitive performance in military populations (ages 18 to 65 years) as an adjunctive tool for evaluating cognitive function.
Here's an analysis of the provided information regarding its acceptance criteria and the supporting study:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly state formal acceptance criteria with numerical thresholds. Instead, the "Performance Data" section discusses the general demonstration of concurrent validity with traditional neuropsychological tests. The "Summary / Conclusion of Substantial Equivalence Rationale" further states that "ANAM provides a reliable measure of cognitive function."
To present this in a tabular format based on the given information:
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Demonstrate concurrent validity with traditional neuropsychological tests | The 510(k) included results of numerous studies demonstrating correlations with traditional neuropsychological tests. |
| Provide a reliable measure of cognitive function | The results of these studies "demonstrate that ANAM provides a reliable measure of cognitive function." |
| Aid in the assessment of an individual's level of cognitive functioning | The device is indicated to "aid in the assessment of an individual's level of cognitive functioning" and is considered "substantially equivalent to the predicate device." |
2. Sample Size Used for the Test Set and Data Provenance
The document states, "The 510(k) included the results of numerous studies that examined the concurrent validity of ANAM as a clinical tool by documenting correlations with traditional neuropsychological tests."
- Sample Size: The exact sample size for the test set(s) is not specified. The mention of "numerous studies" suggests multiple datasets were used, but no specific numbers are given.
- Data Provenance: The document does not specify the country of origin of the data or whether the studies were retrospective or prospective. The target population is "military populations ages 18 to 65 years," suggesting the data would likely be from military personnel.
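The concurrent-validity approach described above, correlating a device's scores with scores from a traditional neuropsychological test, can be sketched as a simple paired correlation. This is a minimal illustration only; the paired scores below are invented and do not come from the ANAM studies:

```python
# Minimal sketch of a concurrent-validity check: correlate a computerized
# battery's scores with a traditional neuropsychological test's scores.
# All numbers below are invented for illustration.
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two paired samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

# Hypothetical paired scores: computerized battery vs. traditional test.
device_scores = [52, 61, 48, 70, 66, 55, 59, 73]
reference_scores = [50, 63, 45, 72, 68, 53, 60, 75]

r = pearson_r(device_scores, reference_scores)
print(f"concurrent validity r = {r:.3f}")
```

A consistently high correlation across studies is the kind of evidence the 510(k) cites when it reports "correlations with traditional neuropsychological tests."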
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
The document does not provide information on the number of experts used to establish ground truth or their qualifications. The ground truth appears to be derived from "traditional neuropsychological tests," implying that the interpretation of these tests by qualified neuropsychologists or clinicians would form the reference standard, but this is not explicitly stated or detailed.
4. Adjudication Method for the Test Set
The document does not specify any adjudication method for the test set. Since the "performance data" describes correlations with traditional neuropsychological tests, it's likely that the results of these established tests served as the reference, rather than a separate adjudication process of the ANAM results.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study is not mentioned. The evaluation focused on the device's standalone performance in correlation with traditional neuropsychological tests to establish substantial equivalence, not on how human readers' performance improves with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation was done. The "Performance Data" section describes "the results of numerous studies that examined the concurrent validity of ANAM as a clinical tool by documenting correlations with traditional neuropsychological tests." This implies that the device's output (scores) was directly compared or correlated with the output of established cognitive tests, operating as a standalone algorithm. The device "provides clinicians with objective measurements," and "ANAM does not provide a recommendation that the patient is impaired vs. unimpaired. Clinical interpretation... includes comparison with the normative database." This further reinforces its role as a standalone measurement tool.
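The interpretive workflow described above, in which the device reports an objective measurement and the clinician compares it against a normative database, can be sketched as a z-score and percentile lookup. The normative mean and standard deviation below are invented placeholders, not values from ANAM's military normative database:

```python
# Sketch of score interpretation against a normative database: the device
# reports a measurement; the clinician compares it with population norms.
# The norm values here are invented for illustration.
from statistics import NormalDist

def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw score against the normative mean and SD."""
    return (raw - norm_mean) / norm_sd

def percentile(raw, norm_mean, norm_sd):
    """Percentile of a raw score under a normal approximation of the norms."""
    return 100 * NormalDist(norm_mean, norm_sd).cdf(raw)

# Hypothetical norms for one module (e.g., a throughput score).
NORM_MEAN, NORM_SD = 100.0, 15.0

z = z_score(78.0, NORM_MEAN, NORM_SD)
print(f"z = {z:.2f}, percentile = {percentile(78.0, NORM_MEAN, NORM_SD):.1f}")
```

Note that, consistent with the document, nothing in this sketch labels the patient "impaired"; it only standardizes the measurement for clinical interpretation.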
7. The Type of Ground Truth Used
The ground truth used appears to be the results or scores from "traditional neuropsychological tests." The studies aimed to establish "concurrent validity" by showing correlations between ANAM's measurements and these established tests, which serve as the accepted standard for assessing cognitive function.
8. The Sample Size for the Training Set
The document does not provide information on the sample size used for the training set. It focuses on the validation studies that demonstrate concurrent validity with established tests. The device utilizes a "military normative database" for interpreting results, but the size or development of this database is not detailed regarding "training."
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for any training set was established. The focus is on the validation of the ANAM system by correlating its output with "traditional neuropsychological tests." If a "military normative database" was used for training or calibration, the method for establishing its ground truth (e.g., through large-scale testing of healthy military populations by neuropsychologists) is not described.
(723 days)
PKQ
Cognivue testing is indicated as an adjunctive tool for evaluating perceptual and memory function in individuals aged 55-95 years old.
Cognivue provides clinicians in a healthcare setting with objective measurements of cognitive function as a screening aid in the assessment of adults 55 years of age and older. This is done for the purpose of identifying a potential decline in cognitive function relative to baseline test performance of other age-normal adults, referring those adults for further testing where warranted, and monitoring changes in cognitive function over time.
Cognivue presents a series of visual stimuli and a wedge-shaped cursor. The display presents stimuli at varying signal strengths with a moving domain-specific target. The patient identifies the target location by moving the cursor using the rotatory mouse manipulandum. The patient is scored based on the timing and accuracy of the responses. Repeated trials of varying difficulty characterize the subject's performance in each of the tested functional domains.
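The scoring idea in the description above, combining timing and accuracy over repeated trials, could look something like the sketch below. The formula is invented purely for illustration; the source does not describe Cognivue's actual scoring algorithm:

```python
# Illustrative sketch only: the document says the patient is scored on the
# timing and accuracy of responses over repeated trials. This particular
# scoring formula is invented; it is not Cognivue's actual algorithm.
def domain_score(trials, max_rt=3.0):
    """Combine per-trial correctness and response time into a 0-100 score.

    Each trial is (correct: bool, rt_seconds: float). Correct, fast
    responses score highest; incorrect responses score 0.
    """
    if not trials:
        return 0.0
    per_trial = [
        max(0.0, 1.0 - rt / max_rt) if correct else 0.0
        for correct, rt in trials
    ]
    return 100.0 * sum(per_trial) / len(trials)

trials = [(True, 0.6), (True, 1.2), (False, 2.0), (True, 0.9)]
print(f"domain score = {domain_score(trials):.1f}")
```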
Here's a breakdown of the acceptance criteria and the study that proves the CogniVue device meets those criteria, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Acceptance Criteria (from study objective/endpoints) | Reported Device Performance (from study results) |
|---|---|---|
| Agreement with Reference Standard (SLUMS) | Cognivue agrees with the non-reference standard, SLUMS, in identifying unimpaired and impaired test subjects. | Agreement analysis (SLUMS and Cognivue): overall relationship by Deming regression, Cognivue = -29.98 + 4.465 × SLUMS, with r = 0.71. For SLUMS < 27 and Cognivue < 75 (impairment cut-off values): PPA = 86%, NPA = 50%, Accuracy = 0.86. For SLUMS ≥ 27 and Cognivue ≥ 75 (unimpaired cut-off values): PPA = 66%, NPA = 67%, Accuracy = 0.66. |
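The two analyses reported in the table, a Deming regression of Cognivue scores on SLUMS scores and percent-agreement statistics at the impairment cut-offs, can be sketched as follows. The paired scores are fabricated and the error-variance ratio `lam=1.0` is an assumption; only the form of the computation follows the document:

```python
# Sketch of the table's two analyses: Deming regression (which allows
# measurement error in both variables) and positive/negative percent
# agreement (PPA/NPA) at fixed cut-offs. All paired scores are invented.
from statistics import mean

def deming_fit(x, y, lam=1.0):
    """Deming regression y = b0 + b1*x; lam is the error-variance ratio."""
    mx, my = mean(x), mean(y)
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = (syy - lam * sxx
          + ((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2) ** 0.5) / (2 * sxy)
    return my - b1 * mx, b1  # (intercept, slope)

def agreement(device_pos, ref_pos):
    """PPA, NPA, and overall accuracy from paired binary classifications."""
    tp = sum(d and r for d, r in zip(device_pos, ref_pos))
    tn = sum(not d and not r for d, r in zip(device_pos, ref_pos))
    ppa = tp / sum(ref_pos)
    npa = tn / (len(ref_pos) - sum(ref_pos))
    return ppa, npa, (tp + tn) / len(ref_pos)

# Hypothetical SLUMS/Cognivue pairs, classified at the cut-offs the
# document reports (SLUMS < 27 impaired, Cognivue < 75 impaired).
slums = [20, 24, 29, 18, 27, 30, 22, 28]
cognivue = [60, 70, 90, 55, 70, 95, 65, 85]
b0, b1 = deming_fit(slums, cognivue)
ppa, npa, acc = agreement([c < 75 for c in cognivue], [s < 27 for s in slums])
print(f"Cognivue = {b0:.2f} + {b1:.2f} * SLUMS; PPA={ppa:.0%}, NPA={npa:.0%}, acc={acc:.2f}")
```

Deming regression is a natural choice here because both the device and the SLUMS measure cognition with error, so ordinary least squares on either axis would bias the slope.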
Study Details
- Sample sizes used for the test set and the data provenance:
- Main Clinical Validation Test Set: 401 subjects.
- Test-Retest Reliability Test Set: 358 subjects (a subset of the 401, those available for repeat testing).
- Cut-off Determination Test Set (Training Set for Cut-offs): 92 subjects.
- Data Provenance: The subjects were enrolled at 13 older adult independent living communities and adult programs for independent living seniors who are living at home. The study was prospective in its design, involving specific recruitment and protocol execution for the study. No explicit country of origin is stated, but given the FDA De Novo filing, it's highly likely the data is from the USA.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The primary ground truth for classification was the St. Louis University Mental Status (SLUMS) Examination. While the SLUMS is a clinical tool administered by a "Tester," the text doesn't specify if this "Tester" is explicitly an "expert" (e.g., a neurologist or psychologist) or a trained administrator. Given that SLUMS scores have validated cut-off scores, it relies on the established validity of the SLUMS itself, rather than real-time expert consensus for each case. The study states "Tester #2 administration of the neuropsychological test battery," implying trained personnel administered the SLUMS.
- The study used "licensed practitioners evaluating brain function" as the general user group for the device, implying that interpretation of SLUMS scores would typically fall under their purview.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- There was no adjudication method mentioned for establishing the ground truth (SLUMS scores). The SLUMS is a standardized psychometric test with established scoring rules and cut-offs.
- If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with vs. without AI assistance:
- No such MRMC comparative effectiveness study was conducted. The study design was a comparative study between the Cognivue device and a battery of non-reference standard tests (including SLUMS), not a study comparing human readers with and without AI assistance. The Cognivue is intended as an "adjunctive tool," meaning a clinician would use its output alongside other information, but the study did not quantify improvement of human performance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Yes, a standalone performance assessment was done. The entire clinical validation focuses on the Cognivue device's output (an average score classified as impaired, intermediate, or unimpaired) and its agreement with the SLUMS examination and other neuropsychological tests. The device itself generates the score and classification automatically without real-time human interpretation or modification of its core output.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The primary ground truth used was a validated clinical psychometric scale: the St. Louis University Mental Status (SLUMS) Examination. This is a well-established cognitive screening tool with predefined cut-off scores for unimpaired, intermediate (mildly impaired), and impaired cognitive function. Secondary ground truth involved other established neuropsychological tests for construct validity.
- The sample size for the training set:
- The cut-off values for Cognivue (impaired, intermediate, unimpaired) were determined using a preliminary study of 92 subjects. This set of 92 subjects essentially served as a training set for calibrating the device's classification thresholds against the SLUMS.
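Calibrating classification thresholds against a reference test, as the 92-subject preliminary study did for Cognivue against SLUMS, can be sketched as a simple cut-off search. The data and the Youden-index criterion below are illustrative assumptions; the document does not state how the cut-offs were actually derived:

```python
# Sketch of cut-off calibration against a reference test. The selection
# criterion (Youden's J = sensitivity + specificity - 1) and all data are
# illustrative assumptions, not the study's actual method.
def best_cutoff(scores, impaired, candidates):
    """Pick the cut-off c (score < c => impaired) maximizing Youden's J."""
    def youden(c):
        tp = sum(s < c and imp for s, imp in zip(scores, impaired))
        tn = sum(s >= c and not imp for s, imp in zip(scores, impaired))
        sens = tp / sum(impaired)
        spec = tn / (len(impaired) - sum(impaired))
        return sens + spec - 1
    return max(candidates, key=youden)

# Hypothetical device scores with reference-derived impairment labels.
scores = [55, 60, 68, 72, 78, 82, 88, 91]
impaired = [True, True, True, True, False, False, False, False]
print(best_cutoff(scores, impaired, range(50, 100)))
```

Once a cut-off is fixed on the calibration (training) subjects, it should be evaluated on an independent test set, which is essentially the role the 401-subject validation cohort plays in the study described above.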
- How the ground truth for the training set was established:
- For the 92 subjects used to establish the Cognivue cut-offs, the ground truth was established by administering the St. Louis University Mental Status (SLUMS) Examination. The SLUMS scores were then used to classify these 92 subjects into impaired, intermediate, and unimpaired categories based on the SLUMS's established cut-offs (e.g., a score above 26 classified as unimpaired).