K Number
K980477
Device Name
IMAGE VUE EEG
Date Cleared
1998-08-06

(181 days)

Product Code
Regulation Number
882.1400
Panel
NE
Reference & Predicate Devices
Intended Use

The software is intended for use by a qualified/trained EEG technologist or physician on both adult and pediatric subjects for the visualization of human brain function and structure by fusing a variety of EEG information with MRI images.
The software is intended for use by a qualified/trained EEG technologist or physician for the visualization of human brain electrical activity by fusing a variety of EEG information with MRI images.

Device Description

The software imports digital EEG data (in a variety of formats) and MRI data and permits the fusing and viewing of both types of data. No direct intervention, treatment, or on-line monitoring of the patient is performed.

AI/ML Overview

The provided text is a 510(k) summary for the IMAGE VUE EEG Software, stating that the device is substantially equivalent to predicate devices. It does not contain a detailed study with acceptance criteria and reported device performance metrics of the kind a modern, statistically rigorous clinical validation study would provide.

Instead, the document relies on "bench and user testing" to claim substantial equivalence. This type of submission, especially from 1998, typically means the manufacturer demonstrated that the new device functions similarly to existing, legally marketed devices and does not raise new questions of safety or effectiveness. There isn't an explicit "acceptance criteria" table or a detailed "study that proves the device meets the acceptance criteria" in the contemporary sense of evaluating AI/software performance.

Therefore, for aspects of your request that pertain to detailed performance metrics (like sensitivity, specificity, or improvement with AI over human readers), sample sizes for test sets, expert qualifications, adjudication methods, or separate training/test sets with ground truth, the information is not available in the provided text.

Here's an analysis based on the information that is present:

1. A table of acceptance criteria and the reported device performance

  • Acceptance Criteria: Not explicitly defined in terms of specific performance metrics (e.g., accuracy, sensitivity, specificity). The overarching acceptance criterion for a 510(k) submission is "substantial equivalence" to predicate devices for safety and effectiveness.
  • Reported Device Performance:
    • "The results of bench and user testing indicate that the new device is as safe and effective as the predicate devices."
    • "it is the conclusion of SAM Technology, Inc. that the IMAGE VUE EEG software is as safe and effective as the predicate devices, has few technological differences, and has no new indications for use, thus rendering it substantially equivalent to the predicate devices."

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Sample Size for Test Set: Not specified. The document only mentions "bench and user testing."
  • Data Provenance: Not specified.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • Not specified. The methodology for "bench and user testing" is not detailed to this extent.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • Not specified.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No, an MRMC comparative effectiveness study is not mentioned. The device described (IMAGE VUE EEG Software) is for "visualization of human brain function and structure by fusing a variety of EEG information with MRI images," and "No direct intervention, treatment, or on-line monitoring of the patient is performed." It is not described as an AI-assisted diagnostic tool that would typically undergo an MRMC study to compare human performance with and without AI.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

  • The text describes the software's function ("imports digital EEG data... and MRI data and permits the fusing and viewing of both types of data"). It does not mention a standalone performance evaluation in terms of diagnostic accuracy by the algorithm itself. The comparison is based on overall safety and effectiveness similar to predicate devices.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • Not specified. Given the nature of a 1998 510(k) for an EEG visualization software, "ground truth" would likely have been established through expert interpretation of standard EEG and MRI data, but this is not explicitly detailed.

8. The sample size for the training set

  • Not applicable/Not specified. This document pertains to a 510(k) submission from 1998 for software that primarily fuses and visualizes data. It is not described as a machine learning or AI-driven diagnostic algorithm that would typically have a distinct "training set" in the modern sense. The "bench and user testing" refers to validation of the software's functionality and comparison to existing devices.

9. How the ground truth for the training set was established

  • Not applicable/Not specified (as above).

§ 882.1400 Electroencephalograph.

(a) Identification. An electroencephalograph is a device used to measure and record the electrical activity of the patient's brain obtained by placing two or more electrodes on the head.
(b) Classification. Class II (performance standards).