
510(k) Data Aggregation

    K Number: K092844
    Device Name: GEOSOURCE
    Date Cleared: 2010-12-21 (462 days)
    Product Code:
    Regulation Number: 882.1400
    Reference & Predicate Devices:

    Intended Use

    GeoSource is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects at least 3 years of age for the visualization of human brain function by fusing a variety of EEG information with rendered images of an idealized head model and an idealized MRI image.

    Device Description

    GeoSource is an add-on software module for EGI's Net Station software and can only be used on EEG data generated by EGI hardware. It runs on a personal computer and is used to approximate the source localization of EEG signals and to visualize the estimated source locations. It uses the linear inverse methods LORETA, LAURA, and sLORETA together with spherical and finite difference forward head models.
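
    The methods named above all belong to the regularized linear inverse family: scalp potentials are modeled as x = L s + n, where L is the lead-field (forward) matrix, and a source estimate is obtained by applying a regularized pseudo-inverse of L. The sketch below illustrates this general idea with a minimum-norm estimate on synthetic data; the matrix sizes, regularization value, and random lead field are illustrative assumptions, not GeoSource's actual forward models or parameters.

```python
import numpy as np

# Minimal sketch of a regularized linear inverse (minimum-norm) estimate,
# the general family that LORETA/sLORETA/LAURA belong to. The lead field,
# dimensions, and regularization value are illustrative placeholders only.
rng = np.random.default_rng(0)

n_sensors, n_sources = 128, 300                      # e.g., dense EEG net, coarse source grid
L = rng.standard_normal((n_sensors, n_sources))      # toy lead-field (forward) matrix
s_true = np.zeros(n_sources)
s_true[42] = 1.0                                     # a single active source for illustration
x = L @ s_true + 0.05 * rng.standard_normal(n_sensors)  # simulated scalp EEG sample

# Minimum-norm inverse operator: W = L^T (L L^T + lambda * I)^(-1)
lam = 1e-2
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))
s_hat = W @ x                                        # estimated source amplitudes

print("true source index:", 42, "| peak of estimate:", int(np.abs(s_hat).argmax()))
```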

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the GeoSource device, based on the provided 510(k) notification:

    Acceptance Criteria and Study to Prove Device Meets Acceptance Criteria

    1. A table of acceptance criteria and the reported device performance

    The primary acceptance criterion for GeoSource was demonstrating substantial equivalence to predicate devices in terms of source localization accuracy. The study aimed to show that the GeoSource algorithms (LORETA, sLORETA, LAURA with FDM) provided similar source localization results to the predicate algorithm (LORETA with spherical head model).

    Since this was a substantial equivalence submission, specific quantitative performance metrics and acceptance thresholds (e.g., sensitivity > X%, accuracy > Y%) are not explicitly stated in the provided text. The "reported device performance" is essentially the conclusion that the GeoSource algorithms were found to be substantially equivalent to the predicate device.

    Acceptance Criteria: Substantial equivalence in source localization accuracy to the predicate device algorithm (LORETA with spherical head model).

    Reported Device Performance (as concluded by the study): The proposed GeoSource algorithms (LORETA, sLORETA, and LAURA with the GeoSource finite difference model [FDM]) were demonstrated to be substantially equivalent to the predicate device algorithm (LORETA using a spherical head model).

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: 20 epilepsy subjects.
    • Data Provenance: Retrospective data analysis. The country of origin is not explicitly stated, but the data are implied to come from the University of Washington's Regional Epilepsy Center (USA).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Three.
    • Qualifications of Experts: Experienced epileptologists from the University of Washington's Regional Epilepsy Center. Specific years of experience are not mentioned.

    4. Adjudication method for the test set

    The adjudication method involved each of the three experienced epileptologists reviewing the source localization results for each algorithm along with summaries of the postoperative reports. Each reader was then asked to rate whether each of the four algorithm solutions (GeoSource LORETA, sLORETA, and LAURA with FDM, and the predicate LORETA with a spherical head model) was located within the resected brain regions. The text does not specify a consensus or majority-voting method (e.g., 2+1, 3+1); it states "The results demonstrated...", implying a collective finding rather than individual expert opinions serving as the final ground truth.
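
    To make the rating structure concrete, the sketch below shows one hypothetical way such a review could be tabulated: a boolean rating per algorithm, subject, and reader, aggregated by simple majority. The ratings here are random placeholders and the majority rule is an assumption; the 510(k) text does not state how the three readers' ratings were combined.

```python
import numpy as np

# Hypothetical tabulation of the expert review described above: for each of the
# 20 subjects, each of the three epileptologists rates whether each algorithm's
# solution fell within the resected region. Data and aggregation rule are
# placeholders for illustration only.
rng = np.random.default_rng(1)
algorithms = ["LORETA-FDM", "sLORETA-FDM", "LAURA-FDM", "LORETA-sphere (predicate)"]
n_subjects, n_readers = 20, 3

# ratings[a, s, r] = True if reader r judged algorithm a's solution to lie
# within subject s's resected region (random placeholder data here).
ratings = rng.random((len(algorithms), n_subjects, n_readers)) > 0.3

for name, algo_ratings in zip(algorithms, ratings):
    majority = algo_ratings.sum(axis=1) >= 2      # at least 2 of 3 readers agree
    print(f"{name}: concordant in {majority.sum()}/{n_subjects} subjects")
```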

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • Was an MRMC comparative effectiveness study done? No, not in the typical sense of evaluating human reader performance with and without AI assistance. This study focused on the performance of the algorithms themselves by having experts evaluate the algorithms' outputs in relation to the ground truth. It did not directly measure how much human readers improved by using the GeoSource software.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

    • Was a standalone study done? Yes, in essence. The study's primary goal was to evaluate the GeoSource algorithms' source localization accuracy in a standalone manner, with expert epileptologists providing the "ground truth" assessment of whether the algorithm's output correlated with the resected region. The experts were evaluating the algorithms' predictions, not using the algorithm as assistance for their own interpretation.

    7. The type of ground truth used

    • Type of Ground Truth: A combination of clinical and outcome data:
      • Clinical Neurophysiologist review: Identification and averaging of spikes in EEG data.
      • Operative data: Descriptions of the resected zone from surgery.
      • Outcomes Data: Postoperative Engel 1 or 2 determination (indicating good seizure control after resection).
      • Expert Consensus/Evaluation: The decision of three experienced epileptologists on whether the source localization algorithm's output was within the resected brain regions, using all available clinical information (postoperative reports, resected zone descriptions) as context.

    8. The sample size for the training set

    The document does not provide information about a separate "training set" for the GeoSource algorithms. The clinical study described was a retrospective analysis of subjects who had previously undergone resection surgery, and these data were used to test the algorithms' performance, not to train them. Source localization algorithms such as LORETA, sLORETA, and LAURA are typically model-based and do not require a separate "training set" in the way machine learning models do.

    9. How the ground truth for the training set was established

    As no specific training set is identified for the GeoSource algorithms, the question of how its ground truth was established is not applicable in the context of this 510(k) submission. These linear inverse methods are derived from mathematical and biophysical principles rather than being "trained" on a dataset with a predefined ground truth.
