Search Results

Found 22 results

510(k) Data Aggregation

    K Number
    K250239
    Device Name
    NeuroMatch
    Manufacturer
    Date Cleared
    2025-05-23

    (116 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code: OLX

    Intended Use
    1. LVIS NeuroMatch Software is intended for the review, monitoring and analysis of electroencephalogram (EEG) recordings made by EEG devices using scalp electrodes and to aid neurologists in the assessment of EEG. The device is intended to be used by qualified medical practitioners who will exercise professional judgement in using the information.

    2. The Seizure Detection component of LVIS NeuroMatch is intended to mark previously acquired sections of adult EEG recordings from patients greater than or equal to 18 years old that may correspond to electrographic seizures, in order to assist qualified medical practitioners in the assessment of EEG traces. EEG recordings should be obtained with a full scalp montage according to the electrodes from the International Standard 10-20 placement.

    3. The Spike Detection component of LVIS NeuroMatch is intended to mark previously acquired sections of adult EEG recordings from patients ≥18 years old that may correspond to spikes, in order to assist qualified medical practitioners in the assessment of EEG traces. LVIS NeuroMatch Spike Detection performance has not been assessed for intracranial recordings.

    4. LVIS NeuroMatch includes the calculation and display of a set of quantitative measures intended to monitor and analyze EEG waveforms. These include Artifact Strength, Asymmetry Spectrogram, Autocorrelation Spectrogram, and Fast Fourier Transform (FFT) Spectrogram. These quantitative EEG measures should always be interpreted in conjunction with review of the original EEG waveforms.

    5. LVIS NeuroMatch displays physiological signals such as electrocardiogram (ECG/EKG) if it is provided in the EEG recording.

    6. The aEEG functionality included in LVIS NeuroMatch is intended to monitor the state of the brain.

    7. LVIS NeuroMatch Artifact Reduction (AR) is intended to reduce muscle and eye movement artifacts in EEG signals from the International Standard 10-20 placement. AR does not remove the entire artifact signal and is not effective for other types of artifacts. AR may modify portions of waveforms representing cerebral activity. Waveforms must still be read by a qualified medical practitioner trained in recognizing artifacts, and any interpretation or diagnosis must be made with reference to the original waveforms.

    8. LVIS NeuroMatch EEG source localization visualizes brain electrical activity on a 3D idealized head model. LVIS NeuroMatch source localization additionally calculates and displays summary trends based on source localization findings over time.

    9. This device does not provide any diagnostic conclusion about the patient's condition to the user.
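
Item 4 above names an FFT Spectrogram among the quantitative EEG measures. As a purely illustrative sketch (the function name, window parameters, and array layout below are assumptions, not NeuroMatch's implementation), a short-time FFT power spectrogram can be computed like this:

```python
import numpy as np

def fft_spectrogram(signal, fs, window_len=256, hop=128):
    """Toy short-time FFT power spectrogram (illustrative only)."""
    window = np.hanning(window_len)
    # Overlapping windowed frames of the signal
    frames = [signal[i:i + window_len] * window
              for i in range(0, len(signal) - window_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame/bin
    freqs = np.fft.rfftfreq(window_len, d=1 / fs)
    return freqs, spec

# 10 s of a 10 Hz alpha-band sine, sampled at 256 Hz
fs = 256
t = np.arange(0, 10, 1 / fs)
freqs, spec = fft_spectrogram(np.sin(2 * np.pi * 10 * t), fs)
peak_hz = freqs[spec.mean(axis=0).argmax()]
```

For the 10 Hz test tone the average power spectrum peaks at the 10 Hz bin, which is the kind of band-level information such a quantitative trend surfaces; per item 4 above, it is always interpreted alongside the raw waveforms.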

    Device Description

    NeuroMatch is a cloud-based software as a medical device (SaMD) intended to review, monitor, display, and analyze previously acquired and/or near real-time electroencephalogram (EEG) data from patients greater than or equal to 18 years old. The device is not intended to substitute for real-time monitoring of EEG. The software includes advanced algorithms that perform artifact reduction, seizure detection, and spike detection.

    The subject device is identical to the NeuroMatch device cleared under K241390, with the exception of the following additional features:

    1. Source localization;
    2. Source localization trends.

    Source localization and source localization trends are substantially equivalent to the Epilog PreOp (K172858). Apart from the proposed additional software changes and associated changes to the Indications for Use and labeling, there are no changes to the intended use or to the software features that were previously cleared. Below is a description of the software functions that will be added to the cleared NeuroMatch device.

    1. Source Localization

    The NeuroMatch Source Localization visualization feature is used to visualize recorded EEG activity from the scalp in an idealized 3D model of the brain. The idealized brain model is based on template MR images. Each single sample of EEG-measured brain activity corresponds to a single point/pixel referred to as a source localization node (i.e., "node"). Together, the source localization nodes form a 3D Cartesian grid where EEG signals with higher standardized current density are depicted in red and signals with lower standardized current density are depicted in blue. Source localization can be performed for any selected segment of the EEG data. The maximum and minimum of the source localization values are the absolute maximum and minimum values across the selected EEG signal, respectively. Users can also set an absolute threshold for the minimum value of the source localization outputs.
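
The display behavior described above (segment-wide extremes as the color scale limits, plus an optional user-set minimum threshold) can be sketched as follows; the function and parameter names are illustrative assumptions, not the device's API:

```python
import numpy as np

def display_scale(node_values, user_min=None):
    """Color scale for a selected EEG segment, per the description above:
    the scale extremes are the max/min across the segment, and an optional
    user-set minimum threshold masks out weaker sources (NaN = not
    rendered). Illustrative sketch only."""
    vmax = float(node_values.max())
    vmin = float(node_values.min()) if user_min is None else float(user_min)
    shown = np.where(node_values >= vmin, node_values, np.nan)
    return vmin, vmax, shown
```

A renderer would then map `shown` onto a blue-to-red colormap between `vmin` and `vmax`, with masked nodes left undrawn.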

    2. Source Localization Trends

    NeuroMatch provides three automatic source localization trends to assist physicians investigating the amplitude and the frequency of the signal of interest (e.g., seizure onset) in the source space. Two of the trends provide simple 3D views of the high-amplitude and high-frequency sources across the signal of interest. The third trend provides a similar 3D view of high-frequency source movement across time.

    • Maximum Amplitude Projection (MAP): This metric allows clinicians to readily determine which brain regions are active and have high amplitude source localization results. The metric is determined by iterating through each node within a specified analysis time window and outputting the maximum source localization amplitude at that node within the specified analysis time window. No value is reported for nodes which have not been identified as maximum at any time during the specified window. This metric can help show brain regions that have high amplitude during a seizure.

    • Node Visit Frequency (NVF): This metric is reported as the number of times that a node has been labeled as maximum over time. This metric can help clinicians identify which brain regions are frequently active during a seizure.

    • Node Transition Frequency (NTF): This metric allows clinicians to determine which brain regions are active in consecutive time frames over a selected time period. A node transition is defined as a transition from one maximum point to another over time, and the node transition frequency is calculated by iterating through all time points for a specified analysis window, counting the number of times a transition between two points occurs over that time, and then dividing it by the time window of analysis. This metric can help identify pairs of brain regions that are frequently active in sequential order.
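
The three trend definitions above can be sketched in code. The (time, node) array layout and function name are assumptions for illustration; this is not the device's implementation:

```python
import numpy as np

def trend_metrics(amplitudes, window_seconds):
    """Sketch of MAP, NVF, and NTF for a (time, node) array of source
    localization amplitudes over one analysis window."""
    T, N = amplitudes.shape
    winners = amplitudes.argmax(axis=1)   # node labeled "maximum" at each time point

    # Maximum Amplitude Projection: each node's maximum amplitude within
    # the window, reported only for nodes that were the maximum at some
    # time point (all other nodes stay NaN, i.e. no value reported).
    map_values = np.full(N, np.nan)
    for node in np.unique(winners):
        map_values[node] = amplitudes[:, node].max()

    # Node Visit Frequency: number of times each node was labeled maximum.
    nvf = np.bincount(winners, minlength=N)

    # Node Transition Frequency: transitions between maximum nodes across
    # consecutive time points, divided by the analysis window length.
    transitions = int(np.count_nonzero(winners[1:] != winners[:-1]))
    ntf = transitions / window_seconds

    return map_values, nvf, ntf
```

For example, with two nodes whose maxima alternate over three time points, NVF counts one visit for the first node and two for the second, and NTF counts two transitions over the window.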

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the NeuroMatch device, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA letter does not explicitly state "acceptance criteria" in the traditional sense of pre-defined thresholds for performance metrics. Instead, the study's primary objective for Source Localization was to demonstrate non-inferiority to a reference device (CURRY) and comparable performance to a predicate device (Epilog PreOp). Therefore, the "acceptance criteria" can be inferred from the study's conclusions regarding non-inferiority and comparability.

    For Source Localization Trends, the acceptance criterion was functional correctness and clinician understanding.

    Feature / Metric | Acceptance Criteria (Inferred) | Reported Device Performance

    Source Localization

    • Non-Inferiority to CURRY (Reference Device)
      Acceptance Criteria: Lower bound of the one-sided 95% CI of the success rate difference (NeuroMatch - CURRY) > pre-specified non-inferiority margin.
      Reported Performance: NeuroMatch success rate 90.7% (39/43 concordant patients); CURRY success rate 86.0% (37/43 concordant patients). Lower bound of the one-sided 95% CI of the success rate difference: -4.65% (greater than the pre-specified non-inferiority margin). This established non-inferiority.

    • Comparability to Epilog PreOp (Predicate Device)
      Acceptance Criteria: Comparable success rate and 95% CI overlap.
      Reported Performance: NeuroMatch success rate 91.7% (95% CI: 79.16, 100); Epilog PreOp success rate 91.7% (95% CI: 79.16, 100). This indicates comparable performance.

    • Consistency across Gender
      Acceptance Criteria: No considerable gender-related differences; consistently non-inferior to CURRY.
      Reported Performance: Male: CURRY 81.3%, NeuroMatch 87.5%; Female: CURRY 88.9%, NeuroMatch 92.6%. The observation suggests no considerable gender-related differences.

    • Consistency across Age Groups
      Acceptance Criteria: Comparable performance to CURRY across age groups.
      Reported Performance: Age [18, 30): CURRY 81.8%, NeuroMatch 81.8%; Age [30, 40): CURRY 91.7%, NeuroMatch 91.7%; Age [40, 50): CURRY 85.7%, NeuroMatch 92.9%; Age [50, 75): CURRY 83.3%, NeuroMatch 100.0%. Results suggest comparable performance across age groups.

    Source Localization Trends

    • Acceptance Criteria: Functional correctness (passes all test cases); clinician understanding and perceived clinical utility.
      Reported Performance: All test cases passed, confirming the trends functioned as intended and yielded the expected results. A clinical survey of 15 clinicians showed they were able to understand the function of each trend and provided information regarding its clinical utility in their workflow.
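
The clearance letter reports the non-inferiority result but not the CI method. The sketch below uses a standard unpaired one-sided Wald interval for the difference of two proportions; because the same 43 patients were evaluated on both devices, the actual submission presumably used a paired method, so this simplification does not reproduce the reported -4.65% bound exactly.

```python
import math

def one_sided_lower_bound(successes_a, successes_b, n, z=1.6449):
    """Lower bound of a one-sided 95% Wald CI for p_a - p_b, treating the
    two success rates as independent samples of size n. A simplification:
    the 43 patients here were the same for both devices (paired data)."""
    p_a, p_b = successes_a / n, successes_b / n
    se = math.sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)
    return (p_a - p_b) - z * se

# NeuroMatch (39/43 concordant) vs. CURRY (37/43 concordant)
lb = one_sided_lower_bound(39, 37, 43)
```

Non-inferiority is declared when this lower bound exceeds the pre-specified margin; the unpaired simplification yields roughly -6.7%, a looser bound than the paired -4.65% reported.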

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Source Localization Test Set): 43 patients.
    • Data Provenance: Collected from three independent and geographically diverse medical institutions:
      • Two institutions in the United States.
      • One institution in South Korea.
      • The study utilized retrospective data, as it focused on "previously acquired sections" of EEG recordings and "normalized post-operative MRIs with distinctive resection regions," indicating these were historical cases with established outcomes.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three (3) US board-certified epileptologists.
    • Qualifications: "US board-certified epileptologists." (Specific years of experience are not mentioned, but board certification implies a high level of expertise in the field).

    4. Adjudication Method for the Test Set

    • Adjudication Method: The three board-certified epileptologists independently completed a survey. They were presented with source localization results from each device (NeuroMatch, CURRY, PreOp) and normalized post-operative MRIs with resection regions.
    • Ground Truth Establishment: Each physician independently determined the resection region at the sublobar level and then assessed whether the SL output of each device had any overlap with that region, answering a "Yes/No" concordance question for each patient and device. The document does not describe an explicit consensus or adjudication step among the three experts; rather, with the resected brain areas from the post-operative MRIs serving as the primary ground truth, each expert's individual assessment against the known resection region was aggregated to produce the concordance rate.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a form of MRMC study was done, but not in the traditional sense of measuring human reader improvement with AI assistance.
      • The study involved multiple readers (3 epileptologists) assessing multiple cases (43 patients).
      • However, the comparison was between AI algorithms (NeuroMatch vs. CURRY vs. PreOp), with the human readers acting as independent evaluators to establish concordance with a post-operative ground truth (resected brain areas).
      • Effect Size of Human Reader Improvement with AI vs. Without AI Assistance: This specific metric was not assessed or reported. The study evaluated the standalone AI performance of NeuroMatch compared to other AI devices, using human experts to determine the "correctness" of the AI's output in relation to surgical outcomes. It did not measure how human readers' diagnostic accuracy or efficiency changed when using NeuroMatch as an aid.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, a standalone study was done for the Source Localization feature. The study directly compared the performance of the NeuroMatch algorithm against the CURRY reference device and the PreOp predicate device. The output was a "Yes/No" concordance with the resected brain area, as assessed by the experts. The experts evaluated the device's output, not their own performance using the device.

    7. Type of Ground Truth Used

    • Source Localization: The ground truth used was the resected brain areas as identified on normalized post-operative MRIs. This is a form of outcomes data combined with anatomical pathology (surgical intervention). The epileptologists were tasked with identifying whether the source localization output (from the algorithms) "overlapped" with these resected regions.
    • Source Localization Trends: For the trends (MAP, NVF, NTF), the ground truth for functional correctness was EEG datasets with known solutions (i.e., simulated or carefully crafted data where the expected output of the algorithms was precisely predictable). For clinical utility, the ground truth was clinical feedback and perceived understanding from the 15 clinicians.
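
The per-case concordance judgment described above reduces to a simple overlap test between two regions on a shared grid. A minimal sketch, where the voxel-mask representation is an assumption for illustration:

```python
import numpy as np

def has_overlap(sl_mask, resection_mask):
    """Concordance check as described above: does the source localization
    output overlap the resected region at all? Both inputs are boolean
    arrays on the same voxel grid."""
    return bool(np.logical_and(sl_mask, resection_mask).any())
```

The study's success rates are then the fraction of patient/device cases for which this "Yes/No" question comes back "Yes".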

    8. Sample Size for the Training Set

    • The document does not specify the sample size for the training set for any of the algorithms. It only details the test set used for validation.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established. Since the training set size is not provided, this information is also absent.

    K Number
    K241513
    Device Name
    Sourcerer
    Date Cleared
    2024-09-27

    (121 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code: OLX

    Intended Use

    The software is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects at least 16 years of age for the visualization of human brain function by fusing a variety of EEG information with rendered images of an idealized head model and an idealized MRI image.

    Device Description

    Sourcerer is an EEG source localization software that uses EEG and MRI-derived information to estimate and visualize cortex projections of human brain activity. Sourcerer is designed in a client-server model wherein the server components integrate directly with FLOW - BEL's software. Inverse source projections are computed on the server using EEG and MRI data from FLOW using the Electro-magnetic Inverse Module (EMIM API). The inverse results are interactively visualized in the Chrome browser running on the client computer using the Electro-magnetic Functional Anatomy Viewer (EMFAV).

    AI/ML Overview

    Here's an analysis of the provided text to extract the acceptance criteria and study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance

    Algorithmic Testing (HexaFEM)

    • Criterion: Consistency with analytical solutions for the three-layer spherical model.
      Result: HexaFEM solutions are consistent with the analytical solutions for the three-layer spherical model.
    • Criterion: Consistency with FDM solutions for a realistic head model using the same conductivity values.
      Result: HexaFEM and FDM solutions are the same for one realistic head model using the same conductivity values.

    Algorithmic Testing (Inverse Model - EMIM Module)

    • Criterion: LORETA: localization error distance similar to the values reported by its creator.
      Result: Average localization error is about 7 mm, similar to what is reported for LORETA by its creator.
    • Criterion: sLORETA: exact source estimation results for simulated signal sources, replicating the creator's reported results.
      Result: Source estimation results are exact for the simulated signal sources, fully replicating the simulated results reported by sLORETA's creator.
    • Criterion: MSP: zero localization error for simulated signal sources.
      Result: Shows 100% accuracy (zero localization error), as expected.

    Clinical Performance Testing

    • Criterion: Performance of Sourcerer equivalent to GeoSource (predicate device).
      Result: Performance of Sourcerer was shown to be equivalent to GeoSource (comparison based on the Euclidean distance between the maximal amplitude location and the resected boundary in epileptic patients).

    Software Verification and Validation Testing

    • Criterion: Accuracy of Sourcerer validated through algorithm testing.
      Result: Algorithm testing validated the accuracy of Sourcerer; the product was deemed fit for clinical use.
    • Criterion: Developed according to FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices".
      Result: Sourcerer was designed and developed as recommended by the FDA guidance.
    • Criterion: Safety classification set to Class B according to the AAMI/ANSI/IEC 62304 standard.
      Result: Sourcerer safety classification set to Class B.
    • Criterion: "Basic Documentation Level" applied.
      Result: "Basic Documentation Level" applied to this device.

    2. Sample size used for the test set and the data provenance

    The text explicitly mentions:

    • Clinical Performance Testing: "The clinical data used in the evaluation is obtained from epileptic patients during standard presurgical evaluation." The sample size for the clinical test set is not explicitly stated as a number, but rather as "each patient's pre-operative hdEEG recording." It's implied there were multiple patients, but the exact count is missing.
    • Data Provenance: The clinical data is retrospective ("obtained from epileptic patients during standard presurgical evaluation") and appears to be from a clinical setting, presumably within the country of origin of the device manufacturer (USA, as indicated by the FDA submission).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Clinical Performance Testing Ground Truth: The ground truth for the clinical test set was established by:
      • Resected region (from MRI): This implies surgical and pathological confirmation of the epileptic zone, which would typically involve neurosurgeons and neuropathologists.
      • Clinical outcome: This refers to the patient's post-surgical seizure control, indicating the success of the resection.
        No specific number of experts or their qualifications (e.g., number of years of experience) are provided in the document.

    4. Adjudication method for the test set

    The document does not explicitly describe an adjudication method for establishing ground truth, such as 2+1 or 3+1. The ground truth for the clinical performance testing relied on the "resected region (from MRI)" and "clinical outcome," which are objective clinical findings rather than subjective expert interpretations requiring adjudication.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study. The clinical performance testing compared the device's output (Electrical Source Imaging - ESI) to the predicate device (GeoSource) and the ground truth (resected region, clinical outcome), not improved human reader performance with AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, extensive standalone (algorithm only) performance testing was done:

    • Algorithmic Testing of HexaFEM: Compared HexaFEM solutions to analytical solutions and FDM solutions.
    • Algorithmic Testing of Inverse Model (EMIM Module): Tested LORETA, sLORETA, and MSP solvers using "test files with known signal sources." This involved comparing the algorithm's estimated source generator to the known (simulated) source.
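
The "known signal sources" test design described above (forward-project a simulated source, then check that the inverse solver recovers it) can be sketched generically. The sketch uses a Moore-Penrose minimum-norm inverse on a deliberately overdetermined toy lead field so recovery is exact; it stands in for, and is not, the LORETA/sLORETA/MSP solvers in the EMIM module:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 64 sensors, 20 candidate source locations.
# Real EEG inverse problems are underdetermined; the overdetermined toy
# case is chosen so the pseudo-inverse recovers the source exactly.
n_sensors, n_sources = 64, 20
lead_field = rng.standard_normal((n_sensors, n_sources))
positions_mm = rng.uniform(-70, 70, size=(n_sources, 3))

true_idx = 7
measurements = lead_field @ np.eye(n_sources)[true_idx]   # forward projection

estimate = np.linalg.pinv(lead_field) @ measurements      # inverse solution
est_idx = int(np.abs(estimate).argmax())

# Localization error: distance between estimated and true source positions
localization_error_mm = float(np.linalg.norm(positions_mm[est_idx] - positions_mm[true_idx]))
```

In the realistic underdetermined setting, the same comparison yields the nonzero localization errors (e.g., LORETA's ~7 mm) reported above.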

    7. The type of ground truth used

    • Algorithmic Testing (HexaFEM):
      • Mathematical/Analytical Ground Truth: Comparison with "analytical solutions for the three-layer spherical model."
      • Comparative Ground Truth: Comparison with "FDM solutions for one realistic head model."
    • Algorithmic Testing (Inverse Model - EMIM Module):
      • Simulated/Known Ground Truth: "known signal sources" from forward projections were used as ground truth for "recovering the source generator (known)."
    • Clinical Performance Testing:
      • Outcomes Data/Pathology/Clinical Consensus: "resected region (from MRI)" and "clinical outcome" were used to establish the ground truth for epileptic focus localization.

    8. The sample size for the training set

    The document does not specify the sample size for the training set. It focuses on verification and validation, but not the training of the underlying algorithms.

    9. How the ground truth for the training set was established

    Since the document does not specify the training set, it does not describe how its ground truth was established. The ground truth description is primarily for the test/validation sets.


    K Number
    K233985
    Manufacturer
    Date Cleared
    2024-05-15

    (149 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code: OLX

    Intended Use

    The TRIUX™ neo non-invasively measures the magnetoencephalographic (MEG) signals (and, optionally, electroencephalographic (EEG) signals) produced by electrically active tissue of the brain. These signals are recorded by a computerized data acquisition system, displayed, and may then be interpreted by trained physicians to help localize these active areas. The locations may then be correlated with anatomical information of the brain.

    MEG is routinely used to identify the locations of visual, auditory, somatosensory, and motor cortices in the brain when used in conjunction with evoked response stimulators. MEG is also used to noninvasively locate regions of epileptic activity within the brain. The localization information provided by MEG may be used, in conjunction with other diagnostic data, in neurosurgical planning.

    TRIUX™ neo may be used for patients of all ages as appropriate for magnetoencephalography.

    MEGreview™ is used for detection and localization of epileptic spontaneous brain activity. In addition, MEGreview™ may be used for localization of eloquent cortex, such as visual, auditory, somatosensory, and motor functions. Results interpreted by a trained clinician in conjunction with other imaging modalities can contribute to presurgical evaluation.

    MEGreview™ is intended for patients of all ages as appropriate for magnetoencephalography.

    Device Description

    TRIUX™ neo NM27000N (TRIUX™ neo below) is a magnetoencephalographic (MEG) device, designed to non-invasively detect and display biomagnetic signals produced by electrically active nerve tissue in the brain. This system enables diagnostic capabilities by providing information about the location of active nerve tissues relative to brain anatomy. It measures both MEG and electroencephalographic (EEG) signals, which are then recorded, displayed, and interpreted by trained clinicians to aid in neurosurgical planning and locating regions of epileptic activity.

    TRIUX™ neo employs 306 SQUID (Superconducting Quantum Interference Device) detectors to measure magnetic signals with minimal distortion, allowing for localization of brain activity. The detectors are housed in a cryogenic Dewar vessel, along with an internal helium recycler to maintain optimal operating conditions.

    The TRIUX™ neo system features a probe unit with a modular structure, a patient-support system with a couch and chair for various positioning needs, and an electronics setup housed outside the magnetically shielded room. The software component, MEGflow™, facilitates data acquisition, preprocessing, and analysis, and includes functionalities for clinical epilepsy workflows, MRI integration, and visualization tools.

    MEGreview™ is a software for off-line visualization, and localization of brain activity measured with magnetoencephalography (MEG) and, optionally, visualization of brain activity measured with scalp electroencephalography (EEG). MEGreview™ provides workflows for epilepsy focus localization and functional mapping including signal processing, source localization, integration with anatomical MRI and visualization of the results overlayed on anatomical information, as well as reporting and exporting the results.

    MEGreview™ is intended to be used with TRIUX™ neo or equivalent MEG devices.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the TRIUX™ neo and MEGreview™ devices, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Stated Goal) | Device Performance (Reported Outcome)

    • Goal: Preserve signal quality for data analysis.
      Outcome: Successfully preserved signal quality for data analysis (clinical investigations).
    • Goal: Reduce localization error.
      Outcome: Reduced localization error (clinical investigations).
    • Goal: Localization error of evoked responses and epileptiform events.

    K Number
    K210199
    Device Name
    RICOH MEG
    Date Cleared
    2021-07-02

    (158 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code: OLX

    Intended Use

    The RICOH MEG non-invasively measures the magnetoencephalographic (MEG) signals produced by electrically active tissue of the brain. These signals are recorded by a computerized data acquisition system, displayed, and may then be interpreted by trained physicians to help localize these active areas. The locations may then be correlated with anatomical information of the brain. MEG is routinely used to identify the locations of visual, auditory, and somatosensory activity in the brain when used in conjunction with evoked response averaging devices. MEG is also used to non-invasively locate regions of epileptic activity within the brain. The localization information provided by the device may be used, in conjunction with other diagnostic data, as an aid in neurosurgical planning.

    Device Description

    The RICOH MEG Analysis is an analysis software package used for processing and analyzing MEG data. It displays digitized MEG signals, EEG signals, topographic maps, and registered MRI images. Universal functions such as data retrieval, storage, management, querying and listing, and output are handled by the basic MEGvision Software of Eagle Technology, Inc. (K040051).

    The RICOH MEG Analysis is designed to aid clinicians in the assessment of patient anatomy, physiology, electrophysiology and pathology and to visualize source localization of MEG signals.

    AI/ML Overview

    The provided document describes the RICOH MEG, a magnetoencephalograph (MEG) device, and its acceptance criteria and the study performed to demonstrate conformance.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    Test Name | Acceptance Criteria (Purpose of Testing) | Reported Device Performance (Summary of Results)

    • [1] Matching Module design test
      Purpose: Verify that the Matching Module meets the design specifications by a black-box test, including the image correction and alignment techniques used for co-registration to MRI/3D digitizer data.
      Result: Satisfied the pass/fail criteria. Pass. (Planned test cases: 39; tested: 39; failures: 0; failures corrected: 0)
    • [2] Analysis System design test
      Purpose: Verify that the Analysis System meets the design specifications by a black-box test, and that the updated function does not affect the continued performance of the rest of the device.
      Result: Satisfied the pass/fail criteria. Pass. (Planned test cases: 20; tested: 20; failures: 0; failures corrected: 0)
    • [3] Analysis System Validation
      Purpose: Perform validation by a person who can substitute for the intended user (able to operate the product without training and knowledgeable enough to instruct users). Validate product validity, usability validity, and software validity based on the Intended Use.
      Result: Pass.

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document does not specify a "test set" in terms of patient data or number of cases for the performance evaluation. Instead, it refers to "test cases" for software verification and validation.
      • Matching Module design test: 39 test cases.
      • Analysis System design test: 20 test cases.
      • Analysis System Validation: No specific number of cases is provided; the document states that validation was performed by "a person who can substitute the intended user."
    • Data Provenance: Not specified for any biological or clinical data. The tests described are bench tests and software verification/validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document primarily describes engineering/software verification and validation tests rather than clinical performance evaluation requiring expert ground truth.
    • For the "Analysis System Validation," it states that the validation was performed by "a person who can substitute the intended user," characterized as someone who can "operate the product without training" and has "the knowledge to be able to instruct the user on the operation of the product." This individual acts as a proxy for an expert user, but their specific qualifications (e.g., years of experience, medical specialty) are not detailed.

    4. Adjudication method for the test set:

    • Not applicable. The tests described are focused on functional and design specification verification through pass/fail criteria for engineered systems, not on clinical interpretation requiring expert adjudication.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No, a multi-reader, multi-case comparative effectiveness study with or without AI assistance was not done. The device is a MEG system for measuring brain activity and aiding in localization, not an AI-assisted diagnostic tool in the typical sense of needing reader performance comparisons.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • The RICOH MEG is a device that records and processes MEG signals. The "RICOH MEG Analysis" is an analysis software package used for processing and analyzing MEG data. It displays signals and visualizes source localization. The document states that signals "may then be interpreted by trained physicians." This indicates that human interpretation is an integral part of its intended use. Therefore, a standalone algorithm-only performance study without human-in-the-loop is not described as relevant or performed given the nature of the device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the described tests, the "ground truth" is primarily based on design specifications and functional requirements for the software modules. For example, verifying that the Matching Module performs image correction and alignment according to its design.
    • For "Analysis System Validation," the ground truth for "product validity, usability validity, and software validity" is established by a qualified individual (a "substitute for the intended user") assessing whether the system meets its intended purpose. No external clinical "ground truth" (e.g., pathology, outcomes data) is referenced for these specific performance tests.

    8. The sample size for the training set:

    • Not applicable. The document describes a medical device and its associated software for MEG data analysis. It does not mention any machine learning or AI components that would require a "training set" in the context of developing a diagnostic algorithm.

    9. How the ground truth for the training set was established:

    • Not applicable, as no training set for an AI/ML algorithm is mentioned.

    K Number
    K201910
    Device Name
    EZTrack
    Manufacturer
    Date Cleared
    2020-12-22

    (166 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    EZTrack is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects with focal or multifocal epilepsy at least 3 years of age for the visualization of human brain from analysis of electroencephalographic (EEG) signals produced by electrically active tissue of the brain. EZTrack calculates and displays the Fragility Index, a quantitative index based on an analysis of spatiotemporal EEG patterns that is intended for interpretation by trained physicians to aid in the evaluation of patients with focal or multifocal epilepsy.

    The device does not provide any diagnostic conclusion about the patient's condition to the user and should be interpreted along with other clinical data, including the original EEG, medical imaging, and other standard neurological and neuropsychological assessments.

    Device Description

    EZTrack is a web-based software-only device that allows visualization of human brain function based on the analysis of electroencephalographic (EEG) signals. The EZTrack algorithm produces a fragility score for each EEG recording node. The EZTrack fragility values are shown to correlate with regions that clinicians have annotated as seizure onset zones (SOZ) prior to resective surgery, and may be used in conjunction with other clinical data such as EEG, medical imaging, neuropsychological testing, and other neurologic assessments in order to aid in the evaluation of patients with focal or multifocal epilepsy. The device does not provide any diagnostic conclusion about the patient's condition. EZTrack displays the fragility of each EEG channel in a heatmap to aid in interpretation.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the EZTrack device:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state acceptance criteria in terms of specific performance thresholds (e.g., sensitivity, specificity, AUC). Instead, the performance is described in terms of a statistically significant correlation and an effect size.

    Acceptance criteria (implicit) and reported device performance:

    • Criterion: Fragility data correlates with the clinically annotated Seizure Onset Zone (SOZ) and differentiates treatment success from failure.
      Performance: EZTrack demonstrated a statistically significant difference (p-value = 0.02) between the successful and failed Confidence Statistic distributions. An average effect size difference of 0.627 was observed between the two groups, meaning fragility had a 0.627 higher standardized confidence in the clinically annotated SOZ in successful outcomes than in failed outcomes.
    • Criterion: Software meets verification and validation requirements.
      Performance: Software verification and validation testing were conducted and documentation provided as recommended by FDA guidance. The software was considered a Moderate Level of Concern.
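    The standardized effect size of 0.627 reported above is consistent with a Cohen's-d-style comparison of two groups. The submission does not specify how EZTrack's statistic was computed, so the following is only an illustrative sketch of the common pooled-standard-deviation form, with hypothetical confidence values:

```python
import math

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two samples.

    Illustrative only: this is the common pooled-SD form, not
    necessarily the computation used in the EZTrack submission.
    """
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical confidence statistics for successful vs. failed outcomes:
success = [0.82, 0.75, 0.91, 0.68, 0.88]
failure = [0.55, 0.61, 0.47, 0.70, 0.52]
print(round(cohens_d(success, failure), 3))
```

    A positive d indicates higher standardized confidence in the success group, which is the direction of the reported 0.627 result.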

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 91 patients (comprising 462 seizures).
      • 44 patients had successful outcomes (seizure-free).
      • 47 patients had failed outcomes (seizure recurrence).
    • Data Provenance: Retrospective study. The country of origin is not explicitly stated in the provided text.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated, but it mentions "clinicians" and "Consensus agreement of the spatial distribution of visual EEG signatures together with pre-implantation data were used to construct the clinically annotated SOZ." This implies multiple clinicians were involved in a consensus process.
    • Qualifications of Experts: The text refers to "clinicians" attempting to identify visual EEG signatures to isolate the SOZ during invasive monitoring. It also mentions "trained physicians" who would interpret the Fragility Index. Without further detail, it's difficult to specify exact qualifications (e.g., "Radiologist with 10 years of experience"), but they are clearly medical professionals with expertise in EEG and epilepsy.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing the clinically annotated SOZ appears to be based on clinician consensus using "visual EEG signatures (e.g. HFOs, spikes, or burst activity) together with pre-implantation data." The text does not specify a numerical adjudication method like "2+1" or "3+1."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided text. The study focused on the EZTrack algorithm's correlation with SOZ and outcome, not on how human readers perform with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was conducted. The clinical study described is a retrospective study that demonstrates the EZTrack algorithm's output (fragility data) correlates with the clinically annotated Seizure Onset Zone (SOZ) and differentiates between successful and failed patient outcomes. The text explicitly states, "EZTrack demonstrated a statistically significant difference (p-value=0.02) between the successful and failed Confidence Statistic distributions, and an average effect size difference between the two groups of 0.627." This is a direct measurement of the algorithm's performance without human intervention in the analysis.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus / clinical outcome data.

    • Clinically annotated SOZ: Established by "clinicians" via "consensus agreement of the spatial distribution of visual EEG signatures together with pre-implantation data." This implies a form of expert consensus derived from clinical evaluation.
    • Patient Outcome: Categorized as "seizure free (success), or having seizure recurrence (failure) at their 6-12 months post-op evaluations." This is objective outcomes data used to assess the clinical relevance of the SOZ identification.

    8. Sample Size for the Training Set

    The document does not provide information regarding the sample size used for the training set. The clinical study described is a retrospective study used for performance evaluation, not necessarily for training the algorithm.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for any potential training set was established. The clinical study described focuses on evaluating the device's performance against established clinical SOZ and patient outcomes.


    K Number
    K172858
    Device Name
    PreOp
    Manufacturer
    Date Cleared
    2018-01-08

    (110 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    PreOp is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects at least 3 years of age for the visualization of human brain function by fusing a variety of EEG information with rendered images of an individualized head model and an individualized MRI image.

    Device Description

    PreOp is medical device software that combines EEG data and MR images to visualize recorded EEG activity in 3D in the brain. PreOp can be subdivided in 3 main modules: 3D Electrical Source Imaging (i.e. 3D ESI), Report generation and Viewer generation. The device's input is the MRI and EEG data that are uploaded by the user to the PreOp cloud environment. The output of the device is a report containing the results of the visualization and the ability to evaluate the results in 3D using the 3D viewer. The user can access the output through the PreOp cloud environment.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the PreOp device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria in a dedicated section. Instead, the acceptance is based on demonstrating "substantial equivalence" to a predicate device, particularly in source localization performance.

    Acceptance criteria (implicit) and reported device performance (PreOp vs. predicate):

    • Source Localization Equivalence (Study 1): the PreOp algorithms should be substantially equivalent to the predicate device algorithm in source localization accuracy for epileptic spikes.
      Performance: "The results demonstrated that the proposed PreOp algorithms were substantially equivalent to the predicate device algorithm," based on concordance ratings by three experienced epileptologists at a sublobular level. The comparison was between sLORETA with FDM using an individualized anatomical MRI (PreOp) and sLORETA with FDM using an idealized anatomical MRI (predicate).
    • Source Localization Consistency (Study 2): spike source localization performance should be consistent between HD-EEG and LD-EEG recordings within PreOp.
      Performance: In 13 of 16 epileptic spikes across 8 patients, both configurations (HD-EEG and LD-EEG within PreOp) produced identical source locations; in the remaining 3 spikes, the localizations were not 100% equivalent but were "very close to each other."
    • Clinical Usability: the device should meet usability requirements.
      Performance: "Usability validation is part of the Clinical Performance data and PreOp was tested and meets the requirements of following standard: AAMI/ANSI/IEC 62366:2007, Medical devices - Application of usability engineering to medical devices."
    • Software Verification and Validation: the software should be fit for clinical use and meet relevant standards.
      Performance: "Validation testing involved algorithm testing which validated the accuracy of PreOp. The product was deemed fit for clinical use." PreOp was designed and developed as recommended by FDA's Guidance, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Device." According to the AAMI/ANSI/IEC 62304 standard, the PreOp safety classification has been set to Class B.

    2. Sample Size Used for the Test Set and Data Provenance

    • Study 1 (Source Localization Equivalence):
      • Sample Size: 18 epilepsy subjects.
      • Data Provenance: Retrospective data analysis. Country of origin is not explicitly stated, but given Epilog is based in Belgium, it's likely European or a mix.
    • Study 2 (Spike Source Localization Consistency):
      • Sample Size: Data from 8 patients, evaluating 16 epileptic spikes (13 identical + 3 very close).
      • Data Provenance: Not explicitly stated as retrospective or prospective, but likely retrospective. Country of origin is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Study 1:
      • Number of Experts: Three experienced epileptologists.
      • Qualifications: "Experienced epileptologists." Specific years of experience are not provided.
    • Study 2: Not applicable in the same way, as this study was comparing internal algorithm performance rather than having experts establish a new ground truth based on device output.

    4. Adjudication Method for the Test Set

    • Study 1: The document states that the three experienced epileptologists "were asked to rate whether each of the algorithm solutions (sLORETA with the finite difference model [FDM] using an idealized or individualized anatomical MRI) were concordant on a sublobular level." The specific method for consolidating these ratings (e.g., 2 out of 3 agreement) is not detailed, but it implies a consensus-based approach for determining "substantial equivalence."

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No, a traditional MRMC comparative effectiveness study evaluating human reader improvement with AI assistance was not performed. The study focused on comparing the performance of the device's algorithms against a predicate device's algorithms (Study 1) or comparing different internal algorithm configurations (Study 2), with human experts acting as adjudicators for the output in Study 1.

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, the studies primarily assess the standalone performance of the algorithms.

    • Study 1 directly compares "the source localization accuracy of the PreOp software algorithms" to "the predicate algorithm." Human experts then rated the concordance of these algorithm outputs with post-operative reports. This is a standalone comparison.
    • Study 2 compares "the performance of spike source localization using HD-EEG recordings... and Low Density LD-EEG recordings" within the PreOp algorithms themselves. This is also a standalone assessment.

    7. The Type of Ground Truth Used

    • Study 1 (Source Localization Equivalence):
      • Ground Truth: Clinical outcomes data combined with expert consensus. Specifically, the resected zone (operative data) for subjects who were Engel I postoperatively (favorable outcome) was used as the reference. The "summaries of the postoperative reports" were provided to the epileptologists, who then rated the concordance of the algorithm solutions with this clinical ground truth.
    • Study 2 (Spike Source Localization Consistency):
      • Ground Truth: The "ground truth" here is internal consistency of the PreOp algorithm itself under different input conditions (HD-EEG vs. LD-EEG). There isn't an external ground truth like pathology for this specific study; instead, it verifies the consistency of the algorithm's output.

    8. The Sample Size for the Training Set

    The document does not provide information on the sample size for the training set used to develop the PreOp algorithms. The studies described are validation studies (test sets) for the already developed software.

    9. How the Ground Truth for the Training Set Was Established

    Since the document does not mention the training set size, it also does not detail how the ground truth for any training set was established.


    K Number
    K092844
    Device Name
    GEOSOURCE
    Date Cleared
    2010-12-21

    (462 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    GeoSource is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects at least 3 years of age for the visualization of human brain function by fusing a variety of EEG information with rendered images of an idealized head model and an idealized MRI image.

    Device Description

    GeoSource is an add-on software module to EGI's Net Station software and can only be used on EEG data generated by EGI hardware. It runs on a personal computer. It is used to approximate source localization of EEG signals and visualize those estimated locations. It uses the linear inverse methods LORETA, LAURA, and sLORETA and the sphere and Finite Difference forward head models.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the GeoSource device, based on the provided 510(k) notification:

    Acceptance Criteria and Study to Prove Device Meets Acceptance Criteria

    1. A table of acceptance criteria and the reported device performance

    The primary acceptance criterion for GeoSource was demonstrating substantial equivalence to predicate devices in terms of source localization accuracy. The study aimed to show that the GeoSource algorithms (LORETA, sLORETA, LAURA with FDM) provided similar source localization results to the predicate algorithm (LORETA with spherical head model).

    Since this was a substantial equivalence submission, specific quantitative performance metrics and acceptance thresholds (e.g., sensitivity > X%, accuracy > Y%) are not explicitly stated in the provided text. The "reported device performance" is essentially the conclusion that the GeoSource algorithms were found to be substantially equivalent to the predicate device.

    Acceptance criteria and reported device performance (as concluded by the study):

    • Acceptance criterion: substantial equivalence in source localization accuracy to the predicate device algorithm (LORETA with a spherical head model).
      Performance: the proposed GeoSource algorithms (LORETA, sLORETA, and LAURA with the GeoSource finite difference model [FDM]) were demonstrated to be substantially equivalent to the predicate device algorithm (LORETA using a spherical head model).

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: 20 epilepsy subjects.
    • Data Provenance: Retrospective data analysis. Country of origin is not explicitly stated but implied to be from the University of Washington's Regional Epilepsy Center (USA).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Three.
    • Qualifications of Experts: Experienced epileptologists from the University of Washington's Regional Epilepsy Center. Specific years of experience are not mentioned.

    4. Adjudication method for the test set

    The adjudication method involved each of the three experienced epileptologists reviewing the source localization results for each algorithm and summaries of the postoperative reports. They were then asked to rate whether each of the four algorithm solutions (GeoSource LORETA, sLORETA, LAURA with FDM, and predicate LORETA with spherical head model) were located within the resected brain regions. The text does not specify a specific consensus or majority voting method (e.g., 2+1, 3+1). It states "The results demonstrated...", implying a collective finding rather than individual expert opinions being the final ground truth.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • Was an MRMC comparative effectiveness study done? No, not in the typical sense of evaluating human reader performance with and without AI assistance. This study focused on the performance of the algorithms themselves by having experts evaluate the algorithm's outputs in relation to the ground truth. It did not directly measure how much human readers improved by using the GeoSource software.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Was a standalone study done? Yes, in essence. The study's primary goal was to evaluate the GeoSource algorithms' source localization accuracy in a standalone manner, with expert epileptologists providing the "ground truth" assessment of whether the algorithm's output correlated with the resected region. The experts were evaluating the algorithms' predictions, not using the algorithm as assistance for their own interpretation.

    7. The type of ground truth used

    • Type of Ground Truth: A combination of clinical and outcome data:
      • Clinical Neurophysiologist review: Identification and averaging of spikes in EEG data.
      • Operative data: Descriptions of the resected zone from surgery.
      • Outcomes Data: Postoperative Engel 1 or 2 determination (indicating good seizure control after resection).
      • Expert Consensus/Evaluation: The decision of three experienced epileptologists on whether the source localization algorithm's output was within the resected brain regions, using all available clinical information (postoperative reports, resected zone descriptions) as context.

    8. The sample size for the training set

    The document does not provide information about a separate "training set" for the GeoSource algorithms. The mentioned clinical study was a retrospective data analysis of subjects who had previously undergone resection surgery, and this data was used to test the algorithms' performance, not to train them. Source localization algorithms like LORETA, sLORETA, and LAURA are typically model-based and do not require a separate "training set" in the same way machine learning models do.

    9. How the ground truth for the training set was established

    As no specific training set is identified for the GeoSource algorithms, the question of how its ground truth was established is not applicable in the context of this 510(k) submission. These linear inverse methods are derived from mathematical and biophysical principles rather than being "trained" on a dataset with a predefined ground truth.
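    As a hedged illustration of the linear inverse family that LORETA, sLORETA, and LAURA belong to (not EGI's implementation, and with an entirely synthetic lead field), a Tikhonov-regularized minimum-norm estimate maps measured scalp signals back to candidate sources through a regularized pseudo-inverse of the lead-field matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lead field: 32 scalp sensors x 500 candidate source locations.
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))

# Simulate a single active source and its (slightly noisy) sensor measurement.
j_true = np.zeros(n_sources)
j_true[123] = 1.0
m = L @ j_true + 0.01 * rng.standard_normal(n_sensors)

# Tikhonov-regularized minimum-norm estimate:
#   j_hat = L^T (L L^T + lambda * I)^(-1) m
lam = 1e-2
j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), m)

# The estimate is spatially smeared (the inverse problem is underdetermined),
# but the true source location should carry the largest amplitude.
print(int(np.argmax(np.abs(j_hat))))
```

    This is the generic minimum-norm form; the named methods differ in their weighting and standardization schemes, which is why they localize rather than merely fit the sensor data.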


    K Number
    K100126
    Date Cleared
    2010-12-03

    (318 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    The GridView software is indicated for use by qualified and trained medical personnel for the visualization and reporting of the electrical activity of the brain in adult patients with intracerebral electrodes. This reporting is obtained by user annotation of images of the patient's brain (MRI) on which images of the electrodes are superimposed.

    Device Description

    The GridView software is indicated for use by qualified and trained medical personnel for the visualization and reporting of the electrical activity of the brain in adult patients with intracerebral electrodes. This reporting is obtained by user annotation of images of the patient's brain (MRI) on which images of the electrodes are superimposed.

    AI/ML Overview

    The provided text does not contain the detailed information necessary to describe acceptance criteria or a study demonstrating that the device meets them. The document is an FDA 510(k) clearance letter for the "Stellate Gridview" (later referred to as "GridView Software"), and it primarily focuses on the substantial equivalence determination.

    Specifically, the text doesn't include:

    • A table of acceptance criteria and reported device performance.
    • Details about sample sizes for test sets, data provenance, or training sets.
    • Information about the number or qualifications of experts, or how ground truth was established for either test or training sets.
    • Adjudication methods.
    • Results of any Multi-Reader Multi-Case (MRMC) comparative effectiveness study, or details about standalone algorithm performance studies.
    • The type of ground truth used (e.g., pathology, outcomes data).

    The document states the indications for use of the GridView software, which is "for the visualization and reporting of the electrical activity of the brain in adult patients with intracerebral electrodes. This reporting is obtained by user annotation of images of the patient's brain (MRI) on which images of the electrodes are superimposed." However, it does not provide study details demonstrating how well the device performs these functions against specific criteria.


    K Number
    K091393
    Manufacturer
    Date Cleared
    2010-10-26

    (533 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    The Elekta Neuromag® with MaxFilter 2.1 is intended for use as a magnetoencephalographic (MEG) device which non-invasively detects and displays biomagnetic signals produced by electrically active nerve tissue in the brain. When interpreted by a trained clinician, the data enhances the diagnostic capability by providing useful information about the location relative to brain anatomy of active nerve tissue responsible for critical brain functions.

    Elekta Neuromag® with MaxFilter™ non-invasively measures the magnetoencephalographic (MEG) signals (and, optionally, electroencephalographic (EEG) signals) produced by electrically active tissue of the brain. These signals are recorded by a computerized data acquisition system, displayed and may then be interpreted by trained physicians to help localize these active areas. The locations may then be correlated with anatomical information of the brain. MEG is routinely used to identify the locations of visual, auditory, somatosensory, and motor cortex in the brain when used in conjunction with evoked response averaging devices. MEG is also used to non-invasively locate regions of epileptic activity within the brain. The localization information provided by MEG may be used, in conjunction with other diagnostic data, in neurosurgical planning.

    Device Description

    This premarket notification represents modifications made to our current product. The present device differs from the predicate device, K050035, Elekta Neuromag® with Maxwell Filter only in the following areas of functionality: Spatiotemporal interference elimination, Graphical user interface; and Offline averager. The modification also adds compatibility with internal active shielding, an interference removal method described in K081430. MaxFilter™ is intended to be used with Elekta Neuromag® MEG products in reducing measurement artifacts.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Elekta Neuromag® with MaxFilter 2.1, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/metric, acceptance criteria (predicate), reported MaxFilter™ 2.1 performance, and supporting evidence:

    • Spatiotemporal interference elimination: the predicate (K050035) had SSS only; MaxFilter 2.1 adds tSSS technology. Supporting evidence: performance testing and a clinical study showing substantially equivalent measurement accuracy with non-moving heads.
    • Source localization accuracy (phantom): not explicitly stated for the predicate, but implied by substantial equivalence to K050035; MaxFilter 2.1 achieved within 2 mm accuracy in phantom testing.
    • Graphical user interface (GUI): the predicate had a command-line UI; MaxFilter 2.1 adds a GUI. A user-friendliness enhancement that performs the same software modules as the command line, with no impact on clinical utility.
    • Offline averager function: the predicate had an online averager only; MaxFilter 2.1 adds an offline averager. A user-friendliness enhancement with the same functionality as the online version and no impact on clinical utility.
    • Support for internal active shielding (K081430): not present in the predicate; added as a functional modification.
    • Automated detection of bad channels: present in both, demonstrating functional equivalence to the predicate.

    Note: The 510(k) summary focuses on demonstrating substantial equivalence to the predicate device, K050035. The "acceptance criteria" for the new features are primarily that they provide enhanced functionality without negatively impacting the existing performance, and for the core function of MEG measurement, the new device maintains accuracy substantially equivalent to the predicate.
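    Phantom localization accuracy of the kind cited above ("within 2 mm") is conventionally scored as the Euclidean distance between the known phantom dipole position and the estimated source position. A minimal sketch of that check follows; the function name and coordinates are illustrative, not taken from the submission:

```python
import math

def localization_error_mm(true_pos, estimated_pos):
    """Euclidean distance (mm) between a known phantom dipole and an estimate."""
    return math.dist(true_pos, estimated_pos)

# Hypothetical dipole coordinates in a head-centered frame (mm):
true_dipole = (32.0, -14.0, 55.0)
estimated = (33.1, -13.2, 54.4)

error = localization_error_mm(true_dipole, estimated)
print(error <= 2.0)  # acceptance check against a 2 mm criterion
```

    Because the phantom's source position is precisely known, this comparison gives an objective ground truth that clinical recordings cannot provide.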

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The document mentions "clinical study" but does not detail the number of subjects or cases.
    • Data Provenance: Not explicitly stated. Given that Elekta Oy is based in Helsinki, Finland, and the 510(k) is for the US FDA, the clinical study could involve data from various countries. The document does not specify if it was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • This information is not provided in the summary. The "clinical study" is mentioned for measurement accuracy with non-moving heads, but details about ground truth establishment by experts for localization or diagnostic capabilities are absent. The intended use states data "may then be interpreted by trained physicians," implying expert interpretation, but doesn't specify how ground truth for the study was established.

    4. Adjudication Method for the Test Set

    • This information is not provided in the summary.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • A MRMC study comparing human readers with and without AI assistance is not mentioned in the summary. The study focuses on the device's technical performance and substantial equivalence.

    6. Standalone (Algorithm Only) Performance Study

    • A standalone performance study was implicitly done for the technical accuracy of the MaxFilter 2.1 algorithm. The document states "Performance testing consisted of software validation, phantom testing and clinical testing." The "within 2 mm accuracy of the source in a phantom" refers to the algorithm's performance in a controlled environment. However, this is not a "standalone performance" in terms of diagnostic effectiveness without human interpretation, as the device is intended for use by trained clinicians.

    7. Type of Ground Truth Used

    • For phantom testing: The ground truth would be the known, precisely controlled source location within the phantom, allowing for direct comparison of the device's localization output to this known truth.
    • For clinical testing: The document states "provided substantially equivalent measurement accuracy in a clinical study with non-moving heads." This suggests that the ground truth for "measurement accuracy" in a clinical setting likely referred to established and accepted methods for assessing MEG signal quality and source localization, potentially compared against the predicate device's output or other established neurophysiological markers. However, specific details of how this clinical ground truth was established are not provided. It is not explicitly stated if pathology, expert consensus on clinical finding, or outcomes data were used as ground truth for clinical diagnostic performance.

    8. Sample Size for the Training Set

    • The document does not explicitly mention a training set or its sample size. MaxFilter 2.1 is described as a modification to an existing product (K050035) with new algorithms (tSSS). The development of these algorithms would involve theoretical work and potentially internal data sets for optimization, but these are not referred to as a "training set" in a machine learning context within this 510(k) summary.

    9. How the Ground Truth for the Training Set Was Established

    • As a training set is not explicitly referred to, the method for establishing its ground truth is not provided. The development of the tSSS algorithm would rely on established physics and signal processing principles for MEG data, rather than a "ground truth" derived from patient data in the way a machine learning model would.

    K Number
    K081430
    Manufacturer
    Date Cleared
    2008-07-28

    (68 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    OLX

    Intended Use

    Elekta Neuromag® with active shielding non-invasively measures the magnetoencephalographic (MEG) signals (and, optionally, electroencephalographic (EEG) signals) produced by electrically active tissue of the brain. These signals are recorded by a computerized data acquisition system, displayed and may then be interpreted by trained physicians to help localize these active areas. The locations may then be correlated with anatomical information of the brain. MEG is routinely used to identify the locations of visual, auditory, somatosensory, and motor cortex in the brain when used in conjunction with evoked response averaging devices. MEG is also used to non-invasively locate regions of epileptic activity within the brain. The localization information provided by MEG may be used, in conjunction with other diagnostic data, in neurosurgical planning.

    Device Description

    This premarket notification represents modifications made to our current product. Internal active shielding has been added to enhance the signal-to-noise ratio. The internal active shielding system is a magnetic shielding technique intended to be an integrated, optional part of the Elekta Neuromag® magnetoencephalograph. The internal active shielding system increases the dynamic range of the magnetometers with respect to external magnetic interferences by internal feedback compensation, which uses the sensor array of the biomagnetometer as a zero indicator and compensation coils placed inside the magnetically shielded room to deliver a cancellation field that attenuates the interference.
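    The feedback compensation described above can be pictured as a negative-feedback loop: the sensor reading (interference minus the coil-generated cancellation field) is driven toward zero by feeding the residual back onto the compensation coils. A minimal single-channel sketch, with all gains and values hypothetical (the real system is multichannel analog hardware):

    ```python
    def simulate_active_shielding(interference, gain=0.5, steps=200):
        """Residual field after a simple integral feedback loop settles.

        The sensor acts as a zero indicator; each step, a fraction of the
        residual is added to the cancellation field produced by the coils.
        """
        cancellation = 0.0
        residual = interference
        for _ in range(steps):
            residual = interference - cancellation  # what the sensor still sees
            cancellation += gain * residual         # coils update the cancellation field
        return residual

    print(simulate_active_shielding(100.0))  # residual decays toward zero as the loop settles
    ```

    The point of the sketch is only the mechanism: a stable feedback loop cancels quasi-static external interference, which is what increases the usable dynamic range of the magnetometers.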

    AI/ML Overview

    The provided 510(k) summary for the "Elekta Neuromag® with internal active shielding" does not contain the detailed information necessary to complete most of the requested fields regarding acceptance criteria and the comprehensive study design.

    This submission focuses on demonstrating substantial equivalence to predicate devices (K041264 and K050035) based on modifications to an existing product (the addition of internal active shielding to enhance the signal-to-noise ratio). The "performance" section briefly states that "The results of laboratory, bench testing, clinical testing, and software validation activities show that the internal active shielding modification poses no new issues of safety or effectiveness, and is therefore substantially equivalent to our predicate devices." However, it does not elaborate on specific acceptance criteria, study methodologies, sample sizes, or ground truth establishment.

    Here's a breakdown of what can be extracted and what is missing:


    1. Table of acceptance criteria and the reported device performance

    Acceptance Criteria (Stated or Implied) | Reported Device Performance
    Safety | No new issues of safety are identified due to the modification.
    Effectiveness | No new issues of effectiveness are identified due to the modification.
    Signal-to-Noise Ratio Enhancement | "Internal active shielding has been added to enhance the signal to noise ratio." (This is the primary technical modification and its intended benefit, implying an improvement over the non-shielded version, though no quantifiable metrics or targets are provided in this summary.)
    Substantial Equivalence | The device is deemed substantially equivalent to predicate devices (K041264, K050035).
    Dynamic Range Increase | "The internal active shielding system increases the dynamic range of the magnetometers." (Implied positive performance, but no specific values or targets given.)
    Attenuating Interference | The system delivers a cancellation field for "attenuating the interference." (Implied effectiveness in reducing external magnetic interference.)
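    The summary claims SNR enhancement without quantifying it. If metrics had been reported, the natural form would be an SNR comparison in decibels before and after shielding. A hedged sketch of that computation on synthetic data (all signals and the attenuation factor are illustrative, not from the submission):

    ```python
    import numpy as np

    def snr_db(signal, noise):
        """Signal-to-noise ratio in decibels from RMS amplitudes."""
        rms = lambda x: np.sqrt(np.mean(np.square(x)))
        return 20.0 * np.log10(rms(signal) / rms(noise))

    # Synthetic example: if shielding attenuated interference by a factor of 10,
    # the SNR would rise by 20 dB. All numbers here are illustrative only.
    rng = np.random.default_rng(0)
    brain = np.sin(np.linspace(0, 20 * np.pi, 1000))  # stand-in for a MEG signal
    interference = rng.normal(scale=1.0, size=1000)   # unshielded external noise
    print(snr_db(brain, interference))                # baseline SNR
    print(snr_db(brain, interference * 0.1))          # with hypothetical 10x attenuation
    ```

    A 10x amplitude attenuation corresponds to a 20 dB SNR gain; the 510(k) summary provides no such figure, which is exactly the gap the table notes.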

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not specified. The document mentions "clinical testing" but provides no details on participant numbers or types.
    • Data Provenance: Not specified. No information about country of origin or whether the studies were retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Number of Experts & Qualifications: Not specified. The document states that MEG signals "may then be interpreted by trained physicians to help localize these active areas," but does not detail how ground truth was established for any specific test sets used to validate the device's performance.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Adjudication Method: Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance

    • MRMC Study: Not applicable. This device is an electroencephalograph/magnetoencephalograph, not an AI-powered diagnostic tool. The submission does not discuss AI assistance or human reader improvement with AI.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Not applicable, as this is a medical device for signal acquisition and display, not an AI algorithm. Per the intended use, the device's output "may then be interpreted by trained physicians."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Not specified for any performance testing. The intended use describes that MEG helps "localize active areas," which "may then be correlated with anatomical information of the brain," and identifies "regions of epileptic activity" as well as "visual, auditory, somatosensory, and motor cortex." This suggests that ground truth for localization would typically involve clinical correlation (e.g., with fMRI, intracranial EEG, surgical outcomes, or known functional areas), but the submission does not detail how this was applied in its "clinical testing."

    8. The sample size for the training set

    • Sample Size for Training Set: Not applicable. As this is not an AI/ML device, there would not be a "training set" in the conventional sense for algorithm development.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set: Not applicable. Same reason as above.
