Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K241513
    Device Name
    Sourcerer
    Date Cleared
    2024-09-27

    (121 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Reference Devices:

    K092844

    Intended Use

    The software is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects at least 16 years of age for the visualization of human brain function by fusing a variety of EEG information with rendered images of an idealized head model and an idealized MRI image.

    Device Description

    Sourcerer is an EEG source localization software that uses EEG and MRI-derived information to estimate and visualize cortical projections of human brain activity. Sourcerer is designed in a client-server model wherein the server components integrate directly with FLOW, BEL's software. Inverse source projections are computed on the server using EEG and MRI data from FLOW via the Electro-magnetic Inverse Module (EMIM API). The inverse results are interactively visualized in the Chrome browser running on the client computer using the Electro-magnetic Functional Anatomy Viewer (EMFAV).

    AI/ML Overview

    Here's an analysis of the provided text to extract the acceptance criteria and study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Algorithmic Testing (HexaFEM) | |
    | Consistency with analytical solutions for the three-layer spherical model | HexaFEM solutions are consistent with the analytical solutions for the three-layer spherical model. |
    | Consistency with FDM solutions for a realistic head model using the same conductivity values | HexaFEM and FDM solutions are the same for one realistic head model using the same conductivity values. |
    | Algorithmic Testing (Inverse Model - EMIM Module) | |
    | LORETA: localization error distance similar to values reported by its creator | Average localization error is about 7 mm, similar to what is reported for LORETA by its creator. |
    | sLORETA: exact source estimation results for simulated signal sources, replicating the creator's reported results | Source estimation results are exact for the simulated signal sources, fully replicating the simulated results reported by sLORETA's creator. |
    | MSP: zero localization error for simulated signal sources | Shows zero localization error (100% accuracy), as expected. |
    | Clinical Performance Testing | |
    | Performance of Sourcerer equivalent to GeoSource (predicate device) | Performance of Sourcerer was shown to be equivalent to GeoSource (comparison based on the Euclidean distance between the maximal-amplitude location and the resected boundary in epileptic patients). |
    | Software Verification and Validation Testing | |
    | Accuracy of Sourcerer validated through algorithm testing | Algorithm testing validated the accuracy of Sourcerer; the product was deemed fit for clinical use. |
    | Developed according to FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" | Sourcerer was designed and developed as recommended by the FDA guidance. |
    | Safety classification set to Class B according to the AAMI/ANSI/IEC 62304 standard | Sourcerer safety classification set to Class B. |
    | "Basic Documentation Level" applied | "Basic Documentation Level" applied to this device. |

    2. Sample size used for the test set and the data provenance

    The text explicitly mentions:

    • Clinical Performance Testing: "The clinical data used in the evaluation is obtained from epileptic patients during standard presurgical evaluation." The sample size for the clinical test set is not explicitly stated as a number, but rather as "each patient's pre-operative hdEEG recording." It's implied there were multiple patients, but the exact count is missing.
    • Data Provenance: The clinical data is retrospective ("obtained from epileptic patients during standard presurgical evaluation") and appears to be from a clinical setting, presumably within the country of origin of the device manufacturer (USA, as indicated by the FDA submission).
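The clinical comparison for Sourcerer rests on the Euclidean distance between the maximal-amplitude location and the resected boundary. The document does not say how the boundary is represented; assuming it is a point cloud segmented from MRI, the distance can be sketched as follows (all coordinates are illustrative):

```python
import math

def distance_to_boundary(peak, boundary_points):
    """Minimum Euclidean distance (mm) from the ESI maximal-amplitude
    location to any point on the segmented resection boundary."""
    return min(math.dist(peak, p) for p in boundary_points)

# Toy resection boundary: three vertices of a segmented surface.
boundary = [(30.0, 10.0, 20.0), (32.0, 12.0, 20.0), (35.0, 10.0, 22.0)]
peak = (30.0, 10.0, 25.0)
d = distance_to_boundary(peak, boundary)  # 5.0
```

Comparing the distributions of this distance for Sourcerer and GeoSource across patients is one plausible reading of the equivalence claim; the submission text does not detail the statistical comparison itself.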

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Clinical Performance Testing Ground Truth: The ground truth for the clinical test set was established by:
      • Resected region (from MRI): This implies surgical and pathological confirmation of the epileptic zone, which would typically involve neurosurgeons and neuropathologists.
      • Clinical outcome: This refers to the patient's post-surgical seizure control, indicating the success of the resection.

    No specific number of experts or their qualifications (e.g., number of years of experience) are provided in the document.

    4. Adjudication method for the test set

    The document does not explicitly describe an adjudication method for establishing ground truth, such as 2+1 or 3+1. The ground truth for the clinical performance testing relied on the "resected region (from MRI)" and "clinical outcome," which are objective clinical findings rather than subjective expert interpretations requiring adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    There is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study. The clinical performance testing compared the device's output (Electrical Source Imaging - ESI) to the predicate device (GeoSource) and the ground truth (resected region, clinical outcome), not improved human reader performance with AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, extensive standalone (algorithm only) performance testing was done:

    • Algorithmic Testing of HexaFEM: Compared HexaFEM solutions to analytical solutions and FDM solutions.
    • Algorithmic Testing of Inverse Model (EMIM Module): Tested LORETA, sLORETA, and MSP solvers using "test files with known signal sources." This involved comparing the algorithm's estimated source generator to the known (simulated) source.

    7. The type of ground truth used

    • Algorithmic Testing (HexaFEM):
      • Mathematical/Analytical Ground Truth: Comparison with "analytical solutions for the three-layer spherical model."
      • Comparative Ground Truth: Comparison with "FDM solutions for one realistic head model."
    • Algorithmic Testing (Inverse Model - EMIM Module):
      • Simulated/Known Ground Truth: "known signal sources" from forward projections were used as ground truth for "recovering the source generator (known)."
    • Clinical Performance Testing:
      • Outcomes Data/Pathology/Clinical Consensus: "resected region (from MRI)" and "clinical outcome" were used to establish the ground truth for epileptic focus localization.

    8. The sample size for the training set

    The document does not specify the sample size for the training set. It focuses on verification and validation, but not the training of the underlying algorithms.

    9. How the ground truth for the training set was established

    Since the document does not specify the training set, it does not describe how its ground truth was established. The ground truth description is primarily for the test/validation sets.


    K Number
    K201910
    Device Name
    EZTrack
    Manufacturer
    Date Cleared
    2020-12-22

    (166 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Reference Devices:

    K092844

    Intended Use

    EZTrack is intended for use by a trained/qualified EEG technologist or physician on both adult and pediatric subjects with focal or multifocal epilepsy at least 3 years of age for the visualization of human brain function from analysis of electroencephalographic (EEG) signals produced by electrically active tissue of the brain. EZTrack calculates and displays the Fragility Index, a quantitative index based on an analysis of spatiotemporal EEG patterns that is intended for interpretation by trained physicians to aid in the evaluation of patients with focal or multifocal epilepsy.

    The device does not provide any diagnostic conclusion about the patient's condition to the user and should be interpreted along with other clinical data, including the original EEG, medical imaging, and other standard neurological and neuropsychological assessments.

    Device Description

    EZTrack is a web-based software-only device that allows visualization of human brain function based on the analysis of electroencephalographic (EEG) signals. The EZTrack algorithm produces a fragility score for each EEG recording node. The EZTrack fragility values are shown to correlate with regions that clinicians have annotated as seizure onset zones (SOZ) prior to resective surgery, and may be used in conjunction with other clinical data such as EEG, medical imaging, neuropsychological testing, and other neurologic assessments in order to aid in the evaluation of patients with focal or multifocal epilepsy. The device does not provide any diagnostic conclusion about the patient's condition. EZTrack displays the fragility of each EEG channel in a heatmap to aid in interpretation.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the EZTrack device:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state acceptance criteria in terms of specific performance thresholds (e.g., sensitivity, specificity, AUC). Instead, the performance is described in terms of a statistically significant correlation and an effect size.

    | Acceptance Criterion (Implicit) | Reported Device Performance |
    | --- | --- |
    | Fragility data correlates with the clinically annotated Seizure Onset Zone (SOZ) and differentiates treatment success from failure. | EZTrack demonstrated a statistically significant difference (p-value = 0.02) between the successful and failed Confidence Statistic distributions. An average effect size difference of 0.627 was observed between the two groups, meaning fragility had a 0.627 higher standardized confidence in the clinically annotated SOZ in successful outcomes compared to failed outcomes. |
    | Software meets verification and validation requirements. | Software verification and validation testing were conducted and documentation provided as recommended by FDA guidance. The software was considered a Moderate Level of Concern. |
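The 0.627 figure is described as a standardized effect size between the successful and failed Confidence Statistic distributions. The text does not name the estimator; a common choice for a standardized mean difference is Cohen's d with a pooled standard deviation, sketched here on made-up data:

```python
import math

def cohens_d(a, b):
    """Standardized mean difference between two samples (pooled SD)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Illustrative confidence statistics for successful vs. failed outcomes.
success = [0.82, 0.74, 0.91, 0.68, 0.79]
failure = [0.55, 0.61, 0.48, 0.66, 0.58]
d = cohens_d(success, failure)  # positive: higher confidence in successes
```

A positive d of the reported magnitude would mean the fragility-based confidence sits roughly 0.6 pooled standard deviations higher in the seizure-free group, under this assumed estimator.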

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 91 patients (comprising 462 seizures).
      • 44 patients had successful outcomes (seizure-free).
      • 47 patients had failed outcomes (seizure recurrence).
    • Data Provenance: Retrospective study. The country of origin is not explicitly stated in the provided text.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated, but it mentions "clinicians" and "Consensus agreement of the spatial distribution of visual EEG signatures together with pre-implantation data were used to construct the clinically annotated SOZ." This implies multiple clinicians were involved in a consensus process.
    • Qualifications of Experts: The text refers to "clinicians" attempting to identify visual EEG signatures to isolate the SOZ during invasive monitoring. It also mentions "trained physicians" who would interpret the Fragility Index. Without further detail, it's difficult to specify exact qualifications (e.g., "Radiologist with 10 years of experience"), but they are clearly medical professionals with expertise in EEG and epilepsy.

    4. Adjudication Method for the Test Set

    The adjudication method for establishing the clinically annotated SOZ appears to be based on clinician consensus using "visual EEG signatures (e.g. HFOs, spikes, or burst activity) together with pre-implantation data." The text does not specify a numerical adjudication method like "2+1" or "3+1."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided text. The study focused on the EZTrack algorithm's correlation with SOZ and outcome, not on how human readers perform with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was conducted. The clinical study described is a retrospective study that demonstrates the EZTrack algorithm's output (fragility data) correlates with the clinically annotated Seizure Onset Zone (SOZ) and differentiates between successful and failed patient outcomes. The text explicitly states, "EZTrack demonstrated a statistically significant difference (p-value=0.02) between the successful and failed Confidence Statistic distributions, and an average effect size difference between the two groups of 0.627." This is a direct measurement of the algorithm's performance without human intervention in the analysis.
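The reported p-value of 0.02 is not tied to a named statistical test in the text. One assumption-light way to obtain such a p-value for two outcome groups is a permutation test on the difference of group means; this is a sketch of that general technique, not necessarily the submission's method:

```python
import random

def permutation_p_value(a, b, n_perm=10000, seed=0):
    """Two-sided permutation p-value for the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    # +1 correction keeps the estimate away from an impossible p of 0
    return (hits + 1) / (n_perm + 1)
```

Applied to the 44 successful and 47 failed patients' confidence statistics, a test of this family would yield a p-value comparable in role to the reported 0.02, without assuming normality of the distributions.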

    7. Type of Ground Truth Used

    The ground truth used was expert consensus / clinical outcome data.

    • Clinically annotated SOZ: Established by "clinicians" via "consensus agreement of the spatial distribution of visual EEG signatures together with pre-implantation data." This implies a form of expert consensus derived from clinical evaluation.
    • Patient Outcome: Categorized as "seizure free (success), or having seizure recurrence (failure) at their 6-12 months post-op evaluations." This is objective outcomes data used to assess the clinical relevance of the SOZ identification.

    8. Sample Size for the Training Set

    The document does not provide information regarding the sample size used for the training set. The clinical study described is a retrospective study used for performance evaluation, not necessarily for training the algorithm.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for any potential training set was established. The clinical study described focuses on evaluating the device's performance against established clinical SOZ and patient outcomes.

