
510(k) Data Aggregation

    K Number: K243681
    Date Cleared: 2025-07-23 (236 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K223502, K223532

    Intended Use

    Neuro Insight V1.0 is an image processing solution. It is intended to assist appropriately trained medical professionals in their analysis workflow on neurological MRI images.

    Neuro Insight V1.0 is composed of two subsets, including an image processing application package (NeuroPro) and an optional user interface (Neuro Synchronizer).

    NeuroPro is an image processing application package that computes maps, extracts and communicates metrics which are to be used in the analysis of multiphase or monophase neurological MR images.

    NeuroPro can be integrated and deployed through a technical integration environment, which is responsible for transferring, storing, converting the format of, and displaying DICOM imaging data.

    Neuro Synchronizer is an optional dedicated interface allowing the viewing, manipulation, and comparison of neurological medical images and/or multiple time-points, including post-processing results provided by NeuroPro or any other results from compatible processing applications.

    Neuro Synchronizer is a medical image management application intended to enable the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided through the technical integration environment as inputs to the application to reprocess outputs. If necessary, Neuro Synchronizer provides the user with the option to validate the information.

    Neuro Synchronizer can be integrated in compatible technical integration environments.

    The device does not alter the original medical image. Neuro Insight V1.0 is not intended to be used as a standalone diagnostic device and should not be used as the sole basis for patient management decisions. The results of Neuro Insight V1.0 are intended to be used in conjunction with other patient information and based on professional judgment to assist with reading and interpretation of medical images. Users are responsible for viewing full images per the standard of care.

    Device Description

    The Neuro Insight (NEU_INS_MM) V1.0 product is a neurological image analysis solution composed of several image processing applications and optional visualization and manipulation features.

    Neuro Insight V1.0 is composed of two subsets:

    • NeuroPro (NEU_PRO_MR) as an image application package, responsible for the processing of specific neurological MR Images.
    • Neuro Synchronizer (NEU_HMI_MM) as an optional image analysis environment, which provides a user interface with visualization and manipulation tools and allows the user to edit the parameters of compatible applications.

    Neuro Insight does not alter the original medical image and is not intended to be used as a diagnostic device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Neuro Insight V1.0 device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes a validation study comparing Neuro Insight V1.0 to the predicate device, Olea Sphere® V3.0, focusing on parametric maps computation and co-registration.

    | Feature Evaluated | Acceptance Criteria / Performance Goal | Reported Device Performance (Neuro Insight V1.0) |
    |---|---|---|
    | Parametric Maps Computation | Statistical and/or visual analysis supports substantial equivalence to Olea Sphere® V3.0 for ADC, CBF, CBV, CBV_Corr, K2, MTT, TTP, Tmax/Delay, tMIP. | Met: For each DWI and DSC parametric map, the statistical and/or visual analysis of results derived from comparison with Olea Sphere® V3.0 supported substantial equivalence. |
    | Intra- and Inter-exam Co-registration (FLAIR-DWI, FLAIR-DSC, FLAIR-T1, FLAIR-T1g, FLAIR-T2, FLAIR-follow-up FLAIR) | All 6 co-registrations provided by Neuro Insight V1.0 are considered acceptable for reading and interpretation. | Met: Visual analysis reported that all 6 co-registrations provided by Neuro Insight V1.0 were considered acceptable for reading and interpretation by the experts. |
    | Brain Extraction Tool (BET) deep learning algorithm (spatial overlap) | Average DICE coefficient of at least 0.95. | Met: Achieved an average DICE coefficient of 0.97 (range 0.907 to 0.988), exceeding the predetermined acceptance threshold of 0.95. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Parametric Maps & Co-registration Study:

      • Sample Size:
        • Parametric maps: 30 anonymized brain MRI cases
        • Co-registration: 60 anonymized brain MRI cases
      • Data Provenance: Not explicitly stated as retrospective or prospective, and no country of origin is given for these specific comparison studies. However, the description of the BET deep learning algorithm's training and testing data mentions sourcing from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba), implying a diverse, likely multi-center dataset. Given the anonymization of the cases and the comparison against a predicate device, the study was most likely retrospective.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Test Set Sample Size: 100 cases
      • Data Provenance: Sourced to ensure broad representativeness depending on manufacturer, magnetic field, acquisition parameters, origin, patient age and sex. Cases collected from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba) and varying magnetic fields (1.5T, 3T). Patients included 51% male, 43% female, with varied age (mean age 60 years, range 14 to 100 years for available data). This suggests diverse origin, likely global or at least multi-site.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Parametric Maps & Co-registration Study:

      • Number of Experts: Three (3)
      • Qualifications: US board-certified neuroradiologists.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Number of Experts: Expert clinicians performed manual segmentation, following criteria defined by a US board-certified neuroradiologist. Each segmentation was reviewed by a neuroradiologist and a research engineer.

    4. Adjudication Method for the Test Set

    • Parametric Maps & Co-registration Study: The document states that the comparison was done "by three US board-certified neuroradiologists." For the parametric maps, it involved "statistical and/or visual analysis of the results." For co-registration, it was "visual analysis." This implies a consensus or agreement among the three experts, but a specific adjudication method (e.g., majority vote, independent review with arbitration) is not explicitly detailed.

    • Brain Extraction Tool (BET) Validation (Deep Learning): Manual segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This suggests a review process, but not a specific adjudication method like 2+1.
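Since the filing does not specify an adjudication scheme, the sketch below illustrates only what a generic majority-vote adjudication of three readers' per-case calls could look like (e.g., as a component of a 2+1-style workflow). The function name and labels are hypothetical and not taken from the 510(k) summary.

```python
from collections import Counter

def adjudicate_majority(reader_calls: list[str]) -> str:
    """Return the label chosen by a strict majority of readers.

    Illustrative only: the 510(k) summary does not state which
    adjudication scheme (if any) the study actually used.
    """
    label, votes = Counter(reader_calls).most_common(1)[0]
    if votes > len(reader_calls) / 2:
        return label
    # No strict majority: escalate the case, e.g. to an arbitrating reader.
    return "arbitration"

# Three readers rate one co-registration case:
result = adjudicate_majority(["acceptable", "acceptable", "unacceptable"])
# result == "acceptable"
```

In a 2+1 design, the `"arbitration"` branch would route the case to a third, senior reader whose call is final.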

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

    • No, a traditional MRMC comparative effectiveness study aiming to quantify the improvement of human readers with AI vs. without AI assistance was not explicitly described.
      • The studies focused on the substantial equivalence of the device's output to a predicate (for parametric maps) and the acceptability of the device's output (for co-registration), as evaluated by human readers. It also validated the performance of the deep learning algorithm (BET) against expert-annotated ground truth. These are standalone evaluations of the device's output, not human-in-the-loop performance studies.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone evaluation of the deep learning Brain Extraction Tool (BET) algorithm was performed.
      • The algorithm's performance was assessed by comparing its automated segmentations to expert-annotated ground truth masks. The metric used was the DICE coefficient, which is a common measure of spatial overlap for segmentation tasks. This is a direct measure of the algorithm's performance without a human in the loop for the actual output generation being assessed.
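The DICE coefficient used here has a standard definition: twice the intersection of the two masks divided by the sum of their sizes. A minimal NumPy sketch, using toy masks rather than the device's data, is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Spatial overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks (illustrative only):
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
score = dice_coefficient(pred, truth)  # 2*3 / (4+3) ≈ 0.857
```

In the study described above, this per-case score would be averaged across the 100 test cases and compared against the 0.95 acceptance threshold.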

    7. The Type of Ground Truth Used

    • Parametric Maps & Co-registration Study:

      • Predicate Device Output: For parametric maps, the ground truth was effectively the results generated by the predicate device (Olea Sphere® V3.0), against which Neuro Insight V1.0's outputs were compared.
      • Expert Visual Assessment: For co-registration, the ground truth was based on the "acceptable for reading and interpretation" visual assessment by three US board-certified neuroradiologists.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Expert Consensus/Manual Annotation: Ground truth brain masks were created by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist." Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy." This strongly indicates expert consensus / manual annotation.

    8. The Sample Size for the Training Set

    • Brain Extraction Tool (BET) Validation (Deep Learning):
      • Training Set Sample Size: 199 cases
      • Validation Set Sample Size: 63 cases (used for model tuning during development)

    9. How the Ground Truth for the Training Set Was Established

    • Brain Extraction Tool (BET) Validation (Deep Learning):
      • Ground truth brain masks were created specifically by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist."
      • The protocol included all brain structures (hemispheres and lesions) while explicitly excluding non-brain anatomical elements (skull, eyeballs, optic nerves).
      • Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This method of establishing ground truth for the training set is consistent with the test set and involves expert consensus/manual annotation and review.