Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K243681
    Date Cleared
    2025-07-23

    (236 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K223502, K223532

    Intended Use

    Neuro Insight V1.0 is an image processing solution. It is intended to assist appropriately trained medical professionals in their analysis workflow on neurological MRI images.

    Neuro Insight V1.0 is composed of two subsets, including an image processing application package (NeuroPro) and an optional user interface (Neuro Synchronizer).

    NeuroPro is an image processing application package that computes maps, extracts and communicates metrics which are to be used in the analysis of multiphase or monophase neurological MR images.

    NeuroPro can be integrated and deployed through a technical integration environment, which is responsible for transferring, storing, converting formats, and displaying DICOM imaging data.

    Neuro Synchronizer is an optional dedicated interface allowing the viewing, manipulation, and comparison of neurological medical imaging at single and/or multiple time-points, including post-processing results provided by NeuroPro or any other results from compatible processing applications.

    Neuro Synchronizer is a medical image management application intended to enable the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided through the technical integration environment as inputs to the application to reprocess outputs. If necessary, Neuro Synchronizer provides the user with the option to validate the information.

    Neuro Synchronizer can be integrated in compatible technical integration environments.

    The device does not alter the original medical image. Neuro Insight V1.0 is not intended to be used as a standalone diagnostic device and should not be used as the sole basis for patient management decisions. The results of Neuro Insight V1.0 are intended to be used in conjunction with other patient information and based on professional judgment to assist with reading and interpretation of medical images. Users are responsible for viewing full images per the standard of care.

    Device Description

    The Neuro Insight (NEU_INS_MM) V1.0 product is a neurological image analysis solution composed of several image processing applications and optional visualization and manipulation features.

    Neuro Insight V1.0 is composed of two subsets:

    • NeuroPro (NEU_PRO_MR) as an image processing application package, responsible for the processing of specific neurological MR images.
    • Neuro Synchronizer (NEU_HMI_MM) as an optional image analysis environment that provides the user interface, which offers visualization and manipulation tools and allows the user to edit the parameters of compatible applications.

    Neuro Insight does not alter the original medical image and is not intended to be used as a diagnostic device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Neuro Insight V1.0 device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes a validation study comparing Neuro Insight V1.0 to the predicate device, Olea Sphere® V3.0, focusing on parametric maps computation and co-registration.

    | Feature Evaluated | Acceptance Criteria / Performance Goal | Reported Device Performance (Neuro Insight V1.0) |
    |---|---|---|
    | Parametric Maps Computation | Statistical and/or visual analysis supports substantial equivalence to Olea Sphere® V3.0 for ADC, CBF, CBV, CBV_Corr, K2, MTT, TTP, Tmax/Delay, tMIP. | Met: For each DWI and DSC parametric map, the statistical and/or visual analysis of results derived from comparison with Olea Sphere® V3.0 supported substantial equivalence. |
    | Intra- and Inter-exam Co-registration (FLAIR-DWI, FLAIR-DSC, FLAIR-T1, FLAIR-T1g, FLAIR-T2, FLAIR-follow-up FLAIR) | All 6 co-registrations provided by Neuro Insight V1.0 are considered acceptable for reading and interpretation. | Met: Visual analysis reported that all 6 co-registrations provided by Neuro Insight V1.0 were considered acceptable for reading and interpretation by the experts. |
    | Brain Extraction Tool (BET) deep learning algorithm (spatial overlap) | Average DICE coefficient of 0.95 | Met: Achieved an average DICE coefficient of 0.97 (range 0.907 to 0.988), exceeding the predetermined acceptance threshold of 0.95. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Parametric Maps & Co-registration Study:

      • Sample Size:
        • Parametric maps: 30 anonymized brain MRI cases
        • Co-registration: 60 anonymized brain MRI cases
      • Data Provenance: Neither the retrospective/prospective nature nor the country of origin is explicitly stated for these specific comparison studies. However, the BET deep learning algorithm's training and testing data were sourced from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba), implying a diverse, likely multi-center dataset. Given the anonymization and the comparison against a predicate, this was most likely a retrospective study.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Test Set Sample Size: 100 cases
      • Data Provenance: Cases were sourced to ensure broad representativeness across manufacturer, magnetic field strength, acquisition parameters, origin, and patient age and sex. They were collected from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba) at varying magnetic field strengths (1.5T, 3T). Patients were 51% male and 43% female, with varied age (mean 60 years, range 14 to 100 years for available data). This suggests a diverse, likely global or at least multi-site, origin.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Parametric Maps & Co-registration Study:

      • Number of Experts: Three (3)
      • Qualifications: US board-certified neuroradiologists.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Number of Experts: Expert clinicians performed manual segmentation, following criteria defined by a US board-certified neuroradiologist. Each segmentation was reviewed by a neuroradiologist and a research engineer.

    4. Adjudication Method for the Test Set

    • Parametric Maps & Co-registration Study: The document states that the comparison was done "by three US board-certified neuroradiologists." For the parametric maps, it involved "statistical and/or visual analysis of the results." For co-registration, it was "visual analysis." This implies a consensus or agreement among the three experts, but a specific adjudication method (e.g., majority vote, independent review with arbitration) is not explicitly detailed.

    • Brain Extraction Tool (BET) Validation (Deep Learning): Manual segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This suggests a review process, but not a specific adjudication method like 2+1.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

    • No, a traditional MRMC comparative effectiveness study aiming to quantify the improvement of human readers with AI vs. without AI assistance was not explicitly described.
      • The studies focused on the substantial equivalence of the device's output to a predicate (for parametric maps) and the acceptability of the device's output (for co-registration), as evaluated by human readers. It also validated the performance of the deep learning algorithm (BET) against expert-annotated ground truth. These are standalone evaluations of the device's output, not human-in-the-loop performance studies.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone evaluation of the deep learning Brain Extraction Tool (BET) algorithm was performed.
      • The algorithm's performance was assessed by comparing its automated segmentations to expert-annotated ground-truth masks. The metric used was the DICE coefficient, a common measure of spatial overlap for segmentation tasks. This is a direct measure of the algorithm's performance, with no human in the loop for the output being assessed.
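
    For reference, the DICE coefficient on two binary masks A and B is 2|A ∩ B| / (|A| + |B|). The following minimal NumPy sketch (toy masks, not the submission's data) shows how such a score is typically computed:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Spatial overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# Toy 2D example (illustrative only): two overlapping squares.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 0.5625
```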

    7. The Type of Ground Truth Used

    • Parametric Maps & Co-registration Study:

      • Predicate Device Output: For parametric maps, the ground truth was effectively the results generated by the predicate device (Olea Sphere® V3.0), against which Neuro Insight V1.0's outputs were compared.
      • Expert Visual Assessment: For co-registration, the ground truth was based on the "acceptable for reading and interpretation" visual assessment by three US board-certified neuroradiologists.
    • Brain Extraction Tool (BET) Validation (Deep Learning):

      • Expert Consensus/Manual Annotation: Ground truth brain masks were created by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist." Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy." This strongly indicates expert consensus / manual annotation.

    8. The Sample Size for the Training Set

    • Brain Extraction Tool (BET) Validation (Deep Learning):
      • Training Set Sample Size: 199 cases
      • Validation Set Sample Size: 63 cases (used for model tuning during development)

    9. How the Ground Truth for the Training Set Was Established

    • Brain Extraction Tool (BET) Validation (Deep Learning):
      • Ground truth brain masks were created specifically by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist."
      • The protocol included all brain structures (hemispheres and lesions) while explicitly excluding non-brain anatomical elements (skull, eyeballs, optic nerves).
      • Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This method of establishing ground truth for the training set is consistent with the test set and involves expert consensus/manual annotation and review.

    K Number
    K230552
    Manufacturer
    Date Cleared
    2023-04-26

    (57 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K223502

    Intended Use

    MR DWI/FLAIR Measurement V1.0 is an image processing application indicated for use in the analysis of:

    (1) MR Diffusion-weighted imaging (DWI)

    (2) MR FLAIR images.

    The device is intended to be used by trained professionals with medical imaging education including, but not limited to, physicians and medical technicians in the imaging assessment workflow:

    • computation of the map relative to water diffusion, i.e., the ADC map (see the sketch after this list);
    • extraction and communication of metrics derived from the above map, i.e., the hypointense area on ADC and FLAIR series, as well as ratios with contralateral information on FLAIR images.
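
    As context for the first bullet above, an ADC map is conventionally derived from the monoexponential DWI signal model S_b = S_0 · exp(−b · ADC). Below is a minimal sketch with synthetic arrays and an assumed two-b-value acquisition; it is illustrative only, not Olea Medical's implementation:

```python
import numpy as np

def adc_map(s0: np.ndarray, sb: np.ndarray, b: float) -> np.ndarray:
    """ADC from the monoexponential model S_b = S_0 * exp(-b * ADC).

    With b in s/mm^2, the returned ADC is in mm^2/s.
    """
    eps = 1e-6  # guard against log(0) and division by zero
    return np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b

# Synthetic two-b-value example (b = 0 and b = 1000 s/mm^2).
s0 = np.full((4, 4), 1000.0)       # b = 0 signal
true_adc = 0.8e-3                  # typical brain parenchyma, mm^2/s
sb = s0 * np.exp(-1000.0 * true_adc)
print(adc_map(s0, sb, b=1000.0)[0, 0])  # ~0.0008
```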

    The results of MR DWI/FLAIR Measurement V1.0 are intended to be used in conjunction with other patient information and, based on professional judgment, to assist the clinician in the medical imaging assessment. Trained professionals are responsible for viewing the full set of native images per the standard of care.

    The device does not alter the original medical image. MR DWI/FLAIR Measurement V1.0 is not intended to be used as a standalone diagnostic device and shall not be used to make diagnostic or therapeutic decisions. Patient management decisions should not be based solely on MR DWI/FLAIR Measurement V1.0 results.

    MR DWI/FLAIR Measurement V1.0 can be integrated and deployed through technical platforms responsible for transferring, storing, converting formats, notifying of detected image variations, and displaying DICOM imaging data.

    Device Description

    Olea Medical proposes MR DWI/FLAIR Measurement V1.0 as an image processing application: a Picture Archiving and Communications System (PACS) software module intended for use in a technical environment that incorporates a Medical Image Communications Device as its technical platform.

    MR DWI/FLAIR Measurement V1.0 is an executable application that can run on the OLEA Platform. The OLEA Platform is a Medical Image Communications Device and is outside the scope of this submission. MR DWI/FLAIR Measurement V1.0 is a Docker container that is fully independent of the OLEA Platform in which it is integrated; it has dedicated input/output channels so that it can be integrated and deployed through any compatible, configurable technical platform. Input DICOM images are received via the dedicated file system in which the application is integrated. When launched, MR DWI/FLAIR Measurement V1.0 retrieves and automatically analyzes the image series. The output images are sent to the same dedicated file system and can be visualized from any DICOM viewer by loading the results from the allocated file system.
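
    A minimal sketch of this file-system input/output pattern, assuming pydicom for DICOM reading and writing; the folder paths and the process_series() placeholder are hypothetical, since the actual channels and processing are defined by the host platform and the application:

```python
# Hypothetical sketch of dedicated file-system I/O channels; the real paths
# and the analysis step are defined by the host platform and the application.
from pathlib import Path

import pydicom

INPUT_DIR = Path("/data/input")    # hypothetical input channel
OUTPUT_DIR = Path("/data/output")  # hypothetical output channel

def process_series(datasets):
    """Placeholder for the application's analysis (e.g., ADC computation)."""
    return datasets  # pass-through stub; a real application derives new series

def run_once():
    # Read whatever DICOM files the platform has dropped into the input channel.
    if not INPUT_DIR.is_dir():
        return
    datasets = [pydicom.dcmread(p) for p in sorted(INPUT_DIR.glob("*.dcm"))]
    if not datasets:
        return
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for i, ds in enumerate(process_series(datasets)):
        ds.save_as(OUTPUT_DIR / f"result_{i:04d}.dcm")  # results for the viewer

if __name__ == "__main__":
    run_once()
```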

    To be used, the MR DWI/FLAIR Measurement V1.0 docker needs an independent technical base, which is provided by a Medical Image Communications Device (MICD). The technical platform allows the docker to:

    • receive the inputs
    • provide the outputs
    • visualize the outputs through the Olea Platform viewer and/or export them to other third-party DICOM viewers.
    AI/ML Overview

    The provided text describes the 510(k) clearance for Olea Medical's MR DWI/FLAIR Measurement V1.0. While it details performance testing, it does not explicitly state "acceptance criteria" in the format of a table with specific thresholds. Instead, it presents the results of comparative testing against a predicate device (Olea Sphere V3.0) and indicates that the performance demonstrated substantial equivalence.

    Here's a breakdown of the requested information based on the provided text, with an acknowledgment where specific details (like explicit acceptance criteria thresholds) are not explicitly stated:

    Device Performance and Acceptance Criteria Study Details

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as numerical thresholds for specific metrics (e.g., "Dice score > 0.90"). Instead, the acceptance is demonstrated through comparative testing showing substantial equivalence to the predicate device, Olea Sphere V3.0. The performance is reported in terms of agreement and similarity with the predicate.

    | Metric (implicit acceptance criterion: substantial equivalence to predicate) | Reported Device Performance (vs. predicate Olea Sphere V3.0) |
    |---|---|
    | Relative FLAIR (bias) | Average estimated bias (average of differences) was close to zero (0.004). |
    | Relative FLAIR (95% limits of agreement) | 95% of measurement differences ranged between -0.013 and +0.021. |
    | DICE index for ADC hypointense area segments (FLAIR images) | Excellent, ranging from 0.816 to 0.976 (applies to both Relative and Normalized FLAIR). |
    | Normalized FLAIR (bias) | Average estimated bias (average of differences) was close to zero (0.05). |
    | Normalized FLAIR (95% limits of agreement) | 95% of measurement differences ranged between -0.100 and +0.199. |

    Implicit Acceptance: The performance metrics, particularly the low bias and tight limits of agreement for FLAIR measurements and the excellent DICE indexes, demonstrate that the MR DWI/FLAIR Measurement V1.0 performs comparably to the predicate device, thus meeting the implicit acceptance criterion of substantial equivalence.
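
    The bias and 95% limits of agreement above are the standard Bland-Altman quantities (bias ± 1.96 · SD of the paired differences). A minimal sketch with synthetic paired measurements, for illustration only (the numbers below are not the submission's data):

```python
import numpy as np

def bland_altman(device: np.ndarray, predicate: np.ndarray):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = device - predicate
    bias = diffs.mean()                    # average of the differences
    half_width = 1.96 * diffs.std(ddof=1)  # half-width of the 95% limits
    return bias, bias - half_width, bias + half_width

# Synthetic paired ratios standing in for relative FLAIR measurements.
rng = np.random.default_rng(0)
predicate = rng.uniform(0.8, 1.2, size=30)
device = predicate + rng.normal(0.004, 0.009, size=30)  # small bias and spread
bias, lower, upper = bland_altman(device, predicate)
print(f"bias={bias:.3f}, 95% LoA=({lower:.3f}, {upper:.3f})")
```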

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set (for comparative clinical image study): Not explicitly stated how many cases were used for the comparative clinical image study that produced the Bland-Altman and DICE index results.
    • Test Set (for Diffusion Brain Extraction Tool - BET): 28 cases from multiple institutions.
    • Data Provenance: Cases came from multiple institutions (for the BET algorithm testing), different from the training and validation sets. DICOM data were sourced from Siemens, General Electric, Philips, and Canon manufacturers. The text does not specify the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated for the overall "comparative clinical image study."
    • Qualifications of Experts: For the Diffusion Brain Extraction Tool (BET) algorithm, the reference standard was established by "expert clinicians." Specific qualifications (e.g., years of experience, subspecialty) are not provided beyond "expert clinicians."

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The text mentions "manual segmentation performed by expert clinicians" for the BET ground truth, implying individual expert assessment, but does not detail a consensus or adjudication process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported. The study focused on comparing the device's measurements directly to a predicate device's measurements (algorithm-to-algorithm comparison for the main performance metrics and algorithm-to-expert segmentation for the BET component), rather than human readers with and without AI assistance.
    • Effect Size: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Yes, the performance data presented (Bland-Altman analysis for FLAIR measurements and DICE indexes) represents a standalone comparison between the subject device's algorithmic outputs and the predicate device's outputs. The Diffusion Brain Extraction Tool (BET) component also had standalone performance evaluated against expert manual segmentations.
    • The device is explicitly stated as "not intended to be used as a standalone diagnostic device"; its results are to be used "in conjunction with other patient information" to assist the clinician in the medical imaging assessment. The performance study, however, validates the algorithm's output accuracy against a reference.

    7. Type of Ground Truth Used

    • For the Diffusion Brain Extraction Tool (BET) algorithm, the ground truth was expert manual segmentation.
    • For the overall device performance (Relative/Normalized FLAIR measurements, ADC hypointense area segmentation), the "ground truth" was essentially the outputs of the predicate device (Olea Sphere V3.0), as the study aimed to demonstrate substantial equivalence by comparing the new device's outputs to the established predicate's outputs.

    8. Sample Size for the Training Set

    • Training Set (for Diffusion Brain Extraction Tool - BET): 218 cases.

    9. How the Ground Truth for the Training Set Was Established

    • The text states, "The reference standard was established by manual segmentation performed by expert clinicians" for the Diffusion Brain Extraction Tool (BET) algorithm. This implies that the ground truth for training (and validation/testing) was established through expert manual segmentation.
