510(k) Data Aggregation
(269 days)
Olea Medical S.A.S.
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians. Neurovascular Insight V1.0 is intended to:
- Display and, if necessary, export neurological DICOM series and outputs provided by compatible processing docker applications, through the technical integration environment.
- Allow the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided by the technical integration environment as inputs to the docker application to reprocess the outputs. When available, Neurovascular Insight V1.0 display can be updated with the reprocessed outputs.
- If requested by an application, allow the user to confirm information before displaying associated outputs and export them.
The device does not alter the original image information and is not intended to be used as a diagnostic device. The outputs of each compatible application must be interpreted by the predefined intended users, as specified in the application's own labeling. Moreover, the information displayed is intended to be used in conjunction with other patient information and based on professional judgment, to assist the clinician in the medical imaging assessment. It is not intended to be used in lieu of the standard care imaging.
Trained professionals are responsible for viewing the full set of native images per the standard of care.
Neurovascular Insight V1.0 is an optional user interface for use on a compatible technical integration environment and designed to be used by trained professionals with medical imaging education including, but not limited to, physicians and medical technicians.
Neurovascular Insight V1.0 is an evolution of the FDA-cleared medical device Olea S.I.A. Neurovascular V1.0 (K223532).
Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI).
The provided FDA 510(k) clearance letter for Neurovascular Insight V1.0 states that the device "does not contain any calculation feature or any algorithm (deterministic or AI)." Furthermore, it explicitly mentions, "Neurovascular Insight V1.0 provides no output. Therefore, the comparison to predicate was based on the comparison of features available within both devices. No performance feature requires a qualitative or quantitative comparison and validation."
Based on this, it's clear that the device is a user interface and does not include AI algorithms or generate outputs that would require a study involving acceptance criteria for AI performance (e.g., sensitivity, specificity, accuracy). Therefore, the questions related to AI-specific performance criteria, ground truth establishment, training sets, and MRMC studies are not applicable to this particular device.
The "study" conducted for this device was a series of software verification and validation tests to ensure its functionality as a user interface and its substantial equivalence to its predicate.
Here's a breakdown of the requested information based on the provided document, highlighting where the requested information is not applicable due to the device's nature:
1. A table of acceptance criteria and the reported device performance
Note: As the device is a user interface without AI or output generation, there are no quantitative performance metrics like sensitivity, specificity, or accuracy that would typically be associated with AI algorithms. The acceptance criteria relate to the successful execution of software functionalities.
| Acceptance Criteria (Based on information provided) | Reported Device Performance |
|---|---|
| Product risk assessment successfully completed | Confirmed |
| Software modules verification tests successfully completed | Confirmed |
| Software validation test successfully completed | Confirmed |
| System provides all capabilities necessary to operate according to its intended use | Confirmed |
| System operates in a manner substantially equivalent to the predicate device | Confirmed |
| All features tested during verification phases (Software Test Description) | Successfully performed, as reported in the Software Test Report (STR) |
| Specific features highlighted by risk analysis tested during the usability process (human factors considered) | User Guide followed; no clinically blocking bugs; no incidents during processing |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated as a number of patient cases or images, as the testing was focused on software functionality rather than AI performance on a dataset. The testing refers to "software modules verification tests" and "software validation test."
- Data Provenance: Not applicable in the context of clinical data for AI development/validation, as the device doesn't use or produce clinical outputs requiring such data. The testing was internal software validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not Applicable: Given that the device is a user interface and does not utilize AI or produce diagnostic outputs, there was no need to establish clinical ground truth for a test set by medical experts in the traditional sense. The "ground truth" for its functionality would be the design specifications and successful execution of intended features. The document mentions "operators" who "reported no issue" during usability testing, but these are likely system testers/engineers, not clinical experts establishing diagnostic ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not Applicable: No clinical ground truth was established, so no adjudication method was required.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- No: The document explicitly states, "Neurovascular Insight V1.0 does not contain any calculation feature or any algorithm (deterministic or AI)." Therefore, an MRMC study comparing human readers with and without AI assistance was not performed.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done
- No: The device does not contain an algorithm, only a user interface. Standalone algorithm performance testing is not applicable.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not Applicable: No clinical ground truth was established, as the device is a user interface without AI or diagnostic output generation. The "ground truth" for its validation was adherence to software specifications and intended functionalities.
8. The sample size for the training set
- Not Applicable: The device does not contain any AI algorithms, therefore, no training set was used.
9. How the ground truth for the training set was established
- Not Applicable: No training set was used.
(236 days)
Olea Medical S.A.S.
Neuro Insight V1.0 is an image processing solution. It is intended to assist appropriately trained medical professionals in their analysis workflow on neurological MRI images.
Neuro Insight V1.0 is composed of two subsets, including an image processing application package (NeuroPro) and an optional user interface (Neuro Synchronizer).
NeuroPro is an image processing application package that computes maps, extracts and communicates metrics which are to be used in the analysis of multiphase or monophase neurological MR images.
NeuroPro can be integrated and deployed through technical integration environment, responsible for transferring, storing, converting formats and displaying of DICOM imaging data.
Neuro Synchronizer is an optional dedicated interface allowing the viewing, manipulation, and comparison of neurological medical imaging and/or multiple time-points, including post-processing results provided by NeuroPro or any other results from compatible processing applications.
Neuro Synchronizer is a medical image management application intended to enable the user to edit and modify parameters that are optional inputs of aforementioned applications. These modified parameters are provided through the technical integration environment as inputs to the application to reprocess outputs. If necessary, Neuro Synchronizer provides the user with the option to validate the information.
Neuro Synchronizer can be integrated in compatible technical integration environments.
The device does not alter the original medical image. Neuro Insight V1.0 is not intended to be used as a standalone diagnostic device and should not be used as the sole basis for patient management decisions. The results of Neuro Insight V1.0 are intended to be used in conjunction with other patient information and based on professional judgment to assist with reading and interpretation of medical images. Users are responsible for viewing full images per the standard of care.
Neuro Insight (NEU_INS_MM) V1.0 product is a neurological image analysis solution, composed of several image processing applications and optional visualization and manipulation features.
Neuro Insight V1.0 is composed of two subsets:
- NeuroPro (NEU_PRO_MR) as an image application package, responsible for the processing of specific neurological MR Images.
- Neuro Synchronizer (NEU_HMI_MM) as an optional image analysis environment, that provides the user interface which has visualization and manipulation tools and allows the user to edit the parameters of compatible applications.
Neuro Insight does not alter the original medical image and is not intended to be used as a diagnostic device.
Here's a breakdown of the acceptance criteria and the study details for the Neuro Insight V1.0 device, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes a validation study comparing Neuro Insight V1.0 to the predicate device, Olea Sphere® V3.0, focusing on parametric maps computation and co-registration.
| Feature Evaluated | Acceptance Criteria / Performance Goal | Reported Device Performance (Neuro Insight V1.0) |
|---|---|---|
| Parametric Maps Computation | Statistical and/or visual analysis supports substantial equivalence to Olea Sphere® V3.0 for ADC, CBF, CBV, CBV_Corr, K2, MTT, TTP, Tmax/Delay, tMIP. | Met: For each DWI and DSC parametric map, the statistical and/or visual analysis of results derived from comparison with Olea Sphere® V3.0 supported substantial equivalence. |
| Intra- and Inter-exam Co-registration (FLAIR-DWI, FLAIR-DSC, FLAIR-T1, FLAIR-T1g, FLAIR-T2, FLAIR-follow-up FLAIR) | All 6 co-registrations provided by Neuro Insight V1.0 are considered acceptable for reading and interpretation. | Met: Visual analysis reported that all 6 co-registrations were considered acceptable for reading and interpretation by the experts. |
| Brain Extraction Tool (BET), a deep learning algorithm (spatial overlap) | Average DICE coefficient of at least 0.95 | Met: Achieved an average DICE coefficient of 0.97 (range 0.907 to 0.988), exceeding the predetermined acceptance threshold of 0.95. |
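As background for the ADC entry above: an ADC map is conventionally computed voxel-wise from the mono-exponential diffusion model, ADC = -ln(S_b / S_0) / b, where S_0 is the b=0 image, S_b the diffusion-weighted image, and b the diffusion weighting in s/mm². This is a generic, hypothetical sketch of that standard formula, not Olea's implementation:

```python
import numpy as np

def adc_map(s0: np.ndarray, sb: np.ndarray, b: float) -> np.ndarray:
    """Voxel-wise ADC (mm^2/s) from a b=0 image (s0) and a DWI image (sb).

    Standard mono-exponential model: sb = s0 * exp(-b * ADC).
    """
    eps = 1e-6  # guard against log(0) and division by zero in air/background voxels
    ratio = np.clip(sb, eps, None) / np.clip(s0, eps, None)
    return -np.log(ratio) / b

# Synthetic check: a phantom with a known ADC of 8e-4 mm^2/s at b=1000 s/mm^2
s0 = np.full((4, 4), 1000.0)
sb = s0 * np.exp(-1000.0 * 8e-4)
print(adc_map(s0, sb, 1000.0)[0, 0])  # recovers ~8e-4
```

How the cleared device actually computes and validates its maps is not described beyond the comparison to the predicate's outputs.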
2. Sample Size Used for the Test Set and Data Provenance
- Parametric Maps & Co-registration Study:
  - Sample Size:
    - Parametric maps: 30 anonymized brain MRI cases
    - Co-registration: 60 anonymized brain MRI cases
  - Data Provenance: Not explicitly stated as retrospective or prospective, nor is the country of origin given for these specific comparison studies. However, the BET deep learning training and testing data were sourced from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba), implying a diverse, likely multi-center dataset. Given the anonymization and the comparison against a predicate, this was very likely a retrospective study.
- Brain Extraction Tool (BET) Validation (Deep Learning):
  - Test Set Sample Size: 100 cases
  - Data Provenance: Sourced to ensure broad representativeness across manufacturer, magnetic field strength, acquisition parameters, origin, and patient age and sex. Cases were collected from multiple MRI system manufacturers (GE Healthcare, Siemens, Philips, Canon/Toshiba) at varying field strengths (1.5T, 3T). Patients were 51% male and 43% female, with varied ages (mean 60 years, range 14 to 100 years for available data). This suggests a diverse, likely multi-site origin.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Parametric Maps & Co-registration Study:
  - Number of Experts: Three (3)
  - Qualifications: US board-certified neuroradiologists.
- Brain Extraction Tool (BET) Validation (Deep Learning):
  - Experts: Expert clinicians performed the manual segmentations, following criteria defined by a US board-certified neuroradiologist. Each segmentation was reviewed by a neuroradiologist and a research engineer.
4. Adjudication Method for the Test Set
- Parametric Maps & Co-registration Study: The document states that the comparison was performed "by three US board-certified neuroradiologists." For the parametric maps, this involved "statistical and/or visual analysis of the results"; for co-registration, "visual analysis." This implies consensus or agreement among the three experts, but a specific adjudication method (e.g., majority vote, independent review with arbitration) is not explicitly detailed.
- Brain Extraction Tool (BET) Validation (Deep Learning): Manual segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This describes a review process, but not a formal adjudication scheme such as 2+1.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
- No, a traditional MRMC comparative effectiveness study aiming to quantify the improvement of human readers with AI vs. without AI assistance was not explicitly described.
- The studies focused on the substantial equivalence of the device's output to a predicate (for parametric maps) and the acceptability of the device's output (for co-registration), as evaluated by human readers. It also validated the performance of the deep learning algorithm (BET) against expert-annotated ground truth. These are standalone evaluations of the device's output, not human-in-the-loop performance studies.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone evaluation of the deep learning Brain Extraction Tool (BET) algorithm was performed.
- The algorithm's performance was assessed by comparing its automated segmentations to expert-annotated ground truth masks. The metric used was the DICE coefficient, which is a common measure of spatial overlap for segmentation tasks. This is a direct measure of the algorithm's performance without a human in the loop for the actual output generation being assessed.
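For reference, the DICE coefficient used as the acceptance metric is a standard measure of overlap between two binary masks: twice the intersection divided by the sum of the two mask volumes. A minimal, generic sketch (not the manufacturer's validation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE overlap between two binary masks: 2|A ∩ B| / (|A| + |B|).

    Returns 1.0 for perfect overlap, 0.0 for no overlap.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

def passes_acceptance(dice_scores: list[float], threshold: float = 0.95) -> bool:
    """Acceptance rule described in the summary: average DICE must meet the threshold."""
    return float(np.mean(dice_scores)) >= threshold
```

Under this rule, the reported average of 0.97 across 100 test cases clears the 0.95 threshold even though individual cases ranged as low as 0.907.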
7. The Type of Ground Truth Used
- Parametric Maps & Co-registration Study:
  - Predicate Device Output: For parametric maps, the ground truth was effectively the output of the predicate device (Olea Sphere® V3.0), against which Neuro Insight V1.0's outputs were compared.
  - Expert Visual Assessment: For co-registration, the ground truth was the "acceptable for reading and interpretation" visual assessment by three US board-certified neuroradiologists.
- Brain Extraction Tool (BET) Validation (Deep Learning):
  - Expert Consensus / Manual Annotation: Ground truth brain masks were created by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist." Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy." This strongly indicates expert manual annotation with consensus review.
8. The Sample Size for the Training Set
- Brain Extraction Tool (BET) Validation (Deep Learning):
- Training Set Sample Size: 199 cases
- Validation Set Sample Size: 63 cases (used for model tuning during development)
9. How the Ground Truth for the Training Set Was Established
- Brain Extraction Tool (BET) Validation (Deep Learning):
- Ground truth brain masks were created specifically by "experienced clinicians following a standardized annotation protocol defined by a U.S. board-certified neuroradiologist."
- The protocol included all brain structures (hemispheres and lesions) while explicitly excluding non-brain anatomical elements (skull, eyeballs, optic nerves).
- Each segmentation was "reviewed by a neuroradiologist and a research engineer to ensure consistency and accuracy across the dataset." This method of establishing ground truth for the training set is consistent with the test set and involves expert consensus/manual annotation and review.