MR Diffusion Perfusion Mismatch V1.0 is an automatic calculation tool indicated for use in radiology. The device is image processing software that allows computation of parametric maps from (1) MR Diffusion-weighted imaging (DWI) and (2) MR Perfusion-weighted imaging (PWI), and extraction of volumes of interest based on numerical thresholds applied to those maps. Computation of the mismatch between extracted volumes is provided automatically.
The device is intended to assist trained radiologists and surgeons in the imaging assessment workflow by extraction and communication of metrics from MR Diffusion-weighted imaging (DWI) and MR Perfusion-weighted imaging (PWI).
The results of MR Diffusion Perfusion Mismatch V1.0 are intended to be used in conjunction with other patient information and, based on professional judgment, to assist the clinician in the medical imaging assessment. Trained radiologists and surgeons are responsible for viewing the full set of native images per the standard of care.
The device does not alter the original medical image. MR Diffusion Perfusion Mismatch V1.0 is not intended to be used as a standalone diagnostic device and shall not be used to make decisions for diagnostic or therapeutic purposes. Patient management decisions should not be based solely on MR Diffusion Perfusion Mismatch V1.0 results.
MR Diffusion Perfusion Mismatch V1.0 can be integrated and deployed through technical platforms responsible for transferring, storing, converting formats, notifying of detected image variations, and displaying DICOM imaging data.
The MR Diffusion Perfusion Mismatch V1.0 application can be used to automatically compute qualitative as well as quantitative perfusion maps based on the dynamic (first-pass) effect of a contrast agent (CA). The perfusion application assumes that the input data describe a well-defined and transient signal response following rapid administration of a contrast agent.
Olea Medical proposes MR Diffusion Perfusion Mismatch V1.0 as an image processing application and Picture Archiving and Communications System (PACS) software module intended for use in a technical environment that incorporates a Medical Image Communications Device as its technical platform.
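The mismatch quantities discussed in this summary (Volume 1, Volume 2, mismatch volume, mismatch ratio, relative mismatch) can be illustrated with a minimal sketch. The threshold values, map units, and function names below are hypothetical placeholders for illustration only; they are not the device's actual parameters or implementation.

```python
import numpy as np

def mismatch_metrics(adc_map, tmax_map, adc_thresh, tmax_thresh, voxel_ml):
    """Illustrative DWI/PWI mismatch computation on co-registered maps.

    Volume 1 (diffusion lesion): voxels below an ADC threshold.
    Volume 2 (perfusion lesion): voxels above a Tmax threshold.
    All thresholds here are hypothetical, not the device's parameters.
    """
    core = adc_map < adc_thresh      # diffusion-restricted voxels
    hypo = tmax_map > tmax_thresh    # hypoperfused voxels
    v1 = core.sum() * voxel_ml       # Volume 1 in ml
    v2 = hypo.sum() * voxel_ml       # Volume 2 in ml
    return {
        "volume1_ml": v1,
        "volume2_ml": v2,
        "mismatch_volume_ml": v2 - v1,
        "mismatch_ratio": v2 / v1 if v1 > 0 else float("inf"),
        "relative_mismatch_pct": 100.0 * (v2 - v1) / v2 if v2 > 0 else 0.0,
    }

# Toy 3D maps with random values; real inputs are DICOM-derived parametric maps.
rng = np.random.default_rng(0)
adc = rng.uniform(300, 1200, size=(16, 16, 8))   # hypothetical ADC values
tmax = rng.uniform(0, 12, size=(16, 16, 8))      # seconds
print(mismatch_metrics(adc, tmax, adc_thresh=620, tmax_thresh=6, voxel_ml=0.008))
```

The sketch assumes the DWI and PWI maps are already co-registered on a common voxel grid, which is consistent with the device comparing volumes extracted from both modalities.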
1. Table of Acceptance Criteria and Reported Device Performance:
Metric | Acceptance Criteria (Stated) | Reported Device Performance (MR Diffusion Perfusion Mismatch V1.0 vs. Olea Sphere® V3.0)
---|---|---
Parametric Maps | Not explicitly quantified as a numeric threshold; implied as "statistically equivalent" and "substantially equivalent" by an expert. | ADC, CBF, CBV, MTT, and tMIP: Pearson and Spearman correlation coefficients > 0.8 (statistically equivalent).<br>TTP and Tmax: did not meet the acceptance criteria due to sensitivity to acquisition grid variations, but a US board-certified neuroradiologist concluded all parametric maps were qualitatively substantially equivalent.
Volume 1 | Bland-Altman: average estimated bias close to zero, with 95% of differences within an acceptable range based on literature and expert opinion.<br>DICE index: excellent similarity.<br>Absolute mean of differences: low. | Bland-Altman: average estimated bias = -0.33 ml; 95% of differences between -1.83 ml and +1.16 ml (acceptable per literature and a US board-certified neuroradiologist).<br>Mean DICE index = 0.96 (excellent).<br>Absolute mean of differences = 0.63 ml (acceptable).<br>Visual inspection by neuroradiologist: equivalent for all 30 cases.
Volume 2 | Bland-Altman: average estimated bias close to zero, with 95% of differences within an acceptable range based on literature and expert opinion.<br>DICE index: acceptable similarity.<br>Absolute mean of differences: low. | Bland-Altman: average estimated bias = -3.74 ml; 95% of differences between -33.59 ml and +26.10 ml (acceptable per literature and a US board-certified neuroradiologist).<br>Mean DICE index = 0.75.<br>Absolute mean of differences = 11.77 ml (acceptable per US board-certified neuroradiologist).<br>Visual inspection by neuroradiologist: equivalent for all 30 cases.
Mismatch Ratio | Bland-Altman: average estimated bias close to zero, with 95% of differences within an acceptable range based on expert opinion.<br>Absolute mean of differences: low, based on expert opinion. | Bland-Altman: average estimated bias = 0.88; 95% of differences between -11.01 and +12.77 (acceptable per US board-certified neuroradiologist).<br>Absolute mean of differences = 1.87 (acceptable per US board-certified neuroradiologist).
Mismatch Volume | Bland-Altman: average estimated bias close to zero, with 95% of differences within an acceptable range based on expert opinion.<br>Absolute mean of differences: low, based on expert opinion. | Bland-Altman: average estimated bias = -3.09 ml; 95% of differences between -32.82 ml and +26.64 ml (acceptable per US board-certified neuroradiologist).<br>Absolute mean of differences = 11.81 ml (acceptable per US board-certified neuroradiologist).
Relative Mismatch | Bland-Altman: average estimated bias close to zero, with 95% of differences within an acceptable range based on expert opinion.<br>Absolute mean of differences: low, based on expert opinion. | Bland-Altman: average estimated bias = -6.57%; 95% of differences between -57.28% and +44.15% (acceptable per US board-certified neuroradiologist).<br>Absolute mean of differences = 13.21% (acceptable per US board-certified neuroradiologist).
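The agreement statistics reported in the table (Bland-Altman mean bias with 95% limits of agreement, and the DICE similarity index between segmented volumes) can be illustrated with a short sketch. The input arrays below are synthetic placeholders, not the study's 30-case data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement: mean bias and 95% limits of agreement
    (bias +/- 1.96 * SD of the paired differences)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def dice(mask_a, mask_b):
    """DICE similarity coefficient between two binary segmentations:
    2 * |A intersect B| / (|A| + |B|)."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Synthetic paired volume measurements (ml) from two devices.
dev_new = np.array([10.2, 25.1, 7.8, 40.3, 15.0])
dev_pred = np.array([10.5, 25.0, 8.1, 40.9, 15.4])
bias, lo, hi = bland_altman(dev_new, dev_pred)
print(f"bias={bias:.2f} ml, LoA=[{lo:.2f}, {hi:.2f}] ml")

# Two overlapping binary lesion masks for the DICE example.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"DICE={dice(a, b):.2f}")
```

A DICE of 1.0 means identical segmentations and 0.0 means no overlap, which is why the table treats 0.96 as excellent and 0.75 as acceptable similarity.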
2. Sample Size Used for the Test Set and Data Provenance:
The sample size used for the comparative clinical image study (test set) was 30 cases. The data provenance (country of origin, retrospective/prospective) is not explicitly stated in the provided text.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:
The evaluation was performed by one US board-certified neuroradiologist. The expert's years of experience are not stated, though board certification implies a recognized level of expertise.
4. Adjudication Method for the Test Set:
The text describes qualitative assessments ("substantially equivalent," "visually equivalent") and quantitative statistical analyses (Bland-Altman, DICE index, correlations). These were reviewed and deemed acceptable by a single US board-certified neuroradiologist. There is no mention of a multi-reader adjudication method like 2+1 or 3+1.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance:
No, an MRMC comparative effectiveness study involving human readers' improvement with AI assistance versus without AI assistance was not conducted or reported in this document. The study focused on the equivalence between the new device and a predicate device.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done:
Yes, the study primarily evaluated the standalone performance of the MR Diffusion Perfusion Mismatch V1.0 algorithm in comparison to the predicate device, Olea Sphere® V3.0. The "human-in-the-loop" aspect was limited to the expert's qualitative assessment of the output generated by the algorithms.
7. The Type of Ground Truth Used:
The ground truth for the comparison was essentially the output of the predicate device, Olea Sphere® V3.0, which served as the reference for evaluating the performance of MR Diffusion Perfusion Mismatch V1.0. This was augmented by expert qualitative assessment from a US board-certified neuroradiologist to confirm "substantial equivalence" and "acceptability" of the differences.
8. The Sample Size for the Training Set:
The sample size for the training set is not mentioned in the provided text. The document focuses on the performance testing against a predicate device.
9. How the Ground Truth for the Training Set Was Established:
The method for establishing ground truth for any training set is not mentioned as the document does not elaborate on the training process for the device.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).