510(k) Data Aggregation (85 days)
MICSI-RMT is an image processing software that can be used for image enhancement in MRI images. It can be used to reduce image noise for head MRI.
MICSI-RMT Denoising (MICSI-RMT) is a Software as a Medical Device (SaMD) intended to enhance magnetic resonance imaging (MRI) images by reducing image noise in head MRI images. The software can analyze both functional MRI (fMRI) and diffusion MRI (dMRI) images. The device is intended to be used by radiologists in an imaging center, clinic, or hospital. The device is compatible with the DICOM 3.0 standard.
The subject device has no user interface. The DICOM images obtained from a compatible MRI machine are streamed from the scanner to the designated DICOM destination, e.g., a picture archiving and communication system (PACS), and then to a DICOM router and processor (Mercure). It is within this router and processor that the subject device denoises the images. For fMRI images, parametric maps are generated using pre-existing hospital software. For dMRI images, the subject device uses DTI to perform parameter estimation after denoising. Once processing is complete, the device outputs are routed to the target PACS (which can be the same destination as before or a new DICOM destination), where the intended user can view both the new images and the original images. Image processing time varies, as it depends on the central processing unit (CPU), input image size, and the number of input images. When multiple jobs have been submitted to MICSI-RMT, they are queued and processed sequentially in the order received.
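The sequential, first-come-first-served job handling described above can be sketched as a simple FIFO loop (a minimal illustration; the job representation and the `denoise` placeholder are hypothetical, not the device's actual implementation):

```python
from collections import deque

def denoise(job):
    # Placeholder for the device's denoising step (hypothetical).
    return f"denoised:{job}"

def process_jobs(submitted_jobs):
    """Process submitted jobs one at a time, in the order received (FIFO)."""
    queue = deque(submitted_jobs)   # jobs queued in submission order
    results = []
    while queue:
        job = queue.popleft()       # oldest submission first
        results.append(denoise(job))
    return results

print(process_jobs(["study_A", "study_B", "study_C"]))
# → ['denoised:study_A', 'denoised:study_B', 'denoised:study_C']
```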
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| **Qualitative Metrics** | | |
| Image Quality (Likert Scale) | Mean Likert score of 3 or greater. (A score of 3 or greater was defined as the ability to visualize small structures in brain white matter, adequate contrast to distinguish adjacent tissue types, and, for fMRI, that activation relevant to language tasks was anatomically appropriate.) | For each test dataset, the MICSI-RMT enhanced images were rated at a Likert score of 3 or greater; this test passed. |
| Artifact Presence (Likert Scale) | Mean Likert score of 3 or greater. (Criteria for scoring included artificial signal blurring, noise that could occlude anatomy of interest, and, for fMRI, the presence of false-positive activation.) | For each test dataset, the MICSI-RMT enhanced images were rated at a Likert score of 3 or greater; this test passed. |
| **Quantitative Metrics** | | |
| SNR Change | A change in SNR of at least 5% in enhanced images compared to original images, over a region of interest spanning all white-matter voxels. | Source diffusion and fMRI images enhanced by MICSI-RMT showed a change of 5% or greater (compared to original images) over a region of interest spanning all white-matter voxels. |
| fMRI Activation Map Change | No explicit numerical criterion is stated for the Broca's and Wernicke's regions; the text implies that higher activation levels (more confident detection) constitute improvement. | Median z-score improved from 3.01 to 3.64 in Broca's region and from 2.80 to 3.41 in Wernicke's region. (Higher activation levels correspond to stronger confidence in the detection of neural activity, which can reduce false positives and improve the reliability of activation maps.) |
| dMRI STD Change | Reduction in the standard deviation (STD) of mean diffusivity (MD) and fractional anisotropy (FA) in the posterior limb of the internal capsule, indicating greater precision. (No specific numerical targets are stated as criteria.) | MD images enhanced with MICSI-RMT showed an STD reduction from 0.16 to 0.075; FA images showed an STD reduction from 0.09 to 0.069. This demonstrated that the image enhancement process leads to more precise parametric maps. |
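The SNR acceptance criterion above can be illustrated with a short NumPy sketch (illustrative only: the submission does not specify the SNR formula or mask construction, so the mean/std definition and the synthetic images here are assumptions):

```python
import numpy as np

def snr(image, mask):
    """Simple SNR estimate over a region of interest: mean signal / std of signal."""
    roi = image[mask]
    return roi.mean() / roi.std()

def snr_percent_change(original, enhanced, wm_mask):
    """Percent change in SNR of the enhanced image vs. the original,
    over a white-matter mask (acceptance criterion: >= 5%)."""
    base = snr(original, wm_mask)
    return 100.0 * (snr(enhanced, wm_mask) - base) / base

# Synthetic example: enhancement halves the noise level.
rng = np.random.default_rng(0)
wm_mask = np.ones((32, 32), dtype=bool)           # toy white-matter mask
original = 100 + rng.normal(0, 10, size=(32, 32))  # noisy source image
enhanced = 100 + rng.normal(0, 5, size=(32, 32))   # lower-noise enhanced image
change = snr_percent_change(original, enhanced, wm_mask)
print(f"SNR change: {change:.1f}% -> {'PASS' if change >= 5 else 'FAIL'}")
```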
Study Details:
- **Sample sizes used for the test set and the data provenance:**
  - The document states that the study evaluated "each test dataset" multiple times, implying that several datasets were used for testing. However, the exact sample size (number of images or cases) for the test set is not explicitly provided.
  - Data provenance: The study was retrospective and HIPAA-compliant. The country of origin of the data is not specified.
- **Number of experts used to establish the ground truth for the test set and their qualifications:**
  - The ground truth for the qualitative metrics (Image Quality and Artifact Presence) was established by a "group of expert neuroradiologist raters."
  - The exact number of experts is not specified.
  - Qualifications: "Expert neuroradiologist raters." No years of experience or other qualification details are provided.
- **Adjudication method (e.g., 2+1, 3+1, none) for the test set:**
  - The document states that the scores from the expert neuroradiologist raters were "aggregated to determine the mean Likert score for each image." This implies that individual scores were combined, but no formal adjudication method (such as 2+1 or 3+1) for resolving disagreements is specified. It suggests a mean-based consensus.
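A mean-based aggregation like the one described might look like this (a sketch: the pass threshold of a mean score of 3 or greater is taken from the acceptance criteria above, and the rater scores are illustrative):

```python
def mean_likert(scores):
    """Aggregate individual rater scores into a mean Likert score."""
    return sum(scores) / len(scores)

def passes(scores, threshold=3.0):
    """Pass if the mean rater score meets the threshold
    (the summary reports a mean Likert score of 3 or greater as passing)."""
    return mean_likert(scores) >= threshold

raters = [4, 3, 5, 4]        # hypothetical scores from four raters
print(mean_likert(raters))   # → 4.0
print(passes(raters))        # → True
```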
- **Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI assistance versus without:**
  - The study described is a rater study in which expert neuroradiologists rated images.
  - It was conducted with raters blinded to the processing technique used (MICSI-RMT vs. SOC); that is, it compared the output of the MICSI-RMT algorithm to standard-of-care processed images.
  - It was not an MRMC comparative effectiveness study assessing human reader performance with versus without AI assistance. The focus was on the quality of the processed images themselves, as judged by experts. Therefore, no effect size for human reader improvement with AI assistance is reported.
- **Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:**
  - Yes. The quantitative metrics (SNR change, fMRI activation map change, and dMRI STD change) directly evaluate the standalone performance of the algorithm in processing images. While expert raters assessed the qualitative aspects of the output, the quantitative measures are independent of human interpretation during the measurement itself.
- **The type of ground truth used (expert consensus, pathology, outcomes data, etc.):**
  - For qualitative metrics: expert consensus, based on aggregated Likert scores from expert neuroradiologists.
  - For quantitative metrics: calculated values (SNR, z-scores for fMRI activation, and standard deviation for dMRI) derived directly from the processed image data, compared against the original images. The acceptability of these values is defined by pre-determined numerical criteria.
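As an illustration, the reported dMRI STD reductions can be expressed as relative precision gains (the helper function is illustrative; the input values are the ones reported in the table above):

```python
def std_reduction_pct(std_before, std_after):
    """Relative reduction in standard deviation, as a percentage."""
    return 100.0 * (std_before - std_after) / std_before

# Reported values for the posterior limb of the internal capsule:
print(f"MD: {std_reduction_pct(0.16, 0.075):.1f}% reduction")   # → 53.1%
print(f"FA: {std_reduction_pct(0.09, 0.069):.1f}% reduction")   # → 23.3%
```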
- **The sample size for the training set:**
  - The document does not provide the sample size used for the training set.
- **How the ground truth for the training set was established:**
  - The document does not state how the ground truth for the training set was established.