MREplus+ Software is an assisted ROI drawing tool for liver MRE and Fat/Water images, used for image receiving, display, ROI selection, and analysis generation. It presents to a trained reader the MRE and Fat/Water images, the preliminary ROIs it calculates from those images, and the statistical analysis computed from the ROIs and images, all in a form that supports review and, optionally, modification by the trained reader.
The MREplus+ software is a tool for assisted Magnetic Resonance Elastography (MRE) and multipoint Dixon Fat/Water (FW) image analysis. It calculates preliminary automated regions of interest (ROIs) and provides an environment in which trained readers can review the relevant MRE and FW information and approve or modify the ROIs. MREplus+ is intended to be used only with liver MRE and FW data. The inputs to MREplus+ are the MRE and FW images. For MRE, these include magnitude images (showing anatomy), wave images (showing wave propagation) with multiple time points across the wave cycle, and the elasticity and confidence images calculated by MRE's on-scanner MMDI algorithm or by an offline MMDI packaged with MREplus+. For FW, the images include in-phase, out-of-phase, fat, water, fat fraction, and R2*. MREplus+ includes a DICOM receiver that can recognize and accept these images when they are sent from the MRI scanner or a workstation using standard protocols. From these images, MREplus+ calculates automated ROIs, which an authorized trained reader can then review, modify, and approve. MREplus+ performs statistical calculations from the ROIs and outputs the images, ROIs, and calculated results in an archive-compatible report.
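As background on the image arithmetic the device description implies (this sketch is not taken from the 510(k) summary): a fat fraction map is conventionally computed voxel-wise as fat / (fat + water), and ROI statistics are simple summaries over a mask. A minimal NumPy sketch with hypothetical function names and toy data:

```python
import numpy as np

def fat_fraction_map(fat: np.ndarray, water: np.ndarray) -> np.ndarray:
    """Voxel-wise fat fraction in percent: 100 * fat / (fat + water)."""
    denom = fat + water
    # Guard against division by zero in background voxels.
    return np.divide(100.0 * fat, denom,
                     out=np.zeros_like(denom, dtype=float),
                     where=denom > 0)

def roi_stats(image: np.ndarray, roi_mask: np.ndarray) -> dict:
    """Mean and standard deviation of image values inside a boolean ROI mask."""
    values = image[roi_mask]
    return {"mean": float(values.mean()),
            "std": float(values.std()),
            "n_voxels": int(values.size)}

# Toy 2x2 'fat' and 'water' images; the mask covers three voxels.
fat = np.array([[10.0, 20.0], [30.0, 0.0]])
water = np.array([[90.0, 80.0], [70.0, 0.0]])
ff = fat_fraction_map(fat, water)
stats = roi_stats(ff, np.array([[True, True], [True, False]]))
```

This is only an illustration of the standard computation; the summary does not describe MREplus+'s actual fat fraction or ROI statistics implementation.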
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for the MREplus+ Software:
Acceptance Criteria and Reported Device Performance
The FDA 510(k) summary does not explicitly list numerical "acceptance criteria" but rather describes the testing performed to demonstrate substantial equivalence to a predicate device. The performance is described qualitatively and in terms of failure rates and modification rates.
| Acceptance Criteria Category (Derived from study descriptions) | Specific Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Functional Equivalence (MRE) | MREplus+ should accurately process MRE images, calculate preliminary automated ROIs, and allow for review, modification, and approval by a trained reader, achieving results comparable to the predicate device (GE Advantage Workstation). | MREplus+ demonstrated <1% processing failures and <20% ROI modifications in a study of 1347 patient cases. This indicates that the automated ROIs are largely accurate and require minimal human intervention, aligning with the "assisted ROI drawing tool" claim and demonstrating functional equivalence to manual drawing on the predicate. The overall conclusion states MREplus+ proved to be an "accurate software tool that facilitates liver MRE analysis with an accuracy of >99% compared to the standard predicate methodology," though the exact metric for this 99% accuracy is not quantified (e.g., consistency of stiffness values within a certain delta). |
| Functional Equivalence (Fat-Water) | MREplus+ should accurately calculate Fat Fraction from Fat-Water images, allowing for review, modification, and approval by a trained reader, achieving results comparable to the predicate device. | In all 92 Fat-Water cases, MREplus+ was able to accurately and reliably calculate Fat Fraction from Fat-Water images. This suggests complete success for this specific functionality. |
| Usability/Workflow (Implicit) | The software should function as an "assisted ROI drawing tool" where preliminary ROIs are provided, allowing for efficient review and optional modification by a trained reader, leading to the generation of an archive-compatible report. The workflow should be comparable to the predicate while potentially offering improvements in automation. | The "MRE study involving 1347 patient cases... MREplus+ demonstrated <1% processing failures and <20% ROI modifications" indicates a highly efficient workflow where automated ROIs are successful in the vast majority of cases, reducing manual effort compared to a fully manual system like the predicate. The "archive compatible report" output also matches the predicate's functionality. |
| Safety and Effectiveness | The device should be as safe and effective as the legally marketed predicate device and present no new concerns. | "The nonclinical and retrospective clinical validation tests conducted demonstrate that the device is as safe, as effective, and performs as well as or better than the legally marketed predicate device." The low processing failure and modification rates support this claim by showing it generates reliable results that require minimal correction by human users. |
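The failure and modification rates in the table are simple proportions over the case set. A sketch of how such figures are derived, using illustrative counts that are not reported in the summary (the summary gives only the case total and the rate thresholds):

```python
def summarize_cases(n_cases: int, n_failures: int, n_modified: int) -> dict:
    """Failure rate over all cases; modification rate over successfully processed cases."""
    processed = n_cases - n_failures
    return {
        "failure_rate": n_failures / n_cases,
        "modification_rate": n_modified / processed,
    }

# Hypothetical counts consistent with "<1% failures, <20% modifications" on 1347 cases.
summary = summarize_cases(n_cases=1347, n_failures=10, n_modified=250)
```

Note the denominator for the modification rate is an assumption; the summary does not state whether it is all cases or only successfully processed ones.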
Study Details Proving Device Meets Acceptance Criteria:
Sample Sizes Used for the Test Set and Data Provenance:
- MRE Study (Test Set): 1347 patient cases.
- Fat-Water Study (Test Set): 92 patient cases.
- Data Provenance: The document states the data came from "retrospective clinical data bases." The country of origin is not explicitly stated; given that the submission is to the U.S. FDA and the submitter's address is in Minnesota, USA, the data is likely from the U.S. or at least deemed acceptable for FDA submission.
Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts:
- The document implies that "standard expert reviewer readings using the predicate device" were used as the gold standard for comparison (ground truth). However, it does not specify the exact number of experts who established this ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience"). It broadly refers to "trained readers" and "technician or radiologist."
Adjudication Method for the Test Set:
- The document states that MREplus+ calculates "preliminary automated regions of interest (ROIs)," which an authorized trained reader then reviews, modifies, and approves. For the MRE study, it reports "<20% ROI modifications," implying a human-in-the-loop (HITL) review process with optional modification. The ground truth was established by comparison to "standard expert reviewer readings using the predicate device," which would inherently involve human review and manual drawing of ROIs. There is no mention of a multi-reader adjudication method (e.g., 2+1 or 3+1) for resolving discrepancies among multiple human readers, if multiple readers were used to establish the "standard expert reviewer readings."
If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:
- The document describes a comparison to human expert readings using a predicate device, which is a form of comparative effectiveness study. However, it is not explicitly framed as a Multi-Reader Multi-Case (MRMC) comparative effectiveness study in the standard sense of comparing multiple human readers with and without AI assistance to measure a statistically significant improvement.
- The study does not report an effect size of how much human readers improve with AI vs. without AI assistance in a quantified statistical manner (e.g., increase in AUC, sensitivity, or specificity with AI). Instead, it focuses on the accuracy of the AI's preliminary output (low modification rate) and the overall accuracy compared to the predicate method. The claim of "accuracy of >99% compared to the standard predicate methodology" serves as the primary performance metric, implicitly suggesting that if the AI's output closely matches the expert's, the human reader's task becomes one of verification rather than creation, thereby improving efficiency and consistency.
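One plausible reading of the ">99% accuracy" figure is percent agreement between MREplus+-derived values and predicate-derived values within some tolerance. The summary does not define the metric, so the following is only a hypothetical sketch of such an agreement calculation, with made-up stiffness values and tolerance:

```python
import numpy as np

def percent_agreement(device_vals, predicate_vals, tol_kpa: float = 0.5) -> float:
    """Percent of cases where the device value is within tol_kpa of the predicate value."""
    device_vals = np.asarray(device_vals, dtype=float)
    predicate_vals = np.asarray(predicate_vals, dtype=float)
    within = np.abs(device_vals - predicate_vals) <= tol_kpa
    return 100.0 * within.mean()

# Illustrative liver stiffness values in kPa (not from the study).
agreement = percent_agreement([2.1, 3.4, 5.0, 2.8], [2.2, 3.3, 5.6, 2.9])
```

Both the tolerance and the notion of per-case agreement are assumptions introduced here for illustration, not definitions from the 510(k) summary.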
If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- The description of the MREplus+ as an "assisted ROI drawing tool" and the detail about "preliminary automated ROIs" followed by human review and optional modification suggest that the device is not intended for standalone (algorithm-only) use in a clinical setting for diagnosis.
- The reported performance metrics, like "<20% ROI modifications," inherently reflect the performance of the algorithm before human intervention. While these are algorithm-generated performances, the device's intended use is with a human-in-the-loop. The statement "MREplus+ proved to be an accurate software tool that facilitates liver MRE analysis with an accuracy of >99% compared to the standard predicate methodology" likely refers to the accuracy of the MREplus+ suggested ROIs' derived values when compared to the predicate's values, rather than a standalone diagnostic performance metric.
The Type of Ground Truth Used:
- The ground truth for the test sets was established by "standard expert reviewer readings using the predicate device." This is a form of expert consensus or expert-derived ground truth, where the "standard" implies an accepted clinical practice for deriving measurements from MRE and Fat/Water images. It's not pathology or outcomes data.
The Sample Size for the Training Set:
- The document does not specify the sample size used for the training set for the MREplus+ software. It only provides details about the validation (test) sets.
How the Ground Truth for the Training Set Was Established:
- The document does not provide information on how the ground truth for the training set was established. Since it is an AI-assisted tool, it would presumably require a large, annotated dataset for training, but these details are not present in the provided summary.