510(k) Data Aggregation (127 days)
uWS-MR is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
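Conceptually, fusing two series reduces to resampling one series onto the other's grid (registration) and then blending for display. The following is a minimal illustrative sketch of the blending step only, using synthetic arrays and an assumed opacity parameter; it is not United Imaging's implementation:

```python
# Illustrative only: alpha-blend a second series onto an anatomical slice,
# assuming the two series are already co-registered and resampled.
import numpy as np

anat = np.random.rand(128, 128)      # synthetic anatomical slice (e.g., T1w)
overlay = np.random.rand(128, 128)   # synthetic second series (e.g., a parametric map)

alpha = 0.4                          # hypothetical user-chosen overlay opacity
fused = (1 - alpha) * anat + alpha * overlay
print(fused.shape)
```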
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
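For orientation, single-voxel MRS processing typically Fourier-transforms the acquired free induction decay (FID) into a spectrum and then quantifies peak areas. A minimal sketch with a synthetic FID and assumed acquisition parameters (this is generic MRS processing, not the product's algorithm):

```python
# Illustrative only: FFT a synthetic single-voxel FID into a spectrum and
# integrate a peak area. All acquisition parameters are assumed.
import numpy as np

n, dwell = 2048, 1e-3                          # points and dwell time (s), assumed
t = np.arange(n) * dwell
fid = np.exp(2j * np.pi * 120 * t) * np.exp(-t / 0.08)        # one resonance at 120 Hz
fid += 0.01 * (np.random.randn(n) + 1j * np.random.randn(n))  # noise

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dwell))

window = (freqs > 110) & (freqs < 130)          # window around the 120 Hz peak
peak_area = np.trapz(np.abs(spectrum[window]), freqs[window])
print(f"peak area near 120 Hz: {peak_area:.1f} (a.u.)")
```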
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
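As a rough illustration of what "arithmetic and statistical functions ... applied to the grayscale values" can mean in practice, here is a minimal sketch on a synthetic dynamic series (generic operations, not the vendor's function set):

```python
# Illustrative only: simple arithmetic/statistical maps over a synthetic
# dynamic series of shape (time, y, x).
import numpy as np

series = np.random.rand(10, 256, 256)      # 10 time frames, synthetic

subtraction_map = series[-1] - series[0]   # last frame minus first frame
mean_map = series.mean(axis=0)             # per-voxel temporal mean
std_map = series.std(axis=0)               # per-voxel temporal variation
print(subtraction_map.shape, mean_map.shape, std_map.shape)
```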
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.
The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
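For context, dynamic susceptibility contrast (DSC) analysis conventionally converts the signal drop during bolus passage into a relative concentration curve, C(t) ∝ -(1/TE) ln(S(t)/S0), from which a relative CBV follows as the area under the curve. A minimal sketch with synthetic values (the standard textbook relation, not necessarily this product's method):

```python
# Illustrative only: standard DSC signal-to-concentration conversion and a
# relative CBV estimate, on a synthetic bolus-passage signal curve.
import numpy as np

TE = 0.030                                   # echo time (s), assumed
t = np.linspace(0, 60, 120)                  # 60 s acquisition, assumed
s0 = 1000.0                                  # pre-bolus baseline signal
signal = s0 * np.exp(-0.4 * np.exp(-((t - 20) / 5) ** 2))  # bolus-like dip

concentration = -np.log(signal / s0) / TE    # relative concentration C(t)
rel_cbv = np.trapz(concentration, t)         # relative CBV: area under C(t)
print(f"relative CBV: {rel_cbv:.1f} (a.u.)")
```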
MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.
The Inner view application is intended to provide a virtual camera view through hollow structures (cavities), such as vessels.
The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
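The DCE parameter maps listed later in the comparison table (Ktrans, kep, ve, and so on) are conventionally obtained by fitting a pharmacokinetic model such as the standard Tofts model, C_t(t) = Ktrans ∫ C_p(τ) e^(-kep(t-τ)) dτ, voxel by voxel. A minimal single-voxel sketch with an assumed arterial input function; the document does not state which model uWS-MR uses:

```python
# Illustrative only: fit the standard Tofts model to a synthetic tissue
# curve; the AIF shape and all parameter values are assumed.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 100)                   # time in minutes, assumed
cp = 3.0 * t * np.exp(-t / 0.5)              # assumed arterial input function

def tofts(t, ktrans, kep):
    # C_t(t) = Ktrans * (Cp convolved with exp(-kep * t))
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

ct = tofts(t, 0.25, 0.6) + 0.005 * np.random.randn(len(t))  # synthetic voxel
(ktrans_fit, kep_fit), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.5])
print(f"Ktrans={ktrans_fit:.3f}/min, kep={kep_fit:.3f}/min, "
      f"ve={ktrans_fit / kep_fit:.3f}")
```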
The United Neuro application is intended to view, manipulate, and evaluate MR neurological images.
uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional as well as anatomical datasets, acquired at one or more time points and containing one or more time frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including the mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret the displayed images and statistics as per standard practice.
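To make the display and statistics claims concrete: a MIP collapses a volume along one axis by taking the per-ray maximum, and the ROI statistics are simple reductions over a user-defined mask. A minimal sketch with synthetic data (generic operations, not the product's code):

```python
# Illustrative only: maximum-intensity projection (MIP) plus mean/max/min
# over a user-defined region, on a synthetic volume.
import numpy as np

volume = np.random.rand(64, 128, 128)        # synthetic MR volume (z, y, x)
mip = volume.max(axis=0)                     # axial MIP

yy, xx = np.mgrid[:128, :128]                # a hypothetical circular ROI
roi = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2

print(f"ROI mean={mip[roi].mean():.3f}, max={mip[roi].max():.3f}, "
      f"min={mip[roi].min():.3f}")
```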
This subject device contains the following modifications/improvements in comparison to the predicate device uWS-MR:
- Modified Indications for Use Statement;
- Added some advanced applications:
  - DCE Analysis
  - United Neuro
The provided text from the FDA 510(k) summary (K183164) for uWS-MR describes a software solution primarily for viewing, manipulation, and storage of medical images, with added advanced applications such as DCE Analysis and United Neuro. However, the document does not provide specific acceptance criteria or detailed study data to prove the device meets those criteria in the typical format of a clinical performance study.
The submission is for a modification to an existing device (uWS-MR, K172999) by adding two new features: DCE Analysis and United Neuro. The "study" described is primarily a performance verification demonstrating that these new features function as intended and are substantially equivalent to previously cleared reference devices with similar functionalities.
Here's a breakdown of the requested information based on the provided document, noting where specific details are absent:
Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or a table of reported device performance metrics like sensitivity, specificity, or AUC, as would be typical for an AI-driven diagnostic device. Instead, the "performance" is assessed through verification that the new features (DCE Analysis and United Neuro) operate comparably to predicate devices and meet functional and safety standards.
The comparison tables (Table 2 for DCE Analysis and the subsequent table for United Neuro) serve as a substitute for performance criteria by demonstrating functional equivalence to reference devices.
Implicit Acceptance Criteria (based on comparison to reference devices): The new functionalities (DCE Analysis and United Neuro) must exhibit the same core operational features as their respective predicate/reference devices. For DCE Analysis, these are image loading and viewing, motion correction, series registration, parametric maps, ROI analysis, result saving, and reporting; for United Neuro, motion correction, functional activation calculation, diffusion parameter analysis, display adjustment, fusion, fiber tracking, time-intensity curves, ROI statistics, result saving, and reporting.
Reported Device Performance (Functional Equivalence):
The document states that the proposed device's functionalities are "Same" or "Yes" compared to the reference devices for all listed operational features, implying that they perform equivalently in terms of available functions.
| Application | Function Name | Proposed Device (uWS-MR) | Reference Device | Remark (from document) |
|---|---|---|---|---|
| DCE Analysis | Type of imaging scans | MR | MR | Same |
| | Image Loading and Viewing | Yes | Yes | Same |
| | Motion Correction | Yes | Yes | Same |
| | Series Registration | Yes | Yes | Same |
| | Parametric Maps (Ktrans, kep, ve, vp, iAUC, Max Slope, CER) | Yes (for all) | Yes (for most) | Same (for most; "/" for others, implied functional) |
| | ROI Analysis | Yes | Yes | / (implied Same) |
| | Result Saving | Yes | Yes | Same |
| | Report | Yes | Yes | Same |
| United Neuro | Type of imaging scan | MR | MR | Same |
| | Motion correction | Yes | Yes | Same |
| | Functional activation calculation | Yes | Yes | Same |
| | Diffusion parameter analysis | Yes | Yes | Same |
| | Adjust display parameter | Yes | Yes | Same |
| | Fusion | Yes | Yes | Same |
| | Fiber tracking | Yes | Yes | Same |
| | Time-Intensity curve | Yes | Yes | Same |
| | ROI Statistics | Yes | Yes | Same |
| | Result Saving | Yes | Yes | Same |
| | Report | Yes | Yes | Same |
(Note: The document uses "/" in some rows for reference devices, which generally means the feature wasn't applicable or explicitly listed for that device, but the "Remark" column still concludes "Same" or "Yes," implying that the proposed device's functionality is considered equivalent or non-inferior within the context of substantial equivalence.)
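As background for the "Diffusion parameter analysis" row above: such tools typically derive scalar maps, for example fractional anisotropy (FA), from the eigenvalues of a fitted diffusion tensor. A minimal sketch of the standard FA formula with assumed eigenvalues (the document does not detail which diffusion parameters uWS-MR computes):

```python
# Illustrative only: fractional anisotropy (FA) from diffusion-tensor
# eigenvalues, using the standard formula.
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    md = (l1 + l2 + l3) / 3.0                          # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return np.sqrt(1.5 * num / den)

# Assumed eigenvalues (1e-3 mm^2/s) for an anisotropic white-matter voxel
print(f"FA = {fractional_anisotropy(1.7, 0.3, 0.2):.3f}")
```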
Study Details:

- Sample sizes used for the test set and the data provenance: The document mentions a "Performance Evaluation Report For MR DCE" and a "Performance Evaluation Report For MR United neuro" as part of the "Performance Verification." However, it does not specify the sample sizes (number of cases or images) used for these evaluations, nor the data provenance (e.g., country of origin, retrospective or prospective collection).
- Number of experts used to establish the ground truth for the test set and their qualifications: The document provides no information on the number or qualifications of experts used to establish ground truth, as the submission describes software verification and validation rather than a clinical reader study.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Since no reader study or expert ground-truth review is described, no adjudication method is mentioned.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: No MRMC comparative effectiveness study was reported or required. The submission covers a software post-processing tool, not an AI-assisted diagnostic tool that directly impacts human reader performance or diagnosis.
- Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done: The performance verification of the new features (DCE Analysis and United Neuro) can be considered a form of standalone testing, as it demonstrates the algorithms' functional capabilities and outputs. However, the document provides no quantitative metrics such as accuracy, sensitivity, or specificity; it focuses on functional equivalence to existing cleared devices rather than a standalone benchmark against a clinical ground truth.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The document does not specify the type of ground truth used for the performance verification of the DCE Analysis and United Neuro features. Given the nature of a 510(k) for post-processing software with added functionalities, the "ground truth" likely refers to the expected algorithmic output or behavior as defined by engineering specifications and by comparison to the outputs of the predicate/reference devices, rather than a clinical ground truth derived from expert consensus, pathology, or outcomes.
- The sample size for the training set: The document does not mention any training set. The device is described as "MR Image Post-Processing Software," and its functionalities appear to be based on established image-processing algorithms rather than a machine learning model requiring a distinct training phase.
- How the ground truth for the training set was established: As no training set is mentioned for a machine learning model, this information is not applicable/not provided.