Search Results
Found 4 results
510(k) Data Aggregation
(265 days)
uMR Omega with uWS-MR-MRS
The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross-sectional images, and that displays internal anatomical structure and/or function of the head, body, and extremities.
These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.
The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
MR Vessel Analysis is intended to provide a tool for viewing and evaluating MR vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
The United Neuro is intended to view, manipulate, and evaluate MR neurological images.
The MR Cardiac Analysis application is intended to be used for viewing, post-processing and quantitative evaluation of cardiac magnetic resonance data.
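Several of the applications above reduce to standard image-processing operations. As an illustration only (not the vendor's implementation), MR Stitching amounts to blending overlapping acquisition stages; a toy 1-D sketch with an illustrative `stitch` helper:

```python
import numpy as np

def stitch(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two overlapping acquisition stages with a linear feather
    in the overlap region (toy 1-D stand-in for volume stitching)."""
    w = np.linspace(1.0, 0.0, overlap)               # weight for stage a
    blended = w * a[-overlap:] + (1.0 - w) * b[:overlap]
    return np.concatenate([a[:-overlap], blended, b[overlap:]])

stage_a = np.ones(100)                               # first station
stage_b = np.full(100, 2.0)                          # second station
full = stitch(stage_a, stage_b, overlap=20)
print(full.shape)                                    # (180,)
```

Real stitching also handles 3-D geometry, intensity normalization, and registration between stations; the linear feather is only the blending step.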
The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital-signal module. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can be used as a stand-alone SaMD or as a post-processing application option for cleared UIH (Shanghai United Imaging Healthcare Co., Ltd.) MR Scanners.
The uMR 780 is a 3.0T superconducting magnetic resonance diagnostic device with a 65 cm patient bore. It consists of components such as the magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital-signal module. The uMR 780 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
The document describes the performance testing for the "DeepRecon" feature, an artificial intelligence (AI)-assisted image processing algorithm, of the uMR Omega with uWS-MR-MRS device.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
Evaluation Item | Acceptance Criteria | Reported Device Performance (Test Result) | Results |
---|---|---|---|
Image SNR | DeepRecon images achieve higher SNR compared to the images without DeepRecon (NADR) | NADR: 209.41±1.08, DeepRecon: 302.48±0.78 | PASS |
Image uniformity | Uniformity difference between DeepRecon images and NADR images under 5% | 0.15% | PASS |
Image contrast | Intensity difference between DeepRecon images and NADR images under 5% | 0.9% | PASS |
Structure Measurements | Measurements on NADR and DeepRecon images of same structures, measurement difference under 5% | 0% | PASS |
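The tabulated criteria reduce to simple threshold arithmetic. A minimal sketch re-checking them, using the values reported above (the `pct_diff` helper is illustrative, not from the submission):

```python
# Re-check of the tabulated acceptance results; the numbers are those
# reported in the table above, the helper name is illustrative.

def pct_diff(a: float, b: float) -> float:
    """Percent difference of b relative to a."""
    return abs(b - a) / a * 100.0

nadr_snr, deeprecon_snr = 209.41, 302.48

# Criterion 1: DeepRecon SNR must exceed NADR SNR.
assert deeprecon_snr > nadr_snr

# Criteria 2-4: uniformity, contrast, and structure-measurement
# differences must each stay under the 5% threshold.
for name, diff_pct in [("uniformity", 0.15), ("contrast", 0.9),
                       ("structure", 0.0)]:
    assert diff_pct < 5.0, name

print(f"SNR gain over NADR: {pct_diff(nadr_snr, deeprecon_snr):.1f}%")
# -> SNR gain over NADR: 44.4%
```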
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: 77 US subjects.
- Data Provenance: The testing data was collected from various clinical sites in the US, ensuring diverse demographic distributions covering various genders, age groups, ethnicities, and BMI groups. The data was collected during separate time periods and on subjects different from those in the training data, making it completely independent of, and non-overlapping with, the training data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated, but the document mentions "American Board of Radiologists certificated physicians" evaluated the DeepRecon images. This implies a group of such experts.
- Qualifications of Experts: American Board of Radiologists certificated physicians.
4. Adjudication method for the test set
- The document states that "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality" by the radiologists. This suggests a consensus or rating process, but the specific adjudication method (e.g., majority vote, sequential review) is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- The document implies a human-in-the-loop evaluation as "DeepRecon images were evaluated by American Board of Radiologists certificated physicians," and they "verified that DeepRecon meets the requirements of clinical diagnosis." It also states "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." However, this is not explicitly described as a formal MRMC comparative effectiveness study designed to quantify human reader improvement with vs. without AI assistance. The focus seems to be on the diagnostic quality of the DeepRecon images themselves. No specific effect size is provided for human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone (algorithm only) performance evaluation was conducted based on objective metrics like Image SNR, Image Uniformity, Image Contrast, and Structure Measurements, as detailed in Table b. The radiologist evaluation appears to be a subsequent step to confirm clinical utility.
7. The type of ground truth used
- For the objective performance metrics (SNR, uniformity, contrast, structure measurements), the ground truth for comparison appears to be the images "without DeepRecon (NADR)".
- For the expert evaluation, the ground truth is implicitly based on the expert consensus of the American Board of Radiologists certificated physicians regarding the diagnostic quality of the images.
- For the training data ground truth (see point 9), it was established using "multiple-averaged images with high-resolution and high SNR."
8. The sample size for the training set
- The training data for DeepRecon was collected from 264 volunteers.
9. How the ground truth for the training set was established
- The ground truth for the training dataset was established by collecting "multiple-averaged images with high-resolution and high SNR" from each subject. The input images for training were then generated by sequentially reducing the SNR and resolution of these high-quality ground-truth images. All data used for training underwent manual quality control.
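The summary describes generating training inputs by reducing the SNR and resolution of high-quality ground-truth images. One plausible sketch of such a degradation step, assuming block-average downsampling and additive Gaussian noise (the vendor's actual pipeline is not disclosed):

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(gt: np.ndarray, noise_sigma: float, factor: int) -> np.ndarray:
    """Degrade a high-SNR ground-truth image: lower resolution by block
    averaging, then add Gaussian noise. Illustrative only; the actual
    degradation used for DeepRecon training is not described in detail."""
    h, w = gt.shape
    low = gt[: h - h % factor, : w - w % factor]
    low = low.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return low + rng.normal(0.0, noise_sigma, low.shape)

gt = rng.random((64, 64))        # stand-in for a multiple-averaged image
inp = degrade(gt, noise_sigma=0.05, factor=2)
print(inp.shape)                 # (32, 32)
```

Each (degraded input, high-SNR ground truth) pair would then form one training example.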
(255 days)
uWS-MR
uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.
The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
MR Vessel Analysis is intended to provide a tool for viewing and manipulating MR vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
The United Neuro is intended to view, manipulate, and evaluate MR neurological images.
The MR Cardiac Analysis application is intended to be used for viewing, post-processing and quantitative evaluation of cardiac magnetic resonance data.
uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can be used as a stand-alone SaMD or as a post-processing application option for cleared UIH (Shanghai United Imaging Healthcare Co., Ltd.) MR Scanners. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or including one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
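The MIP display and ROI statistics described above are standard array operations; a minimal numpy sketch with illustrative shapes (not the product's code):

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((32, 64, 64))     # stand-in MR volume, axes (slice, y, x)

# Maximum-intensity projection along the slice axis.
mip = volume.max(axis=0)              # shape (64, 64)

# Mean/max/min over a user-defined rectangular ROI, as described above.
roi = mip[10:30, 20:40]
stats = {"mean": float(roi.mean()),
         "max": float(roi.max()),
         "min": float(roi.min())}
print(mip.shape, sorted(stats))
```

In the product these statistics would be computed on DICOM pixel data over arbitrarily shaped ROIs/VOIs, but the reduction itself is the same.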
The provided 510(k) summary for the uWS-MR device from Shanghai United Imaging Healthcare Co., Ltd. does not contain a typical acceptance criteria table with reported device performance metrics in the format usually associated with diagnostic performance studies (e.g., sensitivity, specificity, accuracy).
Instead, this document describes modifications to an already cleared Picture Archiving and Communications System (PACS) named uWS-MR (K183164) and introduces a new advanced application (MR Cardiac Analysis) and a modified existing application (United Neuro). The "acceptance criteria" here are framed around demonstrating substantial equivalence to predicate devices for these functionalities.
Here's a breakdown of the requested information based on the provided text, focusing on the performance verification mentioned, which is the closest to a "study" described.
1. Table of Acceptance Criteria and Reported Device Performance
As noted, a quantitative performance table with metrics like sensitivity or specificity is not present. The "acceptance criteria" are implied by the comparison to predicate devices, focusing on functional parity and safe operation, rather than diagnostic accuracy for a specific disease or condition. The performance is summarized as the device having a "safety and effectiveness profile that is similar to the predicate device and reference devices."
The tables in the document (Table 1 and Table 2) compare the functional features of the proposed device against predicate/reference devices. These tables demonstrate functional equivalence rather than meeting specific quantifiable performance thresholds.
Item | Acceptance Criterion (Implied) | Reported Device Performance (Implied) |
---|---|---|
General Device | Substantial equivalence to predicate device in classification, product code, regulation number, device class, and panel. | All general characteristics are "Same" as the predicate device. |
Indications for Use | Substantial equivalence in core indications, with additional applications not negatively impacting safety/effectiveness. | The indications for use are supplemented, and additional applications are discussed, with the conclusion that differences "will not impact the safety and effectiveness of the device." |
Basic Functions | Functional equivalence for image communication, hardware/OS, patient administration, review 2D/3D, filming, fusion, inner view, visibility, ROI/VOI, MIP display, compare, and report. | All basic functions are "Same" as the predicate device, except "Report" which is "Optimized function which will not impact the safety and effectiveness." |
MR Cardiac Analysis | Functional equivalence to reference devices (cvi42 and Philips IntelliSpace Cardiovascular) for specific cardiac analysis functions (Cardiac Function and Flow Analysis). | All listed functions for Cardiac Analysis ("Type of imaging scans" to "Report") are "Same" as the reference devices. |
United Neuro | Functional equivalence to predicate device (uWS-MR K183164) for neurological image processing functions, with acceptable modification to MR Segmentation. | Most listed functions for United Neuro are "Same" as the predicate. "MR Segmentation" is a new feature, allowing manual user segmentation, with the note, "does not affect safety and effectiveness." |
Software V&V | Demonstration of safety and efficacy through software verification and validation. | Software V&V, hazard analysis (moderate LOC), and various software documentation were provided. |
Other Standards/Guidance | Compliance with relevant standards (DICOM, ISO 14971, IEC 62304). | Compliance with these standards is implicitly claimed by their listing under "Other Standards and Guidance." |
2. Sample Size Used for the Test Set and Data Provenance
The document states "No clinical study was required" and "No animal study was required." The performance verification relies on "Software Verification and Validation" and a "Performance Evaluation Report for MR Cardiac Analysis." Details regarding the specific datasets used for these evaluations (sample size, data provenance like country of origin, or retrospective/prospective nature) are not provided in this summary.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Given that "No clinical study was required," the direct involvement of medical experts for establishing ground truth on a specific test set (for diagnostic performance) is not explicitly mentioned or detailed. The evaluation appears to be primarily focused on software functionality and engineering verification rather than a human reading study to assess diagnostic accuracy.
4. Adjudication Method for the Test Set
Since no clinical study or human reading study is mentioned, there is no information provided regarding an adjudication method for a test set.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
A multi-reader, multi-case (MRMC) comparative effectiveness study is not mentioned in the document. Therefore, no effect size for human readers improving with or without AI assistance is provided.
6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)
The device is described as "a stand-alone SaMD" (Software as a Medical Device) or a "post processing application option." While the device itself is standalone software, the performance verification details do not explicitly detail a standalone performance study (e.g., diagnostic accuracy metrics) for its advanced applications. The evaluation is implicitly focused on the software's functional correctness and safety, rather than its diagnostic performance in a clinical scenario without human interaction to interpret results. The "Performance Evaluation Report for MR Cardiac Analysis" would be the closest to this, but its contents (e.g., metrics, sample size) are not detailed.
7. Type of Ground Truth Used
Given the lack of a clinical study, specific "ground truth" (such as pathology, outcomes data, or expert consensus on clinical cases) for diagnostic performance evaluation is not explicitly stated or detailed in the document. The "ground truth" for the software verification and validation would likely be defined by functional requirements specifications and adherence to design documents. For the cardiac and neuro applications, the "truth" could be defined by expected mathematical outcomes or standardized rendering accuracies.
8. Sample Size for the Training Set
The document does not provide any information regarding a training set sample size. This is typical for submissions focused on software modifications and functional equivalence rather than de novo AI algorithm development requiring extensive data for training and validation.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned, no information is provided on how its ground truth would have been established.
(127 days)
uWS-MR
uWS-MR is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.
The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
The United Neuro is intended to view, manipulate, and evaluate MR neurological images.
uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or including one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
This subject device contains the following modifications/improvements in comparison to the predicate device uWS-MR:
- Modified Indications for Use Statement
- Added advanced applications:
  - DCE Analysis
  - United Neuro
The provided text from the FDA 510(k) summary (K183164) for uWS-MR describes a software solution primarily for viewing, manipulation, and storage of medical images, with added advanced applications such as DCE Analysis and United Neuro. However, the document does not provide specific acceptance criteria or detailed study data to prove the device meets those criteria in the typical format of a clinical performance study.
The submission is for a modification to an existing device (uWS-MR, K172999) by adding two new features: DCE Analysis and United Neuro. The "study" described is primarily a performance verification demonstrating that these new features function as intended and are substantially equivalent to previously cleared reference devices with similar functionalities.
Here's a breakdown of the requested information based on the provided document, noting where specific details are absent:
Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or a table of reported device performance metrics like sensitivity, specificity, or AUC, as would be typical for an AI-driven diagnostic device. Instead, the "performance" is assessed through verification that the new features (DCE Analysis and United Neuro) operate comparably to predicate devices and meet functional and safety standards.
The comparison tables (Table 2 for DCE Analysis and the subsequent table for United Neuro) serve as a substitute for performance criteria by demonstrating functional equivalence to reference devices.
Implicit Acceptance Criteria (based on comparison to reference devices): The new functionalities (DCE Analysis and United Neuro) must exhibit the same core operational features (e.g., image loading, viewing, motion correction, parametric maps, ROI analysis, result saving, reporting for DCE; and motion correction, functional activation calculation, diffusion parameter analysis, display adjustment, fusion, fiber tracking, time-intensity curve, ROI statistics, result saving, reporting for United Neuro) as their respective predicate/reference devices.
Reported Device Performance (Functional Equivalence):
The document states that the proposed device's functionalities are "Same" or "Yes" compared to the reference devices for all listed operational features, implying that they perform equivalently in terms of available functions.
Application | Function Name | Proposed Device (uWS-MR) | Reference Device Comparison | Remark (from document)
---|---|---|---|---
DCE Analysis | Type of imaging scans | MR | MR | Same
DCE Analysis | Image Loading and Viewing | Yes | Yes | Same
DCE Analysis | Motion Correction | Yes | Yes | Same
DCE Analysis | Series Registration | Yes | Yes | Same
DCE Analysis | Parametric Maps (Ktrans, kep, ve, vp, iAUC, Max Slope, CER) | Yes (for all) | Yes (for most) | Same (for most; "/" for others but implied functional)
DCE Analysis | ROI Analysis | Yes | Yes | / (Implied Same)
DCE Analysis | Result Saving | Yes | Yes | Same
DCE Analysis | Report | Yes | Yes | Same
United Neuro | Type of imaging scan | MR | MR | Same
United Neuro | Motion correction | Yes | Yes | Same
United Neuro | Functional activation calculation | Yes | Yes | Same
United Neuro | Diffusion parameter analysis | Yes | Yes | Same
United Neuro | Adjust display parameter | Yes | Yes | Same
United Neuro | Fusion | Yes | Yes | Same
United Neuro | Fiber tracking | Yes | Yes | Same
United Neuro | Time-Intensity curve | Yes | Yes | Same
United Neuro | ROI Statistics | Yes | Yes | Same
United Neuro | Result Saving | Yes | Yes | Same
United Neuro | Report | Yes | Yes | Same
(Note: The document uses "/" in some rows for reference devices, which generally means the feature wasn't applicable or explicitly listed for that device, but the "Remark" column still concludes "Same" or "Yes," implying that the proposed device's functionality is considered equivalent or non-inferior within the context of substantial equivalence.)
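The Ktrans/kep parametric maps listed for DCE Analysis are conventionally derived from the standard Tofts model, in which tissue concentration is the convolution of the arterial input function with an exponential residue. A forward-model sketch under that assumption, with a toy arterial input function (the document does not state which model the device uses):

```python
import numpy as np

def tofts_ct(t: np.ndarray, cp: np.ndarray,
             ktrans: float, kep: float) -> np.ndarray:
    """Standard Tofts model: Ct(t) = Ktrans * (Cp * exp(-kep*t))(t).
    Discrete convolution approximation; illustrative only."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

t = np.linspace(0.0, 5.0, 100)        # minutes
cp = np.exp(-t) * t                   # toy arterial input function
ct = tofts_ct(t, cp, ktrans=0.25, kep=0.5)
print(ct.shape)                       # (100,)
```

In practice Ktrans and kep are estimated per voxel by fitting this model to the measured time-concentration curve, which is what produces the parametric maps.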
Study Details:

- Sample sizes used for the test set and the data provenance: The document mentions "Performance Evaluation Report For MR DCE" and "Performance Evaluation Report For MR United neuro" as part of the "Performance Verification." However, it does not specify any sample sizes (number of cases or images) used for these performance evaluations, nor the data provenance (e.g., country of origin, retrospective or prospective nature of the data).
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: The document does not provide any information regarding the number or qualifications of experts used for establishing ground truth, as this is a software verification and validation rather than a clinical reader study.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set: Since no reader study or expert review for ground truth establishment is described, there is no mention of an adjudication method.
- Multi-reader multi-case (MRMC) comparative effectiveness study, and effect size of human reader improvement with vs. without AI assistance: No MRMC comparative effectiveness study was reported or required. The submission is for a software post-processing tool, not an AI-assisted diagnostic tool that directly impacts human reader performance or diagnosis.
- Standalone (i.e., algorithm-only, without human-in-the-loop) performance: The performance verification for the new features (DCE Analysis and United Neuro) can be considered a form of standalone testing, as it demonstrates the algorithms' functional capabilities and outputs. However, the document does not provide quantitative metrics like accuracy, sensitivity, or specificity for these algorithms; it focuses on functional equivalence to existing cleared devices rather than a standalone performance benchmark against a clinical ground truth.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The document does not specify the type of "ground truth" used for the performance verification of the DCE Analysis and United Neuro features. Given the nature of a 510(k) for post-processing software with added functionalities, the "ground truth" likely refers to the expected algorithmic output or behavior as defined by engineering specifications and comparison to the outputs of the predicate/reference devices, rather than a clinical ground truth derived from expert consensus, pathology, or outcomes for disease detection.
- The sample size for the training set: The document does not mention any training set size. This is typical for submissions focused on software modifications and functional equivalence rather than de novo AI algorithm development requiring extensive data for training and validation.
- How the ground truth for the training set was established: As no training set is mentioned for a machine learning model, this information is not applicable/not provided.
(406 days)
uWS-MR
uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.
The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.
MR Vessel Analysis is intended to provide a tool for viewing and manipulating MR vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
uWS-MR is a comprehensive software solution designed to process, review and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data, as well as anatomical datasets. It can be at one or more time-points or include one or more time-frames. Multiple display formats including MIP and volume rendering and multiple statistical analysis including mean, maximum and minimum over a user-defined region is supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
The provided document is a 510(k) premarket notification for the uWS-MR software. It focuses on demonstrating substantial equivalence to predicate devices, rather than presenting a standalone study with detailed acceptance criteria and performance metrics for a novel AI-powered diagnostic device.
Therefore, much of the requested information regarding specific acceptance criteria, detailed study design for proving the device meets those criteria, sample sizes for test and training sets, expert qualifications, and ground truth establishment cannot be fully extracted from this document.
Here's what can be inferred and stated based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly list quantitative acceptance criteria in a pass/fail format nor provide specific, measurable performance metrics for the proposed device (uWS-MR) in comparison to such criteria. Instead, it relies on demonstrating substantial equivalence to predicate devices by comparing their features and functionalities. The "Remark" column consistently states "Same," indicating that the proposed device's features align with its predicates, implying it meets comparable performance.
Feature Type (Category) | Proposed Device (uWS-MR) Performance (Inferred) | Predicate Device (syngo.via/Reference Devices) Performance (Inferred to be matched) | Remark/Acceptance (Inferred) |
---|---|---|---|
General | |||
Device Classification Name | Picture Archiving and Communications System | Picture Archiving and Communications System | Same (Acceptable) |
Product Code | LLZ | LLZ | Same (Acceptable) |
Regulation Number | 21 CFR 892.2050 | 21 CFR 892.2050 | Same (Acceptable) |
Device Class | II | II | Same (Acceptable) |
Classification Panel | Radiology | Radiology | Same (Acceptable) |
Specification | |||
Image communication | Standard network protocols like TCP/IP and DICOM. Additional fast image. | Standard network protocols like TCP/IP and DICOM. Additional fast image. | Same (Acceptable) |
Hardware / OS | Windows 7 | Windows 7 | Same (Acceptable) |
Patient Administration | Display and manage image data information of all patients stored in the database. | Display and manage image data information of all patients stored in the database. | Same (Acceptable) |
Review 2D | Basic processing of 2D images (rotation, scaling, translation, windowing, measurements). | Basic processing of 2D images (rotation, scaling, translation, windowing, measurements). | Same (Acceptable) |
Review 3D | Functionalities for displaying and processing image in 3D form (VR, CPR, MPR, MIP, VOI analysis). | Functionalities for displaying and processing image in 3D form (VR, CPR, MPR, MIP, VOI analysis). | Same (Acceptable) |
Filming | Dedicated for image printing, layout editing for single images and series. | Dedicated for image printing, layout editing for single images and series. | Same (Acceptable) |
Fusion | Auto registration, Manual registration, Spot registration. | Auto registration, Manual registration, Spot registration. | Same (Acceptable) |
Inner View | Inner view of vessel, colon, trachea. | Inner view of vessel, colon, trachea. | Same (Acceptable) |
Visibility | User-defined display property of fused image: Adjustment of preset of T/B value; Adjustment of the fused. | User-defined display property of fused image: Adjustment of preset of T/B value; Adjustment of the fused. | Same (Acceptable) |
ROI/VOI | Plotting ROI/VOI, obtaining max/min/mean activity value, volume/area, max diameter, peak activity value. | Plotting ROI/VOI, obtaining max/min/mean activity value, volume/area, max diameter, peak activity value. | Same (Acceptable) |
MIP Display | Image can be displayed as MIP and rotating play. | Image can be displayed as MIP and rotating play. | Same (Acceptable) |
Compare | Load two studies to compare. | Load two studies to compare. | Same (Acceptable) |
Advanced Applications (Examples) | | | |
MR Brain Perfusion: Type of imaging scans | MRI | MRI | Same (Acceptable) |
MR Breast Evaluation: Automatic Subtraction | Yes | Yes | Same (Acceptable) |
MR Stitching: Automatic Stitching | Yes | Yes | Same (Acceptable) |
MR Vessel Analysis: Automatic blood vessel center lines extraction | Yes | Yes | Same (Acceptable) |
MRS: Single-voxel Spectrum Data Analysis | Yes | Yes | Same (Acceptable) |
MR Dynamic/MAPS: ADC and eADC Calculate | Yes | Yes | Same (Acceptable) |
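The ADC and eADC maps listed in the last comparison row follow standard diffusion-weighted MR formulas: with signal S0 at b = 0 and Sb at b-value b, ADC = ln(S0/Sb)/b and eADC = exp(-b·ADC) = Sb/S0. The sketch below shows the textbook two-point calculation; it is not a description of uWS-MR's internal implementation.

```python
import numpy as np

def adc_eadc(s0: np.ndarray, sb: np.ndarray, b: float):
    """Two-point ADC/eADC maps from diffusion-weighted MR signals.

    s0: signal at b = 0 s/mm^2; sb: signal at b-value `b`.
    ADC = ln(s0/sb) / b ; eADC = exp(-b * ADC) = sb/s0.
    Textbook formulas only, not uWS-MR's internal implementation.
    """
    s0 = np.asarray(s0, dtype=float)
    sb = np.asarray(sb, dtype=float)
    eps = 1e-12  # guard against division by zero and log(0)
    adc = np.log(np.maximum(s0, eps) / np.maximum(sb, eps)) / b
    eadc = np.exp(-b * adc)  # equals sb/s0 up to the clipping above
    return adc, eadc

# Example: signal halves between b = 0 and b = 1000 s/mm^2,
# so ADC = ln(2)/1000 mm^2/s and eADC = 0.5
adc, eadc = adc_eadc(np.array([1000.0]), np.array([500.0]), b=1000.0)
```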
2. Sample Size Used for the Test Set and the Data Provenance
The document states: "Software verification and validation testing was provided to demonstrate safety and efficacy of the proposed device." and lists "Performance Verification" for various applications. However, it does not specify the sample size used for these performance tests (test set) or the data provenance (e.g., country of origin, retrospective/prospective nature).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not provided in the document. The filing focuses on technical and functional equivalence, not on clinical performance evaluated against expert ground truth.
4. Adjudication Method
This information is not provided in the document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A multi-reader multi-case (MRMC) comparative effectiveness study was not mentioned as being performed or required. The submission is a 510(k) for substantial equivalence, which typically relies on technical verification and comparison to predicate devices rather than on new clinical effectiveness studies.
6. Standalone Performance
The document states: "Not Applicable to the proposed device, because the device is stand-alone software." This implies that the device is intended to perform its functions (viewing, manipulation, post-processing) as standalone software, and that its performance was evaluated in this standalone context during verification and validation, in line with the predicates, which are also software solutions. However, no specific standalone performance metrics are provided.
7. Type of Ground Truth Used
The document does not explicitly state the "type of ground truth" used for performance verification. Given the nature of the device (image post-processing software) and the 510(k) pathway, performance verification likely involved:
- Technical validation: Comparing outputs of uWS-MR's features (e.g., image stitching, parameter maps, ROI measurements) against known good results, simulated data, or outputs from the predicate devices.
- Functional testing: Ensuring features operate as intended (e.g., if a rotation function rotates the image correctly).
Pathology or outcomes data are typically used for diagnostic devices with novel clinical claims, which is not the primary focus here.
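Technical validation of the kind described above is often automated as tolerance checks of a feature's output against a known-good reference. The harness below is a minimal sketch under that assumption; the metric and tolerance are illustrative, not the acceptance criteria of this 510(k) filing.

```python
import numpy as np

def within_tolerance(output: np.ndarray, reference: np.ndarray,
                     rel_tol: float = 0.01) -> bool:
    """Pass/fail check of a processed image against a reference result.

    Illustrative harness only; the element-wise relative-error metric
    and the 1% tolerance are assumptions, not the filing's criteria.
    """
    diff = np.abs(output - reference)
    scale = np.maximum(np.abs(reference), 1e-12)  # avoid divide-by-zero
    return bool(np.all(diff / scale <= rel_tol))

# Example: functional test of a rotation feature against the
# expected 90-degree rotation of a small test image
image = np.array([[1.0, 2.0], [3.0, 4.0]])
expected = np.rot90(image)   # known-good result
result = np.rot90(image)     # output of the feature under test
ok = within_tolerance(result, expected)
```

A real verification suite would run such checks over a battery of test images per feature (stitching, parameter maps, ROI measurements) and record pass/fail results, which is consistent with the "Performance Verification" activities the filing lists without detailing.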
8. Sample Size for the Training Set
The concept of a "training set" applies to machine learning or AI models that learn from data. uWS-MR is post-processing software that could in principle incorporate AI elements (though none are explicitly stated beyond general "post-processing"), and the document does not mention a training set size. This strongly suggests that a machine learning or deep learning model with a distinct training phase, as commonly understood, was not a primary component evaluated in this filing, or that, if present, its training data details were not part of this summary.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned (see point 8), the establishment of its ground truth is not applicable and therefore not described in the document.