Search Results
Found 6 results
510(k) Data Aggregation
(109 days)
MeVis Medical Solutions AG
MeVis Liver Suite is an image analysis software application intended for the visualization of hepatic imaging studies derived from CT and MR scanning devices (image source: single- and multiframe DICOM).
MeVis Liver Suite supports physicians in their workflow for evaluating the liver, related vascular anatomy, and the volume of liver and liver vascular territories for treatment planning, preoperative evaluation of surgery strategies, and post-procedure or therapy follow-up assessment of the hepatic system and related vascular structures.
MeVis Liver Suite supports image analysis for multiphase contrast enhanced CT, dynamic contrast enhanced MRI, and MRCP. MeVis Liver Suite only allows import of homogeneous image series (i.e., image and pixel size must be constant within each series) with an orthogonal image matrix (without a gantry tilt). Images with a slice spacing greater than 5 mm are not suitable for image analysis with MeVis Liver Suite.
MeVis Liver Suite can be used for manual segmentation, user-defined manual labeling, and 3D visualization of:
- abdominal organs (i.e., liver, stomach, duodenum, spleen, kidney, gallbladder, and pancreas)
- liver related vascular structures (i.e., bile ducts, hepatic vein, portal vein, and inferior vena cava)
- lesions inside and adjacent to the liver
The tools for manual segmentation and 3D visualization are applicable for CT and MR imaging studies. In addition to the manual segmentation tools, MeVis Liver Suite provides AI-based semi-automatic pre-segmentation tools for the liver, hepatic artery, hepatic vein, and portal vein, restricted to CT scans of potential living liver donors with healthy livers and intended for:
- Liver: contrast enhanced late-venous and venous phase
- Hepatic vein: contrast enhanced late-venous and venous phase
- Portal vein: contrast enhanced late-venous and venous phase
- Hepatic artery: contrast enhanced arterial phase
Using MeVis Liver Suite, users can evaluate the segmented objects by exploring and manually correcting:
- the volume of the segmented abdominal organs (see above)
- the volume of the segmented lesions inside and adjacent to the liver
- the volume of the manually defined parts of the liver
  - by defining separation planes ("separation proposals")
  - from vascular territories that are derived from the user-defined labeling of the liver related vascular structures
- 3D visualizations of user-defined (vascular) tumor margins (coloring of an area based on user-defined margin sizes/distances between the edges of user-defined lesions and the edges of user-defined vascular structures of the liver)
- based on user-provided values, calculation of liver volume to body weight ratios (i.e., estimated weight for remnant or graft, body surface area, graft to recipient body weight ratio, graft to SLV ratio, remnant to body weight ratio).
Using a manual spatial registration of the images, the segmented objects (created on different CT phases and MRI sequences) can be visualized together.
The information created with MeVis Liver Suite is intended to be used only in addition to the original images, clinical data, and the real anatomical and clinical situation. Physicians make all final patient management and treatment decisions.
MeVis Liver Suite is not intended for the anatomical systems integumentary, skeletal, muscular, lymphatic, respiratory, nervous, reproductive, and cardiovascular (excluding hepatic).
MeVis Liver Suite does not support the following application areas: real-time viewing, diagnostic review, image manipulation, optimization, virtual colonoscopy, and automatic lesion detection.
MeVis Liver Suite does not utilize high-resolution displays or display drivers and should not be used as a replacement for a PACS workstation.
MeVis Liver Suite is an image analysis software application intended for the visualization of hepatic imaging studies derived from CT and MR scanning devices (image source: single- and multiframe DICOM).
MeVis Liver Suite supports physicians in their workflow for evaluating the liver, related vascular anatomy, and the volume of liver vascular territories for treatment planning, preoperative evaluation of surgery strategies, and post-procedure or therapy follow-up assessment of the hepatic system and related vascular structures.
The information created with MeVis Liver Suite is intended to be used only in addition to the original images, clinical data, and the real anatomical and clinical situation. Physicians make all final patient management, assessment, and treatment decisions.
Software and Operating System
MeVis Liver Suite is a standalone software application that can be installed on any PC running Windows 10 that meets the hardware requirements.
Supported Modalities
DICOM compatible CT and MR image data with or without contrast.
The tools for manual segmentation and 3D visualization (see below) are applicable for CT and MR image data, with the exception of the AI-based semi-automatic pre-segmentation tools. The semi-automatic pre-segmentation tools are restricted to CT scans of potential living liver donors with healthy livers.
Image import and selection
MeVis Liver Suite supports image analysis for multiphase contrast enhanced CT, dynamic contrast enhanced MRI, and MRCP. MeVis Liver Suite only allows import of homogeneous image series (i.e., image and pixel size must be constant within each series) with an orthogonal image matrix (without a gantry tilt). Images with a slice spacing greater than 5 mm are not suitable for image analysis with MeVis Liver Suite.
MeVis Liver Suite can be used to manually select DICOM images (CT and MR) for import. The user can visually inspect the images to verify that the anatomical structures are visible and that the image resolution and image quality are acceptable for manual segmentation and the user's needs.
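As a rough illustration (not the vendor's implementation), the documented import constraints could be checked with pydicom roughly as follows. The function name and tolerance are hypothetical, and the sketch assumes at least two single-frame, roughly axial slices:

```python
import numpy as np
import pydicom

def series_importable(paths, max_spacing_mm: float = 5.0) -> bool:
    """Sketch of the documented constraints: constant image and pixel size
    within the series, no gantry tilt, slice spacing of at most 5 mm."""
    slices = [pydicom.dcmread(p, stop_before_pixels=True) for p in paths]
    # Sorted by z position for simplicity (assumes a roughly axial stack).
    slices.sort(key=lambda d: float(d.ImagePositionPatient[2]))
    # Image and pixel size must be constant within the series.
    sizes = {(d.Rows, d.Columns, tuple(float(v) for v in d.PixelSpacing))
             for d in slices}
    if len(sizes) != 1:
        return False
    # Slice normal from the row/column direction cosines.
    iop = np.array(slices[0].ImageOrientationPatient, dtype=float)
    normal = np.cross(iop[:3], iop[3:])
    # Step between consecutive slice origins.
    step = (np.array(slices[1].ImagePositionPatient, dtype=float)
            - np.array(slices[0].ImagePositionPatient, dtype=float))
    spacing = float(np.linalg.norm(step))
    # A gantry tilt shows up as a step direction deviating from the normal.
    tilted = np.linalg.norm(np.cross(step / spacing, normal)) > 1e-3
    return (not tilted) and spacing <= max_spacing_mm
```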
Segmentation and 3D visualization
MeVis Liver Suite provides multiple contouring tools for manual segmentation.
The user has full control over the workflow and decides which structures to segment. 3D visualizations are created on demand. The following segmentation workflows are available:
- Abdominal organs
  MeVis Liver Suite is intended to be used to manually segment and visualize liver, stomach, duodenum, spleen, kidney, gallbladder, or pancreas using contouring tools. Users can inspect and manually correct, add, and delete their segmentation until they are satisfied with the result. The following tools are available:
  - Freehand contouring
  - Region Growing (see the sketch after this list)
- Liver related vascular structures
  MeVis Liver Suite is intended to be used to manually segment and visualize bile ducts, hepatic artery, hepatic vein, portal vein, and inferior vena cava, including the option to manually classify different vessels of the vascular branches by assigning them user-defined labels. Users can inspect and manually correct, add, and delete their segmentation until they are satisfied with the result. The following tools are available:
  - Freehand drawing and freehand contouring
  - Region Growing
  - Edit and label 3D tree
- Lesions inside and adjacent to the liver
  MeVis Liver Suite is intended to be used to manually segment, visualize, and label user-identified lesions inside and adjacent to the liver using:
  - Freehand contouring

MeVis Liver Suite does not identify or highlight lesions or other abnormalities.
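The document does not describe MeVis's region growing variant; as a generic illustration, a minimal intensity-tolerance region grower over a 3D volume (function name and tolerance semantics are assumptions) looks like this:

```python
from collections import deque
import numpy as np

def region_grow(volume: np.ndarray, seed: tuple, tolerance: float) -> np.ndarray:
    """Flood-fill style region growing: collect 6-connected voxels whose
    intensity stays within `tolerance` of the seed voxel's intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_value = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            # Stay inside the volume and visit each voxel at most once.
            if all(0 <= c < s for c, s in zip(n, volume.shape)) and not mask[n]:
                if abs(float(volume[n]) - seed_value) <= tolerance:
                    mask[n] = True
                    queue.append(n)
    return mask
```

In an interactive tool the seed comes from a user click and the tolerance from a slider; the resulting mask is then hand-corrected with the contouring tools.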
Additionally, the user can use AI-based semi-automatic pre-segmentation of the liver and liver related vascular structures to create a segmentation proposal. The semi-automatic pre-segmentation uses locked/non-adaptive AI networks.
- Semi-automatic pre-segmentation of the liver
  - Supported modalities
    - CT, contrast enhanced late-venous and venous phase
  - Limitations
    - Only intended for living donor liver transplantation cases (healthy livers)
- Semi-automatic pre-segmentation for liver related vascular structures
  - Supported modalities
    - Hepatic vein: CT, contrast enhanced late-venous and venous phase
    - Portal vein: CT, contrast enhanced late-venous and venous phase
    - Hepatic artery: CT, contrast enhanced arterial phase
  - Limitations
    - Only intended for living donor liver transplantation cases (healthy livers)
Evaluation of segmented objects
Users can evaluate the segmented objects by exploring and manually correcting:
- Volume
  MeVis Liver Suite calculates the volume (via voxel counting of the segmentation mask) and displays the volume information to the user for the following manually segmented objects:
  - Abdominal organs (i.e., liver, stomach, duodenum, spleen, kidney, gallbladder, pancreas)
  - Lesions inside and adjacent to the liver
  - Manually defined parts of the liver
    - "Separation proposals": The user can manually define separation planes that part the liver into virtual parts. The user manually labels the parts as either "resection", "graft" (coloring the 3D visualization of the user-defined part red), or "remnant" (green).
    - "Vascular territories": Using the user-defined labeling (name and color) of vessel subtrees, the software calculates segmentation masks for the corresponding vascular territories within the liver.
- 3D visualizations of user-defined (vascular) tumor margins (coloring of an area based on user-defined margin sizes/distances between the edges of user-defined lesions and the edges of user-defined vascular structures of the liver)
  - The distance of vascular structures to user-selected lesions can be visualized with colored 3D visualizations:
    - with color-coded vascular structures ("vascular tumor margins")
    - with color-coded voxels around a lesion inside the liver ("tumor margins")
- Based on user-provided values, calculation of liver volume to body weight ratios. MeVis Liver Suite provides the following calculations (illustrated in the sketch after this list):
  - Estimated Weight for Remnant and Graft
  - Body Surface Area
  - Graft to Recipient Body Weight Ratio
  - Graft to Standard Liver Volume (SLV) Ratio
  - Remnant Volume to Body Weight Ratio
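The document names these quantities without giving formulas. A minimal sketch of voxel-count volumetry and two of the ratios, using commonly published formulas (Du Bois body surface area; GRWR assuming a liver density of roughly 1 g/mL) that may differ from the product's actual choices:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(0.8, 0.8, 2.0)) -> float:
    """Volume of a binary segmentation mask by voxel counting.
    1 mL = 1000 mm^3; the default spacing is a hypothetical example."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

def bsa_dubois(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2 via the Du Bois formula (one common choice)."""
    return 0.007184 * height_cm**0.725 * weight_kg**0.425

def grwr_percent(graft_volume_ml: float, recipient_weight_kg: float) -> float:
    """Graft-to-recipient body weight ratio in percent, assuming graft
    weight is estimated from volume with a density of ~1 g/mL."""
    graft_weight_g = graft_volume_ml * 1.0  # 1 g/mL assumption
    return graft_weight_g / (recipient_weight_kg * 1000.0) * 100.0
```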
Manual spatial registration
MeVis Liver Suite supports performing a manual spatial registration of the images from different modalities and studies (CT and MR). The user can visually align the imported images in pairs, overlaid on one another, using manual rigid registration.
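The resampling step behind such a manual rigid alignment can be sketched as follows (illustrative only; function and parameter names are assumptions): slider-style rotation angles and a translation are turned into the output-to-input mapping that scipy uses to resample the moving image onto the fixed image grid.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def resample_rigid(moving: np.ndarray, euler_deg, shift_vox) -> np.ndarray:
    """Resample `moving` under a rigid (rotation + translation) transform."""
    rot = Rotation.from_euler("zyx", euler_deg, degrees=True).as_matrix()
    center = (np.asarray(moving.shape) - 1) / 2.0
    # affine_transform maps output voxel coordinates to input coordinates:
    # x_in = R @ (x_out - center) + center - shift, i.e. a rotation about
    # the volume center composed with a translation, as the inverse mapping.
    offset = center - rot @ center - np.asarray(shift_vox, dtype=float)
    return affine_transform(moving, rot, offset=offset, order=1)
```

After resampling, the two volumes share a grid and can be displayed fused, which is what lets segmentations from different CT phases and MRI sequences be visualized together.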
Reporting
Using MeVis Liver Suite, the user can report results of the image analysis in different formats (DICOM for archiving, with DICOM definitions for 2D segmentation and 3D volumes; HTML report).
Here's a breakdown of the acceptance criteria and study details for the MeVis Liver Suite device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance (MeVis Liver Suite) |
|---|---|---|
| Liver AI Pre-segmentation | Median DICE score similar to reference device (0.85) | Median: 0.98 |
| | Median 95% Hausdorff Distance (HD) similar to reference device (2.0 mm) | Median: 1.5 mm |
| Liver related vascular structures AI Pre-segmentation (HV, PV, HA) | Algorithms performed as expected (bench tests vs. ground truth). | Qualitative expert scoring: >80% of expert scores rated results as sufficiently accurate. |
| Volume Calculation Accuracy | Pass criteria for accuracy of volume calculation (against reference values in simulated phantom data) | Fulfilled |
| Volume Calculation Precision (Repeatability) | Pass criteria for precision of volume calculation (based on representative clinical test data) | Fulfilled |
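The DICE score and 95% Hausdorff distance in the table are standard segmentation-overlap metrics. The submission does not say how they were computed; a common implementation (a sketch, assuming non-empty boolean masks and known voxel spacing) is:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = int(a.sum()) + int(b.sum())
    return 2.0 * int(np.logical_and(a, b).sum()) / denom if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    # Surface voxels: members of the mask removed by one erosion step.
    return mask & ~binary_erosion(mask)

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance in physical units (mm)."""
    sa, sb = _surface(a.astype(bool)), _surface(b.astype(bool))
    # Distance from each surface voxel of one mask to the other mask's surface.
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```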
2. Sample Size and Data Provenance for Test Set (AI Pre-segmentation)
- Liver AI Pre-segmentation: Not explicitly stated, but the study used a "retrospective multi-center performance study" and "subgroup analysis indicates that the AI algorithm generalize well across all subgroups." More specific numbers like total cases or institutions are not provided.
- Liver related vascular structures AI Pre-segmentation (HV, PV, HA): "a representative clinical multi-center dataset" was used. The exact number of cases or patients is not specified.
- Data Provenance: Retrospective, multi-center. No specific countries are mentioned.
3. Number and Qualifications of Experts for Ground Truth (Test Set)
- Liver AI Pre-segmentation: Not explicitly stated how many experts established the ground truth for this metric.
- Liver related vascular structures AI Pre-segmentation (HV, PV, HA): "3 board certified surgeons/radiologists" were used for qualitative assessment. Their specific experience (e.g., "10 years of experience") is not detailed beyond "board certified".
4. Adjudication Method for the Test Set
- Liver AI Pre-segmentation: Not explicitly stated. The DICE score and Hausdorff Distance are quantitative metrics, implying a direct comparison to a ground truth, rather than an adjudication process between expert segmentations.
- Liver related vascular structures AI Pre-segmentation (HV, PV, HA): "3 board certified surgeons/radiologists" provided scores using a 5-point Likert scale. This implies a consensus or individual scoring approach, but a specific adjudication method (e.g., 2+1, 3+1) is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study demonstrating how much human readers improve with AI vs. without AI assistance was not reported. The performance data focuses on the standalone performance of the AI pre-segmentation algorithm.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was done for the AI-based semi-automatic pre-segmentation tools.
- For the liver, the standalone performance was assessed using DICE score and 95% Hausdorff Distance.
- For the liver-related vascular structures, bench tests comparing AI output with ground truth, and qualitative expert scoring on a clinical dataset, were performed.
7. Type of Ground Truth Used (Test Set)
- Liver AI Pre-segmentation: The metrics (DICE score, Hausdorff Distance) imply a quantitative comparison to an established "ground truth" segmentation. How this ground truth was derived (e.g., expert consensus, manual gold standard) is not explicitly detailed but typically for segmentation tasks, it involves expert manual annotation.
- Liver related vascular structures AI Pre-segmentation (HV, PV, HA): Bench tests compared AI output with "ground truth annotated by qualified experts." The qualitative assessment involved "expert scores." This suggests expert-annotated ground truth.
- Volume Calculation: "Digital simulated phantom data with a reference value" was used for accuracy, and "clinical test data" for precision. For simulations, the reference value would be the known volume of the phantom. For clinical data, the ground truth for volume calculation precision is implicitly based on repeatability tests against a reference process.
8. Sample Size for the Training Set
- The document does not provide any details regarding the sample size used for training the AI models.
9. How the Ground Truth for the Training Set was Established
- The document does not provide any details on how the ground truth for the training set was established.
(263 days)
MeVis Medical Solutions AG
Veolity is intended to:
- display a composite view of 2D cross-sections, and 3D volumes of chest CT images,
- allow comparison between new and previous acquisitions as well as abnormal thoracic regions of interest, such as pulmonary nodules,
- provide Computer-Aided Detection ("CAD") findings, which assist radiologists in the detection of solid pulmonary nodules between 4-30 mm in size in CT images with or without intravenous contrast. CAD is intended to be used as an adjunct, alerting the radiologist - after his or her initial reading of the scan - to regions of interest that may have been initially overlooked.
The system can be used with any combination of these features. Enabling is handled via licensing or configuration options.
Veolity is a medical imaging software platform that allows processing, review, and analysis of multi-dimensional digital images.
The system integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC). It can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.
Veolity is intended to support the radiologist in the review and analysis of chest CT data. Automated image registration facilitates the synchronous display and navigation of current and previous CT images for follow-up comparison.
The software enables the user to determine quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. Veolity automatically performs the measurements for segmented nodules, allowing lung nodules and measurements to be displayed. Afterwards nodule segmentation contour lines can be edited by the user manually with automatic recalculation of geometric measurements post-editing. Further, the application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings in order to determine growth patterns and compose comparative reviews.
Veolity requires the user to identify a nodule and to determine the type of nodule in order to use the appropriate characterization tools. Additionally, the software provides an optional/licensable CAD package that analyzes the CT images to identify findings with features suggestive of solid pulmonary nodules between 4-30 mm in size. The CAD is not intended as a detection aid for either part-solid or non-solid lung nodules. The CAD is intended to be used as an adjunct, alerting the radiologist – after his or her initial reading of the scan – to regions of interest that may have been initially overlooked.
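Growth assessment over serial studies is commonly summarized as a volume doubling time under an exponential-growth assumption. The document does not state Veolity's formula, so the standard version is shown here as a sketch:

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days_between: float) -> float:
    """Volume doubling time in days under exponential growth:
    VDT = t * ln(2) / ln(V2 / V1). Assumes positive volumes and V2 != V1."""
    return days_between * math.log(2.0) / math.log(v2_mm3 / v1_mm3)

# Example: a nodule growing from 400 mm^3 to 520 mm^3 over 90 days
# has a doubling time of roughly 238 days.
print(round(volume_doubling_time(400.0, 520.0, 90.0)))
```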
The provided text describes the MeVis Medical Solutions AG's Veolity device and its 510(k) submission (K201501). However, the document states: "N/A - No clinical testing has been conducted to demonstrate substantial equivalence." This means that detailed acceptance criteria tables, sample sizes for test sets, expert qualifications, etc., as requested in your prompt, are not explicitly provided in the document for a new clinical study.
The submission claims substantial equivalence based on the device being a combination of previously cleared predicate and reference devices. It asserts that the individual functionalities remain technically unchanged. The performance assessment for the CAD system relies on prior panel review results from the initial submission of the predicate device and a re-evaluation with a multi-center dataset designed to be comparable to the predicate device's clinical study.
Therefore, I cannot fully complete all sections of your request with specific details from this document regarding a new study demonstrating the device meets acceptance criteria. I will instead extract the information that is present and highlight the limitations.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table of quantitative acceptance criteria for the current submission's performance study. It states that the subject device's CAD system performance "provides equal results in terms of sensitivity and false positive rates compared to the primary predicate device."
It re-evaluated performance "in terms of sensitivity and false positive rate per case" and found it "equivalent to the primary predicate device."
- Acceptance Criteria (Implied): Equivalence in sensitivity and false positive rates per case compared to the primary predicate device (ImageChecker CT CAD Software System K043617).
- Reported Device Performance: Stated as "equivalent" to the primary predicate device's performance, which was based on its own initial submission. No specific numerical values for sensitivity or false positive rates are provided for Veolity.
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated for the "re-evaluated" performance study. The text mentions it was conducted with "a multi-center dataset."
- Data Provenance: "modern and multivendor CT data." The document does not specify the country of origin or whether it was retrospective or prospective. Given it's a re-evaluation designed to be comparable to a predicate's clinical study, it's likely retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not explicitly stated for the "re-evaluated" performance study. The document refers to "panel review results" from the initial submission of the predicate device for its performance assessment. It does not provide details on the number or qualifications of experts for Veolity's re-evaluation.
4. Adjudication method for the test set
Not explicitly stated. Given the reliance on prior predicate studies, this information would likely be found in the predicate's 510(k) submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated for this submission.
- Effect Size: Not applicable, as no such study is described. The CAD's indication for use states it is "intended to be used as an adjunct, alerting the radiologist - after his or her initial reading of the scan - to regions of interest that may have been initially overlooked." This implies an assistive role, but no data on human-AI collaboration improvement is presented here.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation of the CAD algorithm itself was done. The document states: "the subject device's CAD system performance provides equal results... in terms of sensitivity and false positive rates." This implies an algorithm-only evaluation against ground truth.
7. The type of ground truth used
The document does not explicitly state the method for establishing ground truth for the re-evaluated dataset. However, given that it's comparing against "sensitivity and false positive rates" for solid pulmonary nodules, the ground truth would typically be established by expert consensus (e.g., highly experienced radiologists, often with follow-up or pathology correlation if available within the original study design of the predicate).
For the predicate device, it mentions "panel review results," which strongly suggests expert consensus.
8. The sample size for the training set
The document does not provide any information about the training set used for the Veolity CAD algorithm. It only discusses performance evaluations.
9. How the ground truth for the training set was established
Not provided in the document.
(90 days)
MEVIS MEDICAL SOLUTIONS AG
Visia™ is a medical image processing software application intended for the visualization of images from various sources (e.g., Computed Tomography (CT), Magnetic Resonance (MR), etc). The system provides viewing, quantification, manipulation, communication, and printing of medical images. Visia™ is not meant for primary diagnostic interpretation of mammography.
Visia™ is a medical imaging software platform that allows processing, review, and analysis of multidimensional digital images acquired from a variety of medical imaging modalities. Visia™ offers flexible workflow options to aid clinicians in the evaluation of patient anatomy and pathology. The Visia™ system integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server client configuration across a computer network. Images can be displayed based on physician preferences using configurable viewing options or hanging protocols. Visia™ provides the clinician with a broad set of viewing and analysis tools to annotate, measure, and output selected image views or reports.
The provided 510(k) summary for MeVis Medical Solutions AG's Visia™ device primarily focuses on demonstrating substantial equivalence to a predicate device (Vitrea® K071331) and discusses general nonclinical testing.
Based on the provided text, there is no detailed information about specific acceptance criteria, a dedicated study proving performance against these criteria, or the methodology typically found in studies for AI/CADe devices. The device, Visia™, is described as a medical image processing software platform for visualization, quantification, manipulation, and printing, rather than an AI/CADe device that performs diagnostic interpretations or provides automated analysis results for which detailed performance metrics would be assessed. The submission explicitly states: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "Visia™ is not meant for primary diagnostic interpretation of mammography."
Therefore, I cannot populate most of the requested fields as the information is not present in the provided document.
Here's an assessment based on the available information:
1. Table of acceptance criteria and the reported device performance:
| Acceptance Criteria (Not explicitly stated or quantitative for performance) | Reported Device Performance (General statements from submission) |
|---|---|
| Functional Equivalence to Predicate Device | The submission states: "Nonclinical and performance testing results are provided in the 510(k) and demonstrate that the predetermined acceptance criteria are met." It also claims "The design, function, and specifications of Visia™ are similar to the identified legally marketed predicate device." And "The new device and predicate devices are substantially equivalent in the areas of technical characteristics, general function, and intended use. The new device does not raise any new potential safety risks and is equivalent in performance to the existing legally marketed devices." The validation test plan "was designed to evaluate all input functions, output functions, and actions performed by the software in each operational mode" and "passed all in-house testing criteria including validating design, function, and specifications." |
| Safety and Effectiveness | "The Visia™ labeling contains instructions for use and necessary cautions, warnings and notes to provide for safe and effective use of the device." "Risk Management is ensured via MeVis Medical Solution AG's Risk Management procedure, which is used to identify potential hazards." "These potential hazards are controlled via software development and verification testing." And "Nonclinical tests demonstrate that the device is safe, effective, and is substantially equivalent to the predicate device." |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified. The document refers generally to "nonclinical and performance testing" and a "Validation Test Plan" but does not provide details on the number of cases or datasets used.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not specified. The device is a viewing/processing platform, not a diagnostic aid that establishes "ground truth." The submission states "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians."
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable/Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC study was not described or implied. The device is explicitly stated as "not meant for primary diagnostic interpretation of mammography" and does not perform diagnosis; therefore, a study on human reader improvement with AI assistance would not be relevant to its intended use as described.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:
- Not applicable. The device is a "medical imaging software platform" that provides "viewing, quantification, manipulation, and printing." It is a tool for clinicians, not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Not applicable. As a viewing and processing software, the concept of "ground truth" for its performance isn't in the same vein as for a diagnostic AI. The "ground truth" for showing it works would likely relate to the accuracy of its display, measurements, and functional operations compared to predefined specifications or those of the predicate device, which isn't detailed in terms of a specific "ground truth" type here.
8. The sample size for the training set:
- Not applicable. This device is described as general medical image processing software, not an AI/ML algorithm that requires a training set in the conventional sense.
9. How the ground truth for the training set was established:
- Not applicable.
Summary of Device and Evidence provided:
The Visia™ device is a medical image processing software platform. Its 510(k) submission primarily relies on demonstrating substantial equivalence to a predicate device (Vitrea® K071331) and on internal "nonclinical and performance testing" to validate its design, function, and specifications. The criteria for acceptance appear to be primarily functional verification against design requirements and comparison to the predicate device's capabilities, rather than quantitative performance metrics for a diagnostic aid that would analyze medical images or perform AI tasks. The submission emphasizes that the software does not perform diagnosis and is not intended for primary diagnostic interpretation.
(39 days)
MEVIS MEDICAL SOLUTIONS AG
Visia Oncology is a medical software application intended for the visualization of images from a variety of image devices. The system provides viewing, quantification, manipulation, and printing of medical images. Visia Oncology is a noninvasive image analysis software package designed to support the physician in routine diagnostic oncology, staging and follow-up. Flexible layouts and automated image registration facilitate the synchronous display and navigation of multiple datasets for viewing data and easy follow-up comparison. The application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings. The integrated reporting helps the user to track findings and note changes, such as shape or size, over time.
Visia™ Oncology is a noninvasive medical image processing software application intended for the visualization of images from various sources such as Computed Tomography systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images. Visia™ Oncology integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network. Visia™ Oncology is designed to support the physician in routine diagnostic oncology, staging and follow-up. Flexible layouts and automated image registration facilitate the synchronous display and navigation of multiple datasets for viewing data and easy follow-up comparison. The application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings. The integrated reporting helps the user to track findings and note changes, such as shape or size, over time.
The provided text indicates that "Visia™ Oncology" is a medical image processing software. However, the document does not contain specific acceptance criteria, a detailed study description, or performance metrics for the device. Instead, it focuses on demonstrating substantial equivalence to predicate devices for regulatory clearance.
Therefore, I cannot provide a table of acceptance criteria and reported device performance, nor details about a specific study proving the device meets acceptance criteria, an MRMC study, standalone performance, or training/test set details based on this document.
Here's what can be extracted based on the information provided, assuming the "nonclinical testing" mentioned broadly refers to the evaluation of the device:
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified | Not specified |
Explanation: The document states that "Validation testing indicated that as required by the risk analysis, designated individuals performed all verification and validation activities and that the results demonstrated that the predetermined acceptance criteria were met." However, the specific acceptance criteria themselves (e.g., minimum accuracy for a particular task, specific tolerance for volumetric measurements, success rate for image registration) and the actual reported performance metrics against those criteria are not detailed in this 510(k) summary.
2. Sample size used for the test set and the data provenance:
- Sample size for the test set: Not specified.
- Data provenance: Not specified (e.g., country of origin, retrospective or prospective). The document only mentions "images from various sources such as Computed Tomography systems or from image archives."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of experts: Not specified.
- Qualifications of experts: Not specified beyond the general statement that "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." There's no mention of specific experience levels or board certifications for anyone involved in establishing ground truth for testing.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC study done: No, not mentioned in the document. The document describes the software as a tool to "support the physician" and provides "interactive tools," but it doesn't detail a study measuring improvement in human reader performance with or without the AI assistance.
- Effect size of improvement: Not applicable, as no MRMC study is detailed.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The document implies the software is a standalone application but doesn't describe a standalone performance study of the algorithm itself in isolation from human interpretation. It emphasizes that "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." So, if "standalone" refers to the algorithm making independent diagnoses or interpretations without human oversight, then no such study is described, as that is explicitly not its intended use.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified.
8. The sample size for the training set:
- Not specified. The document does not describe a machine learning training process or a training set.
9. How the ground truth for the training set was established:
- Not applicable, as no training set or machine learning model requiring ground truth for training is described. The device is characterized as "medical image processing software" that provides "viewing, quantification, manipulation, and printing." While it has "automated image registration" and "interactive tools specifically designed for segmentation and volumetric analysis," the underlying methods are not detailed as AI/ML that would require a distinct training phase.
(62 days)
MEVIS MEDICAL SOLUTIONS AG
Visia™ Neuro is a medical image processing software application intended for the visualization of images from various sources such as Magnetic Resonance Imaging systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images. Visia™ Neuro provides both analysis and viewing capabilities for anatomical and physiologic/functional imaging datasets, including blood oxygen dependent (BOLD) fMRI, diffusion, fiber tracking, dynamic review, and vessel visualization. Data can be visualized in both 2D and 3D views.
BOLD fMRI Review: The BOLD fMRI feature is useful in identifying small susceptibility changes arising from neuronal activity during performance of a specific task.
Diffusion Review: The diffusion review feature is intended for visualization and analysis of the diffusion of water molecules through brain tissue.
Fiber Tracking Review: The fiber tracking feature uses the directional portion of the diffusion vector to track and visualize white matter structures within the brain.
Dynamic Review: Dynamic review feature is intended for visualization and analysis of MRI dynamic studies, showing changes in contrast over time, where such techniques are useful or necessary.
Vessel Visualization: The vessel feature is used to identify and visualize the vascular structures of the brain.
3D Visualization: The 3D visualization feature allows image data to be reconstructed as 3D objects that are visualized and manipulated on a 2D screen.
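The document does not name the specific diffusion quantities Visia™ Neuro computes. As one generic illustration of diffusion-tensor analysis, fractional anisotropy (FA), a standard per-voxel measure derived from the tensor's eigenvalues, can be computed as follows (a sketch, not the product's method):

```python
import numpy as np

def fractional_anisotropy(evals: np.ndarray) -> np.ndarray:
    """FA per voxel from diffusion-tensor eigenvalues stacked on the last
    axis (shape [..., 3]); zero where the tensor vanishes."""
    mean = evals.mean(axis=-1, keepdims=True)
    num = np.sqrt(((evals - mean) ** 2).sum(axis=-1))
    den = np.sqrt((evals ** 2).sum(axis=-1))
    return np.sqrt(1.5) * np.divide(num, den,
                                    out=np.zeros_like(num), where=den > 0)
```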
Visia™ Neuro is a medical image processing software application intended for the visualization of images from various sources such as Magnetic Resonance Imaging systems or from image archives. The system provides viewing, quantification, manipulation, and printing of medical images.
Visia™ Neuro integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.
The software provides functionality for processing and analyzing both anatomical and physiologic/functional imaging datasets. Specifically, the software includes user defined processing modules for image registration, blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI), diffusion imaging, fiber tracking, dynamic imaging, and vessel imaging. Processed images are stored as separate files from the original data such that the original data is preserved.
Images may be displayed based on physician preferences using configurable layouts, or hangings. Visia™ Neuro provides the clinician with a broad set of viewing and analysis tools in both 2D and 3D. The software includes tools to annotate, measure, and output selected image views or user defined reports.
The provided documentation for Visia™ Neuro does not contain specific acceptance criteria or a detailed study that proves the device meets such criteria in terms of quantitative performance metrics for medical diagnosis or image interpretation.
Instead, the submission focuses on demonstrating substantial equivalence to a predicate device (DC Neuro, K081262) through non-clinical testing and verification/validation activities of the software itself. The document states that the software passed "all in-house testing criteria" and that "the results demonstrated that the predetermined acceptance criteria were met." However, these acceptance criteria are not explicitly defined in terms of clinical performance (e.g., accuracy, sensitivity, specificity for identifying pathologies).
Here's a breakdown of the information that is available in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
No specific clinical acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, AUC) are mentioned. The document primarily discusses functional and technical acceptance criteria related to software performance and safety.
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Software Functionality | Passed all in-house testing criteria for input functions, output functions, and actions in each operational mode. |
| Safety and Effectiveness | Risk management procedures identified potential hazards, which were controlled via software development and verification & validation testing. |
| Technological Characteristics (Substantial Equivalence) | Substantially equivalent to the predicate device (DC Neuro, K081262) in technical characteristics, general function, application, and intended use. Does not raise new safety risks. |
2. Sample size used for the test set and the data provenance:
- Test Set Description: The document refers to "the complete system configuration" being "assessed and tested at the manufacturer's facility." It also mentions "Validation Test Plan" results.
- Sample Size: Not specified. It only refers to "all verification activities."
- Data Provenance: Not specified, but given it was "in-house testing," it's likely internal, potentially simulated or based on historical data readily available to the manufacturer. It doesn't specify if it's retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document states: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "A physician, providing ample opportunity for competent human intervention, interprets images and information being displayed and printed."
- However, for the validation testing of the software itself, there's no mention of experts establishing ground truth for evaluating diagnostic performance. The validation appears to be focused on software functionality and technical aspects rather than clinical outcome.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
Not applicable or not mentioned, as the validation described is for software functionality and not for diagnostic accuracy requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
Not applicable. The document describes a software medical device for visualization and analysis, not an AI-assisted diagnostic tool requiring MRMC studies to assess human reader improvement.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The device is explicitly described as a tool for visualization and analysis, with diagnoses made by physicians. Therefore, a "standalone algorithm only" performance study in a diagnostic context is not relevant to its intended use as described. The software's performance was evaluated in terms of its functions, not its diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
No ground truth type is specified for evaluating the device's clinical performance because the validation described is centered on software functionality and technical equivalence.
8. The sample size for the training set:
Not applicable. This is a medical image processing software application, not a machine learning or AI algorithm that requires a "training set" in the context of learning to perform a diagnostic task.
9. How the ground truth for the training set was established:
Not applicable, as there is no mention of a training set for an AI/ML algorithm.
(46 days)
MEVIS MEDICAL SOLUTIONS AG
Visia™ Dynamic Review is a software package intended for use in viewing and analyzing magnetic resonance imaging (MRI) studies. Visia™ Dynamic Review supports evaluation of dynamic MR data.
Visia™ Dynamic Review automatically registers serial images to minimize the impact of patient motion and visualizes different enhancement characteristics (parametric image maps). Furthermore, it performs other user-defined post-processing functions such as image subtractions, multi-planar reformats, and maximum intensity projections. The resulting information can be displayed in a variety of formats, including a parametric image overlaid onto the source image. Visia™ Dynamic Review can also be used to provide measurements for diameters, areas and volumes. Furthermore, Visia™ Dynamic Review can evaluate the uptake characteristics of segmented tissues.
Visia™ Dynamic Review also displays images from a number of other imaging modalities; however, these images must not be used for primary diagnostic interpretation.
When interpreted by a skilled physician, Visia™ Dynamic Review provides information that may be useful in diagnosis. Patient management decisions should not be made based solely on the results of Visia™ Dynamic Review analysis.
Visia™ Dynamic Review is a software package intended for use in viewing and analyzing magnetic resonance imaging (MRI) studies. Visia™ Dynamic Review supports evaluation of dynamic MR data.
Visia™ Dynamic Review integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard offthe-shelf personal computer (PC) and can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.
Visia™ Dynamic Review automatically registers serial images to minimize the impact of patient motion and visualizes different enhancement characteristics (parametric image maps). Furthermore, it performs other user-defined post-processing functions such as image subtractions, multi-planar reformats, and maximum intensity projections.
The resulting information can be displayed in a variety of formats, including a parametric image overlaid onto the source image. Images can also be displayed based on physician preferences using configurable viewing options or hanging protocols.
Visia™ Dynamic Review provides the clinician with a broad set of viewing and analysis tools to annotate, measure, and output selected image views or user defined reports. Furthermore, Visia™ Dynamic Review can evaluate the uptake characteristics of segmented tissues.
Visia™ Dynamic Review also displays images from a number of other imaging modalities; however, these images must not be used for primary diagnostic interpretation.
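The parametric maps, subtractions, and maximum intensity projections described above are standard operations on dynamic MR volumes. A minimal numpy sketch (illustrative only; the relative-enhancement definition is one common choice, not necessarily the product's):

```python
import numpy as np

def relative_enhancement(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Parametric map of relative signal enhancement: (S_post - S_pre) / S_pre.
    A small floor on S_pre guards against division by zero in background air."""
    pre = pre.astype(np.float32)
    return (post.astype(np.float32) - pre) / np.maximum(pre, 1e-6)

def subtraction(pre: np.ndarray, post: np.ndarray) -> np.ndarray:
    """Voxel-wise subtraction of a pre-contrast from a post-contrast volume."""
    return post.astype(np.float32) - pre.astype(np.float32)

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of a volume along one axis."""
    return volume.max(axis=axis)
```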
Here's an analysis of the provided text regarding the acceptance criteria and study for the Visia™ Dynamic Review device, structured according to your requested information:
1. A table of acceptance criteria and the reported device performance
Based on the provided K113337 510(k) Summary, specific quantitative acceptance criteria and corresponding reported device performance values are not explicitly detailed. The document focuses on the regulatory submission process and general affirmations of safety and effectiveness through nonclinical testing.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly defined in the provided document. The document states: "The Validation Test Plan was designed to evaluate all input functions, output functions, and actions performed by the software in each operational mode and followed the process documented in the Validation Test Plan." And "Validation testing indicated that as required by the risk analysis, designated individuals performed all verification activities and that the results demonstrated that the predetermined acceptance criteria were met." | No specific quantitative performance metrics are provided. The document states: "The complete system configuration has been assessed and tested at the manufacturer's facility and has passed all in-house testing criteria." And "Nonclinical tests demonstrate that the device is safe, effective, and is substantially equivalent to the predicate device." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The provided K113337 510(k) Summary does not specify the sample size used for any test set or the data provenance (country of origin, retrospective/prospective). It only mentions "in-house testing criteria" and "Validation Test Plan."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The provided K113337 510(k) Summary does not provide information on the number or qualifications of experts used to establish ground truth for any test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The provided K113337 510(k) Summary does not describe any adjudication method used for a test set.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not described or performed in the provided K113337 510(k) Summary. The device is software for viewing and analyzing MRI studies, and the document explicitly states, "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." There is no mention of AI assistance or human reader improvement with or without AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The K113337 510(k) Summary does not explicitly describe a standalone performance study in terms of quantitative metrics. It states that the "complete system configuration has been assessed and tested at the manufacturer's facility and has passed all in-house testing criteria." However, it clarifies that the software's role is to provide information for a skilled physician's interpretation, rather than performing diagnosis independently: "Diagnosis is not performed by the software but by Radiologists, Clinicians and referring Physicians." and "A physician, providing ample opportunity for competent human intervention interprets images and information being displayed and printed."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The provided K113337 510(k) Summary does not specify the type of ground truth used for its internal testing or validation studies.
8. The sample size for the training set
The provided K113337 510(k) Summary does not mention a training set sample size. This is consistent with the nature of the device, which is described as "Medical Image Processing Software" that performs functions like motion registration, parametric image mapping, subtractions, and multi-planar reformats. These are generally rule-based or algorithmic image processing tasks rather than machine learning models that require explicit training sets in the typical sense.
9. How the ground truth for the training set was established
As no training set is mentioned or implied for the core functionalities of the device, the document does not describe how ground truth for a training set was established.