510(k) Data Aggregation (267 days)
MediSystem PACS is intended for use as a diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and report information. MediSystem PACS displays and manages diagnostic quality DICOM images. MediSystem PACS is not intended for diagnostic use with mammography images. Usage for mammography is for referral only. MediSystem PACS is not intended for diagnostic use on mobile devices.
Contraindications: MediSystem PACS is not intended for the acquisition of mammographic image data and is meant to be used by qualified medical personnel.
The MediSystem PACS is a Web-based DICOM medical image viewer that allows downloading, reviewing, manipulating, visualizing, and printing multi-modality medical image data in DICOM format from a client machine. MediSystem PACS is a server-based solution that connects to any PACS and displays DICOM images within the hospital, securely from remote locations, or as an integrated part of an EHR or portal. MediSystem PACS enables health professionals to access, manipulate, and measure DICOM images and collaborate in real time over full-quality medical images using any supported web browser without installing client software. Supported web browsers are Microsoft Edge 79 or later, Mozilla Firefox 58 or later, and Google Chrome 63 or later.
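For illustration only (this is not the MediSystem PACS implementation), the following Python sketch shows the kind of DICOM handling such a viewer performs: reading a DICOM object with the pydicom library, applying Rescale Slope/Intercept if present, and mapping stored pixel values to 8-bit display values with a simple linear window/level. The file name and window values are hypothetical.

```python
# Minimal sketch of DICOM loading and window/level display mapping.
# Not the vendor's code; file path and window values are hypothetical.
import numpy as np
import pydicom

def window_image(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map pixel values to 8-bit display values with a simple linear window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(pixels.astype(np.float64), lo, hi)
    return ((clipped - lo) / max(hi - lo, 1e-9) * 255.0).astype(np.uint8)

ds = pydicom.dcmread("example_ct_slice.dcm")            # hypothetical file
raw = ds.pixel_array.astype(np.float64)

# Apply Rescale Slope/Intercept if present (e.g. CT Hounsfield conversion).
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
rescaled = raw * slope + intercept

display = window_image(rescaled, center=40.0, width=400.0)  # typical soft-tissue window
```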
This document is a 510(k) summary for the MediSystem PACS device, which is a Medical Image Management and Processing System. The purpose of this summary is to demonstrate substantial equivalence to a predicate device (MediLab, K221065). The document primarily focuses on comparing the features and intended use of the new device to the predicate device, rather than providing detailed acceptance criteria and performance data from a standalone or MRMC study.
Based on the information provided in the document, here's what can be extracted and what is not available:
1. A table of acceptance criteria and the reported device performance:
The document does not provide a formal table of acceptance criteria with specific quantitative targets and corresponding reported device performance for clinical endpoints (e.g., accuracy, sensitivity, specificity for diagnostic tasks). Instead, it focuses on demonstrating equivalence in technical characteristics and functional performance to a predicate device.
The only quantitative performance criteria mentioned are for measurement accuracy, which is considered a non-clinical performance test.
| Acceptance Criteria (Non-clinical) | Reported Device Performance (Non-clinical) |
|---|---|
| Measurement accuracy for distance: 1% | 1% for distance |
| Measurement accuracy for angle: 2% | 2% for angle |
| Measurement accuracy for area: 2% | 2% for area |
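The document does not describe the test protocol behind these criteria, but as a hedged sketch of how such a tolerance check could be performed, the example below compares a software-reported measurement against a known reference value and requires the relative error to stay within the stated limit. All numbers are hypothetical.

```python
# Hypothetical pass/fail check against the stated measurement-accuracy criteria.
TOLERANCES = {"distance": 0.01, "angle": 0.02, "area": 0.02}  # 1%, 2%, 2%

def within_tolerance(measured: float, reference: float, kind: str) -> bool:
    """Return True if the relative error meets the acceptance criterion."""
    relative_error = abs(measured - reference) / abs(reference)
    return relative_error <= TOLERANCES[kind]

# Example: a 50.0 mm reference distance reported as 50.3 mm (hypothetical values).
assert within_tolerance(50.3, 50.0, "distance")  # 0.6% error, within the 1% criterion
```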
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The document refers to "Performance Testing (Measurement Accuracy)" but does not specify the number of images or measurements used in this testing.
- Data Provenance: Not explicitly stated. Given the nature of the device (a PACS viewer), the "data" for measurement accuracy testing would typically be DICOM images designed to test measurement capabilities, not necessarily patient data with specific clinical outcomes. The document does not specify the origin of these test images. It is focused on the device's ability to display and measure images in DICOM format, not on its diagnostic efficacy for specific pathologies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable for the type of testing described. The "Performance Testing (Measurement Accuracy)" would likely involve comparing software measurements against known, true physical measurements or highly accurate digital representations, not expert consensus on medical findings. There is no indication that expert radiologists were involved in setting "ground truth" for this specific measurement accuracy testing.
4. Adjudication method for the test set:
- Not applicable for the type of testing described. Since the testing described is focused on objective measurement accuracy rather than subjective diagnostic interpretation, adjudication among experts is not relevant here.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The device (MediSystem PACS) is a Medical Image Management and Processing System, primarily a viewer and management tool, not an AI-powered diagnostic aid. Its function is to display diagnostic-quality images and provide tools for manipulation and measurement by qualified medical personnel. Therefore, a study assessing human reader improvement with "AI assistance" is not relevant to this device's intended use or claims.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:
- Yes, a form of "standalone" performance testing was done for measurement accuracy. The document states: "Performance Testing (Measurement Accuracy) was conducted on the MediSystem PACS to determine measurement accuracy when performing the various distance, area and angle measurements." This evaluates the software's ability to perform these measurements accurately, independent of a human user's diagnostic interpretation.
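For context on what "distance, area and angle measurements" typically involve in a DICOM viewer (an illustrative sketch, not the vendor's algorithm), physical measurements are usually derived from pixel coordinates and the image's Pixel Spacing attribute, as below. The spacing values and point coordinates are hypothetical.

```python
# Illustrative measurement helpers based on pixel coordinates and Pixel Spacing
# (row spacing, column spacing, in mm). Spacing and coordinates are hypothetical.
import math

def distance_mm(p1, p2, spacing=(0.5, 0.5)):
    """Euclidean distance between two (row, col) pixel points, in millimetres."""
    dr = (p1[0] - p2[0]) * spacing[0]
    dc = (p1[1] - p2[1]) * spacing[1]
    return math.hypot(dr, dc)

def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by rays to `a` and `b`, in degrees.
    Assumes square (isotropic) pixels; scale coordinates first if not."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def polygon_area_mm2(points, spacing=(0.5, 0.5)):
    """Shoelace area of a closed polygon of (row, col) pixel points, in mm^2."""
    pts = [(r * spacing[0], c * spacing[1]) for r, c in points]
    s = sum(pts[i][0] * pts[(i + 1) % len(pts)][1] -
            pts[(i + 1) % len(pts)][0] * pts[i][1] for i in range(len(pts)))
    return abs(s) / 2.0

print(distance_mm((0, 0), (0, 100)))                               # 50.0 mm
print(angle_deg((0, 0), (0, 10), (10, 0)))                         # 90.0 degrees
print(polygon_area_mm2([(0, 0), (0, 100), (100, 100), (100, 0)]))  # 2500.0 mm^2
```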
7. The type of ground truth used:
- For the measurement accuracy testing, the ground truth would typically be known, precise measurements of the objects within the test images. This could be established through physical calibration, highly accurate digital models, or other metrology standards, not expert consensus, pathology, or outcomes data, as the device is not making a diagnostic claim.
8. The sample size for the training set:
- Not applicable. The MediSystem PACS is a PACS viewer and management system, not an AI/ML algorithm that requires a "training set" in the conventional sense for learning patterns from data to make predictions or classifications. Its development is based on software engineering principles and functionality equivalence to a predicate, not machine learning model training.
9. How the ground truth for the training set was established:
- Not applicable. (See point 8).
Summary of Device Performance and Equivalence:
The document focuses on demonstrating substantial equivalence to a predicate device (MediLab, K221065) based on:
- Similar intended use: Both devices are used as diagnostic and analysis tools for medical images, display and manage DICOM images, and are not for mammography (except referral) or mobile devices.
- Similar technical characteristics and functionality: The document provides a detailed comparison table showing identical features such as DICOM image loading/visualization, patient study search, user authentication, image display operations (window level, rotate/pan/zoom, flip, layout, PET fusion, volumetric rendering), measurement functions (line, angle, area), annotations, report generation, export, DICOM windowing, supported imaging modalities (US, CT, MRI, X-ray), DICOM communications (see the query sketch after this list), and supported operating systems/browsers.
- Non-clinical performance testing: Measurement accuracy for distance (1%), angle (2%), and area (2%) was verified as comparable to the predicate device.
- Software Verification and Validation: The software was considered a "moderate" level of concern, and V&V documentation was provided as per FDA guidance. This ensures the software functions as intended and does not pose undue risks.
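As one hedged example of the "patient study search" and DICOM communication features listed above (not the MediSystem PACS code, and assuming the open-source pynetdicom library), a study-level C-FIND query against a PACS might look like the following; the host, port, and patient name are hypothetical.

```python
# Hypothetical study-level DICOM C-FIND query using pynetdicom.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE()
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientName = "DOE^JOHN"        # hypothetical patient
query.StudyInstanceUID = ""           # ask the PACS to return this attribute

assoc = ae.associate("127.0.0.1", 11112)  # hypothetical PACS host/port
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, PatientRootQueryRetrieveInformationModelFind):
        # 0xFF00/0xFF01 are "Pending" statuses carrying a matching study.
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```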
The conclusion is that "there are no differences between the devices that affect the usage, safety and effectiveness, thus no new question is raised regarding the safety and effectiveness." The submission relies heavily on the "feature comparison" and the fact that the new device replicates the functionality and performance of the legally marketed predicate device.