510(k) Data Aggregation
(177 days)
ADVANCED VIEWER
Advanced Viewer is web-based software for medical professionals that provides tools for the secure online review of medical (DICOM) images, including measurement functions and the display of voxel objects.
It is not intended for detailed treatment planning, treatment of patients or the review of mammographic images. It is also not intended to be used on mobile systems.
Advanced Viewer is integrated into Quentry, an online collaboration platform for sharing, discussing, and transferring medical image data. The viewer can visualize medical (DICOM) images that have previously been uploaded to the platform.
Quentry is a software platform consisting of server-based components that provide functions for the transfer and storage of medical data, along with user access via a web-based portal for data management, sharing, and download. The platform integrates with desktop and server-based applications for uploading and downloading medical data from workstations and network-based image archive servers, and it provides interfaces for the integration of third-party systems and applications. The Quentry platform is an FDA Class I product.
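The document does not describe the viewer's internals, but the DICOM format it consumes is standardized: per DICOM PS3.10, a conforming file begins with a 128-byte preamble followed by the ASCII marker "DICM". As an illustrative sketch (not taken from the document), any DICOM-handling component can recognize an uploaded file like this:

```python
def is_dicom(data: bytes) -> bool:
    """Check for the DICOM magic marker: a 128-byte preamble
    followed by the ASCII characters 'DICM' (DICOM PS3.10)."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Synthetic minimal file header for demonstration only:
# 128 zero bytes of preamble, then the 'DICM' marker.
synthetic = bytes(128) + b"DICM"

print(is_dicom(synthetic))        # True
print(is_dicom(b"not a dicom"))   # False
```

Real viewers then parse the data elements that follow the marker, typically via a DICOM library rather than by hand; this check is only the format-recognition step.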
The provided text describes the "Advanced Viewer," a web-based software for medical professionals to review DICOM images. However, the document does not contain specific acceptance criteria, a detailed study that proves the device meets those criteria, or the other requested information like sample sizes, expert qualifications, or ground truth details.
The document primarily focuses on establishing substantial equivalence to predicate devices (CONi and iPlan) for regulatory approval (510(k)). It highlights that the Advanced Viewer offers more viewing features than CONi but states these do not introduce new safety or effectiveness concerns, and that it provides identical functionalities to iPlan, running on a different platform but using the same software framework.
The "Verification/validation summary" section mentions that verification was done to demonstrate that design specifications are met, and non-clinical validation included usability tests that were rated as successful according to their acceptance criteria. However, it does not detail what those acceptance criteria were or the specifics of the validation study.
Therefore, most of the information requested cannot be extracted from this document.
Here's a summary of what can be inferred from the provided text, and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
- Acceptance Criteria: Not explicitly stated in terms of quantitative metrics or specific thresholds. The document generally implies that the device must function equivalently or better than the predicate devices for image viewing and manipulation.
- Reported Device Performance: Not explicitly detailed with specific performance metrics. The document states that "All test reports were finally rated as successful according to their acceptance criteria" for usability tests, but doesn't elaborate on the results.
Missing Information (Not Available in the Provided Text):
- Sample size used for the test set and the data provenance: Not mentioned.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned. The device's purpose is for viewing existing DICOM images, not for generating a diagnosis that would require ground truth in the typical AI/CAD context.
- Adjudication method for the test set: Not mentioned.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with vs. without AI assistance: Not mentioned. This device is a viewer, not an AI-based diagnostic tool.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not mentioned. The device is fundamentally a human-in-the-loop viewer.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not mentioned, and likely not applicable in the traditional sense for a medical image viewer. The "ground truth" for a viewer would relate to proper display and accurate measurements, likely verified against the source DICOM data and expert visual inspection.
- The sample size for the training set: Not mentioned. There is no indication of machine learning or AI algorithm training in the description.
- How the ground truth for the training set was established: Not mentioned, and likely not applicable given the device's description.