Search Results
Found 2 results
510(k) Data Aggregation
View (K253639)
(50 days)
View is a software application that displays, processes, and analyzes medical image data and associated clinical reports to aid healthcare professionals in diagnosis. It streamlines standard and advanced medical imaging analysis by providing a complete suite of measurement tools whose relevant findings are automatically collected for export and saving.
Typical users of this system are authorized healthcare professionals.
Mammography images may only be interpreted using a monitor that complies with local regulatory requirements and meets other technical specifications reviewed and accepted by the local regulatory agencies. Lossy compressed mammographic images and digitized film-screen images should not be reviewed for primary image interpretation using View.
View is a cloud-native software application designed to support healthcare professionals in the display, processing, and analysis of medical image data. It enhances diagnostic workflows by integrating intelligent tools, streamlined accessibility, and advanced visualization capabilities, including specialized support for breast imaging.
View brings together 2D imaging, basic 3D visualization, and advanced image analysis in a single, intuitive interface. This simplifies information access, improves workflow efficiency, and reduces the need for multiple applications.
Key features include:
- Smart Reading Protocol (SRP), which uses machine learning to create and apply hanging protocols (HPs).
- An AI workflow supporting both DICOM Secondary Capture objects and DICOM Structured Reports for displaying AI findings and enabling rejection or modification of those findings.
- Display of 2D, 3D, and historical comparison exams in customizable layouts.
- Smooth transitions between 2D and 3D views, either manually or as part of hanging protocols.
- Advantage Workstation integration for deeper analysis through dedicated 3D applications.
- A full suite of measurement, annotation, and segmentation tools for DICOM images.
- Capture of all measurements and annotations in a centralized findings panel.
- Enhanced access to DICOM images stored on the cloud server.
- Integration with external systems via FHIRcast (see the sketch after this list).
- Native MIP, MPR, Smart Segmentation, and Volume Rendering for an improved user experience.
- Seamless cloud access to breast images, with dedicated tools for mammography.
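The summary names FHIRcast but gives no integration details. As a minimal sketch of what a FHIRcast subscription request looks like, assuming a hypothetical hub URL, topic, and event list (and Python's `requests` library, which the document does not name):

```python
# Minimal FHIRcast subscription sketch. HUB_URL, TOPIC, and the event
# names are hypothetical; the submission describes none of these details.
import requests

HUB_URL = "https://hub.example.com/fhircast"  # hypothetical hub endpoint
TOPIC = "reading-session-123"                 # hypothetical session topic

# FHIRcast subscriptions are form-encoded POSTs to the hub; for the
# websocket channel type, the hub's reply carries the endpoint to connect to.
resp = requests.post(
    HUB_URL,
    data={
        "hub.channel.type": "websocket",
        "hub.mode": "subscribe",
        "hub.topic": TOPIC,
        "hub.events": "Patient-open,ImagingStudy-open",
        "hub.lease_seconds": "3600",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.status_code, resp.text)
```

Once subscribed, the viewer would follow context-change events (e.g. a patient being opened in the reporting system) over the returned channel.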
The provided FDA 510(k) clearance letter and summary for the "View" device (K253639) offer limited details regarding specific acceptance criteria and the studies conducted to prove device performance. The information is high-level and generalized.
Based on the available text, here's the breakdown of what can and cannot be extracted:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria (e.g., minimum accuracy, sensitivity, specificity) or specific reported performance metrics for the device. It focuses on functional equivalence and verification/validation testing without presenting performance data.
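For reference, the kinds of quantitative criteria mentioned here are typically computed from a confusion matrix. A minimal illustration in Python, with hypothetical counts that are not from this submission:

```python
# Hypothetical confusion-matrix counts, purely for illustration.
tp, fn, fp, tn = 90, 10, 5, 95

sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, accuracy={accuracy:.2f}")
```

An acceptance criterion would then be a floor on such a metric (e.g. "sensitivity ≥ 0.90"); no such thresholds appear in the document.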
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "The Smart Reading Protocol function, which uses machine learning for creating and applying hanging protocols, was tested on various imaging modality datasets representative of the clinical scenarios where View is intended to be used."
- Sample Size: "various imaging modality datasets" – The exact sample size (number of images, cases, or patients) is not specified.
- Data Provenance: "representative of the clinical scenarios" – The country of origin and whether the data was retrospective or prospective are not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide any information on the number or qualifications of experts used to establish ground truth for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method used for the test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document states that a "comparison was performed between the predicate device (Universal Viewer) and the subject device (View) and showed that the devices are equivalent" for the Smart Reading Protocol function. However, this is a comparison between devices, not an MRMC study designed to assess human reader improvement with AI assistance. Therefore:
- A specific MRMC comparative effectiveness study involving human readers with and without AI assistance is not described.
- Consequently, an effect size for human reader improvement is not provided.
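For context, the effect size in an MRMC study is conventionally summarized as the change in average per-reader performance (often AUC) with versus without AI assistance. A minimal sketch with hypothetical AUC values; a full MRMC analysis (e.g. Obuchowski-Rockette) would additionally model reader and case variability to produce confidence intervals:

```python
# Hypothetical per-reader AUCs; not data from this submission.
auc_unaided = [0.81, 0.78, 0.84, 0.80, 0.79]  # each reader, without AI
auc_aided   = [0.85, 0.83, 0.86, 0.84, 0.84]  # same readers, with AI

# Effect size: mean within-reader AUC improvement.
deltas = [a - u for a, u in zip(auc_aided, auc_unaided)]
effect_size = sum(deltas) / len(deltas)
print(f"mean AUC improvement with AI: {effect_size:+.3f}")
```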
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
The document implies that the Smart Reading Protocol (SRP) function, which uses machine learning, was tested. The statement "A comparison was performed between the predicate device (Universal Viewer) and the subject device (View) and showed that the devices are equivalent" regarding SRP suggests an evaluation of the algorithm's output. However, whether this testing was strictly standalone performance (algorithm only) versus integrated system performance is not explicitly detailed. Given the context of a 510(k) summary for a "Medical Image Management And Processing System," the testing likely covers the integrated system.
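The summary does not say how equivalence was quantified. One plausible (but unconfirmed) way to compare two systems that each assign a hanging protocol per study is categorical agreement, e.g. percent agreement or Cohen's kappa; the sketch below uses hypothetical labels, not data from the submission:

```python
# Hypothetical hanging-protocol choices for five studies.
from collections import Counter

predicate = ["CT_chest", "MR_brain", "MG_screen", "CT_chest", "MR_brain"]
subject   = ["CT_chest", "MR_brain", "MG_screen", "CT_chest", "MR_spine"]

n = len(predicate)
observed = sum(p == s for p, s in zip(predicate, subject)) / n

# Chance agreement from each system's marginal label frequencies.
p_counts, s_counts = Counter(predicate), Counter(subject)
expected = sum(p_counts[k] * s_counts[k] for k in p_counts) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"agreement={observed:.2f}, kappa={kappa:.2f}")
```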
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document does not specify the type of ground truth used for its testing.
8. The sample size for the training set
The document does not provide any information on the sample size used for the training set.
9. How the ground truth for the training set was established
The document does not provide any information on how the ground truth for the training set was established.
Summary of available information regarding acceptance criteria and study data:
Unfortunately, the provided FDA 510(k) summary is very high-level and primarily focuses on justifying substantial equivalence based on technological characteristics and general verification/validation processes. It does not contain the detailed performance metrics, sample sizes, ground-truth establishment methods, or reader study details typically found in more comprehensive clinical study reports. The clearance is based on the device having "substantial equivalent technological characteristics" and being "as safe and as effective" as the predicate device.
The only specific functional testing mentioned is for the "Smart Reading Protocol" using machine learning, which was compared to the predicate device to show equivalence. However, the details of this comparison (e.g., performance metrics, specific acceptance criteria for equivalence, or study design) are not included.
Universal Viewer
(21 days)
Universal Viewer is a software application that displays medical image data and associated clinical reports to aid healthcare professionals in diagnosis. It performs operations relating to the transfer, storage, display, and measurement of image data.
Typical users of this system are authorized healthcare professionals.
Mammography images may only be interpreted using a monitor that complies with local regulatory requirements and meets other technical specifications reviewed and accepted by the local regulatory agencies.
Lossy compressed mammographic images and digitized film-screen images should not be reviewed for primary image interpretation using Universal Viewer.
Universal Viewer is an Internet-based medical image display and interpretation software product that is part of a medical image management and processing system. It provides users with capabilities relating to the acceptance, transfer, display, and digital processing of medical images.
The Universal Viewer product does not produce any original medical images; all images displayed by Universal Viewer have been received from DICOM-compliant modalities and/or image acquisition systems.
Universal Viewer supports DICOM SOP classes to access and manage medical imaging studies from Computed Tomography (CT), Magnetic Resonance (MR), Ultrasound (US), Nuclear Medicine (NM), Computed Radiography (CR), Digital Mammography (MG), Digital X-ray (DX), Positron Emission Tomography (PET/PT), X-Ray Angiography (XA), Digital Intra-oral X-Ray (IO), Radiofluoroscopic X-ray (RF), Secondary Capture Images (SC), Visible Light (VL) Endoscopic, Microscopic and Photographic Image Storage, Slide Coordinates Microscopic Image Storage, Presentation States (PS), Key Image Notes (KIN), and other DICOM imaging modalities.
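Each SOP class above corresponds to a SOP Class UID carried inside the DICOM object, which is how a viewer recognizes and routes what it receives. A minimal sketch using pydicom, an assumed library with a hypothetical file path (the submission names no implementation):

```python
import pydicom
from pydicom.uid import UID

# Read the header only; pixel data is not needed to route the object.
ds = pydicom.dcmread("study/IM0001.dcm", stop_before_pixels=True)

sop_class = UID(ds.SOPClassUID)
print("SOP class:", sop_class.name)           # e.g. "CT Image Storage"
print("Modality :", ds.get("Modality", "?"))  # e.g. "CT", "MR", "MG"

if ds.get("Modality") == "MG":
    print("route to mammography reading tools")
```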
Universal Viewer provides image manipulation tools that enable users to view and compare images, such as measurements (linear distances, angles, areas, SUV, etc.), annotations (outlining and labeling regions of interest, labeling spinal vertebrae), MPR, MIP, 3D image fusion of CT and PET, and registration of CT, PET, and MR.
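MIP and MPR, named above, are both simple operations on a 3D voxel volume: MIP collapses one axis by taking the maximum intensity along it, and MPR reslices the same voxels along a different plane. A minimal numpy sketch on a synthetic volume standing in for a stacked CT series:

```python
import numpy as np

volume = np.random.rand(64, 256, 256)  # (slices, rows, cols); synthetic

# Maximum Intensity Projection: collapse the slice axis with a max.
mip_axial = volume.max(axis=0)         # shape (256, 256)

# Multiplanar Reformation: reslice the volume along other axes.
mpr_coronal  = volume[:, 128, :]       # shape (64, 256)
mpr_sagittal = volume[:, :, 128]       # shape (64, 256)

print(mip_axial.shape, mpr_coronal.shape, mpr_sagittal.shape)
```

Real implementations interpolate for oblique planes and respect voxel spacing from the DICOM headers; the slicing above is the orthogonal special case.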
Universal Viewer is designed to be deployed over conventional Transmission Control Protocol/Internet Protocol (TCP/IP) networking infrastructure available in most healthcare organizations and utilizes commercially available computer platforms and operating systems.
Universal Viewer provides Application Program Interfaces (APIs) to integrate with third-party medical devices and non-medical devices.
This document is a 510(k) summary for the Universal Viewer, a medical image management and processing system developed by GE Healthcare. It states that the device is substantially equivalent to a predicate device (Centricity Universal Viewer K182419). As such, the document primarily focuses on demonstrating substantial equivalence rather than providing detailed acceptance criteria and a study to prove they are met in the same way a de novo device might.
Based on the provided text, here's the information related to acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table of quantitative acceptance criteria and reported device performance metrics in the format typically seen for a new device's efficacy study. Instead, it relies on demonstrating that the Universal Viewer operates in a "substantially equivalent" manner to its predicate device, Centricity Universal Viewer (K182419).
The "Comparison" table in the document summarizes the feature/technological comparison between the predicate and proposed device, indicating functional equivalence:
| Feature | Predicate Device Centricity Universal Viewer (K182419) | Proposed Device Universal Viewer | Discussion of Differences |
|---|---|---|---|
| Backend Integration | Centricity PACS (K110875) and Enterprise Archive | Enterprise Archive | Substantially Equivalent. Simplified Integration with GE Healthcare's Enterprise Archive (EA) for unified short- and long-term storage for the image and non-image data and study management workflow |
| Image Display and Review | Yes | Yes | Identical |
| Image Annotations and measurements | Yes | Yes | Identical |
| General Workflow including, Exam Search, Exam Assignments, System and Custom Worklists, Reporting Workflow | Available in the UV study list and/or Workflow Manager if enabled | Available in Workflow Manager | Substantially Equivalent. Consolidated to one worklist. |
The "Determination of Substantial Equivalence" section states:
"The testing and results did not raise new questions of safety and effectiveness from those associated with predicate device and demonstrated that the Universal Viewer device performs substantially equivalent to the predicate device."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not specify a distinct "test set" with a particular sample size or data provenance in terms of patient data. The evaluation appears to be based on design control testing and comparison of functionalities to the predicate device, rather than a clinical study evaluating diagnostic performance on a patient dataset.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable, as no clinical study with explicit ground truth establishment by experts on a patient image test set is described. The evaluation focuses on technological and functional equivalence.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no clinical study with a test set requiring adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. The Universal Viewer is described as a medical image display and interpretation software, not an AI-powered diagnostic assist tool that would typically undergo an MRMC study to compare reader performance with and without AI assistance. The document focuses on the system's ability to display and process images, and its substantial equivalence to another such system.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
Not applicable. The device is a "medical image management and processing system" used by "healthcare professionals" to aid in diagnosis. It is not an autonomous algorithm designed for standalone performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable, as no clinical study directly assessing diagnostic accuracy against a defined ground truth (e.g., pathology, expert consensus) is described. The "ground truth" for this type of submission is the established performance and safety of the predicate device, against which the new device is compared in terms of features and functionality.
8. The sample size for the training set
Not applicable. This document describes a software application for viewing and processing medical images, not an AI/ML model that would require a distinct training set. The development involved "design control testing" rather than model training.
9. How the ground truth for the training set was established
Not applicable, as there is no mention of a training set or ground truth in the context of machine learning.