OmegaAI Image Viewer is software for diagnostic and clinical review intended for use in General Radiology (images from modalities including CR, CT, DX, MR, MG, NM, US, PET, RG, SC, VL, XA, Film Digitizer), Interventional Radiology, cardiology, oncology, obstetrics and gynecology, ENT, orthopedics, internal medicine, emergency medicine, dermatology, dentistry, Pathology (i.e., to review captured images/videos from a sample) and other healthcare imaging applications.
OmegaAI Image Viewer is intended to be used with off-the-shelf computing devices. Display monitors used for reading medical images for diagnostic purposes must comply with applicable clinical requirements, regulatory approvals and with quality control requirements for their use and maintenance. With appropriate display monitors, lighting, image quality, and level of lossy image compression, the OmegaAI Image Viewer is intended for diagnostic purposes (on desktop platforms) and as a non-diagnostic review tool (on mobile platforms) to be used by trained healthcare professionals. Display calibration and lighting conditions should be verified by viewing a test pattern prior to use for diagnostic purposes.
OmegaAI Image Viewer supports major desktop and mobile browsers, including Microsoft Edge, Chrome, and Safari, on Apple iOS, Android, and Windows platforms. OmegaAI Image Viewer displays both lossless and lossy compressed images. Each healthcare professional must ensure that they have the necessary environment to achieve the appropriate image quality for their clinical purpose and must determine the acceptable level of lossy image compression for that purpose. Lossy image compression should not be used for primary reading in mammography.
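The mammography restriction above is typically surfaced by inspecting the DICOM header of the study being read. Below is a minimal sketch, assuming pydicom is available, of how a workflow could flag lossy compressed images before a primary mammography read; the attribute names (Lossy Image Compression (0028,2110), Modality) are standard DICOM, but the helper functions and the policy they encode are illustrative and not taken from the OmegaAI product.

```python
# Illustrative policy check, not part of the OmegaAI Image Viewer.
from pydicom import dcmread


def is_lossy(path: str) -> bool:
    """Return True if the header declares Lossy Image Compression (0028,2110) = '01'."""
    ds = dcmread(path, stop_before_pixels=True)  # header only; skip pixel data
    return getattr(ds, "LossyImageCompression", "00") == "01"


def allowed_for_primary_mammo_read(path: str) -> bool:
    """Disallow primary reading of lossy compressed mammography (MG) images."""
    ds = dcmread(path, stop_before_pixels=True)
    is_mammo = getattr(ds, "Modality", "") == "MG"
    return not (is_mammo and is_lossy(path))
```

When present, Lossy Image Compression Ratio (0028,2112) can likewise be shown to the reader so they can judge whether the compression level suits their clinical purpose.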
OmegaAI Image Viewer can be utilized for image manipulation by radiology technologists or other healthcare professionals. OmegaAI Image Viewer can be used to verify that images captured in a medical imaging system are of diagnostic quality, to correct viewing characteristics of the image such as orientation and contrast, and to add annotations that mark significant findings or provide guidance for radiologists.
OmegaAI Image Viewer can store annotations and measurements as DICOM presentation states without changing the original image data. OmegaAI Image Viewer can display annotations and measurements as an overlay on images from DICOM objects, and from Computer-Aided Diagnosis (CAD) and AI software. The viewer can perform 3D Multi-Planar Reformatting (MPR), 3D Maximum Intensity Projection (MIP) and 3D Volume Rendering (VR). OmegaAI Image Viewer is intended to aid in reviewing findings through its ability to display clinical documents and reports side by side with the images. It can be used side by side with a reporting tool to create diagnostic reports.
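To illustrate the presentation-state mechanism described above, here is a minimal sketch, assuming pydicom, of how an annotation could be stored as a DICOM Grayscale Softcopy Presentation State (GSPS) that only references the original image and leaves its pixel data untouched. The function name, graphic layer label, and text content are hypothetical, and a production implementation would populate many more mandatory GSPS attributes (displayed area, graphic layer module, VOI LUT, and so on).

```python
# Illustrative GSPS construction, not the OmegaAI implementation.
import datetime

from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

GSPS_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.11.1"  # Grayscale Softcopy Presentation State Storage


def build_gsps(image_ds: Dataset, label_text: str, anchor_xy: tuple) -> Dataset:
    """Build a presentation state that anchors a text annotation to image_ds."""
    meta = FileMetaDataset()
    meta.MediaStorageSOPClassUID = GSPS_SOP_CLASS
    meta.MediaStorageSOPInstanceUID = generate_uid()
    meta.TransferSyntaxUID = ExplicitVRLittleEndian

    ps = Dataset()
    ps.file_meta = meta
    ps.SOPClassUID = GSPS_SOP_CLASS
    ps.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
    ps.Modality = "PR"
    ps.PatientID = image_ds.PatientID                # same patient and study as the image
    ps.StudyInstanceUID = image_ds.StudyInstanceUID
    ps.SeriesInstanceUID = generate_uid()            # presentation states get their own series
    ps.ContentLabel = "ANNOTATIONS"
    ps.PresentationCreationDate = datetime.date.today().strftime("%Y%m%d")

    # Reference the original image rather than copying or altering it.
    ref_image = Dataset()
    ref_image.ReferencedSOPClassUID = image_ds.SOPClassUID
    ref_image.ReferencedSOPInstanceUID = image_ds.SOPInstanceUID
    ref_series = Dataset()
    ref_series.SeriesInstanceUID = image_ds.SeriesInstanceUID
    ref_series.ReferencedImageSequence = [ref_image]
    ps.ReferencedSeriesSequence = [ref_series]

    # A single graphic annotation layer carrying one anchored text object.
    text_obj = Dataset()
    text_obj.AnchorPoint = list(anchor_xy)           # (column, row) in image pixels
    text_obj.AnchorPointAnnotationUnits = "PIXEL"
    text_obj.AnchorPointVisibility = "Y"
    text_obj.UnformattedTextValue = label_text
    annotation = Dataset()
    annotation.GraphicLayer = "LAYER1"
    annotation.ReferencedImageSequence = [ref_image]
    annotation.TextObjectSequence = [text_obj]
    ps.GraphicAnnotationSequence = [annotation]
    return ps
```

The resulting dataset could be written out with `Dataset.save_as` and stored alongside the study; the referenced image file is never rewritten, which is the point of the presentation-state approach.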
Note: To protect confidential information and ensure data security, the OmegaAI Image Viewer has User Access Control (UAC), which prevents unauthorized access to and modification of data.
OmegaAI Image Viewer runs in a web browser sandbox and therefore relies on the browser's handling of interruptions, low-memory conditions, and similar events. The browser presents an error to the user when it is unable to handle an interruption.
Caution: Federal law restricts this device to sale by or on the order of a physician.
OmegaAI Image Viewer is designed to view and manipulate medical images or videos created by diagnostic imaging systems such as X-ray, computed tomography, nuclear medicine, MRI, and ultrasound, by laboratory systems, and by other sources such as handheld devices and cameras, endoscopy, or other sources of images and videos. It can perform various image manipulation activities and store the modifications as presentation states along with the original study for future reference. OmegaAI Image Viewer allows users to perform image manipulations using the Adjustment Tools, including window level, rotate, flip, pan, stack scroll, and magnify. Users also have access to Markup Tools such as annotate, angle, probe, Mark ROI, and measurement. OmegaAI Image Viewer is also capable of organizing all the captured images for a patient and presenting them in a zero-footprint web user interface, allowing users to view images in their preferred layout and to compare current images with prior images of the respective patient.
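As a concrete example of what an Adjustment Tool such as window level computes, the sketch below applies the standard DICOM linear VOI LUT function (PS3.3 C.11.2.1.2) to a pixel array with NumPy. The function and parameter names are illustrative and are not drawn from the OmegaAI implementation.

```python
# Illustrative window-level transform, not the OmegaAI implementation.
import numpy as np


def apply_window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map stored pixel values to 8-bit display values using the DICOM linear
    VOI LUT function: values below the window render black, above it white."""
    normalized = (pixels.astype(np.float64) - (center - 0.5)) / (width - 1.0) + 0.5
    return (np.clip(normalized, 0.0, 1.0) * 255.0).astype(np.uint8)


# Example: a typical soft-tissue CT window (center 40 HU, width 400 HU).
ct_slice = np.random.randint(-1000, 1000, size=(512, 512)).astype(np.int16)
display = apply_window_level(ct_slice, center=40.0, width=400.0)
```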
Available on popular mobile and desktop platforms with keyboard, mouse, and touch inputs, the OmegaAI Image Viewer provides healthcare professionals with convenient access to medical images for use as a diagnostic viewer and for review purposes.
OmegaAI Image Viewer supports major desktop and mobile browsers, including Microsoft Edge, Chrome, and Safari, on Apple iOS, Android, Windows, and Mac devices. The software can be used for diagnostic purposes only on desktop platforms, whereas on mobile platforms it can be utilized as a non-diagnostic review tool by trained healthcare professionals.
The provided document, a 510(k) Premarket Notification from RamSoft Inc. for their OmegaAI Image Viewer, details the device's indications for use, its comparison to predicate devices, and a summary of software validation and verification testing. However, it does not contain specific information about acceptance criteria or a dedicated study proving the device meets those criteria, particularly in the context of clinical performance like human-in-the-loop improvements with AI assistance or standalone algorithm performance.
The document primarily focuses on demonstrating substantial equivalence to predicate devices based on intended use, technology, and basic functional capabilities (image manipulation, display of various modalities, architecture, HIPAA compliance, etc.). The "Software Validation and Verification Testing Summary" on page 12 describes typical software quality assurance tests (regression tests, bug fix verification) rather than a clinical performance study with human readers or AI.
Therefore, many of the requested elements for describing the acceptance criteria and a study proving device performance (especially those related to clinical effectiveness, human reader performance, or standalone AI performance) cannot be extracted from this document.
Here is what can be inferred or stated based on the provided text, with a clear indication of what is not present:
Missing Information/Cannot be Extracted from the Provided Document:
- A table of acceptance criteria and reported device performance: The document does not provide a formal table of acceptance criteria for clinical performance metrics or specific reported performance data beyond functional verification.
- Sample size used for the test set and data provenance (e.g. country of origin of the data, retrospective or prospective): No information on a test set (in a clinical performance context), sample size, or data provenance is provided.
- Number of experts used to establish the ground truth for the test set and qualifications of those experts: Not applicable, as no such clinical test set or ground truth establishment is described.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it: No MRMC study is described. The device is an image viewer, not explicitly stated to have integrated AI for diagnostic findings that would necessitate an MRMC study measuring AI assistance.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable, as the document does not describe a standalone AI algorithm's performance. The device is a viewer.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable, as no clinical ground truth establishment is described.
- The sample size for the training set: Not applicable, as no AI model training is described.
- How the ground truth for the training set was established: Not applicable.
What can be extracted or inferred from the document regarding "acceptance criteria" and "study":
The "acceptance criteria" and "study" described in this 510(k) summary are primarily focused on software validation and verification to ensure the device functions as intended and is substantially equivalent to predicate devices, rather than clinical performance metrics for a novel diagnostic algorithm.
1. A table of acceptance criteria and the reported device performance:
While no formal table of performance criteria like sensitivity/specificity is provided, the document implies the following "acceptance criteria" through its software validation process:
| Acceptance Criteria (Implied from Software V&V) | Reported Device Performance (Summary) |
|---|---|
| Functional Correctness | Full regression tests executed on major features to uncover potential bugs; bug-fix verification for reported issues and regression tests on related features; full regression tests executed on the candidate build for release to verify all functionalities and fixes. |
| Safety and Risk Mitigation | "Considered a 'moderate' level of concern since a failure or latent flaw in the software could result in an erroneous diagnosis or a delay in delivery of appropriate medical care that would likely lead to Minor Injury." |
| Substantial Equivalence | Device performs comparably to predicate devices (Nucleus.io K203249 and RapidResults K141881) in intended use and core viewing functionalities. |
| Compliance | Software Verification and Validation Testing was conducted as recommended by FDA guidance for software in medical devices. |
2. Sample size used for the test set and the data provenance:
- The document refers to "test cases" used in software verification. It does not specify a sample size for these test cases (e.g., number of images, patient cases).
- Data Provenance: Not specified. The testing described is typical software quality assurance (functional, regression testing), not a clinical trial using specific patient data sets. The origin of any images used for testing is not mentioned.
- Retrospective/Prospective: Not specified. Given it's software testing, it's typically done in a controlled, simulated/internal environment, not a prospective clinical setting.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as no clinical test set requiring expert ground truth establishment is described. The "verification" was performed by "members of the QA team," whose qualifications are not detailed beyond their role.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable. The testing described is software quality assurance, not diagnostic performance evaluation requiring reader adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC study was not described. The OmegaAI Image Viewer is presented as a medical image management and processing system (MIMPS), a viewing tool, not an AI diagnostic algorithm that directly assists in human diagnostic reads in a quantifiable way requiring an MRMC study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No, a standalone algorithm performance study was not described. The device is an image viewer, not a distinct AI algorithm providing diagnostic outputs. While it "can display annotations and measurements... from Computer-Aided Diagnosis (CAD) and AI software," it is not that CAD/AI software itself, nor does the document describe a study of its own inherent AI capabilities (if any exist beyond displaying external AI outputs).
7. The type of ground truth used:
- For the software validation and verification, the "ground truth" would be the expected functional behavior and output of the software as defined by its specifications and design documents. This is not clinical ground truth (e.g., pathology, follow-up outcomes).
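To make this concrete, a functional verification test of this kind compares the software's output against the behavior its specification defines rather than against any clinical reference standard. The sketch below, written for pytest-style test discovery, checks a hypothetical length-measurement helper against a specification-derived expected value; both the helper and the test case are illustrative, not taken from the submission.

```python
# Illustrative functional verification test; the specification is the "ground truth".
import math


def measure_length_mm(p1, p2, pixel_spacing):
    """Hypothetical measurement helper: Euclidean distance in mm between two
    (column, row) pixel coordinates, given Pixel Spacing as (row, column) spacing."""
    dx_mm = (p2[0] - p1[0]) * pixel_spacing[1]
    dy_mm = (p2[1] - p1[1]) * pixel_spacing[0]
    return math.hypot(dx_mm, dy_mm)


def test_measurement_matches_specification():
    # Specification: a 30-pixel horizontal line at 0.5 mm/pixel must read 15.0 mm.
    assert math.isclose(measure_length_mm((10, 20), (40, 20), (0.5, 0.5)), 15.0)
```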
8. The sample size for the training set:
- Not applicable. The document does not describe the development or training of an AI algorithm within the OmegaAI Image Viewer.
9. How the ground truth for the training set was established:
- Not applicable.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).