uOmniscan is a software application that provides real-time communication between remote and local users and offers remote read-only or fully controlled access to connected medical imaging devices, including the ability to remotely initiate MR scans. It is also used for training medical personnel in the use of medical imaging devices. It is a vendor-neutral solution. Access must be authorized by the onsite user operating the system. Images reviewed remotely are not for diagnostic use.
uOmniscan is medical software designed to address skill differences among technicians and their need for immediate support, allowing them to interact directly with remote experts connected to the hospital network. Through collaboration between on-site technicians and remote experts, it enables technicians or radiologists in different geographic locations to assist remotely in operating medical imaging devices. uOmniscan provides healthcare professionals with a private, secure communication platform for real-time image viewing and collaboration across multiple sites and organizations.
uOmniscan establishes remote connections with a modality through the application itself, a KVM (Keyboard, Video, Mouse) switch, or UIH's proprietary access tool, uRemote Assistant. The connection can be made in full-control or read-only mode, helping on-site technicians obtain guidance and real-time support on scan-related issues, including but not limited to training, protocol evaluation, and scan parameter management, with the capability to remotely initiate scans on MR imaging equipment. In addition to remote access and control of modality scanners, uOmniscan supports common communication methods, including real-time video, audio calls, and text chat between users.
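The access model described above (a session is read-only or full-control, and the onsite operator must authorize it before any remote interaction) can be sketched in a few lines. This is an illustrative sketch only, not UIH's implementation; the class and method names (`RemoteSession`, `AccessMode`, `can_send_input`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class AccessMode(Enum):
    READ_ONLY = "read-only"        # remote user can view but not operate
    FULL_CONTROL = "full-control"  # remote user may operate the scanner

@dataclass
class RemoteSession:
    remote_user: str
    mode: AccessMode
    authorized: bool = False  # granted only by the onsite operator

    def authorize(self, onsite_operator: str) -> None:
        """The onsite user operating the system explicitly grants access."""
        self.authorized = True
        self.granted_by = onsite_operator

    def can_send_input(self) -> bool:
        """Keyboard/mouse input is forwarded only in an authorized
        full-control session; read-only sessions never forward input."""
        return self.authorized and self.mode is AccessMode.FULL_CONTROL

session = RemoteSession("remote_expert", AccessMode.READ_ONLY)
assert not session.can_send_input()   # not yet authorized
session.authorize("onsite_technologist")
assert not session.can_send_input()   # read-only never forwards input

control = RemoteSession("remote_expert", AccessMode.FULL_CONTROL)
control.authorize("onsite_technologist")
assert control.can_send_input()
```

The point of the sketch is that authorization and access mode are independent gates: neither alone permits remote control.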
Images viewed remotely are not for diagnostic purposes.
It is a vendor-neutral solution compatible with existing multimodality equipment in healthcare networks, allowing healthcare professionals to share expertise, increase work efficiency, and communicate more effectively across locations.
The provided FDA 510(k) clearance letter for the uOmniscan device focuses primarily on demonstrating substantial equivalence to a predicate device, as opposed to proving novel clinical efficacy or diagnostic accuracy. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" in this context are related to performance verification and usability testing to ensure the device performs its intended functions safely and effectively, similar to the predicate device.
The document states that "No clinical study was required," and "No animal study was required." This indicates that the device's function (remote control and communication for medical imaging) does not require a traditional clinical outcomes study in the same way a diagnostic AI algorithm might. Therefore, the "study that proves the device meets the acceptance criteria" refers to the software verification and validation testing, performance verification, and usability studies conducted.
Here's a breakdown of the requested information based on the provided text:
Acceptance Criteria and Device Performance
The acceptance criteria for this type of device are primarily functional and usability-based, ensuring it performs its tasks reliably and is safe for user interaction.
| Acceptance Criterion (Implicit from Performed Tests) | Reported Device Performance |
|---|---|
| Functional Verification: | |
| 1. Establish real-time communication between remote and local users | Testing conducted; results indicate successful establishment of real-time communication. |
| 2. Establish fully controlled session with medical image device via uRemote Assistant | Testing conducted; results indicate successful establishment of fully controlled sessions. |
| 3. Establish fully controlled session with medical image device via KVM switch | Testing conducted; results indicate successful establishment of fully controlled sessions. |
| 4. Network status identification | Testing conducted; results indicate successful network status identification. |
| 5. Performance evaluation for different network conditions/speeds | Testing conducted; results indicate satisfactory performance across varying network conditions. |
| 6. Indicating network state to users | Testing conducted; results indicate successful indication of network state to users. |
| Usability Verification: | |
| 1. Design of user interface and manual effectively decrease probability of use errors | Usability study results: "Design of user interface and manual effectively decrease the probability of use errors." |
| 2. All risk control measures are implementable and understood across user expertise levels | Usability study results: "All risk control measures are implementable and understood across user expertise levels." |
| 3. No unacceptable residual use-related risks | Usability study results: "The product has no unacceptable residual use-related risks." |
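Functional criteria 4 through 6 concern detecting network conditions and surfacing a status indicator to users. A minimal sketch of such a classifier follows; the thresholds and status labels here are invented for illustration, since the 510(k) summary does not disclose how uOmniscan actually measures or categorizes network quality.

```python
def classify_network(rtt_ms: float, loss_pct: float) -> str:
    """Map measured round-trip time (ms) and packet loss (%) to a
    coarse status label shown to both users. Thresholds are
    illustrative, not the device's actual values."""
    if loss_pct > 5 or rtt_ms > 400:
        return "poor"   # warn users; real-time control may degrade
    if loss_pct > 1 or rtt_ms > 150:
        return "fair"   # usable, but expect reduced video quality
    return "good"

# A monitoring loop would sample the link periodically and update
# the on-screen indicator whenever the label changes.
assert classify_network(40, 0.1) == "good"
assert classify_network(200, 0.5) == "fair"
assert classify_network(50, 8.0) == "poor"
```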
Study Details
Given the nature of the device and the FDA's clearance pathway, the "study" referred to is a series of engineering and usability tests rather than a clinical trial.
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The document does not specify quantitative sample sizes for the functional performance or usability testing datasets (e.g., number of communication sessions tested, specific network conditions, or number of users in usability testing). It broadly states that "Evaluation testing was conducted to verify the functions" and "Expert review for formative evaluation and usability testing for summative evaluation were conducted."
- Data Provenance: Not explicitly stated, but given the manufacturer is "Shanghai United Imaging Healthcare Co., Ltd." in China, it's reasonable to infer that the testing likely occurred in a controlled environment by the manufacturer or their designated testing facilities, potentially in China or other regions where their systems are developed/used. The tests conducted (software V&V, performance verification, usability) are typically internal, controlled studies rather than real-world data collection. The data would be prospective in the sense that the tests were designed and executed to evaluate the device.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Ground Truth Establishment: For functional and usability testing of a remote control/communication software, "ground truth" isn't established in the clinical sense (e.g., disease presence). Instead, the "truth" is whether the software correctly performs its programmed functions and is usable.
- Experts: The usability study involved "participation of experts and user representatives." The specific number or detailed qualifications of these "experts" (e.g., 'radiologist with 10 years experience') are not specified in this summary. They would likely be human factors engineers, software testers, and potentially medical professionals (radiologists, technologists) acting as user representatives.
4. Adjudication method for the test set
- Adjudication Method: Not applicable or specified in the traditional sense of medical image interpretation (e.g., 2+1 radiology review). For software functional testing, results are typically binary (pass/fail) based on predefined test cases. For usability, a consensus or qualitative analysis of user feedback and observations would be used, but a formal "adjudication method" as seen in clinical reading studies is not mentioned.
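The binary pass/fail adjudication described above can be sketched as a set of predefined test cases, each comparing observed behavior against an expected outcome. This is an illustrative harness, not the manufacturer's actual test suite; the check functions are stubs standing in for real measurements of the running software.

```python
# Each predefined test case pairs a functional requirement with a
# check that compares observed behavior to the expected outcome.
def check_session_established(observed_state: str) -> bool:
    return observed_state == "connected"  # expected behavior

test_cases = [
    ("establish real-time communication", check_session_established, "connected"),
    ("indicate network state to users",
     lambda s: s in {"good", "fair", "poor"}, "fair"),
]

def run_verification(cases):
    """Return a {test name: 'pass'/'fail'} report; the only
    adjudication is the binary comparison itself."""
    return {name: ("pass" if check(observed) else "fail")
            for name, check, observed in cases}

report = run_verification(test_cases)
assert all(v == "pass" for v in report.values())
```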
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement for human readers with AI assistance versus without?
- MRMC Study: No, a multi-reader, multi-case (MRMC) comparative effectiveness study was not done. The document explicitly states: "No clinical study was required." This type of study is typically performed for AI or CAD devices that assist with diagnostic interpretation, which is not the primary function of uOmniscan. The device is for remote control and communication, and "Images reviewed remotely are not for diagnostic use."
6. If standalone performance testing (i.e., algorithm only, without human-in-the-loop) was done
- Standalone Performance: The "Performance Verification" section details tests of the software's functional capabilities (e.g., establishing communication, network status). These could be considered "standalone" in the sense that they verify the software's inherent ability to perform these tasks. However, the device's core purpose is "real-time communication between remote and local users" and "access to connected medical imaging devices," implying a human-in-the-loop for its intended use. The testing confirms the software's readiness for this human-in-the-loop interaction rather than a pure standalone diagnostic performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: As noted, traditional "ground truth" (e.g., pathology, clinical outcomes, expert consensus on disease) is not applicable for this device's verification. Instead, the ground truth for performance testing is the predefined functional requirements and expected system behavior, as well as the principles of human factors engineering and usability standards for the usability study.
8. The sample size for the training set
- Training Set Sample Size: Not applicable/not specified. The uOmniscan device is described as a "software only solution" for remote control and communication. There is no mention of an AI/ML algorithm requiring a "training set" in the machine-learning sense. The verification and validation data are for testing the implemented software features, not for training a model.
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable. As the device does not appear to be an AI/ML system requiring a training set, the concept of establishing ground truth for a training set does not apply here.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).