Mirada XD is intended to be used by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, and physicists.
Mirada XD is a software application intended to display and visualize 2D & 3D multi-modal medical image data. The user may process, render, review, store, print and distribute DICOM 3.0 compliant datasets within the system and/or across computer networks. Supported modalities include CT, PET, MR, SPECT and planar NM. Supported image types include static, gated and dynamic.
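For context, loading and inspecting a DICOM 3.0 dataset of this kind can be sketched in a few lines with the open-source pydicom library. The file name and the CT rescale handling below are illustrative assumptions, not a description of Mirada XD's internals.

```python
import pydicom
import numpy as np

# Read a (hypothetical) CT slice; pydicom parses the DICOM 3.0 header and pixel data.
ds = pydicom.dcmread("ct_slice.dcm")

# Header fields common to the supported modalities (CT, PET, MR, SPECT, NM).
print(ds.Modality, ds.SOPInstanceUID)

# For CT, convert stored pixel values to Hounsfield units via the rescale tags.
hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
print("HU range:", hu.min(), "to", hu.max())
```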
The user may also create, display, print, store and distribute reports resulting from interpretation of the datasets.
Mirada XD allows the user to register combinations of anatomical images and display them with fused and non-fused displays to facilitate the comparison of image data by the user. The registration operation can assist the user in assessing changes in image data, either within or between examinations, and aims to give the user a better understanding of combined information that would otherwise have to be compared visually and disjointedly.
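As a rough illustration of the registration step described above, the sketch below aligns two volumes with SimpleITK using a rigid transform and Mattes mutual information, a common metric for multi-modal data. SimpleITK, the file names, and the parameter choices are all assumptions; the submission does not disclose XD's registration algorithm.

```python
import SimpleITK as sitk

# Hypothetical fixed (CT) and moving (PET) volumes.
fixed = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("pet_volume.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Initialize the rigid transform by aligning the geometric centers of the volumes.
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(),
                                      sitk.CenteredTransformInitializerFilter.GEOMETRY))

transform = reg.Execute(fixed, moving)

# Resample the moving image into the fixed image's grid for fused display.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```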
Mirada XD provides a number of tools, such as rulers and regions of interest, intended to be used for the assessment of regions of an image to support a clinical workflow. Examples of such workflows include, but are not limited to, evaluation of the presence or absence of lesions, determination of treatment response, and follow-up.
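The quantitative readings such tools produce can be illustrated with a small sketch: a ruler distance converted to millimeters via the DICOM pixel spacing, and summary statistics over a region of interest. The image array, spacing, and coordinates are fabricated for the example.

```python
import numpy as np

# Pixel spacing in mm (row, column), as carried in the DICOM PixelSpacing tag.
PIXEL_SPACING = (0.98, 0.98)

def ruler_mm(p0, p1, spacing=PIXEL_SPACING):
    """Euclidean distance between two pixel coordinates, in millimeters."""
    dr = (p1[0] - p0[0]) * spacing[0]
    dc = (p1[1] - p0[1]) * spacing[1]
    return float(np.hypot(dr, dc))

# Stand-in image and a rectangular ROI, purely for illustration.
image = np.random.default_rng(0).normal(40.0, 10.0, (512, 512))
roi = np.zeros(image.shape, dtype=bool)
roi[200:240, 300:350] = True

print("ruler:", round(ruler_mm((100, 100), (180, 160)), 1), "mm")
print("ROI mean:", image[roi].mean(), "ROI max:", image[roi].max())
```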
Mirada XD allows the user to define, import, transform, store and export regions of interest structures in DICOM RT format for use in radiation therapy planning systems.
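Reading contour data out of a DICOM RT Structure Set, the interchange format named above, might look like the following pydicom sketch; the file name is hypothetical and this is not XD's implementation.

```python
import pydicom

rt = pydicom.dcmread("rtstruct.dcm")  # hypothetical RT Structure Set file
assert rt.Modality == "RTSTRUCT"

# Map ROI numbers to names, then walk each ROI's contours.
names = {r.ROINumber: r.ROIName for r in rt.StructureSetROISequence}
for roi in rt.ROIContourSequence:
    label = names.get(roi.ReferencedROINumber, "?")
    n_contours = len(getattr(roi, "ContourSequence", []))
    print(label, n_contours, "contours")
```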
XD is a stand-alone desktop software application with tools and features for displaying and viewing medical images, as well as tools for performing quantitative readings of the imaging data.
The use environment for XD is in a clinical environment, typically within dedicated radiology reading rooms or areas.
The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of user-defined regions of interest.
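As a hedged illustration of what a seed-based segmentation feature typically wraps, the sketch below thresholds a volume and keeps the connected component containing a user-supplied seed point. scipy and the synthetic data are assumptions; the submission does not disclose XD's actual segmentation algorithms.

```python
import numpy as np
from scipy import ndimage

def segment_from_seed(image, seed, threshold):
    """Keep the above-threshold connected component containing the seed point."""
    binary = image >= threshold
    labels, _ = ndimage.label(binary)
    seed_label = labels[seed]
    return labels == seed_label if seed_label != 0 else np.zeros_like(binary)

# Synthetic volume with one bright "lesion" for demonstration.
volume = np.random.default_rng(1).normal(0.0, 1.0, (64, 64, 64))
volume[20:30, 20:30, 20:30] += 5.0

mask = segment_from_seed(volume, (25, 25, 25), threshold=3.0)
print("segmented voxels:", int(mask.sum()))
```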
The software system runs on a dedicated workstation and is intended for the display and processing of Computed Tomography (CT), Magnetic Resonance (MR), Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT), or Nuclear Medicine (NM) images, including contrast-enhanced and dynamic or multi-sequence images.
XD is not intended for specific populations; the system can be used to display data of any patient demographic chosen by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, and physicists for use in clinical workflows.
Mirada XD is a medical imaging software application. The provided text describes the device's indications for use, comparison to a predicate device, and performance testing, but does not explicitly state specific acceptance criteria or provide a detailed study report with all the requested information.
Here's an analysis based on the available text, with indications where specific information is missing:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of acceptance criteria with corresponding performance metrics. It generally states that the device "meets the user needs and requirements" and passed various tests.
| Acceptance Criteria (Inferred from text) | Reported Device Performance |
|---|---|
| Functional and Performance Requirements | "XD is validated and verified against its user needs and intended use by the successful execution of planned performance, functional and algorithmic testing included in this submission." "The results of performance, functional and algorithmic testing demonstrate that XD meets the user needs and requirements of the device..." |
| Accuracy of specific features (e.g., Thick Slab visualization, PET hotspot finder) | "...to ensure that performance and accuracy was as expected." (No specific numerical metrics provided.) |
| Usability and Human Factors: Adherence to IEC 62366-1:2015 and FDA guidance. | "Human factors testing has been performed in line with Applying Human Factors and Usability Engineering to Medical Devices, February 3, 2016 and IEC 62366-1:2015." |
| Compliance with DICOM Standard | "...adherence to the DICOM standard." |
| Risk Mitigation: Satisfactory mitigation of potential risks in device design. | "Potential risks were analyzed and satisfactorily mitigated in the device design." |
| Safety and Effectiveness: Performance at least as safely and effectively as the predicate "Mirada XD". | "In conclusion, performance testing demonstrates that XD is substantially equivalent to, and performs at least as safely and effectively as, the listed predicate device. XD meets requirements for safety and effectiveness." "The additional visualization and segmentation features support the user in completing diagnostic readings and identifying potential findings. These features do not raise any new types of safety or effectiveness questions." |
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided in the document. The text mentions "performance testing (Bench)" but offers no details on the number of cases, images, or patient data used, nor their origin (e.g., country, retrospective/prospective).
3. Number of Experts Used to Establish the Ground Truth and Qualifications
This information is not provided in the document. The text refers to "user actions" and "responsibility of the user" for clinical accuracy of segmentations, implying human interpretation, but does not detail the process of establishing ground truth for testing.
4. Adjudication Method for the Test Set
This information is not provided in the document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned. The document focuses on performance testing of the device itself and its equivalence to a predicate, rather than on a comparison of human readers with and without AI assistance. Consequently, no effect size for how much human readers improve with AI assistance is reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
The document mentions "performance testing (Bench)" for features such as "Thick Slab visualization" and the "PET hotspot finder." It also states, "The software's functions are dependent on the user actions as well as on the available information in the provided medical image data." and "The use of the segmentation tools to achieve a satisfactory delineation of any regions of interest is a user operation and the clinical accuracy of segmentation is the responsibility of the user and not an XD function." This suggests that the underlying algorithms were bench tested, but the device is positioned as a tool for a human user rather than a standalone algorithm; a standalone, algorithm-only performance study in a clinical context is not explicitly described.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for testing. Given the nature of the device (medical image management and processing system with tools for segmentation, registration, and quantification), it's likely that ground truth would involve:
- Expert Consensus/Manual Delineation: For segmentation accuracy, experts would typically manually delineate regions of interest.
- Known Physical Measurements/Phantoms: For distance and volumetric measurements.
- Reference Image Registrations: For image registration accuracy.
However, the text emphasizes the user's responsibility for clinical accuracy, making it unclear how ground truth was precisely established for the "performance and accuracy" evaluation mentioned.
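If expert manual delineations did serve as ground truth, segmentation accuracy would typically be summarized with an overlap metric such as the Dice similarity coefficient. The sketch below shows that computation under this assumption; the submission does not state which metric, if any, was used.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks (1.0 = identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks: an algorithm output versus a (hypothetical) expert delineation.
a = np.zeros((64, 64), bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), bool); b[12:32, 12:32] = True
print(f"Dice = {dice(a, b):.3f}")
```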
8. Sample Size for the Training Set
This information is not provided. The document describes a "Medical image management and processing system" with segmentation, registration, and visualization tools, and does not mention "training sets" in a machine-learning sense, suggesting that verification and validation focused on software functionality and accuracy rather than on a trained model's performance.
9. How the Ground Truth for the Training Set Was Established
Since a training set for machine learning is not mentioned, the method for establishing its ground truth is also not provided.