Search Results
Found 2 results
510(k) Data Aggregation
(107 days)
Medic Vision - Imaging Solutions Ltd
The iQMR is intended for networking, communication, processing and enhancement of MRI images in DICOM format. The device processing is not effective for lesions, masses, or abnormalities smaller than 1.5 mm. This device is indicated for use by qualified trained medical professionals.
iQMR is a software package designed to process MRI images. The iQMR processing enhances MRI images by reducing image noise. The software runs on a PC server connected to the Local Area Network (LAN). The device receives MRI images in DICOM format from different workstations over the network, processes the images, and transmits them, in DICOM format, to selected workstations.
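The submission does not describe Medic Vision's proprietary noise-reduction algorithm. Purely as an illustration of the kind of per-slice denoising pass such a device performs, here is a minimal sketch using a simple box (mean) filter on a NumPy pixel array; the filter choice and parameters are assumptions for demonstration, not the iQMR method.

```python
import numpy as np

def denoise(image: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Reduce noise with a simple box (mean) filter.

    Illustrative stand-in only: the actual "Medic Vision's
    Algorithms" processing is not described in the submission.
    """
    pad = kernel_size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    # Sum the kernel_size x kernel_size neighborhood of every pixel.
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (kernel_size ** 2)

# Synthetic noisy "MRI slice": constant signal plus Gaussian noise.
rng = np.random.default_rng(0)
slice_ = 100.0 + rng.normal(0.0, 10.0, size=(64, 64))
smoothed = denoise(slice_)
```

A 3×3 mean filter reduces the standard deviation of independent pixel noise by roughly a factor of three, at the cost of spatial resolution — consistent with the device's stated ineffectiveness for very small structures.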
The provided text describes the iQMR device and its FDA 510(k) submission. However, it does not contain the detailed information required to fully answer the request regarding acceptance criteria, a specific study proving it meets those criteria, expert details, or sample sizes for training and test sets in a clinical context.
The document states:
- "Clinical tests were not conducted." [Page 4]
- "However, MRI clinical images were processed in order to ensure that the iQMR conforms to defined user needs and intended uses under actual conditions." [Page 4]
- It also references "predefined software test plan" and "MRI standard phantoms" for non-clinical verification. [Page 3]
Therefore, the response below will highlight the limitations of the provided text in addressing the specific questions, while extracting what information is available.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or detailed performance metrics from a formal clinical study. It mentions the device's intended use and the results of non-clinical tests.
| Acceptance Criteria Category | Specific Criteria (as implied or stated) | Reported Device Performance (from non-clinical tests) |
|---|---|---|
| Intended Use Conformity | Device conforms to defined user needs and intended uses under actual conditions. | Clinical MRI images were processed to ensure conformity under actual conditions. |
| Software Functionality | Software meets its specified performance. | Verified by testing the software following a predefined software test plan. |
| System Performance | System meets its specified performance (noise reduction, image enhancement). | Verified by testing using MRI standard phantoms. |
| Image Resolution Limit | Processing not effective for lesions, masses, or abnormalities smaller than 1.5 mm. | This is a declared limitation of the device's capability, not a performance metric; the document provides no data to "prove" it. |
| DICOM Compatibility | Receives and transmits MRI images in DICOM format. | Software handles DICOM input and output. |
| Safety and Effectiveness | Equivalent to predicate device in terms of safety and effectiveness. | Non-clinical tests demonstrate that differences in technological characteristics do not raise new questions of safety or effectiveness. |
| Risk Management | Risks managed and controlled following the ISO 14971 standard. | Hazards identified, risk levels evaluated, mitigation measures taken. Benefits outweigh residual risks. |
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Clinical tests were not conducted." [Page 4]
It does mention that "MRI clinical images were processed" [Page 4] but does not specify the sample size, number of images, or their provenance (e.g., country of origin, retrospective/prospective nature).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Since formal clinical tests were not conducted, and the document only mentions "processing MRI clinical images," there is no information provided about experts establishing ground truth for a test set.
4. Adjudication Method for the Test Set
As formal clinical tests with human interpretation and ground truth establishment were not conducted, no adjudication method is described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was mentioned or conducted. The submission states that "Clinical tests were not conducted." [Page 4] Therefore, no effect size for human readers improving with AI vs. without AI assistance is provided.
6. Standalone Performance
The device is described as "a software package that is aimed to process MRI images" [Page 3] for "enhancement of MRI images by reduction of the image noise." [Page 3] While it performs its function without a human "in the loop" during image processing, no standalone clinical performance results in the sense of diagnostic accuracy (e.g., sensitivity or specificity for disease detection) are provided, because its purpose is image enhancement and noise reduction, not primary diagnosis. The device's declared limitation ("not effective for lesion, mass, or abnormalities of sizes less than 1.5mm") further indicates that it is an enhancement tool, not a diagnostic one.
7. Type of Ground Truth Used
For the "processing of MRI clinical images" to ensure conformity, the document does not specify the type of ground truth. Given the device's function (image enhancement/noise reduction) and the lack of clinical studies, it's highly likely that ground truth, if informally assessed, would relate to image quality metrics, noise levels, or visual assessments by qualified personnel familiar with MRI, rather than pathology or outcome data for diagnostic accuracy. For the non-clinical tests, ground truth would be based on "MRI standard phantoms."
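As an illustration of the kind of objective image-quality metric that phantom-based ground truth could rely on, here is a minimal signal-to-noise ratio (SNR) computation on a synthetic phantom image. The ROI layout and numbers are assumptions for demonstration; the submission does not specify which metrics were used.

```python
import numpy as np

def phantom_snr(image: np.ndarray, signal_roi, background_roi) -> float:
    """SNR from two regions of a phantom image: mean signal in a
    uniform insert divided by the noise (std) of a background region.

    A common phantom-based image-quality metric; the specific ROIs
    and thresholds here are illustrative, not from the submission.
    """
    signal = image[signal_roi].mean()
    noise = image[background_roi].std()
    return signal / noise

rng = np.random.default_rng(1)
img = rng.normal(5.0, 1.0, size=(32, 32))  # noisy background
img[8:24, 8:24] += 95.0                    # bright uniform phantom insert
snr = phantom_snr(img,
                  (slice(10, 22), slice(10, 22)),   # inside the insert
                  (slice(0, 6), slice(0, 6)))       # background corner
```

Measuring SNR before and after processing would quantify the noise reduction the iQMR claims, without requiring any diagnostic ground truth.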
8. Sample Size for the Training Set
The document does not specify a sample size for any training set. It details verification and validation using predefined software test plans, MRI standard phantoms, and processing clinical images, but does not distinguish these as "training" or "test" sets in the context of machine learning model development. The algorithm is described as "Medic Vision's Algorithms" without further detail on its development process involving training data.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, no information is provided on how its ground truth was established.
(180 days)
MEDIC VISION IMAGING SOLUTIONS
The SafeCT-29 is intended to provide the Computed Tomography Dose Check feature to Computed Tomography X-ray systems.
The SafeCT-29 is specifically indicated for providing the Computed Tomography Dose Check feature which notifies and alerts the CT equipment operators, prior to a scan, if the estimated dose index is above the predefined thresholds, for CT scanners not equipped with this functionality. The device is indicated for use by professional personnel.
Medic Vision's SafeCT-29 provides a vendor-neutral Radiation Dose Check functionality, in accordance with the Dose Check feature specified by the NEMA XR-25 Standard. The device is a software and hardware system that includes a computer, a dedicated display, and controls. The SafeCT-29 is interfaced to CT scanners that are not equipped with the Dose Check function. The device is connected to the CT console video output via a standard video connector. The SafeCT-29 computer's internal video splitter captures the CT console display video in real time, without affecting either the CT console itself or its display and workflow. This analog video signal is digitized by a video grabber. The SafeCT-29 software continuously analyzes the digital input video using OCR software. The radiation dose information, as calculated by the scanner and displayed to the CT operator, is extracted and analyzed in real time. As specified by the NEMA XR-25 Standard, the SafeCT-29 notifies and alerts the CT operators, prior to a scan, if the estimated dose index is above the predefined values set by the operating group, practice, or organization. SafeCT-29 prevents continuing an over-the-limit scan unless dose levels are reconfirmed by the operator, in accordance with NEMA XR-29. The device records the details of such events, including the operator details and decisions. This record is available to the user.
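The alert-and-reconfirm logic described above can be sketched in a few lines. This is a hedged illustration of the NEMA XR-25-style decision flow only: the function name, threshold handling, and log structure are assumptions, not Medic Vision's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DoseCheckLog:
    """Records over-threshold events, as the SafeCT-29 is said to do."""
    events: List[dict] = field(default_factory=list)

def dose_check(estimated_ctdivol: float, alert_threshold: float,
               operator: str, confirmed: bool, log: DoseCheckLog) -> bool:
    """Return True if the scan may proceed.

    Illustrative logic: an estimated dose index above the
    site-configured threshold blocks the scan unless the operator
    reconfirms, and every such event is recorded with the
    operator's identity and decision.
    """
    if estimated_ctdivol <= alert_threshold:
        return True  # under the limit: no alert, scan proceeds
    log.events.append({
        "ctdivol": estimated_ctdivol,
        "threshold": alert_threshold,
        "operator": operator,
        "confirmed": confirmed,
    })
    return confirmed  # over the limit: proceed only if reconfirmed

log = DoseCheckLog()
under = dose_check(20.0, 50.0, "op1", confirmed=False, log=log)   # proceeds
blocked = dose_check(80.0, 50.0, "op1", confirmed=False, log=log) # blocked
retry = dose_check(80.0, 50.0, "op1", confirmed=True, log=log)    # reconfirmed
```

Keeping the event record available to the user, as the description states, is what makes the over-the-limit decisions auditable.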
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Specificity in detecting correct input data (protocol and dose data) | Better than 99.4% |
| Delay in detecting correct input data | Less than 300 ms |
| Accuracy of accumulated CTDIvol calculation | Follows AAPM guidelines; assumes a "worst-case scenario" (calculated CTDIvol may be higher than the actual value) to prevent over-the-limit scans. |
| Compliance with NEMA XR-25 Standard (Dose Check features) | The software output meets the input as defined by the Software Requirements Specifications (SRS), which encompass the Dose Check features specified by the NEMA XR-25 standard, display and operation controls, and applicable risk mitigations. |
| Performance in simulating real-world CT scanner input | The device processed recorded data successfully and its Dose Check functionality was tested, confirming it meets performance specifications. |
| Usability and reliability in the user environment | Verified at 4 imaging centers using 4 different CT scanner models by end users and Medic Vision engineers. |
| Conformity to defined user needs and intended uses | Confirmed through validation testing of production units at 5 medical centers with 5 different CT scanner models, covering installation, operation, functionality processes, and features. |
Study Details
- Sample sizes used for the test set and the data provenance:
- Test Set (for OCR software verification): 1800 screen images captured from various CT scanners. These images contained more than 6000 numeric and alphanumeric text images.
- Test Set (for system performance verification): Sequence of data recorded from various CT scanners (specific number not provided, but implies a diverse set).
- Test Set (for usability and reliability): Conducted at 4 imaging centers using 4 different CT scanner models.
- Test Set (for validation): Conducted at 5 medical centers with 5 different CT scanner models.
- Data Provenance: The text states "various CT scanners" and "different CT scanner models," indicating a diverse, but unspecified, range. It does not explicitly state the country of origin or whether the data was retrospective or prospective. Given the nature of a product submission, it's likely retrospective data collection from existing scanners.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The text does not specify the number or qualifications of experts used to establish the ground truth. For the OCR performance, the "results were compared to the original CT screens' data," implying a direct comparison to the source rather than expert interpretation. For the overall system performance, the "results of process confirmed that device meets its performance specifications," implying validation against predefined expectations rather than expert consensus on individual cases.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The text does not describe an adjudication method for the test set. For the OCR performance, it explicitly states a direct comparison ("results were compared to the original CT screens' data"). For other tests, it implies automated or predefined checks against specifications.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The device, SafeCT-29, is an add-on to CT scanners to provide a "Dose Check" feature. It functions by analyzing the CT console's video output via OCR to alert operators about high dose indexes prior to a scan. It does not involve human readers interpreting images, nor does it provide AI assistance for image interpretation. Therefore, there's no "human readers improve with AI" metric applicable here.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, a standalone performance evaluation of the OCR software was done. The "accuracy of the OCR software" was tested by inputting 1800 screen images and comparing its extracted data (dose levels and protocols) to the "original CT screens' data." This can be considered a standalone evaluation of the core OCR algorithm. The overall system's functionality was also verified by simulating inputs and testing the "Dose Check functionality," which is also a standalone assessment of the algorithm's performance against specifications.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For OCR software: The ground truth was the original data displayed on the CT screens from which the images were captured. This is a direct, objective comparison.
- For overall system performance: The ground truth was based on predefined Software Requirements Specifications (SRS), which were aligned with the NEMA XR-25 standard and AAPM guidelines for dose calculation. This indicates a rules-based and standard-compliant ground truth.
- The sample size for the training set:
- The document does not provide information on the sample size used for training the OCR software or any other part of the SafeCT-29 system. It focuses solely on verification and validation (testing) activities. It mentions that an Off-The-Shelf (OTS) OCR software is embedded, implying that Medic Vision may not have trained the core OCR algorithm from scratch, but rather verified its performance within their system.
- How the ground truth for the training set was established:
- As no information on a training set is provided, the method for establishing its ground truth is also not described. If an OTS OCR was used, its training would have been handled by its original developer. For Medic Vision's own development, if any training was involved, the method is not disclosed.
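The OCR verification described above — comparing extracted dose and protocol fields against the original CT screens' data over roughly 6000 text images — amounts to a simple field-level accuracy computation. A minimal sketch follows; the function name, data layout, and example values are illustrative assumptions, not the submission's test harness.

```python
def extraction_specificity(extracted, reference) -> float:
    """Fraction of fields the OCR read exactly as displayed.

    Mirrors the reported comparison of OCR output against the
    "original CT screens' data"; the metric name follows the
    document's usage, the data layout is an assumption.
    """
    assert len(extracted) == len(reference)
    correct = sum(e == r for e, r in zip(extracted, reference))
    return correct / len(reference)

# ~6000 simulated text fields, as in the reported test set.
reference = ["CTDIvol 12.5", "Head Routine", "DLP 310"] * 2000
extracted = list(reference)
extracted[0] = "CTDIvol 72.5"  # one simulated misread (1 vs 7 confusion)
rate = extraction_specificity(extracted, reference)
```

With one misread in 6000 fields the rate is about 99.98%, comfortably above the >99.4% acceptance level the submission reports; the ground truth here is objective (the source screen data), which is why no expert adjudication was needed.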