Search Results
Found 3 results
510(k) Data Aggregation
DRX-Revolution Mobile X-ray System (196 days)
The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.
The DRX-Revolution Mobile X-ray System is a mobile diagnostic x-ray system that uses digital technology for bedside or portable exams. Key components of the system are the x-ray generator; a tube head assembly (including the x-ray tube and collimator) that allows multiple axes of movement; a maneuverable drive system; and touchscreen user interface(s) for user input. The system is designed with installable software for acquiring and processing medical diagnostic images outside of a standard stationary x-ray room. It is a mobile diagnostic system intended to generate and control x-rays for examination of various anatomical regions.
The provided text describes a 510(k) premarket notification for the DRX-Revolution Mobile X-ray System, which includes changes such as the addition of Smart Noise Cancellation (SNC) functionality and compatibility with a new detector (Lux 35). The study focuses on demonstrating the substantial equivalence of the modified device to a previously cleared predicate device (DRX-Revolution Mobile X-ray System, K191025).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided information:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (for SNC) | Reported Device Performance |
---|---|
At least 99% of all image pixels within ±1 pixel value | Achieved. The results demonstrated that at least 99% of all image pixels were within ±1 pixel value. |
Absolute maximum difference across all test images ≤ 10 pixel values | Achieved. The absolute maximum difference seen across all test images was 3 pixel values, within the maximum allowable difference of 10 pixel values. |
Noise ratio values computed for every pixel of the test images should be | |
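The first two criteria are direct pixel-wise comparisons between a reference image and the SNC-processed output. A minimal sketch of how such a check could be implemented is shown below; the thresholds mirror the stated criteria, but the function and array handling are illustrative assumptions, not from the submission:

```python
import numpy as np

def check_pixel_differences(reference: np.ndarray, test: np.ndarray,
                            pct_threshold: float = 99.0,
                            max_diff_limit: int = 10) -> bool:
    """Check the two stated pixel-difference acceptance criteria.

    reference, test: integer pixel arrays of identical shape.
    """
    diff = np.abs(reference.astype(np.int64) - test.astype(np.int64))
    pct_within_one = 100.0 * np.count_nonzero(diff <= 1) / diff.size
    max_abs_diff = int(diff.max())
    return pct_within_one >= pct_threshold and max_abs_diff <= max_diff_limit
```

In the reported results, the observed maximum difference of 3 pixel values sits well inside the 10-pixel-value limit.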
DRX-Compass System (29 days)
The device is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen, and other body parts. The device is not indicated for use in mammography.
The DRX-Compass System is a general-purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components, including various models of high-voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, various models of tube support devices, an x-ray tube, and a collimator (beam-limiting device). The DRX-Compass can be used with digital radiography (DR) and computed radiography (CR) receptors. "Smart features" have been added to the DRX-Compass to provide remote capabilities for existing functions of the system; they are designed to reduce the technologist's manual tasks, simplify exam setup, and speed up workflow while preparing for the patient exposure. Implementation of these smart features does not change the intended use of the system.
The provided text does not contain detailed information about specific acceptance criteria and a study that comprehensively proves the device meets those criteria for the DRX-Compass system. The document is a 510(k) summary for the FDA, which focuses on demonstrating substantial equivalence to a predicate device rather than a comprehensive efficacy study for new features.
However, based on the information provided, I can extract the relevant details that are present and explain why some requested information is not available in this document.
Here's a breakdown of what can be inferred and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
The document mentions that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." However, the specific acceptance criteria themselves (e.g., specific thresholds for DQE/MTF, or performance metrics for the "smart features") are not explicitly detailed in this 510(k) summary. Similarly, the reported device performance values against those specific criteria are also not provided.
The closest information related to performance is:
Acceptance Criteria (Inferred/General) | Reported Device Performance (Inferred/General) |
---|---|
Image quality of additional detectors equivalent to predicate. | Flat panel detector DQE/MTF data shows the additional detectors (DRX Plus 2530, Focus HD 35, Focus HD 43, Lux 35) are equivalent in image quality to DRX Plus detectors cleared with the predicate. |
Compliance with electrical safety standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-2-54). | Device complies with listed electrical safety standards. |
Compliance with usability standards (IEC 60601-1-6, IEC 62366). | Device complies with listed usability standards. |
No new risks identified that raise additional questions of safety and performance (ISO 14971). | All product risks have been mitigated; no changes to risk control measures; testing indicates substantial equivalence. |
"Smart Features" (Real-time Video, LLI, Collimation, Patient Picture) simplify exam setup and improve workflow without changing intended use. | These features are designed to reduce manual tasks and speed up workflow. (No specific quantitative performance metrics provided in this document). |
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided in the 510(k) summary. The document states "Non-clinical testing such as standards testing are the same as that of the predicate. The verification and validation testing of the modified device demonstrates that the modified device performs as well as the predicate and is substantially equivalent." without detailing the specific sample sizes or data provenance for these tests. For imaging performance, it mentions DQE/MTF data for detectors, but not the sample size of images or patients used for performance evaluation of the overall system or its new "smart features."
3. Number of Experts Used to Establish Ground Truth and Qualifications
This information is not provided in the 510(k) summary. The document focuses on technical verification and validation, and comparison to a predicate device, rather than a clinical study requiring expert consensus on ground truth.
4. Adjudication Method for the Test Set
This information is not provided in the 510(k) summary. Given the absence of specific clinical study details or expert ground truth establishment, no adjudication method would be mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study is not mentioned in this document. The submission's focus is on demonstrating substantial equivalence through technical testing and compliance with recognized standards, particularly for the "smart features," which are described as workflow enhancements rather than diagnostic AI tools requiring reader performance studies. There is no mention of AI assistance for human readers or associated effect sizes.
6. Standalone (Algorithm Only) Performance Study
A standalone performance study of an algorithm without human-in-the-loop is not explicitly mentioned in this document. The "smart features" are described as functionalities to assist the operator, implying human-in-the-loop operation, rather than a standalone diagnostic algorithm. The document mentions "Flat panel detector DQE/MTF data shows that the additional detectors supported by the modified device (DRX-Compass) are equivalent in image quality to that of the DRX Plus detectors cleared with the predicate," which is a technical performance metric for the detector component, not an algorithm's diagnostic performance.
7. Type of Ground Truth Used
The type of ground truth used for any performance evaluation is not explicitly stated. For the detector performance, DQE/MTF data refers to physical image quality metrics rather than a diagnostic ground truth (like pathology or clinical outcomes). For the "smart features," their evaluation appears to be based on functional verification and validation of their workflow enhancement capabilities, rather than comparison to a ground truth for diagnostic accuracy.
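For context, the MTF is one such physical metric: it can be estimated from an edge image by differentiating the edge spread function (ESF) to obtain the line spread function (LSF) and taking the normalized magnitude of its Fourier transform. A simplified one-dimensional sketch follows; standard slanted-edge methods (e.g., IEC 62220-1) use an oversampled angled edge, and the input here is an illustrative clean edge profile:

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_pitch_mm):
    """Estimate a 1-D presampled MTF from an edge spread function (ESF)."""
    lsf = np.gradient(np.asarray(esf, dtype=np.float64))  # ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                         # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm, up to Nyquist
    return freqs, mtf
```

Detector-equivalence claims of the kind quoted above compare such curves (and the noise-aware DQE derived from them) between the new and predicate detectors.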
8. Sample Size for the Training Set
This information is not provided in the 510(k) summary. The document does not describe the use of machine learning algorithms that would typically require a training set. The "smart features" appear to be rule-based or real-time processing functionalities rather than learning algorithms.
9. How Ground Truth for the Training Set Was Established
Since there is no mention of a training set or machine learning, details on establishing its ground truth are not provided.
In summary, the 510(k) submission for the DRX-Compass focuses on demonstrating substantial equivalence to a predicate device by:
- Ensuring the modified device's indications for use are identical.
- Confirming compliance with recognized electrical safety and performance standards (AAMI ES60601-1, IEC 60601-1-6, IEC 60601-1-3, IEC 60601-2-54, IEC 62366).
- Applying risk management (ISO 14971) to ensure no new risks are introduced.
- Showing that new components (e.g., additional detectors) maintain equivalent image quality (e.g., DQE/MTF data).
- Asserting that new "smart features" improve workflow without changing the device's intended use or safety profile.
The document does not provide the kind of detailed clinical study data often found for AI/ML-based diagnostic devices, such as specific acceptance criteria values, sample sizes for test or training sets, expert qualifications, or adjudication methods, as these may not typically be required for modifications to a stationary x-ray system focused primarily on workflow enhancements and component upgrades.
Eclipse II with Smart Noise Cancellation (219 days)
The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
Eclipse software runs inside the ImageView product application software (also called the console software). The Eclipse II image processing software with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software; it is not stand-alone and is to be used only with the console software. Like the predicate device, it supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module, which consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (the modified device) applies this enhanced noise reduction before executing the Eclipse II image processing.
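The summary does not describe the CNN's architecture. Purely as an illustration of the general approach, a small residual convolutional denoiser of the kind commonly used for image noise reduction might look like the following; every layer size and design choice here is an assumption, not a description of the actual SNC network (PyTorch):

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Illustrative residual denoiser: the network predicts the noise,
    which is then subtracted from the input. The real SNC architecture
    is not disclosed in the 510(k) summary."""

    def __init__(self, channels: int = 64, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy: torch.Tensor) -> torch.Tensor:
        return noisy - self.body(noisy)  # residual learning

# Example: run a (batch, 1, H, W) radiograph tensor through the denoiser.
model = DenoisingCNN()
with torch.no_grad():
    denoised = model(torch.rand(1, 1, 256, 256))
```

Residual learning (predicting the noise rather than the clean image) is a common design choice for denoisers because the residual is easier to learn than the full image content.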
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Diagnostic Quality Preservation/Improvement: The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical Evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." |
No Substantial Residual Image Artifacts: The noise reduction should not introduce significant new artifacts. | Analysis of Difference Images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest." |
Preservation/Improvement of Detectability: The detectability of lesions should not be negatively impacted and ideally improved. | Ideal Observer Evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." |
No New Questions of Safety & Effectiveness: The modifications should not raise new safety or effectiveness concerns. | Risk Assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness." |
Same Intended Use: The device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in the comparison chart) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
- Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated. The text notes that a "clinical evaluation was performed by board certified radiologists" but does not specify how many were involved.
- Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.
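The summary reports "statistical test results" without naming the test. For paired ordinal reader ratings such as the -2 to +2 visual difference scale, a Wilcoxon signed-rank test is one common choice; the sketch below uses made-up placeholder scores, since the actual study data are not public:

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-image reader scores on the -2..+2 visual difference
# scale (positive = investigational software preferred). Illustrative only.
scores = np.array([1, 0, 2, 1, 0, 1, -1, 2, 1, 0, 1, 1])

# One-sided test: are the scores shifted above zero?
stat, p_value = wilcoxon(scores, alternative="greater")
print(f"W = {stat}, p = {p_value:.4f}")
```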
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
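For context on what an "Ideal Observer Evaluation" typically measures: for a known signal in noise, detectability is summarized by an index d'. The sketch below gives a Monte Carlo estimate of d' for a simple matched-filter observer in white Gaussian noise; the observer model, signal, and noise parameters are all illustrative stand-ins, as the submission does not describe its test conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative known signal: a small Gaussian blob template.
y, x = np.mgrid[-8:9, -8:9]
signal = 5.0 * np.exp(-(x**2 + y**2) / 8.0)

def detectability(signal, noise_sigma, n_trials=2000):
    """Monte Carlo d' for a matched-filter observer in white Gaussian noise."""
    t_present, t_absent = [], []
    for _ in range(n_trials):
        noise = rng.normal(0.0, noise_sigma, signal.shape)
        t_present.append(np.sum(signal * (signal + noise)))  # signal present
        t_absent.append(np.sum(signal * noise))              # signal absent
    t_present, t_absent = np.array(t_present), np.array(t_absent)
    pooled_sd = np.sqrt(0.5 * (t_present.var() + t_absent.var()))
    return (t_present.mean() - t_absent.mean()) / pooled_sd

# Comparing d' before and after denoising (at matched exposures) is one way
# to show that detectability is "preserved or improved".
print(detectability(signal, noise_sigma=2.0))
```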
7. The Type of Ground Truth Used
- Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The text mentions that "clinical images with added simulated noise" were used to train the Convolutional Neural Network (CNN).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a Convolutional Neural Network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests the model was trained to learn the mapping from noisy images (simulated low SNR) to the corresponding clean, or less noisy, clinical images. The text does not specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach in which the desired noise-free output served as the ground truth.
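That training scheme, with clinical images as targets and simulated reduced-SNR versions as inputs, can be sketched as follows. The noise model and magnitudes below are assumptions for illustration; the summary does not specify them:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_training_pair(clean_image: np.ndarray, dose_fraction: float = 0.5):
    """Build a (noisy input, clean target) pair for supervised denoising.

    Simulates a reduced-dose acquisition by scaling the signal and adding
    Poisson (quantum) plus Gaussian (electronic) noise, then rescaling.
    The actual SNC noise simulation is not disclosed; this is illustrative.
    """
    scaled = np.clip(clean_image.astype(np.float64) * dose_fraction, 0, None)
    quantum = rng.poisson(scaled).astype(np.float64)
    electronic = rng.normal(0.0, 2.0, clean_image.shape)
    noisy = (quantum + electronic) / dose_fraction
    return noisy, clean_image.astype(np.float64)
```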