Search Results
Found 4 results
510(k) Data Aggregation
(196 days)
The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.
The DRX-Revolution Mobile X-ray System is a mobile diagnostic x-ray system that utilizes digital technology for bedside or portable exams. Key components of the system are the x-ray generator, a tube head assembly (including the x-ray tube and collimator) that allows for multiple axes of movement, a maneuverable drive system, and touchscreen user interface(s) for user input. The system is designed with installable software for acquiring and processing medical diagnostic images outside of a standard stationary X-ray room. It is a mobile diagnostic system intended to generate and control X-rays for examination of various anatomical regions.
The provided text describes a 510(k) premarket notification for the DRX-Revolution Mobile X-ray System, which includes changes such as the addition of Smart Noise Cancellation (SNC) functionality and compatibility with a new detector (Lux 35). The study focuses on demonstrating the substantial equivalence of the modified device to a previously cleared predicate device (DRX-Revolution Mobile X-ray System, K191025).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided information:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (for SNC) | Reported Device Performance |
|---|---|
| At least 99% of all image pixels must be within ±1 pixel value | Achieved. The results demonstrated that at least 99% of all image pixels were within ±1 pixel value. |
| The absolute maximum difference across all test images must be ≤ 10 pixel values | Achieved. The absolute maximum difference seen across all test images was 3 pixel values, meeting the acceptance criterion of a maximum allowable difference of 10 pixel values. |
| Noise ratio values computed for every pixel of the test images should be … | |
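The first two criteria are simple pixel-difference statistics between reference and test images. Below is a minimal sketch of how such a check could be computed, assuming the two images are same-shape integer arrays; the function name and return structure are illustrative, not from the submission.

```python
import numpy as np

def pixel_difference_check(reference: np.ndarray, test: np.ndarray,
                           tolerance: int = 1, max_allowed: int = 10) -> dict:
    """Compare a test image against a reference, pixel by pixel.

    Reports the percentage of pixels within +/- `tolerance` pixel values
    and the absolute maximum difference, mirroring the two quoted
    acceptance criteria (>= 99% within +/- 1; max difference <= 10).
    """
    diff = np.abs(reference.astype(np.int64) - test.astype(np.int64))
    frac_within = float(np.mean(diff <= tolerance))
    max_diff = int(diff.max())
    return {
        "pct_within_tolerance": 100.0 * frac_within,
        "max_difference": max_diff,
        "passes": frac_within >= 0.99 and max_diff <= max_allowed,
    }
```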
---
(134 days)
The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.
The DRX-Rise Mobile X-ray System is a diagnostic mobile X-ray system utilizing digital radiography technology. The DRX-Rise consists of a self-contained X-ray generator, image receptor(s), imaging display and software for acquiring medical diagnostic images outside of a standard stationary X-ray room. These components are mounted on a motorized cart that is battery powered to enable the device to be driven from location to location by user interaction. The DRX-Rise system incorporates a flat-panel detector that can be used wirelessly for exams such as in-bed chest projections. The device acquires images using Carestream's clinical acquisition software platform (ImageView) and digital flat panel detectors. ImageView is considered software that is of Moderate Level of Concern and not intended for manipulation of medical images. The DRX-Rise Mobile X-ray System is designed for digital radiography (DR) with Carestream detectors.
The provided document is a 510(k) premarket notification for the DRX-Rise Mobile X-ray System, asserting its substantial equivalence to a predicate device (DRX-Revolution Mobile X-ray System, K191025). The document does not describe a study involving acceptance criteria for an AI/CADe device's performance when assisting human readers or evaluating standalone AI performance.
Instead, the document focuses on demonstrating that the DRX-Rise Mobile X-ray System itself, as a physical medical device, is substantially equivalent to an already cleared device. This is achieved through comparisons of technological characteristics and compliance with consensus standards.
Therefore, I cannot provide the requested information regarding acceptance criteria and studies for an AI/CADe device's performance (points 2-9) because the submission does not pertain to such a device or study.
Here's a breakdown of what can be extracted from the provided text, related to the device itself:
1. A table of acceptance criteria and the reported device performance:
The document doesn't present acceptance criteria in the typical "performance target" vs. "achieved performance" format for an AI/CADe. Instead, it compares the modified device's specifications to the predicate device's specifications, arguing that any differences do not impact safety or performance.
| Criterion (Feature) | Predicate Device Performance (DRX-Revolution Mobile X-ray System, K191025) | Modified Device Performance (DRX-Rise Mobile X-ray System, K213568) | Impact Assessment (Implicit Acceptance Criterion) |
|---|---|---|---|
| Indications for Use | The device is designed to perform radiographic X-ray examinations on all pediatric and adult patients, in all patient treatment areas. | Same | Substantially equivalent (same indications for use is an explicit statement of acceptance) |
| Imaging Device Compatibility | Digital Radiography (DR) | Same | Substantially equivalent |
| Digital Radiography Imaging Device (Detector) | DRX Plus Detectors (K150766, K153142, K183245) | Same | Substantially equivalent |
| X-ray Generator Rating | 32 kW | Same | Substantially equivalent |
| mAs Range (Generator) | 0.1–320 mAs | 0.1–630 mAs | The DRX-Rise (modified device) provides a wider generator output range. No impact to safety/performance. |
| X-ray Tube Voltage Range | 40–150 kV (1 kV steps) | 40–125 kV (1 kV steps) | 40–125 kV is the most commonly used kV range in clinical imaging. No impact to safety/performance. |
| X-ray Tube Model | Canon/XRR-3336X | Canon/E7242 (X / FX / GX) | Same supplier, but a different tube model is used with the modified device. No impact to safety/performance. |
| X-ray Tube Focal Spot Size | 0.6 mm and 1.2 mm | 0.6 mm and 1.5 mm | Small focal spot size is the same as the predicate. Large focal spot size is 25% larger but within the expected range for clinical imaging. No impact to safety/performance or to image quality. |
| System Power for Charging | Single-phase AC: 50/60 Hz, 1440 VA, voltage: 100–240 V | Same | Substantially equivalent |
| Application System Software (Operator Console X-ray Control) | Carestream ImageView system software with image processing capability (K191025) | Same | Substantially equivalent |
| Collapsible Column | Yes | No | The column is fixed on the modified device. No impact to safety/performance. |
| Column Height | 1390–2193 mm | 1930 mm (fixed column) | No impact to safety/performance. |
| Column Rotation Range | ±270 degrees | Same | Substantially equivalent |
| Travel Method | Electric motor (battery powered) | Same | Substantially equivalent |
2. Sample size used for the test set and the data provenance: Not applicable. This submission concerns a hardware medical device, not a performance study on a test set of images. The "test set" in this context refers to the device itself being tested against its specifications and existing standards.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth for image interpretation by experts is not relevant to this submission.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what the effect size was of how much human readers improve with AI vs. without AI assistance: Not applicable. No AI assistance is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable. The "ground truth" for this device's acceptance is its compliance with recognized consensus standards and its functional equivalence to a predicate device.
8. The sample size for the training set: Not applicable. There is no mention of a training set as this is not an AI/ML device submission.
9. How the ground truth for the training set was established: Not applicable.
---
(219 days)
The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
Eclipse software runs inside the ImageView product application software (also called the console software). The Eclipse II image processing software with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software; it is not standalone and is to be used only with the console software. The device supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module. The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (the modified device) applies enhanced noise reduction before executing the Eclipse II image processing software.
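The description implies a fixed processing order: the SNC module denoises the acquired image first, and the existing Eclipse II chain then runs on the denoised result. A minimal sketch of that ordering follows; the function names are hypothetical, and a Gaussian filter stands in for the trained CNN purely so the sketch runs end to end.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def snc_denoise(image: np.ndarray) -> np.ndarray:
    # Stand-in for the trained CNN's inference call; a Gaussian blur is
    # used here only so the sketch executes.
    return gaussian_filter(image, sigma=1.0)

def eclipse_ii_process(image: np.ndarray) -> np.ndarray:
    # Stand-in for the existing Eclipse II enhancement chain (contrast,
    # detail, and latitude processing would happen here).
    return image

def process_acquisition(raw_image: np.ndarray, snc_enabled: bool = True) -> np.ndarray:
    """Optional SNC denoising runs before the standard Eclipse II processing."""
    image = snc_denoise(raw_image) if snc_enabled else raw_image
    return eclipse_ii_process(image)
```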
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Diagnostic Quality Preservation/Improvement: The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical Evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." |
| No Substantial Residual Image Artifacts: The noise reduction should not introduce significant new artifacts. | Analysis of Difference Images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest." |
| Preservation/Improvement of Detectability: The detectability of lesions should not be negatively impacted and should ideally be improved. | Ideal Observer Evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." |
| No New Questions of Safety & Effectiveness: The modifications should not raise new safety or effectiveness concerns. | Risk Assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall Conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness." |
| Same Intended Use: The device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in the comparison chart.) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
- Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated. The text says that a clinical evaluation "was performed by board certified radiologists" but does not specify how many were involved.
- Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.
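The summary does not name the statistical test applied to the reader scores. One plausible paired analysis of the 5-point visual difference scale (-2 to +2) would be a Wilcoxon signed-rank test on per-image scores, sketched below with hypothetical scores; this is an assumption for illustration, not the documented method.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-image visual difference scores (-2 to +2) from one
# reader, where positive values favor the investigational (SNC) software.
scores = np.array([1, 0, 2, 1, 0, 1, -1, 1, 2, 0, 1, 1])

# One-sided Wilcoxon signed-rank test: are the scores shifted above zero?
# Ties at zero carry no sign information and are dropped beforehand.
nonzero = scores[scores != 0]
stat, p_value = wilcoxon(nonzero, alternative="greater")
print(f"W = {stat}, one-sided p = {p_value:.4f}")
```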
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
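For context, ideal observer detectability is usually summarized by an index d'. The toy example below sketches a signal-known-exactly task under white Gaussian noise, where the ideal observer reduces to a matched filter and d' = ||s||/σ; the lesion shape, noise level, and task are invented for illustration and are not the submission's actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal-known-exactly task: a small Gaussian "lesion" on a flat
# background with additive white Gaussian noise.
size, noise_sigma = 32, 5.0
y, x = np.mgrid[:size, :size]
signal = 2.0 * np.exp(-((x - size / 2) ** 2 + (y - size / 2) ** 2) / (2 * 4.0 ** 2))

# For white Gaussian noise the ideal observer is the matched filter and
# the detectability index is d' = ||s|| / sigma.
print(f"analytic d' = {np.linalg.norm(signal) / noise_sigma:.2f}")

# Monte Carlo check: matched-filter responses with and without the signal.
template = signal.ravel()
present = [template @ (signal + rng.normal(0, noise_sigma, signal.shape)).ravel()
           for _ in range(2000)]
absent = [template @ rng.normal(0, noise_sigma, signal.shape).ravel()
          for _ in range(2000)]
print(f"empirical d' = {(np.mean(present) - np.mean(absent)) / np.std(absent):.2f}")
```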
7. The Type of Ground Truth Used
- Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The text mentions "clinical images with added simulated noise" were used to train the Convolutional Network (CNN).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a Convolutional Network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
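A minimal sketch of how such (noisy input, clean target) training pairs could be constructed is shown below. The Poisson dose-reduction noise model and all parameters are assumptions for illustration; the submission does not describe the actual noise simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_training_pair(clinical_image: np.ndarray, dose_fraction: float = 0.25):
    """Build one (noisy input, clean target) pair for supervised denoising.

    Quantum (Poisson) noise at a simulated reduced dose is added to a
    clinical image; the original image serves as the training target.
    """
    scaled = np.clip(clinical_image * dose_fraction, 0, None)
    noisy = rng.poisson(scaled).astype(np.float32) / dose_fraction
    return noisy, clinical_image.astype(np.float32)

# Example with a synthetic stand-in for a clinical image:
clean = rng.uniform(50.0, 500.0, size=(256, 256))
noisy_input, target = make_training_pair(clean)
```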
---
(29 days)
The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.
The DRX-Revolution Mobile X-ray System is a diagnostic mobile x-ray system utilizing digital radiography (DR) technology. The system consists of a self-contained x-ray generator, image receptor(s), imaging display and software for acquiring medical diagnostic images outside of a standard stationary x-ray room. The DRX-Revolution system incorporates a flat-panel detector that can be used wirelessly for exams such as in-bed chest projections. The system can also be used to expose CR phosphor screens or films.
The Carestream DRX-Revolution Mobile X-ray System (K191025) underwent modifications compared to its predicate device (K120062). The primary changes include a different X-ray tube supplier, additional support for DRX Plus detectors, updated image acquisition software (ImageView), and a replaced high-voltage X-ray generator.
Here's an analysis of the acceptance criteria and the study proving adherence:
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Image Quality Equivalency | "Results of this data demonstrated that image quality on the modified DRX-Revolution Mobile X-ray System is equivalent to the device on the market." <br> "Testing demonstrates that the modified device produces diagnostic image quality that is the same or better than the predicate." <br> "The detectors have been tested and verified to meet the requirements for integration with the DRX-Revolution Mobile X-ray System (modified) device and the DQE/MTF data demonstrates image quality is the same as or better than the predicate." <br> "The image quality of the modified device is at least as good as or better than that of the predicate device." |
| Safety and Effectiveness Equivalency | "The modified DRX-Revolution Mobile X-ray System is substantially equivalent to the predicate device currently cleared on the market (K120062)." <br> "The change in X-ray tube does not significantly change the functionality of the redesigned DRX-Revolution system, nor do changes significantly affect the safety or effectiveness of the device." <br> "The generator was verified and validated and passed all testing and demonstrates there is no significant impact on clinical functionality or performance that could significantly affect safety and effectiveness." <br> "Risks were assessed in accordance to ISO 14971 and risk control options were implemented with safety by design principles and with a risk methodology that reduces risks as far as possible." <br> "Results of non-clinical testing demonstrate that the modified device is as safe and as effective as the predicate device." <br> "The subject device is expected to be safe and effective for the device indications and are substantially equivalent to the predicate." |
| Maintenance of Intended Use | "In addition, the indications for use of the modified device, as described in labeling does not change as a result of the device modification(s)." <br> "The intended use remains unchanged." |
| Fundamental Scientific Technology Equivalency | "The modified DRX-Revolution employs the same fundamental scientific technology as the predicate device." <br> "The fundamental scientific technology of the modified device is the same and is substantially equivalent to the predicate." |
| Hardware Components Functionality (e.g., X-ray tube) | "The change in X-ray tube does not significantly change the functionality of the redesigned DRX-Revolution system, nor do changes significantly affect the safety or effectiveness of the device." |
| Detector Integration and Performance | "The detectors have been tested and verified to meet the requirements for integration with the DRX-Revolution Mobile X-ray System (modified) device and the DQE/MTF data demonstrates image quality is the same as or better than the predicate." |
| Software Functionality (ImageView) | "No changes have been made between the DRX Carestream Evolution with ImageView (K163203) and the subject device with ImageView, other than some minor changes necessary for the software to function on the subject device. The image processing between the two devices is the same. This change has no clinical impact on image diagnosis, bench testing data demonstrates substantial equivalence." |
| Generator Performance | "The High-voltage X-ray generator has been replaced. This generator is considered a 1:1 replacement, there was no change in performance specifications. The generator was verified and validated and passed all testing and demonstrates there is no significant impact on clinical functionality or performance that could significantly affect safety and effectiveness." |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated as a number of images or cases. The document mentions a "Phantom Imaging study."
- Data Provenance: The study was a "Phantom Imaging study," which implies the use of test phantoms rather than real patient data, typically imaged in a controlled laboratory environment. The country of origin is not specified, but given Carestream's location (Rochester, New York), it is likely the US. The retrospective/prospective distinction does not apply to non-clinical bench testing that compares a modified device against an already marketed predicate.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. As the study was a non-clinical "phantom imaging study" evaluating technical image quality attributes, it might not have involved human expert readers establishing diagnostic ground truth in the traditional sense. The evaluation likely relied on quantitative measurements of image quality metrics.
4. Adjudication method for the test set:
- This information is not provided as the study was a phantom imaging study focusing on technical image quality. Adjudication methods like 2+1 or 3+1 are typically used in clinical studies with human readers interpreting medical images.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what the effect size was of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The submission explicitly states: "Clinical testing was not required to establish substantial equivalence. Bench testing was sufficient to assess the device safety and effectiveness." This device is a mobile X-ray system, not an AI-powered diagnostic software.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, in a sense, a "standalone" evaluation of the device's image quality was performed through the "Phantom Imaging study" and DQE/MTF data. This tested the device's inherent capability to produce images without direct human interpretation for diagnostic purposes, focusing on technical image quality attributes rather than diagnostic accuracy.
7. The type of ground truth used:
- The ground truth used was based on technical image quality attributes such as detail, sharpness, noise, and appearance of artifacts, as evaluated through a "Phantom Imaging study" and by DQE/MTF data. This is an objective technical assessment against established metrics for image quality, rather than a clinical ground truth like pathology or expert consensus on a diagnosis.
8. The sample size for the training set:
- This information is not applicable/not provided. This device is a hardware X-ray system with standard image processing software, not an AI/Machine Learning algorithm that undergoes a "training" phase with a large dataset. The "ImageView" software is a web-based application to improve usability, and its image processing is stated to be the same as previously cleared versions.
9. How the ground truth for the training set was established:
- This information is not applicable/not provided for the same reasons as point 8.