Search Results
Found 48 results
510(k) Data Aggregation
(196 days)
Carestream Health Inc.
The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.
The DRX-Revolution Mobile X-ray System is a mobile diagnostic x-ray system that utilizes digital technology for bedside or portable exams. Key components of the system are the x-ray generator, a tube head assembly (including the x-ray tube and collimator) that allows for multiple axes of movement, a maneuverable drive system, and touchscreen user interfaces for user input. The system is designed with installable software for acquiring and processing medical diagnostic images outside of a standard stationary X-ray room. It is a mobile diagnostic system intended to generate and control X-rays for examination of various anatomical regions.
The provided text describes a 510(k) premarket notification for the DRX-Revolution Mobile X-ray System, which includes changes such as the addition of Smart Noise Cancellation (SNC) functionality and compatibility with a new detector (Lux 35). The study focuses on demonstrating the substantial equivalence of the modified device to a previously cleared predicate device (DRX-Revolution Mobile X-ray System, K191025).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided information:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (for SNC) | Reported Device Performance |
---|---|
At least 99% of all image pixels must be within ± 1 pixel value | Achieved. The results demonstrated that at least 99% of all image pixels were within ± 1 pixel value. |
Absolute maximum difference across all test images must be ≤ 10 pixel values | Achieved. The absolute maximum difference seen across all test images was 3 pixel values, meeting the acceptance criterion of a maximum allowable difference of 10 pixel values. |
Noise ratio values computed for every pixel of the test images should be |
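To make these pixel-difference criteria concrete, the check reduces to simple array arithmetic over a reference/processed image pair. A minimal sketch (the submission does not describe its actual test harness; the function name, array names, and synthetic example are illustrative):

```python
import numpy as np

def pixel_difference_check(reference: np.ndarray, test: np.ndarray,
                           tol: int = 1, max_allowed: int = 10) -> dict:
    """Evaluate SNC-style pixel-difference criteria for two aligned images.

    reference, test: integer pixel arrays of identical shape.
    tol:             per-pixel tolerance (criterion: >= 99% within +/- tol).
    max_allowed:     ceiling on the absolute maximum difference.
    """
    diff = test.astype(np.int64) - reference.astype(np.int64)
    frac_within = float(np.mean(np.abs(diff) <= tol))
    max_abs = int(np.max(np.abs(diff)))
    return {
        "pct_within_tol": 100.0 * frac_within,
        "max_abs_diff": max_abs,
        "passes": frac_within >= 0.99 and max_abs <= max_allowed,
    }

# Synthetic example: a 12-bit image pair differing by at most 1 pixel value.
rng = np.random.default_rng(0)
ref = rng.integers(0, 4096, size=(512, 512))
out = ref + rng.integers(-1, 2, size=ref.shape)  # differences in {-1, 0, 1}
print(pixel_difference_check(ref, out))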
(162 days)
Carestream Health, Inc.
The DRX-Evolution Plus is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging. This device also supports Dual Energy chest imaging. The Dual Energy feature is not to be used for imaging pediatric patients.
The DRX-Evolution Plus is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, various models of tube support devices, x-ray tube, and collimator (beam-limiting device). In addition to general radiography applications, the system also includes the optional Dual Energy functionality. The DRX-Evolution Plus can be used with digital radiography (DR) and computed radiography (CR) receptors. "Smart" Features are added to the DRX-Evolution Plus system to provide remote exam set-up capabilities for existing functions of the DRX-Evolution Plus system. These remote capabilities simplify exam set up and improve workflow for the operator while preparing for the patient exposure. The "smart" features, described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. Implementation of these features does not change the intended use of the system and does not affect the Dual Energy functionality.
The provided FDA 510(k) document for the Carestream Health, Inc. DRX-Evolution Plus System (K233381) does not contain the detailed information required to describe the acceptance criteria and the study that proves the device meets those criteria, specifically regarding AI/algorithm performance.
The document discusses the substantial equivalence of the DRX-Evolution Plus system to a predicate device (K190330), focusing on hardware components, new integrated detectors, and workflow enhancements referred to as "Smart" features (Real-time Video Assistance, Long Length Imaging, Collimation from User Interface, Patient Picture).
The "Smart" features described are workflow improvements that seem to involve remote control and visualization, not an AI/algorithm that performs diagnostic or detection tasks requiring rigorous performance criteria and clinical validation studies per the questions asked. The document explicitly states: "The 'smart' features, described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. Implementation of these features does not change the intended use of the system and does not affect the Dual Energy functionality."
Therefore, I cannot extract the information requested about acceptance criteria for an AI/algorithm's diagnostic performance, sample sizes used for test sets, expert ground truth establishment, MRMC studies, or standalone algorithm performance from this specific document.
The document indicates:
- Non-clinical testing was performed for the "Smart" Feature user options, and these tests "indicated that the subject device as described in this submission meets the predetermined safety and effectiveness criteria." However, it does not specify what those criteria were for these workflow enhancements beyond general safety and effectiveness.
- Detector integration testing involved "functional testing, installation testing, media verification tests, performance tests, regression tests, risk mitigation testing, and serviceability testing." For the Lux 35 detector, "comprehensive image quality tests, vacuum testing to validate its liquid ingress (IP57) requirement, and Dual Energy functionality and performance testing" were done.
Given the nature of the device (a general diagnostic X-ray system with workflow enhancements), it's highly probable that the acceptance criteria and validation studies are related to hardware performance, image quality, electrical safety, usability, and compliance with recognized standards (IEC, ISO), rather than the diagnostic accuracy of an AI algorithm.
In summary, the provided text does not contain the information requested to answer the questions about AI/algorithm acceptance criteria and performance studies because the "Smart" features described are workflow enhancements, not diagnostic AI algorithms.
(29 days)
Carestream Health, Inc.
The device is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen and other body parts. The device is not indicated for use in mammography.
The DRX-Compass System is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, various models of tube support devices, x-ray tube, and collimator (beam-limiting device). The DRX-Compass can be used with digital radiography (DR) and computed radiography (CR) receptors. Smart Features are added to the DRX-Compass system to provide remote capabilities for existing functions of the DRX-Compass system. These remote capabilities simplify exam set up and improve workflow for the operator while preparing for the patient exposure. The "smart features", described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. These improvements are referred to as "smart features" in the product documentation. Implementation of these "smart features" does not change the intended use of the system.
The provided text does not contain detailed information about specific acceptance criteria and a study that comprehensively proves the device meets those criteria for the DRX-Compass system. The document is a 510(k) summary for the FDA, which focuses on demonstrating substantial equivalence to a predicate device rather than a comprehensive efficacy study for new features.
However, based on the information provided, I can extract the relevant details that are present and explain why some requested information is not available in this document.
Here's a breakdown of what can be inferred and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
The document mentions that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." However, the specific acceptance criteria themselves (e.g., specific thresholds for DQE/MTF, or performance metrics for the "smart features") are not explicitly detailed in this 510(k) summary. Similarly, the reported device performance values against those specific criteria are also not provided.
The closest information related to performance is:
Acceptance Criteria (Inferred/General) | Reported Device Performance (Inferred/General) |
---|---|
Image quality of additional detectors equivalent to predicate. | Flat panel detector DQE/MTF data shows the additional detectors (DRX Plus 2530, Focus HD 35, Focus HD 43, Lux 35) are equivalent in image quality to DRX Plus detectors cleared with the predicate. |
Compliance with electrical safety standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-2-54). | Device complies with listed electrical safety standards. |
Compliance with usability standards (IEC 60601-1-6, IEC 62366). | Device complies with listed usability standards. |
No new risks identified that raise additional questions of safety and performance (ISO 14971). | All product risks have been mitigated; no changes to risk control measures; testing indicates substantial equivalence. |
"Smart Features" (Real-time Video, LLI, Collimation, Patient Picture) simplify exam setup and improve workflow without changing intended use. | These features are designed to reduce manual tasks and speed up workflow. (No specific quantitative performance metrics provided in this document). |
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided in the 510(k) summary. The document states "Non-clinical testing such as standards testing are the same as that of the predicate. The verification and validation testing of the modified device demonstrates that the modified device performs as well as the predicate and is substantially equivalent." without detailing the specific sample sizes or data provenance for these tests. For imaging performance, it mentions DQE/MTF data for detectors, but not the sample size of images or patients used for performance evaluation of the overall system or its new "smart features."
3. Number of Experts Used to Establish Ground Truth and Qualifications
This information is not provided in the 510(k) summary. The document focuses on technical verification and validation, and comparison to a predicate device, rather than a clinical study requiring expert consensus on ground truth.
4. Adjudication Method for the Test Set
This information is not provided in the 510(k) summary. Given the absence of specific clinical study details or expert ground truth establishment, no adjudication method would be mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A MRMC comparative effectiveness study is not mentioned in this document. The submission's focus is on demonstrating substantial equivalence through technical testing and compliance with recognized standards, particularly for the "smart features" which are described as workflow enhancements rather than diagnostic AI tools requiring reader performance studies. There is no mention of AI assistance for human readers or associated effect sizes.
6. Standalone (Algorithm Only) Performance Study
A standalone performance study of an algorithm without human-in-the-loop is not explicitly mentioned in this document. The "smart features" are described as functionalities to assist the operator, implying human-in-the-loop operation, rather than a standalone diagnostic algorithm. The document mentions "Flat panel detector DQE/MTF data shows that the additional detectors supported by the modified device (DRX-Compass) are equivalent in image quality to that of the DRX Plus detectors cleared with the predicate," which is a technical performance metric for the detector component, not an algorithm's diagnostic performance.
7. Type of Ground Truth Used
The type of ground truth used for any performance evaluation is not explicitly stated. For the detector performance, DQE/MTF data refers to physical image quality metrics rather than a diagnostic ground truth (like pathology or clinical outcomes). For the "smart features," their evaluation appears to be based on functional verification and validation of their workflow enhancement capabilities, rather than comparison to a ground truth for diagnostic accuracy.
8. Sample Size for the Training Set
This information is not provided in the 510(k) summary. The document does not describe the use of machine learning algorithms that would typically require a training set. The "smart features" appear to be rule-based or real-time processing functionalities rather than learning algorithms.
9. How Ground Truth for the Training Set Was Established
Since there is no mention of a training set or machine learning, details on establishing its ground truth are not provided.
In summary, the 510(k) submission for the DRX-Compass focuses on demonstrating substantial equivalence to a predicate device by:
- Ensuring the modified device's indications for use are identical.
- Confirming compliance with recognized electrical safety and performance standards (AAMI ES60601-1, IEC 60601-1-6, IEC 60601-1-3, IEC 60601-2-54, IEC 62366).
- Applying risk management (ISO 14971) to ensure no new risks are introduced.
- Showing that new components (e.g., additional detectors) maintain equivalent image quality (e.g., DQE/MTF data).
- Asserting that new "smart features" improve workflow without changing the device's intended use or safety profile.
The document does not provide the kind of detailed clinical study data often found for AI/ML-based diagnostic devices, including specific acceptance criteria values, sample sizes for test or training sets, expert qualifications, or adjudication methods, as these may not be typically required for modifications to a stationary X-ray system primarily focused on workflow enhancements and component upgrades.
(102 days)
Carestream Health, Inc.
The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
Eclipse software runs inside the Image View product application software (not considered stand-alone software). Smart Noise Cancellation is an optional feature (module) that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Digital Radiography (DR)). Eclipse II with Smart Noise Cancellation supports the Carestream DRX family of detectors, which includes all CR and DR detectors.
The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions.
Eclipse II with Smart Noise Cancellation incorporates enhanced noise reduction prior to executing Eclipse image processing software. The software can lower dose by up to 50% when images are processed through Eclipse II with SNC while maintaining image quality: a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images, when processed with Eclipse II and SNC, result in image quality as good as or better than nominal-dose images.
The provided document describes the modification of the Eclipse II software to include a Smart Noise Cancellation (SNC) module. The primary goal of this modification is to enable lower radiation doses while maintaining or improving image quality. The study discussed is a "concurrence study" involving board-certified radiologists to evaluate diagnostic image quality.
Here's the breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" in a table format with specific numerical thresholds for image quality metrics. Instead, it describes the objective of the study which effectively serves as the performance goal for the device.
Acceptance Criterion (Implicit Performance Goal) | Reported Device Performance |
---|---|
Diagnostic quality images at reduced dose. | Statistical test results and graphical summaries demonstrate that the software delivers diagnostic quality images at 50% dose reduction for CsI panel images and 40% dose reduction for GOS panel images. |
Image quality at reduced dose | Image quality at reduced radiation doses is equivalent to or exceeds the quality of nominal-dose exam images. |
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size for Test Set: Not explicitly stated. The document mentions "clinical images" and "exams, detector types and exposure levels" were used, but a specific number of images or cases for the test set is not provided.
- Data Provenance: Not explicitly stated. The document refers to "clinical images," but there is no information about the country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not explicitly stated. The study was performed by "board certified radiologists." The number of radiologists is not specified.
- Qualifications of Experts: "Board certified radiologists." No information is given regarding their years of experience.
4. Adjudication Method for the Test Set:
- Adjudication Method: Not explicitly stated. The document mentions a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for evaluating overall diagnostic capability. However, it does not describe how multiple expert opinions were combined or adjudicated if there were disagreements (e.g., 2+1, 3+1).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- MRMC Study: The study appears to be a multi-reader study as it was "performed by board certified radiologists." However, it is not a comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The study's aim was to determine if the software itself (Eclipse II with SNC) could produce diagnostic quality images at reduced dose, assessed by human readers. It's evaluating the output of the software, not the improvement of human readers using the software as an assistance tool.
- Effect Size: Not applicable, as it's not an AI-assisted human reading study.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Standalone Performance: No, a standalone (algorithm only) performance evaluation was not done. The evaluation involved "board certified radiologists" assessing the diagnostic quality of the images processed by the software. This is a human-in-the-loop assessment of the processed images, not a standalone performance of the algorithm making diagnoses.
7. The Type of Ground Truth Used:
- Type of Ground Truth: The ground truth for image quality and diagnostic capability was established by expert consensus (or at least expert assessment), specifically "board certified radiologists," using a 5-point visual difference scale and a 4-point RadLex scale. This is a subjective assessment by experts, rather than an objective ground truth like pathology or outcomes data.
8. The Sample Size for the Training Set:
- Sample Size for Training Set: Not explicitly stated. The document mentions that the Convolutional Network (CNN) was "trained using clinical images with added simulated noise." However, no specific number of images or cases used for training is provided.
9. How the Ground Truth for the Training Set Was Established:
- Ground Truth for Training Set: The document states the CNN was "trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This implies that the ground truth for training likely revolved around distinguishing actual image data from added simulated noise. This is an intrinsic ground truth generated by the method of simulating noise on known clean clinical images, rather than a clinical ground truth established by expert review for diagnostic purposes.
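The training scheme described here — pairing clean clinical images with copies degraded by simulated noise — is a standard supervised-denoising setup. A minimal sketch of how such (noisy, clean) pairs might be generated (the actual noise model used for SNC is not disclosed; this assumes a simple Poisson-plus-Gaussian model, with all parameter values illustrative):

```python
import numpy as np

def make_training_pair(clean, dose_fraction=0.5, gain=0.1,
                       read_noise=2.0, rng=None):
    """Build a (noisy, clean) pair by simulating a reduced-dose acquisition.

    clean:         high-SNR clinical image, used as the training target.
    dose_fraction: simulated dose relative to the original (0.5 = 50% dose).
    gain:          photons per pixel value in this toy Poisson model.
    read_noise:    additive Gaussian electronic noise, in pixel values.
    """
    rng = rng or np.random.default_rng()
    photons = clean * gain * dose_fraction                 # expected quanta
    noisy = rng.poisson(photons) / (gain * dose_fraction)  # quantum noise, rescaled
    noisy = noisy + rng.normal(0.0, read_noise, size=clean.shape)
    return noisy, clean                                    # network input, target

# One pair at a simulated 50% dose.
rng = np.random.default_rng(1)
clean = rng.uniform(100.0, 3000.0, size=(256, 256))
noisy, target = make_training_pair(clean, dose_fraction=0.5, rng=rng)
print(f"simulated RMS noise: {np.sqrt(np.mean((noisy - target) ** 2)):.1f}")
```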
(219 days)
Carestream Health, Inc.
The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.
Eclipse software runs inside the ImageView product application software (also known as console software). The Eclipse image processing software II with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software (it is not stand-alone and is to be used only with the predicate device). It supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module. The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (modified device) incorporates enhanced noise reduction prior to executing Eclipse II image processing software.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Diagnostic Quality Preservation/Improvement: The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical Evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." |
No Substantial Residual Image Artifacts: The noise reduction should not introduce significant new artifacts. | Analysis of Difference Images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest." |
Preservation/Improvement of Detectability: The detectability of lesions should not be negatively impacted and ideally improved. | Ideal Observer Evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested." |
No New Questions of Safety & Effectiveness: The modifications should not raise new safety or effectiveness concerns. | Risk Assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness." |
Same Intended Use: The device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in comparison chart) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
- Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated. The text mentions a "clinical evaluation was performed by board certified radiologists." It does not specify the number involved.
- Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
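The "Ideal Observer Evaluation" referenced in item 6 is a model-observer technique from objective image-quality assessment. The submission does not specify the observer model; one common variant is the prewhitening ideal observer for a known signal in stationary Gaussian noise, sketched below (the signal template, noise power spectrum, and FFT normalization are assumptions, not details from the document):

```python
import numpy as np

def detectability_index(signal: np.ndarray, nps: np.ndarray) -> float:
    """Ideal-observer detectability d' for a known signal in stationary
    Gaussian noise, computed in the Fourier domain:
        d'^2 = (1/N) * sum_f |S(f)|^2 / NPS(f)
    With flat noise of variance sigma^2 (NPS = sigma^2 everywhere), this
    reduces to the spatial-domain matched-filter SNR for white noise."""
    S = np.fft.fft2(signal)
    d2 = np.sum(np.abs(S) ** 2 / nps) / signal.size  # Parseval normalization
    return float(np.sqrt(d2))

# Example: Gaussian blob in white noise; the Fourier result should equal
# the spatial-domain value sqrt(sum(signal**2)) / sigma.
n, sigma = 128, 5.0
y, x = np.mgrid[:n, :n]
blob = 20.0 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 4.0 ** 2))
print(detectability_index(blob, np.full((n, n), sigma ** 2)))
print(np.sqrt(np.sum(blob ** 2)) / sigma)
```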
7. The Type of Ground Truth Used
- Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The text mentions "clinical images with added simulated noise" were used to train the Convolutional Network (CNN).
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a Convolutional Network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
(31 days)
Carestream Health, Inc
The device is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen and other body parts. The device is not indicated for use in mammography.
The DRX-Compass System is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, a ceiling mounted tube support, x-ray tube, and collimator (beam-limiting device).
The DRX-Compass can be used with digital radiography (DR) and computed radiography (CR) receptors. Systems equipped with DR or CR receptors can also be configured to include a workstation computer that is fully integrated with the x-ray generator.
The modified (subject) device, DRX-Compass, is the previously cleared Q-Rad System stationary x-ray system which has been modified as follows:
- New marketing names DRX-Compass and DR-Fit will be used depending upon regional marketing strategies.
- Implementation of a new wall stand that provides options for automated vertical motion and vertical to horizontal manual tilt (90 degrees).
- Implementation of a different Overhead Tube Crane (OTC): This OTC is ceiling suspended and provides x-y movement capability for the tube head with respect to the detector. The tube head is capable of three options for alignment with the image acquisition device (detector) as follows: 1) manual alignment by moving the x-ray tube support, 2) manual alignment using the "tube-up/tube-down" switch on the tube support, or 3) automatic alignment using the "Auto Position" switch to activate motors on the tube support in x, y, z, and alpha directions.
- Focus 35C and Focus 43C Detectors are added as additional optional detector selections for customers ordering a DRX-Compass system.
- X-Ray Generator: Several Carestream designed generators are available with the system depending on power requirements and regional configurations. These generators are functionally identical to the generators currently offered for sale with the Q-Rad System.
This looks like a 510(k) summary for a medical device called DRX-Compass, an X-ray system. The document does not contain the acceptance criteria or results of a study (like an AI model performance study) that would typically involve statistical metrics, ground truth establishment, or expert reviews.
Instead, this document describes:
- Device Name: DRX-Compass
- Regulatory Information: Product Code, Regulation Number, Class, etc.
- Predicate Device: Q-Rad System (K193574)
- Device Description: Components of the DRX-Compass system, including generator models, patient support tables, wall-mounted receptors, ceiling-mounted tube support, X-ray tube, and collimator. It also mentions the new additions/modifications compared to the predicate device (new marketing names, new wall stand, different Overhead Tube Crane (OTC), added detectors, and available generators).
- Indications for Use: Obtaining diagnostic images for various body parts.
- Substantial Equivalence: The primary claim is that the DRX-Compass is substantially equivalent to the predicate Q-Rad System, stating that modifications do not raise new issues of safety and effectiveness.
- Discussion of Testing: It briefly mentions "non-clinical (bench) testing" to evaluate performance, workflow, function, verification, and validation, and that "Predefined acceptance criteria were met." However, it does not specify what those acceptance criteria were or how they were met in terms of specific performance metrics. It's focused on demonstrating equivalence to the predicate device, not on proving performance against a detailed set of criteria that would typically be described for an AI/CAD device.
Therefore, based only on the provided text, I cannot extract the detailed information requested in the prompt. The document is a regulatory submission summary, not a clinical or performance study report.
If this were a submission for an AI/CAD device, the "Discussion of Testing" section would typically elaborate on a clinical study including:
- A table of acceptance criteria and the reported device performance: This would list metrics like sensitivity, specificity, AUC, etc., and the target performance values.
- Sample size used for the test set and the data provenance: Details on number of cases, patient demographics, and origin of data.
- Number of experts used to establish the ground truth for the test set and their qualifications: Information about the radiologists/pathologists.
- Adjudication method: How disagreements among experts were resolved.
- Multi-reader multi-case (MRMC) comparative effectiveness study: If conducted, the effect size (e.g., improvement in reader performance with AI).
- Standalone performance: The algorithmic performance without human intervention.
- Type of ground truth used: e.g., pathology, clinical follow-up.
- Sample size for the training set: Number of cases used for model development.
- How the ground truth for the training set was established: Similar to the test set, but for the training data.
In summary, the provided document does not contain the information requested because it pertains to a traditional X-ray system's substantial equivalence claim, not the performance evaluation of an AI/CAD (Computer-Aided Detection/Diagnosis) algorithm.
(130 days)
Carestream Health, Inc.
The Vita Flex CR System is intended for digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine.
The Vita Flex CR System with LLI is a Computed Radiography (CR) acquisition scanner, which includes a mechanical and software interface to the LLI cassette. The device is constructed from a Man Machine Interface panel, a CR scanner, and infrastructure that enables connection to external applications, i.e., to import command messages, export images, and provide status messages. The LLI is a CR cassette used for Long Length Imaging x-ray examinations of long areas of anatomy.
The Vita Flex CR system with LLI accepts an x-ray cassette with a screen. An x-ray cassette is a light-resistant container that protects the screen from exposure to daylight and allows the passage of x-rays through the front cover onto the phosphor layer of the screen. When struck by radiation, the intensifying screen fluoresces, emitting light that creates the image.
The Vita Flex CR system takes a cassette as input, extracts the exposed screen, and scans the image off the screen. The image is stored on the computer system attached to the Vita Flex CR system. Once the scan is complete, the screen data is erased and the screen is placed back inside the cassette to be used again by the customer.
When a cassette is properly inserted into the scanner, the scanner will lock the cassette in place. Once locked into place the cassette door can be opened to allow the scanner to feed the screen into the unit.
The LLI cassette and screen are scanned exactly as in the predicate. Since a long length imaging screen and cassette are large, the operation consists of two scans: scanning one half of the image, then turning the cassette around and scanning the second half.
The document describes the regulatory submission for the Vita Flex CR System with LLI, a digital radiography system. The key argument for its clearance is its substantial equivalence to a previously cleared predicate device. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed within the context of demonstrating this substantial equivalence through non-clinical testing, rather than a clinical trial with human subjects.
Here's the breakdown of the information requested:
1. A table of acceptance criteria and the reported device performance
The acceptance criteria are implicitly defined by the demonstration of equivalent or improved performance compared to the predicate device across various features and operational parameters. The reported device performance is presented as a comparison between the modified device (Vita Flex CR System with LLI) and the predicate device (Point of Care including LLI).
Feature / Acceptance Criteria Category | Predicate Device (Performance Baseline) | Vita Flex CR System with LLI (Reported Performance) | Met/Exceeds Criteria (Demonstrates Substantial Equivalence or Improvement) |
---|---|---|---|
Intended Use / Indications for Use | "digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine." | Identical | Met - Unchanged |
Safety Standards | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | Met - Conformance verified by an OSHA approved test lab |
Working Environment | Ambient: +10 to +40°C, RH: 30-70% | Ambient: +5 to +45°C, RH: 25-81%, Atmospheric pressure: 700-1060 hPa | Exceeds/Broader - Improved operational range |
Physical Size | 658 mm x 735 mm x 358 mm, 45 kg weight | 668 mm x 675 mm x 385 mm, 30 kg weight | Different but within acceptable range for function, lighter weight (Improvement) |
Power Input | Multiple profiles (90-250VAC, 50/60Hz) | Unified profile (100-240VAC, 50/60Hz, 1.5A) | Improvement - Streamlined power input |
Power Module | Internal AC/DC converter | External AC/DC converter | Different - No impact on safety or effectiveness |
Cassette Loading | Manual loading | Manual loading | Met - Unchanged |
Screen Access | Autofeed by Driving Roller in Screen Transportation unit | Autofeed by Driving Roller in Screen Transportation Unit | Met - Unchanged |
Imaging Module | Laser Platen Scanning (Vertical & Horizontal Direction) | Laser Platen Scanning (Vertical & Horizontal Direction) | Met - Unchanged fundamental technology |
Laser Beam Wavelength | Red Light: 655 ± 10 nm | Red Light: 660 ± 7 nm | Met - "Negligible difference," "no impact to safety or effectiveness" |
Laser Output Power (mW) | 22~25 | 30 ± 2 | Met - "Slight increase," "no impact to safety or effectiveness" |
Laser Level | Class 3B | Class 3B | Met - Unchanged |
Screen Erase Module | Achromatic Light Eraser, Fluorescent Lamps | Monochromatic Light Eraser, Red LED Light Source | Improvement - "More stable over longer period," "no impact to safety or effectiveness" |
Console Connector | USB 2.0 | USB 2.0 | Met - Unchanged |
Software Development Kit | Ultra Lite SDK | Ultra Lite SDK | Met - Unchanged |
Long Length Imaging Software | CR Long-Length Imaging System (K021829) | DR Long Length Imaging Software (K130567) (FDA cleared, K100094, for Carestream Image Suite Software) | Met - Uses newer, also cleared software, deemed "no impact to safety or effectiveness" |
DICOM | 3.0 | 3.0 | Met - Unchanged |
Image Pixel Depth (Bit) | 12 | 12 | Met - Unchanged |
Phosphor Screen & Cassette Spec. | 14x17", 10x12", 8x10", 24x30cm, 14x14", 14x33" (LLI), 15x30cm (Dental) and some not supported (10x10", 9.5x9.5") | Same supported sizes, plus 10x10" (Dental Vet) newly supported; 9.5x9.5" still not supported. | Exceeds/Improvement - Broader compatibility with some cassette sizes |
Throughput Tolerance ±5% (PPH) | Example values (e.g., 14x17" @ 21; 14x33" @ 2.5) | Example values (e.g., 14x17" @ 30 and higher; 14x33" @ 2.5) | Exceeds/Improvement - Higher PPH for some configurations |
Max Spatial Resolution (LP/mm) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 3.5) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 4.2) | Exceeds/Improvement - Higher resolution for some configurations |
Min Pixel Pitch (µm) | Example values (e.g., 14x33" @ 173; 8x10" @ 100) | Example values (e.g., 14x33" @ 160; 8x10" @ 86) | Exceeds/Improvement - Smaller pixel pitch for some configurations |
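One quick consistency check on the resolution rows above: the maximum resolvable spatial frequency of a sampled detector cannot exceed the Nyquist limit of its pixel pitch, f_N = 1/(2p). The reported resolutions sit below that bound (the size-to-pitch pairings follow the table; treat the script as illustrative):

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist limit in line pairs per mm for a pixel pitch given in microns."""
    return 1.0 / (2.0 * pixel_pitch_um / 1000.0)

# 86 um pitch (8x10") -> 5.81 lp/mm bound vs 4.2 lp/mm reported;
# 160 um pitch (14x33") -> 3.13 lp/mm bound.
for label, pitch_um in [('8x10" @ 86 um', 86.0), ('14x33" @ 160 um', 160.0)]:
    print(f"{label}: Nyquist = {nyquist_lp_per_mm(pitch_um):.2f} lp/mm")
```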
2. Sample size used for the test set and the data provenance
The document explicitly states: "Given the differences from the predicate device, clinical testing is not necessary for the subject device. Bench testing alone is sufficient to demonstrate substantial equivalence."
Therefore, there was no "test set" in the sense of a dataset of patient images. The evaluation was based on bench testing of the device's hardware and software components. The "sample size" would refer to the number of devices tested, or the number of tests performed on a device, not a patient image sample size. No specific numbers are provided for the quantity of bench tests or units tested, beyond the general statement that "Bench testing was performed."
Data Provenance: Not applicable as no clinical or image data was used for testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. As no clinical testing or image-based test set was used, there was no need for expert radiologists to establish ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable. No image-based test set where adjudication would be relevant.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No. An MRMC study was not performed. The device is a digital radiography system, not an AI-powered diagnostic aid meant to assist human readers. The submission focuses on the safety and performance of the imaging equipment itself in comparison to its predicate.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No. This describes the performance of the imaging system and its included components, not a standalone algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
Not applicable. The "ground truth" for this submission was the established performance and safety characteristics of the predicate device and relevant industry standards (IEC, etc.). The modified device was evaluated against these benchmarks using non-clinical (bench) testing.
8. The sample size for the training set
Not applicable. This device is a hardware system with integrated software, not a machine learning model that requires a "training set."
9. How the ground truth for the training set was established
Not applicable. No training set was used.
(30 days)
Carestream Health, Inc.
The Q-Rad Radiographic System is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen and other body parts. The Q-Rad System is not indicated for use in mammography.
The Q-Rad System is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, tube supports (ceiling-suspended or floor-mounted), x-ray tube, and collimator (beam-limiting device).
The Q-Rad System can be used with conventional analog (film cassette), digital radiography (DR) and computed radiography (CR) receptors. Systems equipped with DR or CR receptors can also be configured to include a workstation computer that is fully integrated with the x-ray generator.
The modified (subject) device is the previously cleared Q-Rad System stationary x-ray system which has been modified as follows:
- Integration of the FDA-cleared ImageView Software (K163203) with the Q-Rad System.
- A circuit board (CIB+ Board) has been implemented on the Q-Rad System to facilitate a new communication protocol between the ImageView Software and the generator.
- The QMI (Quantum Medical Imaging) high voltage generator has been replaced with a Carestream-designed high voltage generator.
- The VacuTec Dose Area Product (DAP) meter Model 1560015 has been replaced with an equivalent DAP meter from a different supplier, the IBA Kermax plus with Ethernet interface 120-131 ETH (Standard Size).
- The Generator Control Box has been replaced. This control box is used to switch the generator on and off. Changes to the control box are cosmetic only and do not impact its functionality.
Here's an analysis of the acceptance criteria and study information provided, focusing on the Q-Rad System:
This document is a 510(k) summary for a modified X-ray system, comparing it to a predicate device. It primarily focuses on demonstrating substantial equivalence rather than a clinical study proving new diagnostic performance. Therefore, many typical AI/software study elements (like expert ground truth, MRMC studies, specific performance metrics like AUC) are not detailed here because they aren't generally required for this type of submission.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria (Implied/Stated) | Reported Device Performance |
---|---|---|
Safety | No new unmitigated risks identified due to modifications. | Risk assessment of the modifications did not identify any new unmitigated risks. |
Effectiveness/Performance | Conforms to specifications and provides equivalent safety and performance to predicate. | Non-clinical test results demonstrated that the device conforms to its specifications. Predefined acceptance criteria were met, demonstrating the device is as safe, as effective, and performs as well as or better than the predicate device. Performance characteristics, operation/usability, intended workflow, related performance, overall function, verification, and validation of requirements were evaluated. |
Software Requirements | Reliability of system software requirements. | Reliability of the system software requirements was demonstrated. |
Regulatory Compliance | Meets recognized prevailing consensus standards. | Testing to recognized prevailing consensus standards was performed. |
Functional Equivalence | Identical Indications for Use to the predicate device. | The Indications for Use for the subject device are identical to the predicate device's, and the intended use remains unchanged. |
Hardware Equivalence | Components are equivalent or replacement does not impact functionality. | A circuit board, generator, and DAP meter were replaced with functionally equivalent or "cosmetically only" changed components. The ImageView Software (already cleared) was integrated. |
2. Sample Size Used for the Test Set and Data Provenance
This document describes non-clinical (bench) testing rather than a clinical study with patient data. Therefore, there is no "test set" in the traditional sense of patient cases or images for evaluating diagnostic performance. The testing was focused on the system's technical and functional performance.
- Sample Size: Not applicable in the context of patient data. The "sample" would be the modified Q-Rad System itself and its components undergoing various bench tests.
- Data Provenance: Not applicable for patient data. The testing was described as "non-clinical (bench) testing." No information on country of origin for any data or retrospective/prospective nature is provided, as it's not a clinical data study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
This information is not provided and is not applicable because the study described is non-clinical bench testing for substantial equivalence of an X-ray system, not a clinical diagnostic performance study requiring expert ground truth for patient findings.
4. Adjudication Method for the Test Set
This information is not provided and is not applicable for the same reasons as #3.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and the effect size of how much human readers improve with AI vs without AI assistance
There is no indication that an MRMC study was performed. This submission is for an X-ray imaging system, not an AI-powered diagnostic algorithm for which an MRMC study would typically be conducted to evaluate human reader performance with and without AI assistance. The "ImageView Software" mentioned is already FDA-cleared (K163203) and its integration into the Q-Rad System is one of the modifications, but its diagnostic performance with human readers is not reassessed here.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study was done
This is not applicable. The Q-Rad System is an X-ray imaging device, not an AI algorithm performing a standalone diagnostic task. While it integrates an "ImageView Software," the 510(k) submission describes physical and software modifications to the system overall, not a standalone evaluation of an AI algorithm.
7. The Type of Ground Truth Used
This is not applicable for a clinical sense of "ground truth" (e.g., pathology, outcomes data). The ground truth for bench testing would typically involve engineering specifications, defined performance metrics, and compliance with consensus standards.
8. The Sample Size for the Training Set
This is not applicable. This document describes modifications to an existing X-ray system and its non-clinical testing. It does not mention any machine learning or AI models being trained as part of this specific submission. The ImageView Software is already cleared.
9. How the Ground Truth for the Training Set was Established
This is not applicable for the same reasons as #8.
(158 days)
Carestream Health, Inc.
The device is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging including tomography. This device also supports digital tomosynthesis. The tomography and digital tomosynthesis features are not to be used for imaging pediatric patients.
Carestream Digital Tomosynthesis (DT) is a limited "sweep" imaging technique that generates multiple two-dimensional (2D) coronal slices (i.e. planes) from a series of low dose x-ray images of the same anatomy taken at the same exposure but at different angles. During a tomosynthesis acquisition the detector remains stationary while the tube head travels (sweeps) in a straight path (i.e. focal spot travel path). For each exposure, the tube is angled toward the center of the detector. The Carestream Digital Tomosynthesis feature provides three options: the sweep angle, which determines the desired slice thickness; the number of images per degree of sweep angle; and the projection image resolution, which allows the selection of capture speed versus image resolution.
The Carestream Digital Tomosynthesis (DT) system was evaluated through non-clinical (bench) testing and a clinical reader study to demonstrate its diagnostic image quality and equivalence to predicate devices.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria | Reported Device Performance | Comments |
---|---|---|---|
Diagnostic Image Quality | Mean RadLex Rating (4-point scale) for scout and Digital Tomosynthesis exams. | 3.7156 | The RadLex scale ranged from 1 (non-diagnostic) to 4 (exemplary). All ratings were above non-diagnostic. |
Equivalence to Predicate | "Equivalent or better in diagnostic quality compared to images obtained using commercially available predicate and reference devices." | Achieved | Statistical test results demonstrated equivalence or superiority. |
Non-clinical Performance | Conformance to specifications, intended workflow, related performance, overall function, verification and validation of requirements for intended use, and reliability of system software. | Met | Predefined acceptance criteria were met, demonstrating the device is as safe, effective, and performs as well as or better than the predicate device. |
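The single summary statistic reported (mean RadLex rating of 3.7156) is the kind of figure obtained by averaging a readers-by-cases rating matrix. A minimal sketch of that summary (the ratings below are randomly generated placeholders, not the study data; only the 7-reader by 17-case shape follows the study description):

```python
import numpy as np

# Placeholder ratings: 7 readers x 17 cases on the 4-point RadLex scale
# (1 = non-diagnostic ... 4 = exemplary). Values are invented for illustration.
rng = np.random.default_rng(42)
ratings = rng.choice([2, 3, 4], size=(7, 17), p=[0.05, 0.25, 0.70])

print(f"mean RadLex rating: {ratings.mean():.4f}")
print(f"all ratings above non-diagnostic: {bool((ratings > 1).all())}")
print(f"worst per-case rating: {ratings.min(axis=0).min()}")
```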
2. Sample Size and Data Provenance
- Test Set Sample Size:
- Clinical Images: 17 Digital Tomosynthesis image cases from adult human subjects (patients). Each case included a thoracic digital radiograph (PA and lateral chest exposure) and a DT exam (scout PA chest image and DT exposures).
- Phantom Images: 11 Digital Tomosynthesis phantom exams and corresponding Linear Tomography exams.
- Data Provenance: Clinical study conducted at Toronto General Hospital located in Toronto, Ontario, Canada (prospective). Phantom studies were also conducted.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Seven (7) board certified radiologists.
- Qualifications: "general varying reading experience." (No further specific details on years of experience were provided in the text).
4. Adjudication Method for the Test Set
The text indicates that seven radiologists performed an evaluation, but it does not specify an adjudication method (e.g., 2+1, 3+1 consensus). It only states they used a "graduated 4-point RadLex rating scale" and the mean rating was calculated from their assessments.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A clinical reader study was performed. The study involved seven radiologists evaluating images from the investigational device, a reference comparison (standard of care PA and lateral chest x-rays), and the predicate device (Linear Tomography phantom studies). The "statistical test results demonstrate the Carestream Digital Tomosynthesis delivers quality imaging performance that is equivalent or better in diagnostic quality compared to images obtained using commercially available predicate and reference devices."
- Effect Size: The document does not provide a specific quantitative effect size of how much human readers improved with AI (Digital Tomosynthesis) vs. without AI assistance. It states the DT system was found to be "equivalent or better" in diagnostic quality.
6. Standalone (Algorithm Only) Performance
The document describes the "Carestream Digital Tomosynthesis reconstruction software leverages algorithms that are the same in principle to those applied in computed tomography (CT), such as filtered back projection or iterative reconstruction etc." While it implies algorithm processing, the overall evaluation was of the imaging system producing the images for radiologist interpretation. The reader study assessed the diagnostic image quality facilitated by the DT feature. It is not explicitly stated whether a standalone algorithm-only performance assessment without human-in-the-loop was conducted. The focus was on the diagnostic utility of the images produced by the device.
7. Type of Ground Truth Used for the Test Set
The ground truth for the clinical cases was based on the diagnostic image quality ratings by board-certified radiologists using a RadLex scale. For the phantom studies, the ground truth would inherently be known from the phantom's construction and expected imaging characteristics, used for comparison with Linear Tomography.
8. Sample Size for the Training Set
The document does not provide information on the sample size used for the training set of any AI or reconstruction algorithms.
9. How Ground Truth for the Training Set Was Established
The document does not provide information on how ground truth was established for the training set. It focuses on the validation of the device performance.