Search Results
Found 681 results
510(k) Data Aggregation
(167 days)
China
Re: K250788
Trade/Device Name: Definium Tempo Select (Digital Radiographic System)
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary X-Ray System
Regulatory Class: Class II
Product Codes: KPR, MQB
Predicate Device: Discovery XR656 HD with VolumeRad (K191699)
Reference Device: Definium Pace Select (K231892)
The Definium Tempo Select is intended to generate digital radiographic images of the skull, spinal column, chest, abdomen, extremities, and other body parts in patients of all ages. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position and the system is intended for use in all routine radiography exams. Optional image pasting function enables the operator to stitch sequentially acquired radiographs into a single image.
This device is not intended for mammographic applications.
The Definium Tempo Select Radiography X-ray System is designed as a modular system whose components include an Overhead Tube Suspension (OTS) with a tube, an auto collimator, and a depth camera; an elevating table; a motorized wall stand; a cabinet with the X-ray high-voltage generator; a wireless access point and wireless detectors in the exam room; and a PC, monitor, and control box with hand-switch in the control room. The system generates diagnostic radiographic images that can be reviewed or managed locally and sent through a DICOM network for applications including review, storage, and printing.
By leveraging platform components and design, the Definium Tempo Select is similar to the predicate device Discovery XR656 HD (K191699) and the reference device Definium Pace Select (K231892) with regard to the user interface layout, patient worklist refresh and selection, protocol selection, image acquisition, and image processing based on the raw image. This product introduces a new high-voltage generator that has the same key specifications as the predicate. A wireless detector used in the reference product Definium Pace Select is introduced. Image Pasting is improved, with exposure parameters individually adjustable per image in both Table and Wall Stand modes. Tube auto angulation is added to the existing auto-positioning for better automatic positioning. A Camera Workflow is introduced based on the existing depth camera. The OTS is changed to four-axis motorization. An update was made to the Tissue Equalization feature previously cleared under K013481 to introduce a Deep Learning AI model that provides more consistent image presentations to the user, reducing the additional workflow needed to adjust image display parameters. Other minor changes include a PC change, a Wall Stand change, and a Table change.
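The submission does not disclose how the Deep Learning Tissue Equalization model works internally. Purely as an illustration of the general idea of anatomy-adaptive tissue equalization (estimating thick, under-penetrated and thin, over-penetrated regions and adjusting their display), here is a minimal, hypothetical sketch; the function name, parameters, and approach are assumptions, not GE's algorithm, and in the actual device the per-anatomy/view parameters are reportedly chosen by the AI model rather than fixed:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tissue_equalize(raw: np.ndarray, strength: float = 0.6,
                    sigma_px: float = 64.0) -> np.ndarray:
    """Illustrative tissue equalization: brighten under-penetrated (thick)
    regions and damp over-penetrated (thin) regions.

    raw      -- linearized detector image, higher value = more signal
    strength -- 0..1 blend between original and equalized image
    sigma_px -- scale of the low-frequency "thickness" estimate
    """
    img = raw.astype(np.float64)
    # Low-frequency component approximates patient thickness variation.
    low = gaussian_filter(img, sigma=sigma_px)
    # Gain > 1 where the low-frequency signal is weak (thick anatomy),
    # gain < 1 where it is strong (thin anatomy / direct exposure).
    gain = np.mean(low) / np.maximum(low, 1e-6)
    equalized = img * (1.0 + strength * (gain - 1.0))
    return np.clip(equalized, 0, img.max())

# Example on a synthetic image (values are arbitrary):
rng = np.random.default_rng(0)
demo = rng.integers(200, 4000, size=(256, 256)).astype(np.float64)
balanced = tissue_equalize(demo)
```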
The provided FDA 510(k) clearance letter and summary for the Definium Tempo Select offers some, but not all, of the requested information regarding the acceptance criteria and the study proving the device meets them. Notably, specific quantitative acceptance criteria for the AI Tissue Equalization feature are not explicitly stated.
Here's a breakdown of the available information and the identified gaps:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The 510(k) summary does not explicitly list quantitative acceptance criteria for the AI Tissue Equalization algorithm. Instead, it states that "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." Without specific performance metrics or thresholds, a direct comparison in a table format is not possible for the AI component.
For the overall device, the acceptance criteria are implicitly performance metrics that ensure it functions comparably to the predicate device, as indicated by the "Equivalent" and "Identical" discussions in Table 1 (pages 7-11). However, these are primarily functional and technical equivalency statements rather than performance metrics for the AI feature.
Therefore, this section will focus on the AI Tissue Equalization feature as it's the part that underwent specific verification using a clinical image dataset.
AI Tissue Equalization Feature:
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Provides more consistent image presentations to the user. | "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." "The image processing algorithm uses artificial intelligence to dynamically estimate thick and thin regions to improve contrast and visibility in over-penetrated and under-penetrated regions." "The algorithm is the same but parameters per anatomy/view are determined by artificial intelligence to provide better consistence and easier user interface in the proposed device." |
Reduces additional workflow to adjust image display parameters. | Achieved (stated as a benefit of the AI model). |
Safety and efficacy are not affected. | Confirmed through verification tests. |
Missing Information:
- Specific quantitative metrics (e.g., AUC, sensitivity, specificity, image quality scores, expert rating differences) that define "more consistent image presentations" are not provided.
- The exact thresholds or target values for these metrics are not stated.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated as a number of images or cases. The document refers to "clinical images retrospectively collected across various anatomies...and Patient Sizes."
- Data Provenance: Retrospective collection from locations in the US, Europe, and Asia.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Missing Information. The document does not specify:
- The number of experts involved in establishing ground truth.
- Their qualifications (e.g., specific subspecialty, years of experience, board certification).
- Whether experts were even used to establish ground truth for this verification dataset, as the purpose was to confirm the AI met performance criteria rather than to directly compare its diagnostic accuracy against human readers or a different ground truth standard.
4. Adjudication Method for the Test Set
Missing Information. No adjudication method (e.g., 2+1, 3+1) is described for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No. A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided document. The verification tests focused on the algorithm meeting performance criteria, not on comparing human reader performance with or without AI assistance.
- Effect Size: Not applicable, as no MRMC study was described.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, implicitly. The "AI Tissue Equalization algorithms verification dataset" was used to perform "verification tests" to confirm that "the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." This suggests a standalone evaluation of the algorithm's output (image presentation consistency) against specific, albeit unstated, criteria. While human review of the output images was likely involved, the study's stated purpose was to verify the algorithm itself.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
Implied through image processing improvement, not diagnostic ground truth. For the AI Tissue Equalization feature, the "ground truth" is not in the traditional clinical diagnostic sense (e.g., disease presence confirmed by pathology). Instead, it appears to be related to the goal of "more consistent image presentations" and improving "contrast and visibility in over-penetrated and under-penetrated regions." This suggests the ground truth was an ideal or desired image presentation quality rather than a disease state. It's likely based on existing best practices for image processing and subjective assessment of image quality by experts, or perhaps a comparative assessment against the predicate's tissue equalization.
Missing Information: The precise method or criteria for this ground truth (e.g., a panel of radiologists rating image quality, a quantitative metric for contrast/visibility) is not specified.
8. The Sample Size for the Training Set
Missing Information. The document describes the "verification dataset" (test set) but does not provide any information on the sample size or composition of the training set used to develop the Deep Learning AI model for Tissue Equalization.
9. How the Ground Truth for the Training Set Was Established
Missing Information. As the training set size and composition are not mentioned, neither is the method for establishing its ground truth. It can be inferred that the training process involved data labeled or optimized to achieve "more consistent image presentations" by dynamically estimating thick and thin regions, likely through expert-guided optimization or predefined image processing targets.
(140 days)
SOUTH KOREA
Re: K250790
Trade/Device Name: INNOVISION-DXII
Regulation Number: 21 CFR 892.1680
Classification Name: Stationary X-Ray System
INNOVISION-DXII is a stationary X-ray system intended for obtaining radiographic images of various anatomical parts of the human body, in both pediatric and adult patients, in a clinical environment. INNOVISION-DXII is not intended for mammography, angiography, interventional, or fluoroscopy use.
INNOVISION-DXII is a stationary X-ray system using single- and three-phase power and consists of a tube, HVG (high-voltage generator), ceiling-suspended X-ray tube support, floor-to-ceiling X-ray tube support, patient table, detector stand, and X-ray control console. The X-ray control console comprises Windows-based software that can view X-ray images and a mobile console running on an Android-based board that only controls X-ray exposure, without a viewer function.
After the control unit is turned on, the system generates X-rays at the set exposure position using an IGBT-based inverter generator. Components such as the X-ray tube supports and tables are supplied with power from the high-voltage generator. When the inverter-type generator produces X-ray irradiation under the selected exposure conditions, the X-rays penetrate the patient's body. The sensor's scintillator converts the X-ray information into visible light, which is converted into an electric signal by the photodiode and a-Si TFT array. This X-ray system is used with FDA-cleared X-ray detectors. The electric signal is amplified and converted into a digital signal to create image data. The image is transferred to the PC display over an Ethernet interface, where it can be adjusted.
The FDA 510(k) clearance letter for INNOVISION-DXII explicitly states that clinical testing was not performed for this device. Therefore, there is no study described within this document that proves the device meets acceptance criteria related to clinical performance or human reader studies.
The provided document focuses on non-clinical performance tests to demonstrate substantial equivalence to the predicate device.
Here's an analysis based on the information provided, outlining what is and isn't available regarding acceptance criteria and studies:
Acceptance Criteria and Device Performance (Non-Clinical)
The acceptance criteria for the INNOVISION-DXII are implicitly the successful completion of the bench tests according to recognized international standards and demonstration that the differences from the predicate device do not raise new safety or effectiveness concerns. The "reported device performance" is the successful passing of these tests, indicating the device is safe and effective in its essential functions.
Table 1: Acceptance Criteria and Reported Device Performance (Non-Clinical Bench Testing)
Test Category | Specific Test | Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|---|---|
X-ray Tube, Collimator, HVG | Tube voltage accuracy | Meet specified accuracy standards (e.g., within a tolerance) | Passed |
X-ray Tube, Collimator, HVG | Accuracy of X-ray tube current | Meet specified accuracy standards | Passed |
X-ray Tube, Collimator, HVG | Reproducibility of the radiation output | Meet specified reproducibility standards | Passed |
X-ray Tube, Collimator, HVG | Linearity and constancy in radiography | Meet specified linearity and constancy standards | Passed |
X-ray Tube, Collimator, HVG | Half Value Layer (HVL) / total filtration | Meet specified HVL/filtration standards | Passed |
X-ray Tube, Collimator, HVG | Accuracy of loading time | Meet specified loading time accuracy | Passed |
Detector | System instability | No unacceptable system instability observed | Passed |
Detector | Installation error | No unacceptable installation errors | Passed |
Detector | System error | No unacceptable system errors | Passed |
Detector | Image loss, deletion, and restoration | Proper handling of image loss, deletion, and restoration | Passed |
Detector | Image save error | No unacceptable image save errors | Passed |
Detector | Image information error | No unacceptable image information errors | Passed |
Detector | Image transmission and reception | Reliable image transmission and reception | Passed |
Detector | Header verification | Correct header verification | Passed |
Detector | Security | Meet specified security requirements | Passed |
Detector | Image acquisition test | Successful image acquisition | Passed |
Detector | Search function | Functional search capability | Passed |
Detector | Application function (ELUI S/W) | Functional application software | Passed |
Detector | Resolution | Meet specified resolution standards | Passed |
Mechanical Components (Support, Table) | Moving distance | Accurate and controlled movement within specifications | Passed |
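The summary reports only pass/fail outcomes for these bench tests. For context, radiation-output reproducibility and linearity are conventionally judged against quantitative limits (21 CFR 1020.31 uses a coefficient of variation of at most 0.05 for reproducibility, and a linearity criterion requiring exposure-per-mAs ratios at adjacent stations to differ by no more than 0.10 times their sum). The sketch below applies those commonly cited limits to hypothetical measurements and is not taken from the submission:

```python
import statistics

def reproducibility_ok(air_kerma_uGy: list[float], max_cv: float = 0.05) -> bool:
    """Coefficient of variation of repeated exposures at one technique setting."""
    cv = statistics.stdev(air_kerma_uGy) / statistics.mean(air_kerma_uGy)
    return cv <= max_cv

def linearity_ok(kerma_per_mAs_a: float, kerma_per_mAs_b: float,
                 limit: float = 0.10) -> bool:
    """Linearity between two adjacent mA/mAs stations:
    |X1 - X2| <= limit * (X1 + X2)."""
    return abs(kerma_per_mAs_a - kerma_per_mAs_b) <= limit * (kerma_per_mAs_a + kerma_per_mAs_b)

# Hypothetical air kerma readings (uGy) from ten identical exposures:
readings = [102.1, 101.8, 102.5, 101.9, 102.3, 102.0, 101.7, 102.4, 102.2, 101.9]
print(reproducibility_ok(readings))   # True (CV well below 0.05)
print(linearity_ok(0.98, 1.02))       # True
```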
Study Details for Demonstrating Substantial Equivalence (Non-Clinical)
The study described is a series of bench tests (functional tests) conducted to ensure the safety and essential performance effectiveness of the INNOVISION-DXII X-ray system.
- Sample size used for the test set and data provenance:
- Sample Size: Not applicable. These are functional tests of the device itself rather than tests on a dataset. The "sample" refers to the physical device components and the system as a whole.
- Data Provenance: Not applicable in the context of image data. The tests are performed on the device in a laboratory setting. The standards referenced are international (IEC).
- Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- Not applicable. Ground truth in this context refers to the expected functional performance of the device according to engineering specifications and regulatory standards (IEC 60601 series). These standards define the "ground truth" for electrical safety, mechanical performance, and radiation emission/accuracy. Experts are involved in conducting and interpreting these standardized tests, but there isn't a "ground truth" established by a panel of medical experts as there would be for image interpretation.
- Adjudication method for the test set:
- Not applicable. The tests are typically pass/fail based on objective measurements against predefined thresholds specified in the IEC standards. There is no subjective adjudication process mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "Clinical testing is not performed for the subject device as the detectors were already 510(k) cleared and the imaging software (Elui) is the same as the predicate device. There were no significant changes." This device is a stationary X-ray system, not an AI-assisted diagnostic tool for image interpretation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- No, a standalone algorithm performance study was not done. This device is an X-ray imaging system; it does not feature a standalone diagnostic algorithm. While it includes an imaging software (Elui), its performance is assessed as part of the overall system's image acquisition and processing capabilities, not as an independent diagnostic algorithm.
- The type of ground truth used:
- For the non-clinical bench tests, the "ground truth" is defined by the engineering specifications and the requirements of the referenced international standards (IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54). These standards specify acceptable ranges for parameters like tube voltage accuracy, radiation output linearity, image resolution, and system stability.
- The sample size for the training set:
- Not applicable. This device is an X-ray system, not a machine learning algorithm that requires a training set of data.
- How the ground truth for the training set was established:
- Not applicable, as there is no training set for this device.
Summary of Clinical/AI-related information:
The FDA 510(k) clearance for INNOVISION-DXII does not include any clinical studies or evaluations of AI performance, human reader performance, or diagnostic accuracy. The clearance is based purely on the non-clinical bench testing demonstrating that the device meets safety and essential performance standards and is substantially equivalent to its predicate device for obtaining radiographic images.
(142 days)
MALVERN, PA 19355
Re: K250738
Trade/Device Name: YSIO X.pree
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary X-Ray System
Classification Panel: Radiology
Classification Product Code: KPR
Model Number: 11107464
The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The following modifications have been made to the cleared predicate device:
- Updated generator
- Updated collimator
- Updated patient table
- Updated Bucky Wall Stand
- New X.wi-D 24 portable wireless detector
- New virtual AEC selection
- New status indicator lights
The provided 510(k) clearance letter and summary for the YSIO X.pree device (K250738) indicate that the device is substantially equivalent to a predicate device (K233543). The submission primarily focuses on hardware and minor software updates, asserting that these changes do not impact the device's fundamental safety and effectiveness.
However, the provided text does not contain the detailed information typically found in a clinical study report regarding acceptance criteria, sample sizes, ground truth establishment, or expert adjudication for an AI-enabled medical device. This submission appears to be for a conventional X-ray system with some "AI-based" features like auto-cropping and auto-collimation, which are presented as functionalities that assist the user rather than standalone diagnostic algorithms requiring extensive efficacy studies for regulatory clearance.
Based on the provided document, here's an attempt to answer your questions, highlighting where information is absent or inferred:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, or image quality scores) with corresponding reported device performance values for the AI features. The "acceptance" appears to be qualitative and based on demonstrating equivalence to the predicate device and satisfactory usability/image quality.
If we infer acceptance criteria from the "Summary of Clinical Tests" and "Conclusion as to Substantial Equivalence," the criteria seem to be:
Acceptance Criteria (Inferred) | Reported Device Performance (as stated in document) |
---|---|
Overall System: Intended use met, clinical needs covered, stability, usability, performance, and image quality are satisfactory. | "The clinical test results stated that the system's intended use was met, and the clinical needs were covered." |
New Wireless Detector (X.wi-D24): Images acquired are of adequate radiographic quality and sufficiently acceptable for radiographic usage. | "All images acquired with the new detector were adequate and considered to be of adequate radiographic quality." and "All images acquired with the new detector were sufficiently acceptable for radiographic usage." |
Substantial Equivalence: Safety and effectiveness are not affected by changes. | "The subject device's technological characteristics are same as the predicate device, with modifications to hardware and software features that do not impact the safety and effectiveness of the device." and "The YSIO X.pree, the subject of this 510(k), is similar to the predicate device. The operating environment is the same, and the changes do not affect safety and effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a number of cases or images. The "Customer Use Test (CUT)" was performed at two university hospitals.
- Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg" in Augsburg, Germany, and "Klinikum rechts der Isar, Technische Universität München" in Munich, Germany. The document states "clinical image quality evaluation by a US board-certified radiologist" for the new detector, implying that the images themselves may have originated from the German sites but were reviewed by a US expert. The study design appears to be prospective in the sense that the new device was evaluated in clinical use rather than through analysis of historical data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- Number of Experts: For the overall system testing (CUT), it's not specified how many clinicians/radiologists were involved in assessing "usability," "performance," and "image quality." For the new wireless detector (X.wi-D24), it states "a US board-certified radiologist."
- Qualifications of Experts: For the new wireless detector's image quality evaluation, the expert was a "US board-certified radiologist." No specific experience level (e.g., years of experience) is provided.
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1 consensus) is described for the clinical evaluation or image quality assessment. The review of the new detector was done by a single US board-certified radiologist, not multiple independent readers with adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and what was the effect size of how much human readers improve with AI vs. without AI assistance.
- MRMC Study: No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance was evaluated. The AI features mentioned (Auto Cropping, Auto Thorax Collimation, Auto Long-Leg/Full-Spine collimation) appear to be automatic workflow enhancements rather than diagnostic AI intended to directly influence reader diagnostic accuracy.
- Effect Size: Not applicable, as no such study was conducted or reported.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
The document does not describe any standalone performance metrics for the AI-based features (Auto Cropping, Auto Collimation). These features seem to be integrated into the device's operation to assist the user, rather than providing a diagnostic output that would typically be evaluated in a standalone study. The performance of these AI functions would likely be assessed as part of the overall "usability" and "performance" checks.
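The submission does not describe how Auto Cropping or Auto Collimation work internally. Purely as an illustration of the underlying task (detecting the exposed field and trimming away the unexposed collimator shadow), a naive, non-AI sketch might look like the following; the threshold heuristic and function name are assumptions, not Siemens' implementation:

```python
import numpy as np

def auto_crop(image: np.ndarray, threshold_fraction: float = 0.05) -> np.ndarray:
    """Crop a radiograph to the exposed field.

    Pixels below threshold_fraction * max are treated as unexposed
    (collimator shadow) and trimmed away. The actual product reportedly
    uses an AI model; the fixed threshold here is only a stand-in.
    """
    exposed = image > threshold_fraction * image.max()
    rows = np.any(exposed, axis=1)
    cols = np.any(exposed, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]

# Synthetic demonstration: a 1000x1000 frame with a 600x400 exposed field.
demo = np.zeros((1000, 1000))
demo[200:800, 300:700] = 500.0
print(auto_crop(demo).shape)   # (600, 400)
```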
7. The Type of Ground Truth Used
- For the overall system and the new detector, the "ground truth" seems to be expert opinion/consensus (qualitative clinical assessment) on the system's performance, usability, and the adequacy of image quality for radiographic use. There is no mention of pathology, outcomes data, or other definitive "true" states related to findings on the images.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size for the AI-based auto-cropping and auto-collimation features. This is typical for 510(k) submissions of X-ray systems where such AI features are considered ancillary workflow tools rather than primary diagnostic aids.
9. How the Ground Truth for the Training Set was Established
Since no training set information is provided, there is no information on how ground truth was established for any training data.
In summary: The 510(k) submission for the YSIO X.pree focuses on demonstrating substantial equivalence for an updated X-ray system. The "AI-based" features appear to be workflow automation tools that were assessed as part of general system usability and image quality in a "Customer Use Test" and a limited clinical image quality evaluation for the new detector. It does not contain the rigorous quantitative performance evaluation data for AI software as might be seen for a diagnostic AI algorithm that requires a detailed clinical study for clearance.
(179 days)
Re: K250211
Trade/Device Name: Yushan x-ray flat panel detector
Regulation Number: 21 CFR 892.1680
Review Panel: Radiology
Product Code: MQB
The Wireless and Wired Yushan X-Ray Flat Panel Detector is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector is not intended for mammography, fluoroscopy, tomography, and angiography applications. The use of this product is not recommended for pregnant women and the risk of radioactivity must be evaluated by a physician.
The Subject Device, the Yushan X-Ray Flat Panel Detector, is a static digital X-ray detector: models V14C PLUS, F14C PLUS, and V17C PLUS are portable (wireless/wired) detectors, while the V17Ce PLUS is a non-portable (wired) detector. The Subject Device is equivalent to its predicate devices K243171, K201528, K210988, and K220510.
The Subject Device is designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free cassette exams. The Subject Device has a memory exposure mode and an extended image readout feature. Additional features include a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water. The detector is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI).
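The "image compression algorithm for faster image transfer" is not specified in the document. As a generic, hedged illustration of the idea (losslessly compressing a 16-bit frame before sending it over the wireless link), the sketch below uses zlib only as a stand-in for whatever the vendor actually uses; the frame size and synthetic content are hypothetical:

```python
import zlib
import numpy as np

# Synthetic smooth 16-bit frame standing in for a radiograph
# (real clinical images compress less well than this gradient,
#  but substantially better than random noise).
frame = np.linspace(0, 4095, 3072 * 3072, dtype=np.uint16).reshape(3072, 3072)

raw_bytes = frame.tobytes()
compressed = zlib.compress(raw_bytes, level=6)
print(f"raw: {len(raw_bytes) / 1e6:.1f} MB, "
      f"compressed: {len(compressed) / 1e6:.1f} MB")

# Receiver side: decompress and restore the original array losslessly.
restored = np.frombuffer(zlib.decompress(compressed),
                         dtype=np.uint16).reshape(frame.shape)
assert np.array_equal(frame, restored)
```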
The Subject Device can automatically collect x-ray images from an x-ray source. It collects x-rays and digitizes the images for their transfer and display to a computer. The x-ray generator (an integral part of a fully-functional diagnostic system) is not part of the device. The sensor includes a flat panel for x-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating and storing x-ray images.
The Subject Device operates with DROC (Digital Radiography Operating Console), Xresta, or the DR Console, which are unchanged from the predicate devices and were cleared under K201528 (DROC) and K243171 (Xresta and DR Console). DROC or Xresta is software running on a Windows PC/laptop as a user interface for the radiologist to perform a general radiography exam. Its functions include:
- Detector status update
- Xray exposure workflow
- Image viewer and measurement
- Post image process and DICOM file I/O
- Image database: DROC or Xresta supports the necessary DICOM Services to allow a smooth integration into the clinical network
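The document states only that DROC/Xresta "supports the necessary DICOM Services"; the implementation is not described. As a generic illustration of how a console can push an acquired digital X-ray image to a PACS over DICOM, here is a sketch using the open-source pynetdicom library; the hostnames, AE titles, and file path are placeholders, and this is not Yushan's actual software:

```python
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import DigitalXRayImageStorageForPresentation

# Load a DX image previously written to disk (path is hypothetical).
ds = dcmread("acquired_dx_image.dcm")

ae = AE(ae_title="DR_CONSOLE")
ae.add_requested_context(DigitalXRayImageStorageForPresentation)

# PACS host, port, and AE title stand in for a site configuration.
assoc = ae.associate("pacs.hospital.local", 104, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)
    print(f"C-STORE completed, status: 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association with the PACS was rejected or timed out")
```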
The DR Console is an app-based software device. When this app is operating, the off-the-shelf (OTS) software platform can be considered to be the iOS operating system (iOS 16 or above); the safety and effectiveness of this OTS platform have been assessed and evaluated through software (compatibility) testing and a summative usability evaluation. All functions operate normally and successfully under this OTS framework. Its functions include:
- Imaging procedure review
- Worklist settings
- Detector connection settings
- Calibration
- Image processing
The software level of concern for the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console has been determined to be basic based on the "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices," and the cybersecurity risks of the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console have also been addressed to assure that no new or increased cybersecurity risks were introduced, as part of the device risk analysis. These risks are defined as sequences of events leading to hazardous situations, and the controls for these risks were implemented as proposed in the risk analysis (e.g., requirements, verification).
Acceptance Criteria and Study for Yushan X-Ray Flat Panel Detector (K250211)
This documentation describes the acceptance criteria and the study conducted for the Yushan X-Ray Flat Panel Detector (models V14C PLUS, F14C PLUS, V17C PLUS, V17Ce PLUS). The device has received 510(k) clearance (K250211) based on substantial equivalence to predicate devices (K243171, K201528, K210988, K220510).
The primary change in the subject device compared to its predicates is an increase in the CsI scintillator thickness from 400µm (in some predicate CsI models) to 600µm. This change impacts image quality metrics but, according to the manufacturer, does not introduce new safety or effectiveness concerns.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are implicitly tied to demonstrating that the changes in scintillator thickness do not negatively impact safety or effectiveness, and ideally, improve image quality. The primary performance metrics affected by the scintillator change are DQE, MTF, and Sensitivity.
Performance Metric | Acceptance Criteria (Implicit: No degradation in clinical utility compared to predicate, ideally improvement) | Reported Device Performance (Subject Device - 600µm CsI) | Predicate Device (CsI Models - 400µm CsI) Performance |
---|---|---|---|
DQE (Detective Quantum Efficiency) @ 1 lp/mm, RQA5 | Maintain or improve upon predicate's CsI DQE value. | 0.60 (Typical) | 0.48 - 0.50 |
DQE (Detective Quantum Efficiency) @ 2 lp/mm | (Not explicitly stated for acceptance, but shown for performance) | 0.45 (Typical) | Not explicitly listed for predicate |
MTF (Modulation Transfer Function) @ 1 lp/mm, RQA5 | Maintain comparable MTF to predicate's CsI MTF (acknowledging potential trade-offs for improved DQE). | 0.64 (Typical) | 0.63 - 0.69 |
MTF (Modulation Transfer Function) @ 2 lp/mm | (Not explicitly stated for acceptance, but shown for performance) | 0.34 (Typical) | Not explicitly listed for predicate |
Sensitivity | (Not explicitly stated for acceptance, but shown for performance) | 715 lsb/uGy | Not explicitly listed for predicate |
Noise Performance | Superior noise performance compared to predicate. | Superior noise performance | Inferior to subject device |
Image Smoothness | Smoother image quality compared to predicate. | Smoother image quality | Inferior to subject device |
Compliance with Standards | Conformance to relevant safety and performance standards (e.g., IEC 60601 series, ISO 10993). | All specified standards met. | All specified standards met. |
Basic Software Level of Concern | Maintained as basic. | Level of concern remains basic. | Level of concern remains basic. |
Cybersecurity Risks | No new or increased cybersecurity risks introduced. | Risks addressed, no new or increased risks. | Risks addressed. |
Load-Bearing Characteristics | Pass specified tests. | Passed. | Passed. |
Protection against ingress of water | Pass specified tests. | Passed. | Passed. |
Biocompatibility | Demonstrated through ISO 10993 series. | Demonstrated. | Demonstrated. |
Summary of Device Performance vs. Acceptance:
The subject device demonstrates improved DQE, superior noise performance, and smoother images compared to the predicate device (specifically, CsI models), while maintaining comparable MTF and meeting all other safety and performance standards. The slight reduction in MTF compared to the highest performing predicate CsI model (0.69 vs 0.64 at 1 lp/mm) is likely considered an acceptable trade-off given the improvements in DQE and noise, and it is still significantly higher than GOS models.
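The document reports DQE and MTF values but not how they are computed. Per IEC 62220-1, DQE at a spatial frequency u is derived from the measured MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence q for the beam quality used (e.g., RQA5). A minimal sketch follows; the NNPS value and air kerma below are hypothetical, chosen only so the result lands near the reported ~0.60 at 1 lp/mm, and the RQA5 fluence constant should be checked against the standard's table:

```python
import numpy as np

def dqe(mtf: np.ndarray, nnps: np.ndarray, air_kerma_uGy: float,
        snr2_per_uGy_per_mm2: float) -> np.ndarray:
    """DQE(u) = MTF(u)^2 / (q * NNPS(u)), where q is the incident photon
    fluence (photons/mm^2) = air kerma * tabulated SNR_in^2 per uGy for
    the beam quality (tabulated in IEC 62220-1)."""
    q = air_kerma_uGy * snr2_per_uGy_per_mm2
    return mtf ** 2 / (q * nnps)

# Approximate tabulated value for RQA5 (quoted from memory; verify in the standard).
SNR2_PER_UGY_MM2_RQA5 = 30174.0

mtf_1 = np.array([0.64])     # measured MTF at u = 1 lp/mm
nnps_1 = np.array([9.0e-6])  # hypothetical NNPS at 1 lp/mm, in mm^2
print(dqe(mtf_1, nnps_1, air_kerma_uGy=2.5,
          snr2_per_uGy_per_mm2=SNR2_PER_UGY_MM2_RQA5))   # ~0.60
```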
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the numerical sample size for the test set used for the performance evaluation of the image quality metrics (DQE, MTF, Sensitivity, noise, smoothness). These metrics are typically derived from physical measurements on a controlled test setup rather than a clinical image dataset.
Data Provenance: Not explicitly stated regarding country of origin or retrospective/prospective nature. However, the evaluation results for image quality metrics, noise, and smoothness are generated internally by the manufacturer during design verification and validation activities.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Not applicable. The ground truth for DQE, MTF, and Sensitivity measurements is established through standardized physical phantom measurements (e.g., using RQA5 beam quality) rather than expert consensus on clinical images. These are quantifiable engineering parameters.
4. Adjudication Method for the Test Set
Not applicable. The evaluation of DQE, MTF, and Sensitivity is based on objective instrumental measurements, not on reader interpretations or consensus methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or performed as part of this 510(k) submission. The submission focuses on demonstrating substantial equivalence based on technical specifications and physical performance measurements rather than a clinical trial assessing reader performance.
6. Standalone Performance Study
Yes, a standalone performance evaluation was conducted for the device. The reported DQE, MTF, and Sensitivity values, as well as the assessments of noise performance and image smoothness, are measures of the algorithm's (and the underlying detector hardware's) intrinsic performance without human-in-the-loop assistance. The comparison of these metrics between the subject device and the predicate device forms the basis of the standalone performance study.
7. Type of Ground Truth Used
The ground truth used for the performance evaluations (DQE, MTF, Sensitivity, noise, smoothness) is based on objective physical measurements and standardized phantom evaluations. These are quantitative technical specifications derived under controlled laboratory conditions, not expert consensus on pathology, clinical outcomes, or interpretations of patient images.
8. Sample Size for the Training Set
Not applicable. This device is an X-ray flat panel detector, a hardware component that captures images. While it includes embedded software (firmware, image processing algorithms), the document does not indicate that these algorithms rely on a "training set" in the context of machine learning. The image processing algorithms are likely deterministic or parameter-tuned, not learned from a large dataset like an AI model for diagnosis.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no indication of a machine learning "training set" as described in the context of AI models. The ground truth for the development and validation of the detector's physical performance characteristics is established through established metrology and engineering testing protocols.
(131 days)
Stationary x-ray system
Classification Panel: Radiology
Classification Regulation: 21 CFR §892.1680
LUMINOS Q.namix T and LUMINOS Q.namix R are devices intended to visualize anatomical structures by converting an X-ray pattern into a visible image. They are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas such as arthrography, angiography, and pediatrics.
LUMINOS Q.namix T and LUMINOS Q.namix R are not intended to be used for mammography examinations.
The LUMINOS Q.namix T is an under-table fluoroscopy system and the LUMINOS Q.namix R is an over-table fluoroscopy system. Both systems are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas such as arthrography, angiography, and pediatrics. They are designed as modular systems with components such as the main fluoro table including a fixed fluoroscopy detector and X-ray tube, a ceiling suspension with X-ray tube, a Bucky wall stand, an X-ray generator, monitors, and a bucky tray in the table, as well as portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
This FDA 510(k) clearance letter and summary discuss the LUMINOS Q.namix T and LUMINOS Q.namix R X-ray systems. The provided documentation does not include specific acceptance criteria (e.g., numerical thresholds for image quality, diagnostic accuracy, or performance metrics) in the same way an AI/ML device often would. Instead, it relies on demonstrating substantial equivalence to predicate devices and adherence to recognized standards.
The study presented focuses primarily on image quality evaluation for the new detectors (X.fluoro and X.wi-D24) for diagnostic acceptability, rather than establishing acceptance criteria for the entire system's overall performance.
Here's an attempt to extract and present the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
As explicit quantitative acceptance criteria for the overall device performance are not stated in the provided 510(k) summary, this section will reflect the available qualitative performance assessment for the new detectors. The primary "acceptance criterion" implied for the overall device is substantial equivalence to predicate devices and acceptability for diagnostic use.
Feature/Metric | Acceptance Criteria (Implied/Direct) | Reported Device Performance (LUMINOS Q.namix T/R with new detectors) |
---|---|---|
Overall Device Equivalence | Substantially equivalent to predicate devices (Luminos Agile Max, Luminos dRF Max) in indications for use, design, material, functionality, technology, and energy source. | Systems are comparable and substantially equivalent to predicate devices. Test results show comparability. |
New Detector Image Quality (X.fluoro, X.wi-D24) | Acceptable for diagnostic use in radiography & fluoroscopy. | Evaluated images and fluorography studies from different body regions were qualified for proper diagnosis by a US board-certified radiologist and by expert evaluations. |
Compliance with Standards | Compliance with relevant medical electrical safety, performance, and software standards (e.g., IEC 60601 series, ISO 14971, IEC 62304, DICOM). | The LUMINOS Q.namix T/LUMINOS Q.namix R systems were tested and comply with the listed voluntary standards. |
Risk Management | Application of risk management process (per ISO 14971). | Risk Analysis was applied. |
Software Life Cycle | Application of software life cycle processes (per IEC 62304). | IEC 62304 (Medical device software - Software life cycle processes) was applied. |
Usability | Compliance with usability engineering standards (per IEC 60601-1-6, IEC 62366-1). | IEC 60601-1-6 and IEC 62366-1 were applied. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Description: "expert evaluations" for the new detectors X.fluoro and X.wi-D24.
- Sample Size: The exact number of images or fluorography studies evaluated is not specified. The document mentions "multiple images and fluorography studies from different body regions" for the US board-certified radiologist's evaluation.
- Data Provenance:
- Countries of Origin: Germany (University Hospital Augsburg, Klinikum rechts der Isar Munich, Herz-Jesu-Krankenhaus Münster/Hiltrup) and Belgium (ZAS Jan Palfijn Hospital of Merksem).
- Retrospective or Prospective: Not explicitly stated, but clinical image quality evaluations often involve prospective data collection or a mix with retrospective cases. Given they are evaluating "new detectors" and "clinical image quality evaluation", it implies real or simulated clinical scenarios.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts:
- Initial Evaluations: Multiple "expert evaluations" (implies more than one) were conducted across the listed hospitals. The exact number of individual experts is not specified.
- Specific Evaluation: One "US board-certified radiologist" performed a dedicated clinical image quality evaluation.
- Qualifications of Experts:
- For the general "expert evaluations": Not specified beyond being "experts."
- For the specific evaluation: "US board-certified radiologist." No mention of years of experience is provided.
4. Adjudication Method for the Test Set
The document does not specify any formal adjudication method (e.g., 2+1, 3+1 consensus voting) for establishing ground truth or evaluating the image quality. The evaluations appear to be individual or group assessments leading to a conclusion of "acceptability for diagnostic use."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? The document does not describe a formal MRMC comparative effectiveness study designed to quantify the improvement of human readers with AI vs. without AI assistance.
- Effect Size of Human Reader Improvement: Therefore, no effect size is reported.
- Note: While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the study described is an evaluation of the detectors' image quality and the overall system's substantial equivalence, not the clinical impact of these specific AI features on human reader performance.
6. Standalone Performance Study (Algorithm Only)
- The document primarily describes an evaluation of the new detectors within the LUMINOS Q.namix T/R systems and the overall system's substantial equivalence.
- While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the document does not report on a standalone performance study specifically for these AI algorithms in isolation from the human-in-the-loop system. The AI features are listed as technological characteristics that contribute to the device's overall updated design.
7. Type of Ground Truth Used
For the detector image quality evaluation, the ground truth was based on expert assessment ("qualified for proper diagnosis"). This falls under expert consensus or expert judgment regarding diagnostic acceptability.
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set for any AI components. The focus of this 510(k) summary is on substantiating equivalence and safety/effectiveness of the entire X-ray system, not on the development of individual AI algorithms within it.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about a training set, the method for establishing its ground truth is not mentioned in the document.
(104 days)
WAYNE, NJ 07470
Re: K250665
Trade/Device Name: SKR 3000
Common Name: Digital Radiography
Classification Name: Stationary X-Ray System
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary X-Ray System
This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures. This device is not indicated for use in mammography, fluoroscopy, or angiography applications.
The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
- This device is not intended for use in mammography
- This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a basic documentation level. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal processing/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid was cleared in K151465.
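Konica Minolta's processing parameters are proprietary and not described here. Purely as an illustration of the two most common steps named above, gradation (tone-curve) mapping and frequency (unsharp-mask) enhancement, here is a minimal sketch with arbitrary, hypothetical parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradation(img: np.ndarray, window_center: float, window_width: float) -> np.ndarray:
    """Simple sigmoidal gradation (tone) curve around a display window; output 0..1."""
    x = (img.astype(np.float64) - window_center) / (window_width / 4.0)
    return 1.0 / (1.0 + np.exp(-x))

def frequency_enhance(img: np.ndarray, sigma: float = 3.0, gain: float = 0.7) -> np.ndarray:
    """Unsharp masking: add back a weighted high-frequency component."""
    low = gaussian_filter(img, sigma)
    return img + gain * (img - low)

# Synthetic raw-for-display data standing in for a detector image.
raw_for_display = np.random.default_rng(0).integers(0, 4096, (512, 512)).astype(np.float64)
processed = frequency_enhance(gradation(raw_for_display, window_center=2048, window_width=3000))
```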
The FPDs used in SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.
The SKR 3000 is distributed under a commercial name AeroDR 3.
The purpose of the current premarket submission is to add pediatric use indications for the SKR 3000 imaging system.
The provided FDA 510(k) clearance letter and summary for the SKR 3000 device focuses on adding a pediatric use indication. However, it does not contain the detailed performance data, acceptance criteria, or study specifics typically found in a clinical study report. The document states that "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'" and that "pediatric image evaluation using small-size phantoms was performed on the P-53." It also mentions that "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."
Based on the information provided, it's not possible to fully detail the acceptance criteria and the study that proves the device meets them according to your requested format. The document implies that the "acceptance criteria" likely revolved around demonstrating "substantially equivalent image performance" to a predicate device (AeroDR System 2 with P-52) for pediatric use, primarily through phantom studies, rather than a clinical study with human patients and detailed diagnostic performance metrics.
Therefore, many of the requested fields cannot be filled directly from the provided text. I will provide the information that can be inferred or directly stated from the document and explicitly state when information is not available.
Disclaimer: The information below is based solely on the provided 510(k) clearance letter and summary. For a comprehensive understanding, one would typically need access to the full 510(k) submission, which includes the detailed performance data and study reports.
Acceptance Criteria and Device Performance Study for SKR 3000 (Pediatric Use Indication)
The primary objective of the study mentioned in the 510(k) summary was to demonstrate substantial equivalence for the SKR 3000 (specifically with detector P-53) for pediatric use, compared to a predicate device (AeroDR System 2 with P-52).
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the submission (adding a pediatric indication based on substantial equivalence), the acceptance criteria are not explicitly quantifiable metrics like sensitivity/specificity for a specific condition. Instead, the focus was on demonstrating "substantially equivalent image performance" through phantom studies.
Acceptance Criteria (Inferred from Document) | Reported Device Performance (Inferred/Stated) |
---|---|
Image quality of SKR 3000 with P-53 for pediatric applications to be "substantially equivalent" to predicate device (AeroDR System 2 with P-52). | "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use." |
Compliance with "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" for pediatric image evaluation using small-size phantoms. | "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'. Pediatric image evaluation using small-size phantoms was performed on the P-53." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not specified. The document indicates "small-size phantoms" were used, implying a phantom study, not a human clinical trial. The number of phantom images or specific phantom configurations is not detailed.
- Data Provenance: Not specified. Given it's a phantom study, geographical origin is less relevant than for patient data. It's an internal study conducted to support the 510(k) submission. Retrospective or prospective status is not applicable as it's a phantom study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified. Given this was a phantom study, ground truth would likely be based on physical measurements of the phantoms and expected image quality metrics, rather than expert interpretation of pathology or disease. If human evaluation was part of the "comparative image evaluation," the number and qualifications of evaluators are not provided.
- Qualifications: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. For a phantom study demonstrating "substantially equivalent image performance," adjudication methods like 2+1 or 3+1 (common in clinical reader studies) are generally not applicable. The comparison would likely involve quantitative metrics from the generated images.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No. The document states "comparative image evaluation" and "pediatric image evaluation using small-size phantoms." This strongly implies a technical performance assessment using phantoms, rather than a clinical MRMC study with human readers interpreting patient cases. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance in image interpretation (e.g., CAD) is not the focus of this submission; it's about the imaging system's ability to produce quality images for diagnosis.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: Not applicable in the traditional sense of an AI algorithm's diagnostic performance. The device is an X-ray imaging system. The "performance" being evaluated is its ability to generate images, not to provide an automated diagnosis. The "Intelligent-Grid" feature mentioned is an image processing algorithm (scattered radiation correction), but its standalone diagnostic performance is not the subject of this specific submission; its prior clearance (K151465) is referenced.
7. The Type of Ground Truth Used
- Ground Truth Type: For the pediatric image evaluation, the ground truth was based on phantom characteristics and expected image quality metrics. This is inferred from the statement "pediatric image evaluation using small-size phantoms was performed."
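The document does not say which objective metrics the phantom comparison used. Contrast-to-noise ratio (CNR) on matched regions of interest is one typical measure in detector-to-detector phantom comparisons, so the following sketch is illustrative only; the synthetic images, ROI locations, and 10% equivalence margin are assumptions, not Konica Minolta's protocol:

```python
import numpy as np

def roi_cnr(image: np.ndarray, signal_roi: tuple, background_roi: tuple) -> float:
    """CNR = |mean_signal - mean_background| / std_background,
    with ROIs given as (row_slice, col_slice)."""
    sig = image[signal_roi]
    bkg = image[background_roi]
    return abs(sig.mean() - bkg.mean()) / bkg.std()

# Synthetic phantom images standing in for subject vs. comparative detector.
rng = np.random.default_rng(1)
subject_image = rng.normal(1000, 20, (512, 512))
subject_image[100:140, 100:140] += 150      # simulated contrast insert
reference_image = rng.normal(1000, 22, (512, 512))
reference_image[100:140, 100:140] += 150

sig = (slice(100, 140), slice(100, 140))
bkg = (slice(200, 240), slice(200, 240))
cnr_subject = roi_cnr(subject_image, sig, bkg)
cnr_reference = roi_cnr(reference_image, sig, bkg)

# One possible equivalence rule: subject CNR within 10% of the reference.
print(cnr_subject, cnr_reference, cnr_subject >= 0.9 * cnr_reference)
```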
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. The SKR 3000 is an X-ray imaging system, not an AI model that requires a "training set" in the machine learning sense for its primary function of image acquisition. While image processing algorithms (like Intelligent-Grid) integrated into the system might have been developed using training data, the submission focuses on the imaging system's performance for pediatric use.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as no training set (in the context of an AI model's image interpretation learning) is explicitly mentioned or relevant for the scope of this 510(k) submission for an X-ray system.
(135 days)
Trade/Device Name: Wireless/ Wired X-Ray Flat Panel Detectors
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary X-Ray System
Review Panel: Radiology
Referenced Standards: ISO 13485, ISO 14971, ANSI/AAMI ES60601-1, IEC 62220-1-1, ISO 20417
Allengers Wireless/Wired X-Ray Flat Panel Detectors, used with the AWS (Acquisition Workstation Software) Synergy DR FDX/Synergy DR, are used to acquire, process, display, store, and export radiographic images of all body parts using radiographic techniques. They are intended for use in general radiographic applications wherever a conventional film/screen or CR system is used.
Allengers Wireless/Wired X-ray Flat Panel Detectors are not intended for mammography applications.
The Wireless/Wired X-Ray Flat Panel Detectors are designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free cassette exams. These medical devices have a memory exposure mode and an extended image readout feature. Additional features include a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water. The device is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI). The Wireless/Wired X-Ray Flat Panel Detector sensor can automatically collect X-rays from an X-ray source; it converts them into a digital image and transfers it to a desktop computer/laptop/tablet for image display. The X-ray generator (an integral part of a complete X-ray system) is not part of the submission. The sensor includes a flat panel for X-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating, and storing X-ray images; the personal computer is not part of this submission.
The Wireless/Wired X-Ray Flat Panel Detectors are used with the accessory "AWS (Acquisition Workstation Software) Synergy DR FDX/Synergy DR", which runs on a Windows-based desktop computer/laptop/tablet as a user interface for the radiologist to perform a general radiography exam. Its functions include:
- User Login
- Display Connectivity status of hardware devices like detector
- Patient entry (Manual, Emergency and Worklist)
- Exam entry
- Image processing
- Search patient Data
- Print DICOM Image
- Exit
This document describes the 510(k) clearance for Allengers Wireless/Wired X-Ray Flat Panel Detectors (K243734). The core of the submission revolves around demonstrating substantial equivalence to a predicate device (K223009) and several reference devices (K201528, K210988, K220510). The key modification in the subject device compared to the predicate is an increased scintillator thickness from 400 µm to 600 µm, which consequently impacts the Modulation Transfer Function (MTF) and Detective Quantum Efficiency (DQE) of the device.
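The direction of these changes is consistent with basic scintillator physics: a thicker CsI layer absorbs a larger fraction of the incident X-rays (supporting higher DQE), while permitting more lateral light spread (tending to lower MTF at higher spatial frequencies). The sketch below is a back-of-the-envelope Beer-Lambert comparison of 400 µm versus 600 µm CsI; the attenuation coefficient values are illustrative placeholders, since beam quality and CsI packing density are not given in the summary.

```python
import math

def absorbed_fraction(thickness_um, mu_per_cm):
    """Fraction of incident X-rays absorbed in a CsI layer of the given thickness
    (Beer-Lambert; mu is the linear attenuation coefficient in 1/cm)."""
    return 1.0 - math.exp(-mu_per_cm * thickness_um * 1e-4)

# mu depends on beam quality and CsI packing density, neither of which is stated
# in the summary, so a range of illustrative values is shown instead.
for mu in (15.0, 25.0, 35.0):
    a400 = absorbed_fraction(400, mu)
    a600 = absorbed_fraction(600, mu)
    print(f"mu={mu:4.1f}/cm  400um: {a400:.2f}  600um: {a600:.2f}  gain: {a600 / a400:.2f}x")
```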
Based on the provided text, the 510(k) relies on non-clinical performance data (bench testing and adherence to voluntary standards) to demonstrate substantial equivalence, rather than extensive clinical studies involving human subjects or AI-assisted human reading.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the comparison to the predicate device's performance, particularly for image quality metrics (MTF and DQE). The goal is to demonstrate that despite changes, the device maintains diagnostic image quality and does not raise new safety or effectiveness concerns.
| Metric (Units) | Acceptance Criteria (Implicit: Maintain Diagnostic Image Quality) | Reported Device Performance (Subject Device) | Comments/Relation to Predicate |
|---|---|---|---|
| DQE @ 0.5 lp/mm (Max.) | $\ge$ Predicate: 0.78 (Glass) / 0.79 (Non-Glass) | 0.85 (G4343RC, G4343RWC, G4336RWC – Glass); 0.79 (T4336RWC – Non-Glass) | Meets/exceeds predicate values. Improves for Glass substrate models; matches for Non-Glass substrate model. |
| DQE @ 1 lp/mm (Max.) | $\ge$ Predicate: 0.55 (Glass) / 0.58 (Non-Glass) | 0.69 (G4343RC, G4343RWC, G4336RWC – Glass); 0.58 (T4336RWC – Non-Glass) | Meets/exceeds predicate values. Improves for Glass substrate models; matches for Non-Glass substrate model. |
| DQE @ 2 lp/mm (Max.) | $\ge$ Predicate: 0.47 (Glass) / 0.49 (Non-Glass) | 0.54 (G4343RC, G4343RWC, G4336RWC – Glass); 0.49 (T4336RWC – Non-Glass) | Meets/exceeds predicate values. Improves for Glass substrate models; matches for Non-Glass substrate model. |
| MTF @ 0.5 lp/mm (Max.) | $\sim$ Predicate: 0.90 (Glass) / 0.85 (Non-Glass) | 0.95 (G4343RC, G4343RWC, G4336RWC – Glass); 0.90 (T4336RWC – Non-Glass) | Meets/exceeds predicate values. Improves for both Glass and Non-Glass substrate models. |
| MTF @ 1 lp/mm (Max.) | $\sim$ Predicate: 0.76 (Glass) / 0.69 (Non-Glass) | 0.70 (G4343RC, G4343RWC, G4336RWC – Glass); 0.69 (T4336RWC – Non-Glass) | Slightly lower for Glass substrate models (0.70 vs 0.76); matches for Non-Glass substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges." |
| MTF @ 2 lp/mm (Max.) | $\sim$ Predicate: 0.47 (Glass) / 0.42 (Non-Glass) | 0.41 (G4343RC, G4343RWC, G4336RWC – Glass); 0.42 (T4336RWC – Non-Glass) | Slightly lower for Glass substrate models (0.41 vs 0.47); matches for Non-Glass substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges." |
| Thickness of Scintillator | Not an acceptance criterion in itself, but a design change | 600 µm | Increased from predicate (400 µm). |
| Sensitivity (Typ.) | $\sim$ Predicate: 574 LSB/µGy | 715 LSB/µGy | Increased from predicate. |
| Max. Resolution | 3.57 lp/mm (matches predicate) | 3.57 lp/mm | Matches predicate. |
| General Safety and Effectiveness | No new safety and effectiveness issues raised compared to predicate | Verified by adherence to voluntary standards and risk analysis | Claimed to be met. The increased scintillator thickness is "deemed acceptable," and experimental results confirm "superior noise performance and smoother image quality compared to the 400μm CsI, without clinically significant degradation of details or edges." |
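Read as a check, the implicit criterion amounts to: subject-device DQE at each frequency should be at least the predicate's, while MTF should be comparable, with small deficits tolerated when justified as clinically insignificant. The sketch below encodes that comparison for the glass-substrate values in the table; the numeric MTF tolerance is an assumption, as the 510(k) states no margin.

```python
# Glass-substrate values transcribed from the table above: lp/mm -> (predicate, subject).
dqe = {0.5: (0.78, 0.85), 1.0: (0.55, 0.69), 2.0: (0.47, 0.54)}
mtf = {0.5: (0.90, 0.95), 1.0: (0.76, 0.70), 2.0: (0.47, 0.41)}

MTF_TOLERANCE = 0.10  # assumed allowable absolute shortfall; not specified in the 510(k)

for f, (pred, subj) in sorted(dqe.items()):
    print(f"DQE @ {f} lp/mm: {'pass' if subj >= pred else 'FAIL'} ({subj} vs {pred})")
for f, (pred, subj) in sorted(mtf.items()):
    ok = subj >= pred - MTF_TOLERANCE
    print(f"MTF @ {f} lp/mm: {'comparable' if ok else 'REVIEW'} ({subj} vs {pred})")
```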
2. Sample Size Used for the Test Set and Data Provenance
The document explicitly states that the submission relies on "Non-clinical Performance Data" and "Bench testing". There is no mention of a clinical test set involving human subjects or patient imaging data with a specified sample size. The data provenance is laboratory bench-testing results; the country of origin is not explicitly stated beyond the company being based in India, and the data are physical performance measurements rather than patient data. The testing is described as functional testing to evaluate the impact of different scintillator thicknesses.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not applicable as the clearance is based on non-clinical, bench testing data (physical performance characteristics like MTF and DQE) rather than clinical image interpretation or diagnostic performance that would require human expert ground truth.
4. Adjudication Method for the Test Set
Not applicable, as there is no mention of a human-read test set or ground truth adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No. The document does not mention an MRMC study or any study involving human readers, with or without AI assistance. The device is an X-ray detector, not an AI software.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) study was done
Not applicable in the context of an AI algorithm, as this device is an X-ray detector and associated acquisition software. However, the intrinsic performance of the detector itself (MTF, DQE, sensitivity) was assessed through bench testing and measurements, which can be considered its "standalone" performance.
7. The Type of Ground Truth Used
The "ground truth" for the performance claims (MTF, DQE, sensitivity) is based on physical phantom measurements and engineering specifications obtained through controlled bench testing following recognized industry standards (e.g., IEC 62220-1-1). It is not based on expert consensus, pathology, or outcomes data from patient studies.
8. The Sample Size for the Training Set
Not applicable. This submission is for an X-ray flat panel detector, not an AI/ML model that would require a "training set" of data.
9. How the Ground Truth for the Training Set was Established
Not applicable. As stated above, this device does not involve an AI/ML model with a training set.
(188 days)
Re: K242770
Trade/Device Name: EXPD 114; EXPD 114G; EXPD 114P; EXPD 114PG
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary X-Ray System
Classification Panel: Radiology
The EXPD 114, EXPD 114P, EXPD 114G, and EXPD 114PG digital X-ray detectors are a digital imaging solution indicated for providing general radiographic diagnosis of human anatomy. This device is intended to replace film- or screen-based radiographic systems in all general-purpose diagnostic procedures. This device is not intended for mammography applications. It is intended for both adult and pediatric populations.
EXPD 114, EXPD 114G, EXPD 114P, and EXPD 114PG are flat-panel digital X-ray detectors that capture projection radiographic images in digital format within seconds, eliminating the need for X-ray film or an image plate as the image-capture medium. They differ from traditional X-ray systems in that, instead of exposing a film and chemically processing it to create a hard-copy image, a detector is used to capture the image in electronic form.
EXPD 114, EXPD 114G, EXPD 114P, and EXPD 114PG are indirect-conversion devices in the form of a square plate that converts the incoming X-rays into visible light. This visible light is then collected by an optical sensor, which generates an electric-charge representation of the spatial distribution of the incoming X-ray quanta.
The charges are converted to a modulated electrical signal by thin-film transistors. The amplified signal is converted to a voltage and then from analog to digital; the resulting digital image can be printed, transmitted for remote viewing, or stored as an electronic data file for later viewing.
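The conversion chain described above can be illustrated with a toy numerical model: X-ray quanta produce scintillation light, a fraction of that light is collected and converted to charge, and the charge is read out and quantized by the ADC. Every parameter in the sketch below is a hypothetical placeholder, not a DRTECH specification.

```python
import numpy as np

# Toy model of the indirect-conversion chain described above:
# X-ray quanta -> scintillator light -> collected photons -> charge -> ADC counts.
# All parameters are hypothetical placeholders for illustration only.
rng = np.random.default_rng(0)

def simulate_pixel_counts(incident_quanta, light_yield=25.0, coupling=0.6,
                          electrons_per_photon=0.8, adc_gain=0.01, adc_bits=16):
    """Return quantized ADC counts for an array of incident X-ray quanta per pixel."""
    light_photons = rng.poisson(incident_quanta * light_yield)   # scintillation
    collected = rng.binomial(light_photons, coupling)            # optical collection
    electrons = collected * electrons_per_photon                 # charge generation
    counts = np.round(electrons * adc_gain)                      # A/D conversion
    return np.clip(counts, 0, 2 ** adc_bits - 1).astype(np.uint16)

frame = simulate_pixel_counts(rng.poisson(5000, size=(4, 4)))
print(frame)
```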
The DRTECH Corporation's EXPD 114, EXPD 114G, EXPD 114P, EXPD 114PG Digital X-ray detectors were assessed for substantial equivalence to a predicate device (K223124). The company conducted non-clinical performance testing (bench tests) and a "Concurrence Study" for image quality to demonstrate this.
Here's a breakdown of the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a numerical or pass/fail format for the Concurrence Study beyond broad equivalence. Instead, it compares the performance of the subject device to the predicate device. The performance data mentioned primarily relates to technical specifications and general image quality assessment.
| Parameter | Acceptance Criteria (Implied: Equivalent to or comparable to predicate) | Reported Device Performance (Subject Device) | Reported Predicate Device Performance (K223124) |
|---|---|---|---|
| DQE | Equivalent to or comparable to predicate | EXPD 114: 45% @ 0.5 lp/mm; EXPD 114G: 25% @ 0.5 lp/mm; EXPD 114P: 45% @ 0.5 lp/mm; EXPD 114PG: 25% @ 0.5 lp/mm | EXPD 129P, EXPD 86P: 50.0% @ 0.5 lp/mm; EXPD 129PG, EXPD 86PG: 25.0% @ 0.5 lp/mm |
| MTF | Equivalent to or comparable to predicate | EXPD 114, 114G, 114P, 114PG: 40% @ 2.0 lp/mm | EXPD 129P, EXPD 86P: 45.0% @ 2.0 lp/mm; EXPD 129PG, EXPD 86PG: 45.0% @ 2.0 lp/mm |
| Resolution | Equivalent to or comparable to predicate | 3.5 lp/mm | 3.5 lp/mm |
| Image Quality (Clinical Assessment) | Equivalent to predicate device | "the image quality of the subject device is equivalent to that of the predicate device" | Standard established by predicate device. |
Note on DQE and MTF: The subject device's DQE for EXPD 114/114P is slightly lower than that of the predicate EXPD 129P/86P (45% vs 50%). Similarly, the MTF for all subject devices is lower than the predicate's (40% vs 45%). Despite these numerical differences, the overall conclusion states that the performance is "basically equal or worth [sic] the predicate device" and that the device meets its acceptance criteria, suggesting that the measured differences were considered clinically acceptable within the context of substantial equivalence.
2. Sample Size Used for the Test Set and Data Provenance
The document states: "Our Concurrence Study for Image Quality was based on body parts (Chest, C-spine AP, L-spine AP, Shoulder AP, Pelvis AP, Extremity) to compare subject device and predicate device(K223124)."
- Sample Size: The exact number of images or cases analyzed in the Concurrence Study is not specified in the provided text. It only lists the anatomical sites included.
- Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective. It is a "Concurrence Study" which implies a direct comparison, likely of newly acquired images, but this is not explicitly stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: "a qualified clinical expert" (singular) is mentioned.
- Qualifications of Experts: The expert is described as "qualified clinical expert." No further details on their specific qualifications (e.g., radiologist, years of experience, board certification) are provided.
4. Adjudication Method for the Test Set
- Adjudication Method: The document only mentions "a qualified clinical expert confirmed" the image quality. This strongly suggests a single-reader assessment without any explicit adjudication method (e.g., 2+1, 3+1 consensus).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: Based on the description of "a qualified clinical expert confirmed," it appears that a formal MRMC comparative effectiveness study was not conducted. The assessment seems to be a qualitative comparison of image quality by a single expert.
- Effect Size of Human Reader Improvement: As an MRMC study was not indicated, there is no information on the effect size of how much human readers improve with AI vs. without AI assistance. The device is a digital X-ray detector, not an AI software intended for interpretation assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- The device is a digital X-ray detector, not an AI algorithm. Therefore, the concept of "standalone performance" of an AI algorithm is not applicable in this context. The "performance" described relates to the physical characteristics of the detector (DQE, MTF, Resolution) and its ability to produce images of diagnostic quality.
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the Concurrence Study was based on the expert's subjective assessment of image quality compared to the predicate device, stating that "the image quality of the subject device is equivalent to that of the predicate device." This is a form of expert consensus, albeit with only one expert explicitly mentioned. It's not based on pathology or outcomes data.
8. Sample Size for the Training Set
- The document describes a device (digital X-ray detector), not a machine learning model. Therefore, the concept of a "training set" in the context of an AI/ML algorithm is not applicable.
9. How the Ground Truth for the Training Set Was Established
- As the device is not an AI/ML algorithm, the concept of "training set ground truth" is not applicable.
(133 days)
Trade/Device Name: EXPD 4343NP; EXPD 3643N1; EXPD 3643N; EXPD 3643U1; EXPD 3643NU; EXPD 3643NP
Regulation Number: 21 CFR 892.1680
Product Code: MOB
Device Class: II
The Digital X-ray detector, EXPD-N Series, is designed for use in digital imaging solutions for general radiographic diagnosis of human anatomy. This device is intended for use in all general diagnostic procedures, replacing film or screen-based radiographic systems for both adult and paediatric patients. It is not intended for use in mammography.
In comparison to existing devices, the new detectors incorporate flexible a-Si as the TFT material within the panel. The primary difference from the conventional glass a-Si panel is that the electronic circuits, such as the silicon layer, are deposited on a plastic substrate instead of a glass substrate during manufacturing of the TFT panel. Since only the material of the substrate on which the silicon is deposited changes, the overall image performance remains unaffected. Another difference is the pixel pitch: while existing products feature only a 140 μm pixel pitch, the new models include an option with a 100 μm pixel pitch. Pixel pitch has a significant impact on the detector's resolution, MTF (Modulation Transfer Function), and sensitivity.
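As a point of reference not stated in the submission, the limiting resolution of a sampled detector is bounded by the Nyquist frequency of its pixel grid, which is why a smaller pixel pitch raises the achievable lp/mm:

$$f_{\mathrm{Nyquist}} = \frac{1}{2p}, \qquad \frac{1}{2 \times 0.140\ \mathrm{mm}} \approx 3.57\ \mathrm{lp/mm}, \qquad \frac{1}{2 \times 0.100\ \mathrm{mm}} = 5.0\ \mathrm{lp/mm}$$

The 3.57 lp/mm and 3.5 lp/mm maximum-resolution figures reported elsewhere in this document are consistent with roughly 140 μm pixel pitches, while the 100 μm option raises the bound to 5 lp/mm.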
This medical device submission is for an X-ray detector, not an AI/ML device. Therefore, the typical acceptance criteria and study requirements for AI/ML devices, such as those related to multi-reader multi-case studies, standalone performance, and ground truth establishment with expert consensus or pathology, are not applicable here.
The submission focuses on establishing substantial equivalence to a predicate device based on technical characteristics and physical performance, confirming it is suitable for general radiographic diagnosis.
Here's a breakdown of the provided information, tailored to the context of a non-AI X-ray detector:
1. Table of Acceptance Criteria and the Reported Device Performance
The acceptance criteria are implicitly defined by demonstrating substantial equivalence to the predicate device in terms of technical characteristics and performance metrics relevant to X-ray image quality. The table below compares the subject device's performance to the predicate device, highlighting where performance is similar or improved.
| Item | Acceptance Criteria (Implied by Predicate Device K193017 Performance) | Subject Device (EXPD-N Series) Reported Performance |
|---|---|---|
| Intended Use | General radiographic diagnosis, replaces film/screen-based systems, adult & pediatric, not for mammography | General radiographic diagnosis, replaces film/screen-based systems, adult & pediatric, not for mammography (Same) |
| Anatomical Sites | General Radiography | General Radiography (Same) |
| Dimensions (mm) | EVS 3643W/WG/WP: 460(W) x 386(L) x 15(H); EVS 4343W/WG/WP: 460(W) x 460(L) x 15(H) | EXPD 3643N/NP/NU/N1/U1: 460(W) x 386(L) x 15.5(H); EXPD 4343N/NP/NU/N1/U1: 460(W) x 460(L) x 15.5(H) (slight difference in thickness, otherwise similar) |
| Pixel Pitch | 140 μm | 140 μm (N/NP/NU models); 100 μm (N1/U1 models) (improved resolution option added) |
| Image Size (pixels) | EVS 3643W/WG/WP: 2,560 x 3,072; EVS 4343W/WG/WP: 3,072 x 3,072 | EXPD 3643N/NP/NU: 2,560 × 3,072 (Same); EXPD 4343N/NP/NU: 3,072 × 3,072 (Same); EXPD 3643N1/U1: 3,534 × 4,302; EXPD 4343N1/U1: 4,302 × 4,302 (improved with 100 μm pixel pitch) |
| Active Area (mm) | EVS 3643W/WG/WP: 430 x 358; EVS 4343W/WG/WP: 430 x 430 | EXPD 3643N/NP/NU: 353.4 × 430.2 (Similar); EXPD 4343N/NP/NU: 430.2 × 430.2 (Similar); EXPD 3643N1/U1: 358.4 × 430.08; EXPD 4343N1/U1: 430.08 × 430.08 (Similar, adapted for 100 μm pixel pitch) |
| TFT Material | a-Si, IGZO | a-Si, Flexible a-Si, IGZO (new Flexible a-Si material introduced, otherwise similar) |
| Cycle Time | | |
(237 days)
Secondary Predicate Device: Radlink® GPS (K142718), Regulation Number: 892.1680, Product Code: LLZ/MQB
Radlink GPS Pro Imaging is an image-processing software intended to assist in hip procedures by measuring the acetabulum's position relative to local bone structures identified from radiological images. The device allows for overlaying digital annotations on radiological images and includes tools for performing measurements using the images and digital annotations. Clinical judgment and experience are required to properly use the software. The software is not for primary image interpretation. The software is not for use on mobile phones.
The Radlink® GPS Pro Imaging is image-processing software designed to assist orthopedic surgeons in positioning hip, knee, and trauma components. The software leverages advanced geographic measurement and stitching technologies, which have been previously FDA cleared, ensuring precise and reliable image handling. These foundational technologies continue to be the primary features of the software. Additionally, the Radlink GPS Pro Imaging software supports complete image acquisition and DICOM transmission capabilities, facilitating seamless integration with existing hospital systems. It is compatible with the Windows operating system and requires minimal hardware specifications, making it a versatile solution for different clinical setups.
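To make the measurement workflow concrete, the sketch below shows one way an inclination-style angle could be computed between two user-placed landmarks and a horizontal reference line in the image. The landmark coordinates and function name are illustrative assumptions and do not represent Radlink's actual implementation.

```python
import math

def angle_to_reference(p1, p2):
    """Angle (degrees) between the line through two annotated landmarks
    (x, y in image pixels) and a horizontal reference line."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(math.degrees(math.atan2(dy, dx)))

# Hypothetical landmarks placed on a pelvic radiograph (pixel coordinates).
medial_edge = (655.0, 572.0)
lateral_edge = (812.0, 640.0)
print(f"Cup inclination vs. horizontal: {angle_to_reference(medial_edge, lateral_edge):.1f} deg")
```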
The provided text describes the Radlink GPS Pro Imaging device and its substantial equivalence to predicate devices, but it does not contain detailed information about the acceptance criteria or a specific study proving the device meets those criteria with statistical significance.
The "Performance Data" section ([6]) states: "Comprehensive cybersecurity testing and software verification activities were conducted in accordance with applicable FDA guidance and recognized standards. These assessments confirm that the device's software meets the required performance, safety, and security criteria." However, it does not specify what those "required performance... criteria" are in quantifiable terms, nor does it provide a study with specific results.
Therefore, most of the requested information cannot be extracted directly from this document.
Here's what can be gathered and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Not available in the provided document. The document vaguely states "required performance... criteria" but does not define them or report specific device performance metrics against these criteria.
2. Sample Size Used for the Test Set and Data Provenance
Not available in the provided document. No information is given regarding the sample size of any test set or the provenance of the data (e.g., country of origin, retrospective/prospective).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
Not available in the provided document. There is no mention of experts or how ground truth might have been established for a test set.
4. Adjudication Method for the Test Set
Not available in the provided document. The document does not describe any adjudication methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Not available in the provided document. The document does not mention an MRMC study or any effect size for human reader improvement with AI assistance. The "Indications for Use" section ([3]) explicitly states, "Clinical judgment and experience are required to properly use the software. The software is not for primary image interpretation." and "The software is not for use on mobile phones.", which suggests it is an assistive tool, but no comparative study is detailed.
6. Standalone Performance Study
Not available in the provided document. While "software performance, segmentation accuracy" are mentioned as being tested ([6]), no specific standalone performance study results, metrics, or methods are provided. The statement that "Clinical judgment and experience are required to properly use the software" ([3]) reinforces that it's not intended for standalone interpretation.
7. Type of Ground Truth Used
Not available in the provided document. The document broadly states "segmentation accuracy" was part of testing ([6]), implying some form of ground truth for segmentation, but the specific type (e.g., expert consensus, pathology, outcome data) is not detailed.
8. Sample Size for the Training Set
Not available in the provided document. The document does not provide any information about a training set or its size.
9. How the Ground Truth for the Training Set was Established
Not available in the provided document. As there is no information about a training set, the method for establishing its ground truth is also not provided.
In summary, the provided FDA 510(k) clearance letter and summary discuss the device's function, intended use, and its substantial equivalence to predicate devices, but they do not disclose the detailed technical performance data, acceptance criteria, or study methodologies that would typically be found in a full submission or scientific publication. The document focuses on regulatory compliance and the claim of substantial equivalence rather than explicit performance metrics and supporting study details.