Search Results
Found 684 results
510(k) Data Aggregation
(152 days)
SHANGHAI, 201807
CHINA
Re: K252000
Trade/Device Name: uDR Arria & uDR Aris
Regulation Number: 21 CFR 892.1680
Stationary X-Ray System
Classification Panel: Radiology
Classification Regulation: 21 CFR §892.1680
| Item | Subject Device | Predicate Device | Remark |
|---|---|---|---|
| General | | | |
| Product Code | KPR | KPR | Same |
| Regulation No. | 892.1680 | 892.1680 | Same |
Digital Medical X-ray Imaging System is intended to acquire X-ray images of the human body by a qualified technician; examples include acquiring two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.
uDR Arria and uDR Aris are two models of Digital Medical X-ray Imaging System developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). The system is equipped with imaging-chain components and enhanced processing technology, enabling radiographic images of high image quality. The intuitive user interface and easy-to-use functions support clinical users during patient examination and image processing.
The system is intended to acquire X-ray images of the human body by a qualified technician; examples include acquiring two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.
Here's a breakdown of the acceptance criteria and study details for the uDR Arria and uDR Aris devices, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary provides details for two optional software functions: uVision and uAid. For the rest of the device, a conventional X-ray imaging system, the primary "performance" measure is image quality, which is evaluated subjectively through clinical image review rather than against quantitative metrics and acceptance criteria.
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| uVision (Optional) | When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. | In 95% of patient positioning processes, the light field and equipment position automatically set by uVision can meet the clinical positioning and shooting requirements. In the remaining 5% of cases, based on the light field and system position automatically set by the equipment, technicians still need to make manual adjustments. |
| uAid (Optional) | A 90% pass rate for "Grade A" images, aligning with industry standards (e.g., European Radiology and ACR-AAPM-SPR Practice Parameter guidelines, which state that Grade A image rates in public hospitals generally range between 80% and 90%). Implicitly, for specific criteria: sensitivity and specificity of whether there is a foreign body, whether the lung field is intact, and whether the scapula is open all exceed 0.9. | Average time of uAid algorithm: 1.359 seconds (longest does not exceed 2 seconds). Maximum memory occupation of uAid algorithm: not more than 2 GB. Sensitivity and specificity of whether there is a foreign body, whether the lung field is intact, and whether the scapula is open all exceed 0.9. The uAid function can correctly identify four types of results (Foreign object, Incomplete lung fields, Unexposed shoulder blades, and Centerline deviation), making classifications (Green: qualified, Yellow: secondary, Red: waste). |
| Clinical Image Quality | Each image was reviewed with a statement indicating that image quality is sufficient for clinical diagnosis. | Sample images of chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided. A board-certified radiologist evaluated the image quality for sufficiency for clinical diagnosis. |
2. Sample Size Used for the Test Set and Data Provenance
uVision:
- Sample Size: The evaluation covers a single week of data (2024.12.17 - 2024.12.23), totaling 328 chest cases and 20 full-spine or full-lower-limb stitching cases.
- Data Provenance: The device with uVision function (serial number 11XT7E0001) has been in use for over a year at a hospital. The testing data includes individuals of all genders and varying heights capable of standing independently. It is prospective in the sense that it was collected during routine clinical operation after installation and commissioning. The country of origin is not explicitly stated but implied to be China, given the manufacturer's location.
uAid:
- Sample Size: The document does not provide a single total number for the test set. Instead, it breaks down the data by age/gender distribution and the distribution of positive/negative cases for each criterion.
- Age/Gender Distribution: 5,680 patients in total (summing the male, female, no-age, and no-age/no-gender categories).
- Criterion-specific counts (a worked bound on these counts follows after this list):
- Lung field segmentation: 465 Negative, 31 Positive
- Spinal centerline segmentation: 815 Negative, 68 Positive
- Shoulder blades segmentation: 210 Negative, 1089 Positive
- Foreign object: 1078 Negative, 3080 Positive
- Data Provenance: Data collection started in October 2017, from the uDR 780i and "different cooperative hospitals." The study was approved by the institutional review board. The data is stored in DICOM format. It is retrospective, collected from existing hospital data. Country of origin not explicitly stated but implied to be China.
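Since the summary claims sensitivity and specificity above 0.9 for three of the four criteria (foreign body, lung field integrity, scapula opening), the counts above imply a minimum number of correctly classified cases per criterion. A minimal sketch of that arithmetic in Python (the counts come from the summary; the script itself is illustrative, treating "Positive" as the condition-present class):

```python
# Minimum correct classifications implied by "sensitivity and specificity
# exceed 0.9". Centerline deviation is omitted because the stated >0.9
# claim covers only the foreign body, lung field, and scapula criteria.
from math import floor

counts = {                       # criterion: (negatives, positives)
    "lung field": (465, 31),
    "shoulder blades": (210, 1089),
    "foreign object": (1078, 3080),
}
for name, (neg, pos) in counts.items():
    min_tp = floor(0.9 * pos) + 1    # sensitivity > 0.9  =>  TP > 0.9 * P
    min_tn = floor(0.9 * neg) + 1    # specificity > 0.9  =>  TN > 0.9 * N
    print(f"{name}: TP >= {min_tp}/{pos}, TN >= {min_tn}/{neg}")
```

For example, the foreign-object criterion (3,080 positives) requires at least 2,773 true positives to clear a 0.9 sensitivity bound.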
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
uVision:
- Number of Experts: The results were "statistically analyzed by clinical experts." The exact number is not specified, but the term "experts" suggests more than one.
- Qualifications: "Clinical experts" is a general term. Specific qualifications (e.g., radiologist, radiologic technologist, years of experience) are not provided.
uAid:
- Ground truth establishment for uAid's classification categories (Foreign object, Incomplete lung fields, Unexposed shoulder blades, Centerline deviation) is not explicitly described in terms of experts or their qualifications. The acceptance criteria reference "relevant research and literature" and "industry guidelines and standards," suggesting that the ground truth for image quality categorization might be derived from these established definitions.
Clinical Image Quality (General):
- Number of Experts: "A board certified radiologist." This indicates one expert.
- Qualifications: "Board certified radiologist." This is a specific and high qualification for evaluating radiographic images.
4. Adjudication Method for the Test Set
uVision:
- The document states that the results were "statistically analyzed by clinical experts." This implies some form of review and judgment, but a formal adjudication method (e.g., 2+1, 3+1) is not described. It's presented as an evaluation of the system's compliance with technician criteria.
uAid:
- The method for establishing ground truth classifications for uAid's criteria (Foreign object, Incomplete lung fields, Unexposed shoulder blades, Centerline deviation) is not detailed, so an adjudication method is not provided.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly described for either uVision or uAid or the overall device. The studies focused on evaluating the standalone performance of the AI features and the overall clinical image quality (subjective expert review), rather than comparing human readers with and without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, standalone performance was evaluated for both uVision and uAid.
- uVision: The reported performance is how well the "automatically set system position and field size" meet "clinical technicians' criteria" when the uVision function is used for automatic positioning. While technicians use the function, the performance metric itself is about the accuracy of the system's automatic output before manual adjustment.
- uAid: The reported performance metrics (sensitivity, specificity, processing time, memory usage) are direct measurements of the algorithm's output in categorizing image quality criteria. The system classifies images and presents the evaluation "instantly accessible to radiologic technologists," but the evaluation itself is algorithmic.
7. The Type of Ground Truth Used
uVision:
- The ground truth for uVision appears to be clinical technicians' criteria. The metric is "meet clinical technicians' criteria with 95% compliance," implying that the "correct" positioning is defined by the standards and expectations of experienced technicians.
uAid:
- The ground truth for uAid's image quality classification is based on established clinical quality control criteria for chest X-rays. The acceptance criteria explicitly reference "mature industry guidelines and standards, such as those from European Radiology and the ACR-AAPM-SPR Practice Parameter." This suggests an expert consensus-driven or guideline-based ground truth for what constitutes a "Grade A" image or the presence of specific issues like foreign objects or incomplete lung fields.
8. The Sample Size for the Training Set
uVision:
- The sample size for the training set for uVision is not provided. The document explicitly states that the "testing dataset was collected independently from the training dataset, with separated subjects and during different time periods."
uAid:
- The sample size for the training set for uAid is not provided. It is mentioned that "The data collection started in October 2017... from different cooperative hospitals" for the overall data reservoir, but specific numbers for training versus testing are not given. Similar to uVision, the testing data is stated to be "entirely independent and does not share any overlap with the training data."
9. How the Ground Truth for the Training Set Was Established
uVision:
- The method for establishing the ground truth for the training set for uVision is not provided.
uAid:
- The method for establishing the ground truth for the training set for uAid is not provided. While the testing set's ground truth is implied to be based on industry guidelines and clinical criteria, the process for prospectively labeling training data is not detailed.
(27 days)
Re: K252911
Trade/Device Name: Lux HD 2530 Detector (Lux HD 2530)
Regulation Number: 21 CFR 892.1680
Classification Name: Stationary X-Ray System
Product Code: MQB
Regulation Number: 21 CFR 892.1680
Product Code: MQB
Classification Name: Stationary X-Ray System
Regulation Number: 21 CFR 892.1680
| Item | Subject Device | Remark |
|---|---|---|
| Classification Name | Stationary X-ray system | Same |
| Product Code | MQB | Same |
| Regulation Number | 21 CFR 892.1680 | Same |
Lux HD 2530 Detector is indicated for digital imaging solutions designed to provide general radiographic diagnosis for human anatomy including both adult and pediatric patients. It is intended to replace film/screen systems in all general–purpose diagnostic procedures. Lux HD 2530 Detector is not intended for mammography or dental applications.
The Lux HD 2530 Detector is a digital flat panel detector. It supports single-frame mode, and its key component is a TFT/PD image sensor flat panel with an active area of 25 cm × 30 cm. The models differ only in the overall dimensions of the image receptor.
The sensor plate of the Lux HD 2530 Detector has a directly deposited CsI scintillator that converts X-rays to visible photons. The photodiode/capacitor array within the TFT panel converts the visible photons to electrical signals, which the scanning and readout electronics assemble and process into a panel image transmitted to a PC through the user interface.
The major function of the Lux HD 2530 Detector is to convert X-rays into a digital image for high-resolution X-ray imaging applications.
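For background, an indirect-detection chain like this is commonly approximated as a cascade of gain stages (absorption, scintillation, optical coupling, photodiode conversion). The sketch below is a generic illustration with made-up, order-of-magnitude stage values; none of these numbers come from the submission:

```python
# Illustrative cascaded-gain model of an indirect (CsI + TFT/PD) detector.
# All stage values are hypothetical round numbers for illustration only.
x_ray_quanta = 10_000    # incident X-ray photons per pixel per exposure
absorption = 0.7         # fraction absorbed by the CsI scintillator
optical_gain = 1_000     # visible photons emitted per absorbed X-ray
coupling = 0.5           # fraction of light collected by the photodiode
quantum_eff = 0.8        # photodiode conversion efficiency

electrons = x_ray_quanta * absorption * optical_gain * coupling * quantum_eff
print(f"~{electrons:,.0f} photoelectrons per pixel before readout")
```

Each stage multiplies the signal; the stage with the fewest quanta (here, absorbed X-rays) dominates the noise, which is why scintillator absorption drives DQE.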
The Digital Radiographic Imaging Acquisition Software Platform - DR is part of the system; it is used to acquire, enhance, and view images from the Lux HD 2530 Detector. Based on the risks and intended use, the documentation level of the software is Basic.
N/A
(157 days)
SHANGHAI, 201807
CHINA
Re: K251167
Trade/Device Name: uDR Aurora CX
Regulation Number: 21 CFR 892.1680
Stationary X-Ray System
Classification Panel: Radiology
Classification Regulation: 21 CFR §892.1680
| Item | Subject Device | Predicate Device | Remark |
|---|---|---|---|
| General | | | |
| Product Code | KPR | KPR | Same |
| Regulation No. | 892.1680 | 892.1680 | Same |
uDR Aurora CX is intended to acquire X-ray images of the human body by a qualified technician; examples include acquiring two-dimensional X-ray images of the skull, spinal column, chest, abdomen, extremities, limbs and trunk. The visualization of such anatomical structures provides visual evidence to radiologists and clinicians in making diagnostic decisions. This device is not intended for mammography.
uDR Aurora CX is a model of Digital Medical X-ray Imaging System developed and manufactured by Shanghai United Imaging Healthcare Co., Ltd. (UIH). It comprises an X-ray generator and an X-ray imaging system. The X-ray generator produces controlled X-rays via the high-voltage generator and X-ray tube assembly, ensuring stable energy output for penetrating the human body. The X-ray imaging system converts X-ray photons into electrical signals with its detectors, and the workstation generates DICOM-standard images reflecting density variations of the human body.
This document describes the acceptance criteria and study details for two features of the uDR Aurora CX device: uVision and uAid.
1. Acceptance Criteria and Reported Device Performance
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| uVision | When users employ the uVision function for automatic positioning, the automatically set system position and field size will meet clinical technicians' criteria with 95% compliance. This demonstrates that uVision can effectively assist clinical technicians in positioning tasks, specifically by aiming to reduce retake rates attributed to incorrect positioning (which studies indicate can range from 9% to 28%). | In 95% of patient positioning processes, the light field and equipment position automatically set by uVision met clinical positioning and shooting requirements for chest PA, whole-spine, and whole-lower-limb stitching exams. In the remaining 5% of cases, manual adjustments by technicians were needed. |
| uAid | The accuracy of non-standard image recognition (specifically, the rate of "Grade A" images recognized) should meet a 90% pass rate, aligning with industry standards derived from guidelines like those from European Radiology and ACR-AAPM-SPR Practice Parameter (which indicate Grade A image rates between 80% and 90% in public hospitals). This demonstrates that uAid can effectively assist clinical technicians in managing standardized image quality. | Overall Performance: The uAid function can correctly identify four types of results (Foreign object, Incomplete lung fields, Unexposed shoulder blades, and Centerline deviation) and classify images into Green (qualified), Yellow (secondary), or Red (waste). It meets the requirement for checking examination and positioning quality. |
| | | Specific quantitative performance (from the "Summary"): average algorithm time 1.359 seconds (longest not exceeding 2 seconds); maximum memory occupation not more than 2 GB; for foreign body, lung field integrity, and scapula opening, both sensitivity and specificity exceed 0.9. |
2. Sample Size and Data Provenance for the Test Set
| Feature | Sample Size for Test Set | Data Provenance |
|---|---|---|
| uVision | 348 cases (328 Chest PA cases + 20 Full Spine or Full Lower Limb Stitching cases) collected over one week from 2024.12.17 to 2024.12.23. The device had been installed for over a year, with an average daily volume of ~80 patients, ~45 chest X-rays/day, and ~10-20 stitching cases/week. | Prospective/Retrospective Hybrid: The data was collected prospectively from a device (serial number 11XT7E0001) in clinical use after installation and commissioning over a year prior to the reported test period. It was collected from individuals of all genders and varying heights (able to stand independently). The testing was conducted in a real-world clinical setting. Country of Origin: Not explicitly stated, but the company is in Shanghai, China, suggesting the data is likely from China. |
| uAid | Not explicitly stated as a single total number of cases. Instead, the data distribution is provided, indicating various counts for different conditions across gender and age groups. For example, "lung field segmentation" had 465 negative and 31 positive cases. "Foreign object" had 1078 negative and 3080 positive cases. The sum of these individual counts suggests a total dataset of several thousand images. | Retrospective: Data collection for uAid started in October 2017, with a wide range of data sources, including different cooperative hospitals. The data was cleaned and stored in DICOM format. Country of Origin: Not explicitly stated, but the company is in Shanghai, China, suggesting the data is likely from China. |
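As an illustrative statistical check that is not part of the submission, a Wilson score interval shows the sampling uncertainty around the observed 95% uVision compliance rate on the 348-case test week:

```python
from math import sqrt

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(0.95, 348)          # observed compliance, test-week cases
print(f"95% CI: {lo:.3f} - {hi:.3f}")  # ~0.922 - 0.968
```

A one-week sample of this size pins the true compliance rate only to roughly 92-97%, which is worth keeping in mind when reading the 95% figure.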
3. Number and Qualifications of Experts for Ground Truth (Test Set)
| Feature | Number of Experts | Qualifications of Experts |
|---|---|---|
| uVision | Not explicitly stated. The statement says, "The results automatically set by the system are then statistically analyzed by clinical experts." | "Clinical experts." No specific qualifications (e.g., years of experience, specialty) are provided. |
| uAid | Not explicitly stated. The document mentions "The study was approved by the institutional review board of the hospitals," which implies expert review but does not detail the number or roles of experts in establishing the ground truth labels for the specific image characteristics tested. | Not explicitly stated for establishing ground truth labels. |
4. Adjudication Method (Test Set)
| Feature | Adjudication Method |
|---|---|
| uVision | Not explicitly stated. The data was "statistically analyzed by clinical experts." It does not specify if multiple experts reviewed cases or how disagreements were resolved. |
| uAid | Not explicitly stated. The process mentions data cleaning and sorting, and IRB approval, but not the specific adjudication method for individual image labels. |
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- uVision: No MRMC comparative effectiveness study was done to compare human readers with and without AI assistance. The study evaluates the AI's direct assistance in positioning, measured by compliance with clinical criteria, rather than comparing diagnostic performance of human readers.
- uAid: No MRMC comparative effectiveness study was done. The study focuses on the standalone performance of the algorithm in identifying image quality issues, not on how it impacts human reader diagnostic accuracy or efficiency.
6. Standalone Performance (Algorithm Only)
- uVision: Yes, a standalone performance study was done. The "95% compliance" rate refers to the algorithm's direct ability to set system position and FOV that meet clinical technician criteria without a human actively adjusting or guiding the initial AI-generated settings during the compliance evaluation. Technicians could manually adjust those settings if needed.
- uAid: Yes, a standalone performance study was done. The algorithm processes images and outputs a quality classification (Green, Yellow, Red) and identifies specific issues (foreign object, incomplete lung fields, etc.). Its sensitivity and specificity metrics are standalone performance indicators.
7. Type of Ground Truth Used
- uVision: Expert Consensus/Clinical Criteria: The ground truth for uVision's performance (i.e., whether the automatically set position/FOV was "compliant") was established by "clinical experts" based on "clinical technicians' criteria" for proper positioning and shooting requirements.
- uAid: Expert Consensus/Manual Labeling: The ground truth for uAid's evaluation (e.g., presence of foreign objects, complete lung fields, open scapula, centerline deviation) was established through a "classification" process, implying manual labeling or consensus by experts after data collection and cleaning. The document mentions "negative" and "positive" data distributions for each criterion.
8. Sample Size for the Training Set
- uVision: Not explicitly stated in the provided text. The testing data was confirmed to be "collected independently from the training dataset, with separated subjects and during different time periods."
- uAid: Not explicitly stated in the provided text. The document mentions "The data collection started in October 2017, with a wide range of data sources" for training, but does not provide specific numbers for the training set size.
9. How Ground Truth for Training Set was Established
- uVision: Not explicitly stated for the training set. It can be inferred that a similar process to the test set, involving expert review against clinical criteria, would have been used.
- uAid: Not explicitly stated for the training set. Given that the data was collected from "different cooperative hospitals," "multiple cleaning and sorting" was performed, and the study was "approved by the institutional review board," it is highly likely that the ground truth for the training set involved manual labeling by clinical experts/radiologists, followed by a review process (potentially consensus-based or single-expert) to establish the labels for image characteristics and quality.
(167 days)
100176
CHINA
Re: K250788
Trade/Device Name: Definium Tempo Select
Regulation Number: 21 CFR 892.1680
- Digital Radiographic System
Regulation Name: Stationary X-Ray System
Regulation: 21 CFR 892.1680
Predicate Device:
21 CFR 807.92(a)(3)
Discovery XR656 HD with VolumeRad (K191699)
21 CFR 892.1680
China
Reference Device:
Definium Pace Select (K231892)
21 CFR 892.1680 (KPR, MQB)
Class II
The Definium Tempo Select is intended to generate digital radiographic images of the skull, spinal column, chest, abdomen, extremities, and other body parts in patients of all ages. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position and the system is intended for use in all routine radiography exams. Optional image pasting function enables the operator to stitch sequentially acquired radiographs into a single image.
This device is not intended for mammographic applications.
The Definium Tempo Select Radiography X-ray System is designed as a modular system with components that include an Overhead Tube Suspension (OTS) with a tube, an auto collimator and a depth camera, an elevating table, a motorized wall stand, a cabinet with X-ray high voltage generator, a wireless access point and wireless detectors in exam room and PC, monitor and control box with hand-switch in control room. The system generates diagnostic radiographic images which can be reviewed or managed locally and sent through a DICOM network for applications including reviewing, storage and printing.
By leveraging platform components and design, the Definium Tempo Select is similar to the predicate device Discovery XR656 HD (K191699) and the reference device Definium Pace Select (K231892) with regard to user interface layout, patient worklist refresh and selection, protocol selection, image acquisition, and image processing based on the raw image. This product introduces a new high-voltage generator with the same key specifications as the predicate's, and adopts a wireless detector used in the reference device Definium Pace Select. Image Pasting is improved, with exposure parameters individually adjustable per image in both Table and Wall Stand modes. Tube auto-angulation is added for better auto-positioning, a Camera Workflow is introduced based on the existing depth camera, and the OTS is changed to four-axis motorization. The Tissue Equalization feature previously cleared under K013481 was updated with a Deep Learning AI model that provides more consistent image presentations to the user, reducing the additional workflow to adjust image display parameters. Other minor changes include updates to the PC, wall stand, and table.
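For context, tissue equalization generally means compressing the low-frequency "thickness" signal so over- and under-penetrated regions are pulled toward mid-gray while local detail contrast is preserved; per the summary, the subject device uses a Deep Learning model to choose the per-anatomy/view parameters. Below is a minimal non-AI sketch of the underlying idea; it is a generic illustration, not GE's cleared algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tissue_equalize(img: np.ndarray, sigma: float = 50.0,
                    strength: float = 0.6) -> np.ndarray:
    """Generic tissue equalization: split the image into a low-frequency
    thickness estimate and high-frequency detail, then compress the
    thickness component toward its mean. strength=0 returns the input
    unchanged; strength=1 flattens the thickness signal entirely."""
    img = img.astype(np.float32)
    low = gaussian_filter(img, sigma)      # coarse tissue-thickness estimate
    detail = img - low                     # local contrast to preserve
    low_eq = (1 - strength) * low + strength * low.mean()
    return detail + low_eq
```

An AI-driven variant would, per the summary's description, estimate thick and thin regions and pick such parameters per anatomy and view instead of using fixed values.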
The provided FDA 510(k) clearance letter and summary for the Definium Tempo Select offers some, but not all, of the requested information regarding the acceptance criteria and the study proving the device meets them. Notably, specific quantitative acceptance criteria for the AI Tissue Equalization feature are not explicitly stated.
Here's a breakdown of the available information and the identified gaps:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The 510(k) summary does not explicitly list quantitative acceptance criteria for the AI Tissue Equalization algorithm. Instead, it states that "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." Without specific performance metrics or thresholds, a direct comparison in a table format is not possible for the AI component.
For the overall device, the acceptance criteria are implicitly performance metrics that ensure it functions comparably to the predicate device, as indicated by the "Equivalent" and "Identical" discussions in Table 1 (pages 7-11). However, these are primarily functional and technical equivalency statements rather than performance metrics for the AI feature.
Therefore, this section will focus on the AI Tissue Equalization feature as it's the part that underwent specific verification using a clinical image dataset.
AI Tissue Equalization Feature:
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Provides more consistent image presentations to the user. | "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." "The image processing algorithm uses artificial intelligence to dynamically estimate thick and thin regions to improve contrast and visibility in over-penetrated and under-penetrated regions." "The algorithm is the same but parameters per anatomy/view are determined by artificial intelligence to provide better consistence and easier user interface in the proposed device." |
| Reduces additional workflow to adjust image display parameters. | Achieved (stated as a benefit of the AI model). |
| Safety and efficacy are not affected. | Confirmed through verification tests. |
Missing Information:
- Specific quantitative metrics (e.g., AUC, sensitivity, specificity, image quality scores, expert rating differences) that define "more consistent image presentations" are not provided.
- The exact thresholds or target values for these metrics are not stated.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated as a number of images or cases. The document refers to "clinical images retrospectively collected across various anatomies...and Patient Sizes."
- Data Provenance: Retrospective collection from locations in the US, Europe, and Asia.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Missing Information. The document does not specify:
- The number of experts involved in establishing ground truth.
- Their qualifications (e.g., specific subspecialty, years of experience, board certification).
- Whether experts were even used to establish ground truth for this verification dataset, as the purpose was to confirm the AI met performance criteria rather than to directly compare its diagnostic accuracy against human readers or a different ground truth standard.
4. Adjudication Method for the Test Set
Missing Information. No adjudication method (e.g., 2+1, 3+1) is described for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No. A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided document. The verification tests focused on the algorithm meeting performance criteria, not on comparing human reader performance with or without AI assistance.
- Effect Size: Not applicable, as no MRMC study was described.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, implicitly. The "AI Tissue Equalization algorithms verification dataset" was used to perform "verification tests" to confirm that "the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." This suggests a standalone evaluation of the algorithm's output (image presentation consistency) against specific, albeit unstated, criteria. While human review of the output images was likely involved, the study's stated purpose was to verify the algorithm itself.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
Implied through image processing improvement, not diagnostic ground truth. For the AI Tissue Equalization feature, the "ground truth" is not in the traditional clinical diagnostic sense (e.g., disease presence confirmed by pathology). Instead, it appears to be related to the goal of "more consistent image presentations" and improving "contrast and visibility in over-penetrated and under-penetrated regions." This suggests the ground truth was an ideal or desired image presentation quality rather than a disease state. It's likely based on existing best practices for image processing and subjective assessment of image quality by experts, or perhaps a comparative assessment against the predicate's tissue equalization.
Missing Information: The precise method or criteria for this ground truth (e.g., a panel of radiologists rating image quality, a quantitative metric for contrast/visibility) is not specified.
8. The Sample Size for the Training Set
Missing Information. The document describes the "verification dataset" (test set) but does not provide any information on the sample size or composition of the training set used to develop the Deep Learning AI model for Tissue Equalization.
9. How the Ground Truth for the Training Set Was Established
Missing Information. As the training set size and composition are not mentioned, neither is the method for establishing its ground truth. It can be inferred that the training process involved data labeled or optimized to achieve "more consistent image presentations" by dynamically estimating thick and thin regions, likely through expert-guided optimization or predefined image processing targets.
(140 days)
08390
SOUTH KOREA
Re: K250790
Trade/Device Name: INNOVISION-DXII
Regulation Number: 21 CFR 892.1680
| Item | Value |
|---|---|
| Regulation Name | Stationary x-ray system |
| Classification Name | Stationary X-Ray System |
| Regulation Number | 892.1680 |
INNOVISION-DXII is a stationary X-ray system intended for obtaining radiographic images of various anatomical parts of the human body, both pediatric and adult, in a clinical environment. INNOVISION-DXII is not intended for mammography, angiography, interventional, or fluoroscopy use.
INNOVISION-DXII is a stationary X-ray system using single- and three-phase power; it consists of a tube, HVG (high voltage generator), ceiling-suspended X-ray tube support, floor-to-ceiling X-ray tube support, patient table, detector stand, and X-ray control console. The X-ray control console comprises Windows-based software that can view X-ray images and a mobile console, mounted on an Android-based board, that only controls X-ray exposure and has no viewer function.
After the control unit is switched on, the IGBT-based inverter generator produces X-rays at the set exposure position under the selected exposure conditions. The high-voltage generator supplies power to components such as the X-ray tube supports and tables. The X-rays penetrate the patient's body; the detector's scintillator converts the transmitted X-rays to visible light, which the a-Si photodiode/TFT array converts to an electrical signal. This X-ray system is used with FDA-cleared X-ray detectors. The electrical signal is amplified and digitized to create image data, which is transferred to the PC display over an Ethernet interface, where it can be adjusted.
The FDA 510(k) clearance letter for INNOVISION-DXII explicitly states that clinical testing was not performed for this device. Therefore, there is no study described within this document that proves the device meets acceptance criteria related to clinical performance or human reader studies.
The provided document focuses on non-clinical performance tests to demonstrate substantial equivalence to the predicate device.
Here's an analysis based on the information provided, outlining what is and isn't available regarding acceptance criteria and studies:
Acceptance Criteria and Device Performance (Non-Clinical)
The acceptance criteria for the INNOVISION-DXII are implicitly the successful completion of the bench tests according to recognized international standards and demonstration that the differences from the predicate device do not raise new safety or effectiveness concerns. The "reported device performance" is the successful passing of these tests, indicating the device is safe and effective in its essential functions.
Table 1: Acceptance Criteria and Reported Device Performance (Non-Clinical Bench Testing)
| Test Category | Specific Test | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|---|
| X-ray Tube, Collimator, HVG | Tube voltage accuracy | Meet specified accuracy standards (e.g., within a tolerance) | Passed |
| | Accuracy of X-ray tube current | Meet specified accuracy standards | Passed |
| | Reproducibility of the radiation output | Meet specified reproducibility standards | Passed |
| | Linearity and constancy in radiography | Meet specified linearity and constancy standards | Passed |
| | Half Value Layer (HVL) / total filtration | Meet specified HVL/filtration standards | Passed |
| | Accuracy of loading time | Meet specified loading time accuracy | Passed |
| Detector | System instability | No unacceptable system instability observed | Passed |
| | Installation error | No unacceptable installation errors | Passed |
| | System error | No unacceptable system errors | Passed |
| | Image loss, deletion, and restoration | Proper handling of image loss, deletion, and restoration | Passed |
| | Image save error | No unacceptable image save errors | Passed |
| | Image information error | No unacceptable image information errors | Passed |
| | Image transmission and reception | Reliable image transmission and reception | Passed |
| | Header verification | Correct header verification | Passed |
| | Security | Meet specified security requirements | Passed |
| | Image acquisition test | Successful image acquisition | Passed |
| | Search function | Functional search capability | Passed |
| | Application function (ELUI S/W) | Functional application software | Passed |
| | Resolution | Meet specified resolution standards | Passed |
| Mechanical Components (Support, Table) | Moving distance | Accurate and controlled movement within specifications | Passed |
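For reference, the Half Value Layer row rests on the textbook exponential-attenuation relationship (standard physics, not taken from the submission):

$$I(x) = I_0\, e^{-\mu x}, \qquad \mathrm{HVL} = \frac{\ln 2}{\mu}$$

i.e., the HVL is the filter thickness (usually aluminum) that halves the air kerma; IEC 60601-1-3 sets minimum HVL values that verify adequate total filtration.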
Study Details for Demonstrating Substantial Equivalence (Non-Clinical)
The study described is a series of bench tests (functional tests) conducted to ensure the safety and essential performance effectiveness of the INNOVISION-DXII X-ray system.
- Sample size used for the test set and data provenance:
- Sample Size: Not applicable. These are functional tests of the device itself rather than tests on a dataset. The "sample" refers to the physical device components and the system as a whole.
- Data Provenance: Not applicable in the context of image data. The tests are performed on the device in a laboratory setting. The standards referenced are international (IEC).
- Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- Not applicable. Ground truth in this context refers to the expected functional performance of the device according to engineering specifications and regulatory standards (IEC 60601 series). These standards define the "ground truth" for electrical safety, mechanical performance, and radiation emission/accuracy. Experts are involved in conducting and interpreting these standardized tests, but there isn't a "ground truth" established by a panel of medical experts as there would be for image interpretation.
- Adjudication method for the test set:
- Not applicable. The tests are typically pass/fail based on objective measurements against predefined thresholds specified in the IEC standards. There is no subjective adjudication process mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "Clinical testing is not performed for the subject device as the detectors were already 510(k) cleared and the imaging software (Elui) is the same as the predicate device. There were no significant changes." This device is a stationary X-ray system, not an AI-assisted diagnostic tool for image interpretation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- No, a standalone algorithm performance study was not done. This device is an X-ray imaging system; it does not feature a standalone diagnostic algorithm. While it includes an imaging software (Elui), its performance is assessed as part of the overall system's image acquisition and processing capabilities, not as an independent diagnostic algorithm.
- The type of ground truth used:
- For the non-clinical bench tests, the "ground truth" is defined by the engineering specifications and the requirements of the referenced international standards (IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54). These standards specify acceptable ranges for parameters like tube voltage accuracy, radiation output linearity, image resolution, and system stability.
- The sample size for the training set:
- Not applicable. This device is an X-ray system, not a machine learning algorithm that requires a training set of data.
- How the ground truth for the training set was established:
- Not applicable, as there is no training set for this device.
Summary of Clinical/AI-related information:
The FDA 510(k) clearance for INNOVISION-DXII does not include any clinical studies or evaluations of AI performance, human reader performance, or diagnostic accuracy. The clearance is based purely on the non-clinical bench testing demonstrating that the device meets safety and essential performance standards and is substantially equivalent to its predicate device for obtaining radiographic images.
(142 days)
MALVERN, PA 19355
Re: K250738
Trade/Device Name: YSIO X.pree
Regulation Number: 21 CFR 892.1680
Stationary x-ray system
Classification Panel: Radiology
Classification Regulation: 21 CFR §892.1680
| Item | Subject Device | Predicate Device | Remark |
|---|---|---|---|
| Regulation Description | Stationary X-Ray System | Stationary X-Ray System | Same |
| Regulation Number | §892.1680 | §892.1680 | Same |
| Classification Product Code | KPR | KPR | Same |
| Model Number | 11107464 | | |
The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The following modifications have been made to the cleared predicate device:
- Updated generator
- Updated collimator
- Updated patient table
- Updated Bucky Wall Stand
- New X.wi-D 24 portable wireless detector
- New virtual AEC selection
- New status indicator lights
The provided 510(k) clearance letter and summary for the YSIO X.pree device (K250738) indicate that the device is substantially equivalent to a predicate device (K233543). The submission primarily focuses on hardware and minor software updates, asserting that these changes do not impact the device's fundamental safety and effectiveness.
However, the provided text does not contain the detailed information typically found in a clinical study report regarding acceptance criteria, sample sizes, ground truth establishment, or expert adjudication for an AI-enabled medical device. This submission appears to be for a conventional X-ray system with some "AI-based" features like auto-cropping and auto-collimation, which are presented as functionalities that assist the user rather than standalone diagnostic algorithms requiring extensive efficacy studies for regulatory clearance.
Based on the provided document, here's an attempt to answer your questions, highlighting where information is absent or inferred:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, or image quality scores) with corresponding reported device performance values for the AI features. The "acceptance" appears to be qualitative and based on demonstrating equivalence to the predicate device and satisfactory usability/image quality.
If we infer acceptance criteria from the "Summary of Clinical Tests" and "Conclusion as to Substantial Equivalence," the criteria seem to be:
| Acceptance Criteria (Inferred) | Reported Device Performance (as stated in document) |
|---|---|
| Overall System: Intended use met, clinical needs covered, stability, usability, performance, and image quality are satisfactory. | "The clinical test results stated that the system's intended use was met, and the clinical needs were covered." |
| New Wireless Detector (X.wi-D24): Images acquired are of adequate radiographic quality and sufficiently acceptable for radiographic usage. | "All images acquired with the new detector were adequate and considered to be of adequate radiographic quality." and "All images acquired with the new detector were sufficiently acceptable for radiographic usage." |
| Substantial Equivalence: Safety and effectiveness are not affected by changes. | "The subject device's technological characteristics are same as the predicate device, with modifications to hardware and software features that do not impact the safety and effectiveness of the device." and "The YSIO X.pree, the subject of this 510(k), is similar to the predicate device. The operating environment is the same, and the changes do not affect safety and effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a number of cases or images. The "Customer Use Test (CUT)" was performed at two university hospitals.
- Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg" in Augsburg, Germany, and "Klinikum rechts der Isar, Technische Universität München" in Munich, Germany. The document states "clinical image quality evaluation by a US board-certified radiologist" for the new detector, implying that the images may have originated at the German sites but were reviewed by a US expert. The study design appears to be prospective, in the sense that the new device was evaluated in clinical use rather than through analysis of historical data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- Number of Experts: For the overall system testing (CUT), it's not specified how many clinicians/radiologists were involved in assessing "usability," "performance," and "image quality." For the new wireless detector (X.wi-D24), it states "a US board-certified radiologist."
- Qualifications of Experts: For the new wireless detector's image quality evaluation, the expert was a "US board-certified radiologist." No specific experience level (e.g., years of experience) is provided.
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1 consensus) is described for the clinical evaluation or image quality assessment. The review of the new detector was done by a single US board-certified radiologist, not multiple independent readers with adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and what was the effect size of how much human readers improve with AI vs. without AI assistance.
- MRMC Study: No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance was evaluated. The AI features mentioned (Auto Cropping, Auto Thorax Collimation, Auto Long-Leg/Full-Spine collimation) appear to be automatic workflow enhancements rather than diagnostic AI intended to directly influence reader diagnostic accuracy.
- Effect Size: Not applicable, as no such study was conducted or reported.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done.
The document does not describe any standalone performance metrics for the AI-based features (Auto Cropping, Auto Collimation). These features seem to be integrated into the device's operation to assist the user, rather than providing a diagnostic output that would typically be evaluated in a standalone study. The performance of these AI functions would likely be assessed as part of the overall "usability" and "performance" checks.
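For intuition only, a crude non-AI approximation of auto-cropping finds the bounding box of the exposed field by thresholding. This generic sketch is not Siemens' AI-based method, and it assumes the exposed field is brighter than the collimated border:

```python
import numpy as np

def auto_crop(img: np.ndarray, thresh_frac: float = 0.1) -> np.ndarray:
    """Crop to the bounding box of pixels above a fraction of the maximum
    signal, approximating the collimated field. Assumes at least one
    pixel exceeds the threshold."""
    mask = img > thresh_frac * img.max()
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

A learned model would replace the fixed threshold with a segmentation of the collimated field, which is presumably what the device's "AI-based" cropping and collimation features refer to.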
7. The Type of Ground Truth Used
- For the overall system and the new detector, the "ground truth" seems to be expert opinion/consensus (qualitative clinical assessment) on the system's performance, usability, and the adequacy of image quality for radiographic use. There is no mention of pathology, outcomes data, or other definitive "true" states related to findings on the images.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size for the AI-based auto-cropping and auto-collimation features. This is typical for 510(k) submissions of X-ray systems where such AI features are considered ancillary workflow tools rather than primary diagnostic aids.
9. How the Ground Truth for the Training Set was Established
Since no training set information is provided, there is no information on how ground truth was established for any training data.
In summary: The 510(k) submission for the YSIO X.pree focuses on demonstrating substantial equivalence for an updated X-ray system. The "AI-based" features appear to be workflow automation tools that were assessed as part of general system usability and image quality in a "Customer Use Test" and a limited clinical image quality evaluation for the new detector. It does not contain the rigorous quantitative performance evaluation data for AI software as might be seen for a diagnostic AI algorithm that requires a detailed clinical study for clearance.
(179 days)
Re: K250211
Trade/Device Name: Yushan x-ray flat panel detector
Regulation Number: 21 CFR 892.1680
Review panel: Radiology
Product code: MQB
Regulation number: 21 CFR 892.1680
The Wireless and Wired Yushan X-Ray Flat Panel Detector is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector is not intended for mammography, fluoroscopy, tomography, and angiography applications. The use of this product is not recommended for pregnant women and the risk of radioactivity must be evaluated by a physician.
The Subject Device, the Yushan X-Ray Flat Panel Detector, is a static digital X-ray detector. Models V14C PLUS, F14C PLUS, and V17C PLUS are portable (wireless/wired) detectors, while the V17Ce PLUS is a non-portable (wired) detector. The Subject Device is equivalent to its predicate devices K243171, K201528, K210988, and K220510.
The Subject Device is designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free-cassette exams. The Subject Device has a memory exposure mode and an extended image readout feature. Additional features include a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water. The Detector is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI).
The Subject Device can automatically collect x-ray images from an x-ray source. It collects x-rays and digitizes the images for their transfer and display to a computer. The x-ray generator (an integral part of a fully-functional diagnostic system) is not part of the device. The sensor includes a flat panel for x-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating and storing x-ray images.
The Subject Device works with DROC (Digital Radiography Operating Console), Xresta, or the DR console, which are unchanged from the predicate devices (cleared under K201528 for DROC and K243171 for Xresta and the DR console). DROC or Xresta is software running on a Windows PC/laptop that serves as the user interface for the radiologist to perform a general radiography exam. Its functions include:
- Detector status update
- Xray exposure workflow
- Image viewer and measurement
- Post image process and DICOM file I/O
- Image database: DROC or Xresta supports the necessary DICOM Services to allow a smooth integration into the clinical network
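As a generic illustration of the DICOM network integration mentioned above (not the vendor's DROC/Xresta implementation; the AE title, host, port, and file name are placeholders), a C-STORE push with the pynetdicom library looks like this:

```python
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

# Push an acquired radiograph to a PACS node via DICOM C-STORE.
ae = AE(ae_title="CONSOLE_SIM")                # placeholder AE title
ae.requested_contexts = StoragePresentationContexts
assoc = ae.associate("192.168.1.10", 11112)    # placeholder PACS host/port
if assoc.is_established:
    status = assoc.send_c_store(dcmread("exam.dcm"))
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```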
The DR Console is a software/app-based device; the software itself is the device. Its operating OTS (off-the-shelf) platform is iOS (iOS 16 or above); the safety and effectiveness of this OTS have been assessed through software compatibility testing and a summative usability evaluation. All functions operate normally and successfully under this OTS framework. The functions include:
- Imaging procedure review
- Worklist settings
- Detector connection settings
- Calibration
- Image processing
The software level of concern for the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console has been determined to be basic based on the "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices"; and the cybersecurity risks of the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console have also been addressed to assure that no new or increased cybersecurity risks were introduced as a part of device risk analysis. These risks are defined as sequence of events leading to a hazardous situation, and the controls for these risks were treated and implemented as proposed in the risk analysis (e.g., requirements, verification).
Acceptance Criteria and Study for Yushan X-Ray Flat Panel Detector (K250211)
This documentation describes the acceptance criteria and the study conducted for the Yushan X-Ray Flat Panel Detector (models V14C PLUS, F14C PLUS, V17C PLUS, V17Ce PLUS). The device has received 510(k) clearance (K250211) based on substantial equivalence to predicate devices (K243171, K201528, K210988, K220510).
The primary change in the subject device compared to its predicates is an increase in the CsI scintillator thickness from 400µm (in some predicate CsI models) to 600µm. This change impacts image quality metrics but, according to the manufacturer, does not introduce new safety or effectiveness concerns.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this device are implicitly tied to demonstrating that the changes in scintillator thickness do not negatively impact safety or effectiveness, and ideally, improve image quality. The primary performance metrics affected by the scintillator change are DQE, MTF, and Sensitivity.
| Performance Metric | Acceptance Criteria (Implicit: No degradation in clinical utility compared to predicate, ideally improvement) | Reported Device Performance (Subject Device - 600µm CsI) | Predicate Device (CsI Models - 400µm CsI) Performance |
|---|---|---|---|
| DQE (Detective Quantum Efficiency) @ 1 lp/mm, RQA5 | Maintain or improve upon predicate's CsI DQE value. | 0.60 (Typical) | 0.48 - 0.50 |
| DQE (Detective Quantum Efficiency) @ 2 lp/mm | (Not explicitly stated for acceptance, but shown for performance) | 0.45 (Typical) | Not explicitly listed for predicate |
| MTF (Modulation Transfer Function) @ 1 lp/mm, RQA5 | Maintain comparable MTF to predicate's CsI MTF (acknowledging potential trade-offs for improved DQE). | 0.64 (Typical) | 0.63 - 0.69 |
| MTF (Modulation Transfer Function) @ 2 lp/mm | (Not explicitly stated for acceptance, but shown for performance) | 0.34 (Typical) | Not explicitly listed for predicate |
| Sensitivity | (Not explicitly stated for acceptance, but shown for performance) | 715 lsb/uGy | Not explicitly listed for predicate |
| Noise Performance | Superior noise performance compared to predicate. | Superior noise performance | Inferior to subject device |
| Image Smoothness | Smoother image quality compared to predicate. | Smoother image quality | Inferior to subject device |
| Compliance with Standards | Conformance to relevant safety and performance standards (e.g., IEC 60601 series, ISO 10993). | All specified standards met. | All specified standards met. |
| Basic Software Level of Concern | Maintained as basic. | Level of concern remains basic. | Level of concern remains basic. |
| Cybersecurity Risks | No new or increased cybersecurity risks introduced. | Risks addressed, no new or increased risks. | Risks addressed. |
| Load-Bearing Characteristics | Pass specified tests. | Passed. | Passed. |
| Protection against ingress of water | Pass specified tests. | Passed. | Passed. |
| Biocompatibility | Demonstrated through ISO 10993 series. | Demonstrated. | Demonstrated. |
Summary of Device Performance vs. Acceptance:
The subject device demonstrates improved DQE, superior noise performance, and smoother images compared to the predicate device (specifically, CsI models), while maintaining comparable MTF and meeting all other safety and performance standards. The slight reduction in MTF compared to the highest performing predicate CsI model (0.69 vs 0.64 at 1 lp/mm) is likely considered an acceptable trade-off given the improvements in DQE and noise, and it is still significantly higher than GOS models.
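For context, DQE, MTF, and noise are tied together by the standard IEC 62220-1 relationship (a general definition, not specific to this submission):

$$\mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{q \cdot \mathrm{NNPS}(f)}$$

where $q$ is the incident photon fluence (quanta/mm²) for the stated beam quality (here RQA5) and NNPS is the noise power spectrum normalized by the squared mean signal. This makes the reported trade-off explicit: a thicker CsI layer absorbs more quanta, raising DQE and lowering noise, but spreads more light, slightly lowering MTF.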
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the numerical sample size for the test set used for the performance evaluation of the image quality metrics (DQE, MTF, Sensitivity, noise, smoothness). These metrics are typically derived from physical measurements on a controlled test setup rather than a clinical image dataset.
Data Provenance: Not explicitly stated regarding country of origin or retrospective/prospective nature. However, the evaluation results for image quality metrics, noise, and smoothness are generated internally by the manufacturer during design verification and validation activities.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
Not applicable. The ground truth for DQE, MTF, and Sensitivity measurements is established through standardized physical phantom measurements (e.g., using RQA5 beam quality) rather than expert consensus on clinical images. These are quantifiable engineering parameters.
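As an illustration of how such phantom measurements are reduced to numbers, here is a minimal sketch of the slanted-edge MTF method in Python (the function name, the Hann window, and the single-profile simplification are my own choices; IEC 62220-1 practice also involves edge-angle estimation and sub-pixel binning, omitted here):

```python
import numpy as np

def mtf_from_edge(esf, sample_pitch_mm):
    """Presampled MTF estimate from an oversampled edge spread function.

    esf: 1-D array of pixel values across a slanted-edge test device.
    sample_pitch_mm: spacing of the ESF samples in mm.
    Returns (spatial frequencies in lp/mm, MTF normalized to 1 at DC).
    """
    lsf = np.gradient(esf)            # differentiate ESF -> line spread function
    lsf = lsf * np.hanning(lsf.size)  # window to suppress spectral leakage
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)
    return freqs, spectrum / spectrum[0]
```

Sampling the returned curve at 1 and 2 lp/mm (e.g., with `np.interp`) yields values directly comparable to those tabulated above.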
4. Adjudication Method for the Test Set
Not applicable. The evaluation of DQE, MTF, and Sensitivity is based on objective instrumental measurements, not on reader interpretations or consensus methods.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or performed as part of this 510(k) submission. The submission focuses on demonstrating substantial equivalence based on technical specifications and physical performance measurements rather than a clinical trial assessing reader performance.
6. Standalone Performance Study
Yes, a standalone performance evaluation was conducted for the device. The reported DQE, MTF, and Sensitivity values, as well as the assessments of noise performance and image smoothness, are measures of the algorithm's (and the underlying detector hardware's) intrinsic performance without human-in-the-loop assistance. The comparison of these metrics between the subject device and the predicate device forms the basis of the standalone performance study.
7. Type of Ground Truth Used
The ground truth used for the performance evaluations (DQE, MTF, Sensitivity, noise, smoothness) is based on objective physical measurements and standardized phantom evaluations. These are quantitative technical specifications derived under controlled laboratory conditions, not expert consensus on pathology, clinical outcomes, or interpretations of patient images.
8. Sample Size for the Training Set
Not applicable. This device is an X-ray flat panel detector, a hardware component that captures images. While it includes embedded software (firmware, image processing algorithms), the document does not indicate that these algorithms rely on a "training set" in the context of machine learning. The image processing algorithms are likely deterministic or parameter-tuned, not learned from a large dataset like an AI model for diagnosis.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no indication of a machine learning "training set" as described in the context of AI models. The ground truth for the development and validation of the detector's physical performance characteristics is established through established metrology and engineering testing protocols.
(131 days)
Stationary x-ray system
Classification Panel: Radiology
Classification Regulation: 21 CFR §892.1680
LUMINOS Q.namix T and LUMINOS Q.namix R are devices intended to visualize anatomical structures by converting an X-ray pattern into a visible image. Each is a multifunctional, general R/F system, suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas like arthrography, angiography and pediatrics.
LUMINOS Q.namix T and LUMINOS Q.namix R are not intended to be used for mammography examinations.
The LUMINOS Q.namix T is an under-table fluoroscopy system and the LUMINOS Q.namix R is an over-table fluoroscopy system. Both systems are multifunctional, general R/F systems, suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas like arthrography, angiography and pediatrics. They are designed as modular systems with components such as a main fluoro table including fixed fluoroscopy detector and X-ray tube, a ceiling suspension with X-ray tube, a Bucky wall stand, an X-ray generator, monitors, a bucky tray in the table, as well as portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
This FDA 510(k) clearance letter and summary discuss the LUMINOS Q.namix T and LUMINOS Q.namix R X-ray systems. The provided documentation does not include specific acceptance criteria (e.g., numerical thresholds for image quality, diagnostic accuracy, or performance metrics) in the same way an AI/ML device often would. Instead, it relies on demonstrating substantial equivalence to predicate devices and adherence to recognized standards.
The study presented focuses primarily on image quality evaluation for the new detectors (X.fluoro and X.wi-D24) for diagnostic acceptability, rather than establishing acceptance criteria for the entire system's overall performance.
Here's an attempt to extract and present the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
As explicit quantitative acceptance criteria for the overall device performance are not stated in the provided 510(k) summary, this section will reflect the available qualitative performance assessment for the new detectors. The primary "acceptance criterion" implied for the overall device is substantial equivalence to predicate devices and acceptability for diagnostic use.
| Feature/Metric | Acceptance Criteria (Implied/Direct) | Reported Device Performance (LUMINOS Q.namix T/R with new detectors) |
|---|---|---|
| Overall Device Equivalence | Substantially equivalent to predicate devices (Luminos Agile Max, Luminos dRF Max) in indications for use, design, material, functionality, technology, and energy source. | Systems are comparable and substantially equivalent to predicate devices. Test results show comparability. |
| New Detector Image Quality (X.fluoro, X.wi-D24) | Acceptable for diagnostic use in radiography & fluoroscopy. | Evaluated images and fluorography studies from different body regions were qualified for proper diagnosis by a US board-certified radiologist and by expert evaluations. |
| Compliance with Standards | Compliance with relevant medical electrical safety, performance, and software standards (e.g., IEC 60601 series, ISO 14971, IEC 62304, DICOM). | The LUMINOS Q.namix T/LUMINOS Q.namix R systems were tested and comply with the listed voluntary standards. |
| Risk Management | Application of risk management process (per ISO 14971). | Risk Analysis was applied. |
| Software Life Cycle | Application of software life cycle processes (per IEC 62304). | IEC 62304 (Medical device software - Software life cycle processes) was applied. |
| Usability | Compliance with usability engineering standards (per IEC 60601-1-6, IEC 62366-1). | IEC 60601-1-6 and IEC 62366-1 were applied. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Description: "expert evaluations" for the new detectors X.fluoro and X.wi-D24.
- Sample Size: The exact number of images or fluorography studies evaluated is not specified. The document mentions "multiple images and fluorography studies from different body regions" for the US board-certified radiologist's evaluation.
- Data Provenance:
- Countries of Origin: Germany (University Hospital Augsburg, Klinikum rechts der Isar Munich, Herz-Jesu-Krankenhaus Münster/Hiltrup) and Belgium (ZAS Jan Palfijn Hospital of Merksem).
- Retrospective or Prospective: Not explicitly stated, but clinical image quality evaluations often involve prospective data collection or a mix with retrospective cases. Since the evaluations concern new detectors and clinical image quality, real or simulated clinical scenarios are implied.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts:
- Initial Evaluations: Multiple "expert evaluations" (implies more than one) were conducted across the listed hospitals. The exact number of individual experts is not specified.
- Specific Evaluation: One "US board-certified radiologist" performed a dedicated clinical image quality evaluation.
- Qualifications of Experts:
- For the general "expert evaluations": Not specified beyond being "experts."
- For the specific evaluation: "US board-certified radiologist." No mention of years of experience is provided.
4. Adjudication Method for the Test Set
The document does not specify any formal adjudication method (e.g., 2+1, 3+1 consensus voting) for establishing ground truth or evaluating the image quality. The evaluations appear to be individual or group assessments leading to a conclusion of "acceptability for diagnostic use."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? The document does not describe a formal MRMC comparative effectiveness study designed to quantify the improvement of human readers with AI vs. without AI assistance.
- Effect Size of Human Reader Improvement: Therefore, no effect size is reported.
- Note: While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the study described is an evaluation of the detectors' image quality and the overall system's substantial equivalence, not the clinical impact of these specific AI features on human reader performance.
6. Standalone Performance Study (Algorithm Only)
- The document primarily describes an evaluation of the new detectors within the LUMINOS Q.namix T/R systems and the overall system's substantial equivalence.
- While the device includes "AI-based Auto Cropping" and "AI-based Automatic Collimation," the document does not report on a standalone performance study for these AI algorithms in isolation from the human-in-the-loop system. The AI features are listed as technological characteristics that contribute to the device's overall updated design; a toy sketch of what automated field cropping can look like follows below.
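Purely as illustration (nothing in the submission describes the vendor's method; the function name, threshold, and heuristic here are assumptions), collimated-field cropping can be as simple as thresholding row and column means:

```python
import numpy as np

def auto_crop(image, threshold_fraction=0.05):
    """Crop unexposed (collimated) borders from a radiograph.

    Simple intensity heuristic: rows/columns whose mean signal stays
    below a fraction of the image maximum are treated as lying outside
    the exposed field. threshold_fraction is an arbitrary choice.
    """
    level = threshold_fraction * image.max()
    rows = np.where(image.mean(axis=1) > level)[0]
    cols = np.where(image.mean(axis=0) > level)[0]
    if rows.size == 0 or cols.size == 0:
        return image  # no exposed field detected; return unchanged
    return image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
```

A production feature would also need to handle oblique collimator edges, markers, and scatter, which simple thresholding does not.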
7. Type of Ground Truth Used
For the detector image quality evaluation, the ground truth was based on expert assessment ("qualified for proper diagnosis"). This falls under expert consensus or expert judgment regarding diagnostic acceptability.
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set for any AI components. The focus of this 510(k) summary is on substantiating equivalence and safety/effectiveness of the entire X-ray system, not on the development of individual AI algorithms within it.
9. How the Ground Truth for the Training Set Was Established
Since no information is provided about a training set, the method for establishing its ground truth is not mentioned in the document.
(104 days)
Turnpike
WAYNE, NJ 07470
Re: K250665
Trade/Device Name: SKR 3000
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary x-ray system
Common Name: Digital Radiography
Classification Name: Stationary x-ray system (21 CFR 892.1680)
This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures. This device is not indicated for use in mammography, fluoroscopy, and angiography applications.
The SKR 3000 digital radiography system performs X-ray imaging of the human body using a flat-panel X-ray detector that outputs a digital signal to an image processing device; the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.
- This device is not intended for use in mammography.
- This device is also used for carrying out exposures on children.
The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic documentation level. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal processing/scattered radiation correction (Intelligent-Grid). Intelligent-Grid was cleared in K151465.
The FPDs used in the SKR 3000 communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption secures the wireless connection.
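To make the transfer step concrete, a radiograph leaving such a console for a filing system typically travels as a DICOM C-STORE. Below is a minimal sketch using the open-source pynetdicom library; the AE titles, host, port, and file name are placeholders, and this illustrates the DICOM service generically, not the manufacturer's implementation:

```python
from pydicom import dcmread
from pynetdicom import AE

# Digital X-Ray Image Storage - For Presentation
DX_FOR_PRESENTATION = "1.2.840.10008.5.1.4.1.1.1.1"

ae = AE(ae_title="DR_CONSOLE")            # hypothetical local AE title
ae.add_requested_context(DX_FOR_PRESENTATION)

ds = dcmread("chest_dx.dcm")              # placeholder image file
assoc = ae.associate("pacs.example.org", 104, ae_title="FILING_SYS")
if assoc.is_established:
    status = assoc.send_c_store(ds)       # push the image to the archive
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```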
The SKR 3000 is distributed under a commercial name AeroDR 3.
The purpose of the current premarket submission is to add pediatric use indications for the SKR 3000 imaging system.
The provided FDA 510(k) clearance letter and summary for the SKR 3000 device focus on adding a pediatric use indication. However, they do not contain the detailed performance data, acceptance criteria, or study specifics typically found in a clinical study report. The document states that "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'" and that "pediatric image evaluation using small-size phantoms was performed on the P-53." It also mentions that "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."
Based on the information provided, it's not possible to fully detail the acceptance criteria and the study that proves the device meets them according to your requested format. The document implies that the "acceptance criteria" likely revolved around demonstrating "substantially equivalent image performance" to a predicate device (AeroDR System 2 with P-52) for pediatric use, primarily through phantom studies, rather than a clinical study with human patients and detailed diagnostic performance metrics.
Therefore, many of the requested fields cannot be filled directly from the provided text. I will provide the information that can be inferred or directly stated from the document and explicitly state when information is not available.
Disclaimer: The information below is based solely on the provided 510(k) clearance letter and summary. For a comprehensive understanding, one would typically need access to the full 510(k) submission, which includes the detailed performance data and study reports.
Acceptance Criteria and Device Performance Study for SKR 3000 (Pediatric Use Indication)
The primary objective of the study mentioned in the 510(k) summary was to demonstrate substantial equivalence for the SKR 3000 (specifically with detector P-53) for pediatric use, compared to a predicate device (AeroDR System 2 with P-52).
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the submission (adding a pediatric indication based on substantial equivalence), the acceptance criteria are not explicitly quantifiable metrics like sensitivity/specificity for a specific condition. Instead, the focus was on demonstrating "substantially equivalent image performance" through phantom studies.
| Acceptance Criteria (Inferred from Document) | Reported Device Performance (Inferred/Stated) |
|---|---|
| Image quality of SKR 3000 with P-53 for pediatric applications to be "substantially equivalent" to predicate device (AeroDR System 2 with P-52). | "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use." |
| Compliance with "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" for pediatric image evaluation using small-size phantoms. | "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'. Pediatric image evaluation using small-size phantoms was performed on the P-53." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (Test Set): Not specified. The document indicates "small-size phantoms" were used, implying a phantom study, not a human clinical trial. The number of phantom images or specific phantom configurations is not detailed.
- Data Provenance: Not specified. Given it's a phantom study, geographical origin is less relevant than for patient data. It's an internal study conducted to support the 510(k) submission. Retrospective or prospective status is not applicable as it's a phantom study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified. Given this was a phantom study, ground truth would likely be based on physical measurements of the phantoms and expected image quality metrics, rather than expert interpretation of pathology or disease. If human evaluation was part of the "comparative image evaluation," the number and qualifications of evaluators are not provided.
- Qualifications: Not specified.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. For a phantom study demonstrating "substantially equivalent image performance," adjudication methods like 2+1 or 3+1 (common in clinical reader studies) are generally not applicable. The comparison would likely involve quantitative metrics from the generated images.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- MRMC Study: No. The document states "comparative image evaluation" and "pediatric image evaluation using small-size phantoms." This strongly implies a technical performance assessment using phantoms, rather than a clinical MRMC study with human readers interpreting patient cases. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance in image interpretation (e.g., CAD) is not the focus of this submission; it's about the imaging system's ability to produce quality images for diagnosis.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
- Standalone Performance: Not applicable in the traditional sense of an AI algorithm's diagnostic performance. The device is an X-ray imaging system. The "performance" being evaluated is its ability to generate images, not to provide an automated diagnosis. The "Intelligent-Grid" feature mentioned is an image processing algorithm (scattered radiation correction), but its standalone diagnostic performance is not the subject of this specific submission; its prior clearance (K151465) is referenced.
7. The Type of Ground Truth Used
- Ground Truth Type: For the pediatric image evaluation, the ground truth was based on phantom characteristics and expected image quality metrics. This is inferred from the statement "pediatric image evaluation using small-size phantoms was performed."
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. The SKR 3000 is an X-ray imaging system, not an AI model that requires a "training set" in the machine learning sense for its primary function of image acquisition. While image processing algorithms (like Intelligent-Grid) integrated into the system might have been developed using training data, the submission focuses on the imaging system's performance for pediatric use.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as no training set (in the context of an AI model's image interpretation learning) is explicitly mentioned or relevant for the scope of this 510(k) submission for an X-ray system.
(135 days)
Trade/Device Name: Wireless/ Wired X-Ray Flat Panel Detectors
Regulation Number: 21 CFR 892.1680
Regulation Name: Stationary x-ray system
Review Panel: Radiology
Product Name: Wireless/ Wired X-Ray Flat Panel Detectors
Applicable Standards: ISO 13485, ISO 14971, ANSI/AAMI ES60601-1, IEC 62220-1-1, ISO 20417; FDA regulation 21 CFR 892.1680
Allengers Wireless/ Wired X-Ray Flat Panel Detectors, used with the AWS (Acquisition Workstation Software) Synergy DR FDX/Synergy DR, are used to acquire, process, display, store, and export radiographic images of all body parts using radiographic techniques. They are intended for use in general radiographic applications wherever a conventional film/screen or CR system is used.
Allengers Wireless/Wired X-ray Flat Panel Detectors are not intended for mammography applications.
The Wireless/ Wired X-Ray Flat Panel Detectors are designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, in a table bucky for recumbent exams, or removed from the bucky for non-grid or free-cassette exams. The detectors offer a memory exposure mode and an extended image readout feature, along with a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, LED indicators for easy detector identification, and extra protection against ingress of water. The device is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI). The detector automatically senses exposure from an X-ray source; it converts the X-rays into a digital image and transfers it to a desktop computer, laptop, or tablet for display. The X-ray generator (an integral part of a complete X-ray system) is not part of the submission. The sensor includes a flat panel for X-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating, and storing X-ray images; the personal computer is not part of this submission.
Wireless/ Wired X-Ray Flat Panel Detectors are used with the accessory "AWS (Acquisition Workstation Software) Synergy DR FDX/ Synergy DR," which runs on a Windows-based desktop computer, laptop, or tablet as the user interface for the radiologist to perform a general radiography exam. Its functions include:
- User Login
- Display Connectivity status of hardware devices like detector
- Patient entry (Manual, Emergency and Worklist)
- Exam entry
- Image processing
- Search patient Data
- Print DICOM Image
- Exit
This document describes the 510(k) clearance for Allengers Wireless/Wired X-Ray Flat Panel Detectors (K243734). The core of the submission revolves around demonstrating substantial equivalence to a predicate device (K223009) and several reference devices (K201528, K210988, K220510). The key modification in the subject device compared to the predicate is an increased scintillator thickness, from 400µm to 600µm, which in turn affects the Modulation Transfer Function (MTF) and Detective Quantum Efficiency (DQE) of the device.
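The direction of that trade-off is easy to see in a toy model, sketched below; every number in it (the attenuation coefficient and the blur constant in particular) is an illustrative assumption, not a measured CsI property. A thicker layer stops more X-ray quanta, which helps DQE, while the extra lateral light spread blurs fine detail, which hurts MTF:

```python
import numpy as np

MU_CSI = 0.0025  # assumed effective X-ray attenuation of CsI, 1/um (illustrative)
BLUR_K = 0.0002  # assumed light-spread sigma per um of thickness, mm/um (illustrative)

def absorbed_fraction(thickness_um):
    """Beer-Lambert fraction of incident quanta stopped by the scintillator."""
    return 1.0 - np.exp(-MU_CSI * thickness_um)

def gaussian_mtf(freq_lp_mm, thickness_um):
    """MTF of a Gaussian light spread whose width grows with thickness."""
    sigma_mm = BLUR_K * thickness_um
    return np.exp(-2.0 * (np.pi * freq_lp_mm * sigma_mm) ** 2)

for t_um in (400, 600):
    print(f"{t_um} um: absorbed={absorbed_fraction(t_um):.2f}, "
          f"MTF@1={gaussian_mtf(1.0, t_um):.2f}, MTF@2={gaussian_mtf(2.0, t_um):.2f}")
```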
Based on the provided text, the 510(k) relies on non-clinical performance data (bench testing and adherence to voluntary standards) to demonstrate substantial equivalence, rather than extensive clinical studies involving human subjects or AI-assisted human reading.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the comparison to the predicate device's performance, particularly for image quality metrics (MTF and DQE). The goal is to demonstrate that despite changes, the device maintains diagnostic image quality and does not raise new safety or effectiveness concerns.
| Metric (Units) | Acceptance Criteria (Implicit: Maintain Diagnostic Image Quality) | Reported Device Performance (Subject Device) | Comments/Relation to Predicate |
|---|---|---|---|
| DQE @ 0.5 lp/mm (Max.) | $\ge$ Predicate: 0.78 (for Glass) / 0.79 (for Non-Glass) | 0.85 (for G4343RC, G4343RWC, G4336RWC - Glass) 0.79 (for T4336RWC - Non-Glass) | Meets/Exceeds predicate values. Improves for Glass substrate models. Matches for Non-Glass substrate model. |
| DQE @ 1 lp/mm (Max.) | $\ge$ Predicate: 0.55 (for Glass) / 0.58 (for Non-Glass) | 0.69 (for G4343RWC, G4336RWC, G4343RC - Glass) 0.58 (for T4336RWC - Non-Glass) | Meets/Exceeds predicate values. Improves for Glass substrate models. Matches for Non-Glass substrate model. |
| DQE @ 2 lp/mm (Max.) | $\ge$ Predicate: 0.47 (for Glass) / 0.49 (for Non-Glass) | 0.54 (for G4343RC, G4343RWC, G4336RWC - Glass) 0.49 (for T4336RWC - Non-Glass) | Meets/Exceeds predicate values. Improves for Glass substrate models. Matches for Non-Glass substrate model. |
| MTF @ 0.5 lp/mm (Max.) | $\sim$ Predicate: 0.90 (for Glass) / 0.85 (for Non-Glass) | 0.95 (for G4343RC, G4343RWC, G4336RWC - Glass) 0.90 (for T4336RWC - Non-Glass) | Meets/Exceeds predicate values. Improves for Glass substrate models. Improves for Non-Glass substrate model. |
| MTF @ 1 lp/mm (Max.) | $\sim$ Predicate: 0.76 (for Glass) / 0.69 (for Non-Glass) | 0.70 (for G4343RWC, G4336RWC, G4343RC - Glass) 0.69 (for T4336RWC - Non-Glass) | Slightly lower for Glass substrate models (0.70 vs 0.76). Matches for Non-Glass substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges." |
| MTF @ 2 lp/mm (Max.) | $\sim$ Predicate: 0.47 (for Glass) / 0.42 (for Non-Glass) | 0.41 (for G4343RC, G4343RWC, G4336RWC - Glass) 0.42 (for T4336RWC - Non-Glass) | Slightly lower for Glass substrate models (0.41 vs 0.47). Matches for Non-Glass substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges." |
| Thickness of Scintillator | Not an acceptance criterion in itself, but a design change. | 600 µm | Increased from predicate (400 µm). |
| Sensitivity (Typ.) | $\sim$ Predicate: 574 LSB/uGy | 715 LSB/uGy | Increased from predicate. |
| Max. Resolution | 3.57 lp/mm (Matches predicate) | 3.57 lp/mm | Matches predicate. |
| General Safety and Effectiveness | No new safety and effectiveness issues raised compared to predicate. | Verified by adherence to voluntary standards and risk analysis. | Claimed to be met. The increased scintillator thickness is "deemed acceptable" and experimental results confirm "superior noise performance and smoother image quality compared to the 400μm CsI, without clinically significant degradation of details or edges." |
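As a quick arithmetic check on the glass-substrate rows of the table (using only the maxima quoted above):

```python
# (subject, predicate) pairs for the glass-substrate models, from the table
metrics = {
    "DQE @ 1 lp/mm": (0.69, 0.55),
    "MTF @ 1 lp/mm": (0.70, 0.76),
    "Sensitivity (LSB/uGy)": (715, 574),
}

for name, (subject, predicate) in metrics.items():
    change = 100.0 * (subject - predicate) / predicate
    print(f"{name}: {predicate} -> {subject} ({change:+.1f}%)")
```

That is roughly a 25% DQE gain and a 25% sensitivity gain against an 8% MTF loss at 1 lp/mm, which is the trade the submission argues is clinically acceptable.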
2. Sample Size Used for the Test Set and Data Provenance
The document explicitly states that the submission relies on "Non-clinical Performance Data" and "Bench testing". There is no mention of a clinical test set involving human subjects or patient imaging data with a specified sample size. The data provenance would be laboratory bench testing results. The country of origin of the data is not explicitly stated beyond the company being in India, but it's performance data, not patient data. The testing is described as functional testing to evaluate the impact of different scintillator thicknesses.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not applicable as the clearance is based on non-clinical, bench testing data (physical performance characteristics like MTF and DQE) rather than clinical image interpretation or diagnostic performance that would require human expert ground truth.
4. Adjudication Method for the Test Set
Not applicable, as there is no mention of a human-read test set or ground truth adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No. The document does not mention an MRMC study or any study involving human readers, with or without AI assistance. The device is an X-ray detector, not an AI software.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) study was done
- Not applicable in the context of an AI algorithm, as this device is an X-ray detector and associated acquisition software. However, the intrinsic performance of the detector itself (MTF, DQE, sensitivity) was assessed through bench testing and measurement, which can be considered its "standalone" performance.
7. The Type of Ground Truth Used
The "ground truth" for the performance claims (MTF, DQE, sensitivity) is based on physical phantom measurements and engineering specifications obtained through controlled bench testing following recognized industry standards (e.g., IEC 62220-1-1). It is not based on expert consensus, pathology, or outcomes data from patient studies.
8. The Sample Size for the Training Set
Not applicable. This submission is for an X-ray flat panel detector, not an AI/ML model that would require a "training set" of data.
9. How the Ground Truth for the Training Set was Established
Not applicable. As stated above, this device does not involve an AI/ML model with a training set.