
510(k) Data Aggregation

    K Number: K250211
    Date Cleared: 2025-07-22 (179 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    The Wireless and Wired Yushan X-Ray Flat Panel Detector is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector is not intended for mammography, fluoroscopy, tomography, and angiography applications. The use of this product is not recommended for pregnant women and the risk of radioactivity must be evaluated by a physician.

    Device Description

    The Subject Device Yushan X-Ray Flat Panel Detector is a static digital x-ray detector. Models V14C PLUS, F14C PLUS, and V17C PLUS are portable (wireless/wired) detectors, while V17Ce PLUS is a non-portable (wired) detector. The Subject Device is equivalent to its predicate devices K243171, K201528, K210988, and K220510.

    The Subject Device is designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free cassette exams. The Subject Device has a memory exposure mode and an extended image readout feature. Additionally, it offers a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water. The Detector is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI).

    The Subject Device can automatically collect x-ray images from an x-ray source. It collects x-rays and digitizes the images for their transfer and display to a computer. The x-ray generator (an integral part of a fully-functional diagnostic system) is not part of the device. The sensor includes a flat panel for x-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating and storing x-ray images.

    The Subject Device operates with DROC (Digital Radiography Operating Console), Xresta, or the DR Console, which are unchanged from the predicate devices (DROC cleared under K201528; Xresta and the DR Console cleared under K243171). DROC and Xresta are software running on a Windows PC/laptop that serve as the user interface for radiologists to perform a general radiography exam. Their functions include:

    1. Detector status update
    2. Xray exposure workflow
    3. Image viewer and measurement
    4. Post image process and DICOM file I/O
    5. Image database: DROC or Xresta supports the necessary DICOM Services to allow a smooth integration into the clinical network

    The DR Console is a software/app-based device. When the app is operating, the off-the-shelf (OTS) software can be considered to be the iOS operating system (iOS 16 or above); the safety and effectiveness of this OTS have been assessed and evaluated through software compatibility testing and a summative usability evaluation. All functions operate normally and successfully under this OTS framework. Its functions include:

    1. Imaging procedure review
    2. Worklist settings
    3. Detector connection settings
    4. Calibration
    5. Image processing

    The software level of concern for the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console has been determined to be basic, based on the "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The cybersecurity risks of the Yushan X-Ray Flat Panel Detector with DROC, Xresta, or DR Console have also been addressed as part of the device risk analysis to assure that no new or increased cybersecurity risks were introduced. These risks are defined as sequences of events leading to a hazardous situation, and the controls for these risks were implemented as proposed in the risk analysis (e.g., requirements, verification).

    AI/ML Overview

    Acceptance Criteria and Study for Yushan X-Ray Flat Panel Detector (K250211)

    This documentation describes the acceptance criteria and the study conducted for the Yushan X-Ray Flat Panel Detector (models V14C PLUS, F14C PLUS, V17C PLUS, V17Ce PLUS). The device has received 510(k) clearance (K250211) based on substantial equivalence to predicate devices (K243171, K201528, K210988, K220510).

    The primary change in the subject device compared to its predicates is an increase in the CsI scintillator thickness from 400µm (in some predicate CsI models) to 600µm. This change impacts image quality metrics but, according to the manufacturer, does not introduce new safety or effectiveness concerns.
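    As general background on this trade-off (a standard relation, not a value or formula quoted from the submission), the fraction of incident X-ray quanta absorbed by a scintillator of thickness t with linear attenuation coefficient μ follows the Beer-Lambert relation, so a 600µm CsI layer absorbs more quanta than a 400µm layer (supporting a higher DQE) while permitting more lateral light spread (which tends to lower MTF):

    $$\eta(t) = 1 - e^{-\mu t}, \qquad \eta(600\,\mu\text{m}) > \eta(400\,\mu\text{m}) \ \text{for any}\ \mu > 0.$$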

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for this device are implicitly tied to demonstrating that the changes in scintillator thickness do not negatively impact safety or effectiveness, and ideally, improve image quality. The primary performance metrics affected by the scintillator change are DQE, MTF, and Sensitivity.

    Performance Metric | Acceptance Criteria (Implicit: no degradation in clinical utility compared to predicate, ideally improvement) | Reported Device Performance (Subject Device, 600µm CsI) | Predicate Device Performance (CsI Models, 400µm CsI)
    DQE (Detective Quantum Efficiency) @ 1 lp/mm, RQA5 | Maintain or improve upon predicate's CsI DQE value. | 0.60 (typical) | 0.48-0.50
    DQE @ 2 lp/mm | Not explicitly stated for acceptance, but shown for performance. | 0.45 (typical) | Not explicitly listed for predicate
    MTF (Modulation Transfer Function) @ 1 lp/mm, RQA5 | Maintain comparable MTF to predicate's CsI MTF (acknowledging potential trade-offs for improved DQE). | 0.64 (typical) | 0.63-0.69
    MTF @ 2 lp/mm | Not explicitly stated for acceptance, but shown for performance. | 0.34 (typical) | Not explicitly listed for predicate
    Sensitivity | Not explicitly stated for acceptance, but shown for performance. | 715 LSB/µGy | Not explicitly listed for predicate
    Noise Performance | Superior noise performance compared to predicate. | Superior noise performance | Inferior to subject device
    Image Smoothness | Smoother image quality compared to predicate. | Smoother image quality | Inferior to subject device
    Compliance with Standards | Conformance to relevant safety and performance standards (e.g., IEC 60601 series, ISO 10993). | All specified standards met. | All specified standards met.
    Software Level of Concern | Maintained as basic. | Level of concern remains basic. | Level of concern remains basic.
    Cybersecurity Risks | No new or increased cybersecurity risks introduced. | Risks addressed; no new or increased risks. | Risks addressed.
    Load-Bearing Characteristics | Pass specified tests. | Passed. | Passed.
    Protection Against Ingress of Water | Pass specified tests. | Passed. | Passed.
    Biocompatibility | Demonstrated through ISO 10993 series. | Demonstrated. | Demonstrated.

    Summary of Device Performance vs. Acceptance:
    The subject device demonstrates improved DQE, superior noise performance, and smoother images compared to the predicate device (specifically, the CsI models), while maintaining comparable MTF and meeting all other safety and performance standards. The slight reduction in MTF compared to the highest-performing predicate CsI model (0.64 for the subject device vs. 0.69 for the predicate at 1 lp/mm) is likely considered an acceptable trade-off given the improvements in DQE and noise, and it remains significantly higher than that of GOS models.
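    For context, the definition used in IEC 62220-1-style bench measurements (general background, not quoted from the submission) ties DQE to both the MTF and the normalized noise power spectrum (NNPS), which is why a small loss in MTF can be outweighed by a reduction in noise:

    $$\mathrm{DQE}(f) = \frac{\mathrm{MTF}^2(f)}{q \cdot \mathrm{NNPS}(f)},$$

    where q is the incident photon fluence (quanta/mm²) at the stated beam quality (here RQA5).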

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the numerical sample size for the test set used for the performance evaluation of the image quality metrics (DQE, MTF, Sensitivity, noise, smoothness). These metrics are typically derived from physical measurements on a controlled test setup rather than a clinical image dataset.

    Data Provenance: Not explicitly stated regarding country of origin or retrospective/prospective nature. However, the evaluation results for image quality metrics, noise, and smoothness are generated internally by the manufacturer during design verification and validation activities.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    Not applicable. The ground truth for DQE, MTF, and Sensitivity measurements is established through standardized physical phantom measurements (e.g., using RQA5 beam quality) rather than expert consensus on clinical images. These are quantifiable engineering parameters.

    4. Adjudication Method for the Test Set

    Not applicable. The evaluation of DQE, MTF, and Sensitivity is based on objective instrumental measurements, not on reader interpretations or consensus methods.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or performed as part of this 510(k) submission. The submission focuses on demonstrating substantial equivalence based on technical specifications and physical performance measurements rather than a clinical trial assessing reader performance.

    6. Standalone Performance Study

    Yes, a standalone performance evaluation was conducted for the device. The reported DQE, MTF, and Sensitivity values, as well as the assessments of noise performance and image smoothness, are measures of the algorithm's (and the underlying detector hardware's) intrinsic performance without human-in-the-loop assistance. The comparison of these metrics between the subject device and the predicate device forms the basis of the standalone performance study.

    7. Type of Ground Truth Used

    The ground truth used for the performance evaluations (DQE, MTF, Sensitivity, noise, smoothness) is based on objective physical measurements and standardized phantom evaluations. These are quantitative technical specifications derived under controlled laboratory conditions, not expert consensus on pathology, clinical outcomes, or interpretations of patient images.

    8. Sample Size for the Training Set

    Not applicable. This device is an X-ray flat panel detector, a hardware component that captures images. While it includes embedded software (firmware, image processing algorithms), the document does not indicate that these algorithms rely on a "training set" in the context of machine learning. The image processing algorithms are likely deterministic or parameter-tuned, not learned from a large dataset like an AI model for diagnosis.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there is no indication of a machine learning "training set" as described in the context of AI models. The ground truth for the development and validation of the detector's physical performance characteristics is established through standard metrology and engineering testing protocols.


    K Number: K250665
    Device Name: SKR 3000
    Date Cleared: 2025-06-17 (104 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    This device is indicated for use in generating radiographic images of human anatomy. It is intended to replace a radiographic film/screen system in general-purpose diagnostic procedures. This device is not indicated for use in mammography, fluoroscopy, and angiography applications.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device, and the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography
    • This device is also used for carrying out exposures on children.
      The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a basic documentation level. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal/scattered radiation correction (Intelligent-Grid, cleared under K151465); a generic sketch of a frequency-processing step follows this description.
      The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n and FCC compliant). WPA2-PSK (AES) encryption is adopted for the security of the wireless connection.
      The SKR 3000 is distributed under the commercial name AeroDR 3.
      The purpose of the current premarket submission is to add pediatric use indications for the SKR 3000 imaging system.
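    As a generic illustration of what a frequency-processing step of the kind listed for CS-7 typically looks like (a minimal unsharp-mask sketch; this is a standard technique shown under assumed parameter names, not Konica Minolta's actual algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 8.0, gain: float = 0.6) -> np.ndarray:
    """Boost fine detail by re-weighting the high-frequency component of the image.

    sigma sets the blur scale separating background from detail;
    gain sets how strongly the detail band is amplified.
    """
    img = image.astype(np.float32)
    low_pass = gaussian_filter(img, sigma=sigma)   # low-frequency (background) component
    detail = img - low_pass                        # high-frequency (edge/detail) component
    return low_pass + (1.0 + gain) * detail        # amplified detail added back

# Toy usage on a synthetic flat-field image (illustrative only).
flat = np.random.default_rng(0).normal(2000.0, 50.0, size=(512, 512)).astype(np.float32)
enhanced = unsharp_mask(flat, sigma=8.0, gain=0.6)
```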
    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the SKR 3000 device focuses on adding a pediatric use indication. However, it does not contain the detailed performance data, acceptance criteria, or study specifics typically found in a clinical study report. The document states that "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'" and that "pediatric image evaluation using small-size phantoms was performed on the P-53." It also mentions that "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."

    Based on the information provided, it is not possible to fully detail the acceptance criteria and the study that proves the device meets them. The document implies that the "acceptance criteria" likely revolved around demonstrating "substantially equivalent image performance" to a predicate device (AeroDR System 2 with P-52) for pediatric use, primarily through phantom studies, rather than a clinical study with human patients and detailed diagnostic performance metrics.

    Therefore, many of the requested fields cannot be filled directly from the provided text. The sections below present the information that can be inferred or directly stated from the document and explicitly note when information is not available.

    Disclaimer: The information below is based solely on the provided 510(k) clearance letter and summary. For a comprehensive understanding, one would typically need access to the full 510(k) submission, which includes the detailed performance data and study reports.


    Acceptance Criteria and Device Performance Study for SKR 3000 (Pediatric Use Indication)

    The primary objective of the study mentioned in the 510(k) summary was to demonstrate substantial equivalence for the SKR 3000 (specifically with detector P-53) for pediatric use, compared to a predicate device (AeroDR System 2 with P-52).

    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of the submission (adding a pediatric indication based on substantial equivalence), the acceptance criteria are not explicitly quantifiable metrics like sensitivity/specificity for a specific condition. Instead, the focus was on demonstrating "substantially equivalent image performance" through phantom studies.

    Acceptance Criteria (Inferred from Document) | Reported Device Performance (Inferred/Stated)
    Image quality of SKR 3000 with P-53 for pediatric applications to be "substantially equivalent" to the predicate device (AeroDR System 2 with P-52). | "The comparative image evaluation demonstrated that the SKR 3000 with P-53 provides substantially equivalent image performance to the comparative device, AeroDR System 2 with P-52, for pediatric use."
    Compliance with "Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" for pediatric image evaluation using small-size phantoms. | "image quality evaluation was conducted in accordance with the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices'. Pediatric image evaluation using small-size phantoms was performed on the P-53."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): Not specified. The document indicates "small-size phantoms" were used, implying a phantom study, not a human clinical trial. The number of phantom images or specific phantom configurations is not detailed.
    • Data Provenance: Not specified. Given it's a phantom study, geographical origin is less relevant than for patient data. It's an internal study conducted to support the 510(k) submission. Retrospective or prospective status is not applicable as it's a phantom study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. Given this was a phantom study, ground truth would likely be based on physical measurements of the phantoms and expected image quality metrics, rather than expert interpretation of pathology or disease. If human evaluation was part of the "comparative image evaluation," the number and qualifications of evaluators are not provided.
    • Qualifications: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. For a phantom study demonstrating "substantially equivalent image performance," adjudication methods like 2+1 or 3+1 (common in clinical reader studies) are generally not applicable. The comparison would likely involve quantitative metrics from the generated images.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • MRMC Study: No. The document states "comparative image evaluation" and "pediatric image evaluation using small-size phantoms." This strongly implies a technical performance assessment using phantoms, rather than a clinical MRMC study with human readers interpreting patient cases. Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported, as AI assistance in image interpretation (e.g., CAD) is not the focus of this submission; it's about the imaging system's ability to produce quality images for diagnosis.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Standalone Performance: Not applicable in the traditional sense of an AI algorithm's diagnostic performance. The device is an X-ray imaging system. The "performance" being evaluated is its ability to generate images, not to provide an automated diagnosis. The "Intelligent-Grid" feature mentioned is an image processing algorithm (scattered radiation correction), but its standalone diagnostic performance is not the subject of this specific submission; its prior clearance (K151465) is referenced.

    7. The Type of Ground Truth Used

    • Ground Truth Type: For the pediatric image evaluation, the ground truth was based on phantom characteristics and expected image quality metrics. This is inferred from the statement "pediatric image evaluation using small-size phantoms was performed."

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. The SKR 3000 is an X-ray imaging system, not an AI model that requires a "training set" in the machine learning sense for its primary function of image acquisition. While image processing algorithms (like Intelligent-Grid) integrated into the system might have been developed using training data, the submission focuses on the imaging system's performance for pediatric use.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not applicable, as no training set (in the context of an AI model's image interpretation learning) is explicitly mentioned or relevant for the scope of this 510(k) submission for an X-ray system.

    K Number: K243734
    Date Cleared: 2025-04-18 (135 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    Allengers Wireless/Wired X-Ray Flat Panel Detectors, used with the AWS (Acquisition Workstation Software) Synergy DR FDX/Synergy DR, are used to acquire, process, display, store, and export radiographic images of all body parts using radiographic techniques. They are intended for use in general radiographic applications wherever a conventional film/screen or CR system is used.

    Allengers Wireless/Wired X-ray Flat Panel Detectors are not intended for mammography applications.

    Device Description

    The Wireless/Wired X-Ray Flat Panel Detectors are designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free cassette exams. These medical devices have a memory exposure mode and an extended image readout feature. Additionally, they offer a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water. The device is currently indicated for general projection radiographic applications, and the scintillator material is cesium iodide (CsI). The Wireless/Wired X-Ray Flat Panel Detector sensor can automatically collect x-rays from an x-ray source. It collects the x-rays, converts them into a digital image, and transfers it to a desktop computer/laptop/tablet for image display. The x-ray generator (an integral part of a complete x-ray system) is not part of the submission. The sensor includes a flat panel for x-ray acquisition and digitization and a computer (including proprietary processing software) for processing, annotating, and storing x-ray images; the personal computer is not part of this submission.

    The Wireless/Wired X-Ray Flat Panel Detectors are used with the accessory "AWS (Acquisition Workstation Software) Synergy DR FDX/Synergy DR", which runs on a Windows-based desktop computer/laptop/tablet as a user interface for radiologists to perform a general radiography exam. Its functions include:

    1. User Login
    2. Display Connectivity status of hardware devices like detector
    3. Patient entry (Manual, Emergency and Worklist)
    4. Exam entry
    5. Image processing
    6. Search patient Data
    7. Print DICOM Image
    8. Exit
    AI/ML Overview

    This document describes the 510(k) clearance for Allengers Wireless/Wired X-Ray Flat Panel Detectors (K243734). The core of the submission revolves around demonstrating substantial equivalence to a predicate device (K223009) and several reference devices (K201528, K210988, K220510). The key modification in the subject device compared to the predicate is an increased scintillator thickness from 400µm to 600µm, which consequently impacts the Modulation Transfer Function (MTF) and Detective Quantum Efficiency (DQE) of the device.

    Based on the provided text, the 510(k) relies on non-clinical performance data (bench testing and adherence to voluntary standards) to demonstrate substantial equivalence, rather than extensive clinical studies involving human subjects or AI-assisted human reading.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the comparison to the predicate device's performance, particularly for image quality metrics (MTF and DQE). The goal is to demonstrate that despite changes, the device maintains diagnostic image quality and does not raise new safety or effectiveness concerns.

    Metric (Units) | Acceptance Criteria (Implicit: Maintain Diagnostic Image Quality) | Reported Device Performance (Subject Device) | Comments/Relation to Predicate
    DQE @ 0.5 lp/mm (Max.) | ≥ Predicate: 0.78 (Glass) / 0.79 (Non-Glass) | 0.85 (Glass: G4343RC, G4343RWC, G4336RWC); 0.79 (Non-Glass: T4336RWC) | Meets/exceeds predicate values; improves for Glass-substrate models, matches for the Non-Glass-substrate model.
    DQE @ 1 lp/mm (Max.) | ≥ Predicate: 0.55 (Glass) / 0.58 (Non-Glass) | 0.69 (Glass: G4343RWC, G4336RWC, G4343RC); 0.58 (Non-Glass: T4336RWC) | Meets/exceeds predicate values; improves for Glass-substrate models, matches for the Non-Glass-substrate model.
    DQE @ 2 lp/mm (Max.) | ≥ Predicate: 0.47 (Glass) / 0.49 (Non-Glass) | 0.54 (Glass: G4343RC, G4343RWC, G4336RWC); 0.49 (Non-Glass: T4336RWC) | Meets/exceeds predicate values; improves for Glass-substrate models, matches for the Non-Glass-substrate model.
    MTF @ 0.5 lp/mm (Max.) | ~ Predicate: 0.90 (Glass) / 0.85 (Non-Glass) | 0.95 (Glass: G4343RC, G4343RWC, G4336RWC); 0.90 (Non-Glass: T4336RWC) | Meets/exceeds predicate values; improves for both Glass- and Non-Glass-substrate models.
    MTF @ 1 lp/mm (Max.) | ~ Predicate: 0.76 (Glass) / 0.69 (Non-Glass) | 0.70 (Glass: G4343RWC, G4336RWC, G4343RC); 0.69 (Non-Glass: T4336RWC) | Slightly lower for Glass-substrate models (0.70 vs 0.76); matches for the Non-Glass-substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges."
    MTF @ 2 lp/mm (Max.) | ~ Predicate: 0.47 (Glass) / 0.42 (Non-Glass) | 0.41 (Glass: G4343RC, G4343RWC, G4336RWC); 0.42 (Non-Glass: T4336RWC) | Slightly lower for Glass-substrate models (0.41 vs 0.47); matches for the Non-Glass-substrate model. The submission claims this does not lead to "clinically significant degradation of details or edges."
    Thickness of Scintillator | Not an acceptance criterion in itself, but a design change. | 600 µm | Increased from predicate (400 µm).
    Sensitivity (Typ.) | ~ Predicate: 574 LSB/µGy | 715 LSB/µGy | Increased from predicate.
    Max. Resolution | 3.57 lp/mm (matches predicate) | 3.57 lp/mm | Matches predicate.
    General Safety and Effectiveness | No new safety and effectiveness issues raised compared to predicate. | Verified by adherence to voluntary standards and risk analysis. | Claimed to be met. The increased scintillator thickness is "deemed acceptable" and experimental results confirm "superior noise performance and smoother image quality compared to the 400μm CsI, without clinically significant degradation of details or edges."
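    As an illustration of what the sensitivity figure in the table means (assuming the approximately linear dose response such detectors specify over their calibrated exposure range; the 2.5 µGy operating point below is chosen arbitrarily for the arithmetic and is not taken from the submission):

    $$\bar{P} \approx S \cdot K_a:\quad K_a = 2.5\,\mu\text{Gy} \Rightarrow \bar{P} \approx 715 \times 2.5 \approx 1{,}788\ \text{LSB (subject)}\ \ \text{vs.}\ \ 574 \times 2.5 \approx 1{,}435\ \text{LSB (predicate)}.$$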

    2. Sample Size Used for the Test Set and Data Provenance

    The document explicitly states that the submission relies on "Non-clinical Performance Data" and "Bench testing". There is no mention of a clinical test set involving human subjects or patient imaging data with a specified sample size. The data provenance would be laboratory bench testing results. The country of origin of the data is not explicitly stated beyond the company being in India, but it's performance data, not patient data. The testing is described as functional testing to evaluate the impact of different scintillator thicknesses.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    This information is not applicable as the clearance is based on non-clinical, bench testing data (physical performance characteristics like MTF and DQE) rather than clinical image interpretation or diagnostic performance that would require human expert ground truth.

    4. Adjudication Method for the Test Set

    Not applicable, as there is no mention of a human-read test set or ground truth adjudication process.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No. The document does not mention an MRMC study or any study involving human readers, with or without AI assistance. The device is an X-ray detector, not an AI software.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) study was done

    Not applicable in the context of an AI algorithm, as this device is an X-ray detector and associated acquisition software. However, the "standalone" performance of the detector itself (MTF, DQE, sensitivity) was assessed through bench testing and measurements, which can be considered its "standalone" performance.

    7. The Type of Ground Truth Used

    The "ground truth" for the performance claims (MTF, DQE, sensitivity) is based on physical phantom measurements and engineering specifications obtained through controlled bench testing following recognized industry standards (e.g., IEC 62220-1-1). It is not based on expert consensus, pathology, or outcomes data from patient studies.

    8. The Sample Size for the Training Set

    Not applicable. This submission is for an X-ray flat panel detector, not an AI/ML model that would require a "training set" of data.

    9. How the Ground Truth for the Training Set was Established

    Not applicable. As stated above, this device does not involve an AI/ML model with a training set.


    K Number: K242770
    Date Cleared: 2025-03-20 (188 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    EXPD 114, EXPD 114P, EXPD 114G, EXPD 114PG Digital X-ray detector is indicated for digital imaging solution designed for providing general radiographic diagnosis of human anatomy. This device is intended to replace film or screen based radiographic systems in all general purpose diagnostic procedures. This device is not intended for mammography applications. It is intended for both adult and pediatric populations.

    Device Description

    EXPD 114, EXPD 114G, EXPD 114P, EXPD 114PG are flat-panel type digital X-ray detectors that capture projection radiographic images in digital format within seconds, eliminating the need for x-ray film or an image plate as an image capture medium. They differ from traditional X-ray systems in that, instead of exposing a film and chemically processing it to create a hard-copy image, a device called a Detector is used to capture the image in electronic form.

    EXPD 114, EXPD 114G, EXPD 114P, EXPD 114PG are indirect-conversion devices in the form of a square plate that converts the incoming X-rays into visible light. This visible light is then collected by an optical sensor, which generates an electric-charge representation of the spatial distribution of the incoming X-ray quanta.

    The charges are converted to a modulated electrical signal by thin-film transistors. The amplified signal is converted to a voltage signal and then from an analog to a digital signal, which can be printed for viewing, transmitted for remote viewing, or stored as an electronic data file for later viewing.

    AI/ML Overview

    The DRTECH Corporation's EXPD 114, EXPD 114G, EXPD 114P, EXPD 114PG Digital X-ray detectors were assessed for substantial equivalence to a predicate device (K223124). The company conducted non-clinical performance testing (bench tests) and a "Concurrence Study" for image quality to demonstrate this.

    Here's a breakdown of the requested information based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" in a numerical or pass/fail format for the Concurrence Study beyond broad equivalence. Instead, it compares the performance of the subject device to the predicate device. The performance data mentioned primarily relates to technical specifications and general image quality assessment.

    Parameter | Acceptance Criteria (Implied: Equivalent to or comparable to predicate) | Reported Device Performance (Subject Device) | Reported Predicate Device Performance (K223124)
    DQE | Equivalent to or comparable to predicate | EXPD 114: 45% @ 0.5 lp/mm; EXPD 114G: 25% @ 0.5 lp/mm; EXPD 114P: 45% @ 0.5 lp/mm; EXPD 114PG: 25% @ 0.5 lp/mm | EXPD 129P, EXPD 86P: 50.0% @ 0.5 lp/mm; EXPD 129PG, EXPD 86PG: 25.0% @ 0.5 lp/mm
    MTF | Equivalent to or comparable to predicate | EXPD 114: 40% @ 2.0 lp/mm; EXPD 114G: 40% @ 2.0 lp/mm; EXPD 114P: 40% @ 2.0 lp/mm; EXPD 114PG: 40% @ 2.0 lp/mm | EXPD 129P, EXPD 86P: 45.0% @ 2.0 lp/mm; EXPD 129PG, EXPD 86PG: 45.0% @ 2.0 lp/mm
    Resolution | Equivalent to or comparable to predicate | 3.5 lp/mm | 3.5 lp/mm
    Image Quality (Clinical Assessment) | Equivalent to predicate device. | "the image quality of the subject device is equivalent to that of the predicate device" | Standard established by predicate device.

    Note on DQE and MTF: The subject device's DQE for EXPD 114/P is slightly lower than the predicate EXPD 129P/86P (45% vs 50%). Similarly, the MTF for all subject devices is lower than the predicate (40% vs 45%). Despite these numerical differences, the overall conclusion states "basically equal or worth the predicate device" and that the device meets acceptance criteria. This suggests that the measured differences were considered clinically acceptable within the context of substantial equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    The document states: "Our Concurrence Study for Image Quality was based on body parts (Chest, C-spine AP, L-spine AP, Shoulder AP, Pelvis AP, Extremity) to compare subject device and predicate device(K223124)."

    • Sample Size: The exact number of images or cases analyzed in the Concurrence Study is not specified in the provided text. It only lists the anatomical sites included.
    • Data Provenance: The document does not specify the country of origin of the data or whether the study was retrospective or prospective. It is a "Concurrence Study" which implies a direct comparison, likely of newly acquired images, but this is not explicitly stated.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: "a qualified clinical expert" (singular) is mentioned.
    • Qualifications of Experts: The expert is described as "qualified clinical expert." No further details on their specific qualifications (e.g., radiologist, years of experience, board certification) are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: The document only mentions "a qualified clinical expert confirmed" the image quality. This strongly suggests a single-reader assessment without any explicit adjudication method (e.g., 2+1, 3+1 consensus).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: Based on the description of "a qualified clinical expert confirmed," it appears that a formal MRMC comparative effectiveness study was not conducted. The assessment seems to be a qualitative comparison of image quality by a single expert.
    • Effect Size of Human Reader Improvement: As an MRMC study was not indicated, there is no information on the effect size of how much human readers improve with AI vs. without AI assistance. The device is a digital X-ray detector, not an AI software intended for interpretation assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • The device is a digital X-ray detector, not an AI algorithm. Therefore, the concept of "standalone performance" of an AI algorithm is not applicable in this context. The "performance" described relates to the physical characteristics of the detector (DQE, MTF, Resolution) and its ability to produce images of diagnostic quality.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for the Concurrence Study was based on the expert's subjective assessment of image quality compared to the predicate device, stating that "the image quality of the subject device is equivalent to that of the predicate device." This is a form of expert consensus, albeit with only one expert explicitly mentioned. It's not based on pathology or outcomes data.

    8. Sample Size for the Training Set

    • The document describes a device (digital X-ray detector), not a machine learning model. Therefore, the concept of a "training set" in the context of an AI/ML algorithm is not applicable.

    9. How the Ground Truth for the Training Set Was Established

    • As the device is not an AI/ML algorithm, the concept of "training set ground truth" is not applicable.

    K Number: K243443
    Date Cleared: 2025-03-19 (133 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    The Digital X-ray detector, EXPD-N Series, is designed for use in digital imaging solutions for general radiographic diagnosis of human anatomy. This device is intended for use in all general diagnostic procedures, replacing film or screen-based radiographic systems for both adult and paediatric patients. It is not intended for use in mammography.

    Device Description

    In comparison to existing devices, the new detectors incorporate flexible a-Si as the TFT material within the panel. The primary difference from the conventional glass a-Si panel is that the electronic circuits are deposited on a plastic substrate instead of a glass substrate during manufacturing of the TFT panel. Since only the substrate material on which the silicon is deposited changes, the overall image performance remains unaffected. Another difference is the pixel pitch: while existing products feature only a 140 μm pixel pitch, the new models include an option with a 100 μm pixel pitch. The pixel pitch of an X-ray detector has a significant impact on MTF (Modulation Transfer Function) and sensitivity.
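    For reference, the sampling-limited (Nyquist) spatial resolution follows directly from the pixel pitch p (a general relation, not a figure quoted from the submission):

    $$f_{\mathrm{Nyquist}} = \frac{1}{2p}:\quad p = 140\,\mu\text{m} \Rightarrow 3.57\ \text{lp/mm},\qquad p = 100\,\mu\text{m} \Rightarrow 5.0\ \text{lp/mm}.$$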

    AI/ML Overview

    This medical device submission is for an X-ray detector, not an AI/ML device. Therefore, the typical acceptance criteria and study requirements for AI/ML devices, such as those related to multi-reader multi-case studies, standalone performance, and ground truth establishment with expert consensus or pathology, are not applicable here.

    The submission focuses on establishing substantial equivalence to a predicate device based on technical characteristics and physical performance, confirming it is suitable for general radiographic diagnosis.

    Here's a breakdown of the provided information, tailored to the context of a non-AI X-ray detector:

    1. Table of Acceptance Criteria and the Reported Device Performance

    The acceptance criteria are implicitly defined by demonstrating substantial equivalence to the predicate device in terms of technical characteristics and performance metrics relevant to X-ray image quality. The table below compares the subject device's performance to the predicate device, highlighting where performance is similar or improved.

    Item | Acceptance Criteria (Implied by Predicate Device K193017 Performance) | Subject Device (EXPD-N Series) Reported Performance
    Intended Use | General radiographic diagnosis, replaces film/screen-based systems, adult & pediatric, not for mammography. | General radiographic diagnosis, replaces film/screen-based systems, adult & pediatric, not for mammography. (Same)
    Anatomical Sites | General Radiography | General Radiography (Same)
    Dimensions (mm) | EVS 3643W/WG/WP: 460(W) x 386(L) x 15(H); EVS 4343W/WG/WP: 460(W) x 460(L) x 15(H) | EXPD 3643N/NP/NU/N1/U1: 460(W) x 386(L) x 15.5(H); EXPD 4343N/NP/NU/N1/U1: 460(W) x 460(L) x 15.5(H) (Slight difference in thickness, otherwise similar)
    Pixel Pitch | 140 μm | 140 μm (N/NP/NU models); 100 μm (N1/U1 models) (Improved resolution option added)
    Image Size (pixels) | EVS 3643W/WG/WP: 2,560 x 3,072; EVS 4343W/WG/WP: 3,072 x 3,072 | EXPD 4343N/NP/NU: 3,072 x 3,072 (Same); EXPD 3643N/NP/NU: 2,560 x 3,072 (Same); EXPD 4343N1/U1: 4,302 x 4,302 (Improved with 100 μm pixel pitch); EXPD 3643N1/U1: 3,534 x 4,302 (Improved with 100 μm pixel pitch)
    Active Area (mm) | EVS 3643W/WG/WP: 430 x 358; EVS 4343W/WG/WP: 430 x 430 | EXPD 4343N/NP/NU: 430.2 x 430.2 (Similar); EXPD 3643N/NP/NU: 353.4 x 430.2 (Similar); EXPD 4343N1/U1: 430.08 x 430.08 (Similar, adapted for 100 μm pixel pitch); EXPD 3643N1/U1: 358.4 x 430.08 (Similar, adapted for 100 μm pixel pitch)
    TFT Material | a-Si, IGZO | a-Si, flexible a-Si, IGZO (New flexible a-Si material introduced, otherwise similar)
    Cycle Time | |

    K Number: K243556
    Date Cleared: 2025-03-18 (120 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    Lux HD 35 Detector and Lux HD 43 Detector are indicated for digital imaging solutions designed to provide general radiographic diagnosis for human anatomy including both adult and pediatric patients. They are intended to replace film/screen systems in all general-purpose diagnostic procedures. Lux HD 35 Detector and Lux HD 43 Detector are not intended for mammography or dental applications.

    Device Description

    Lux HD 35 Detector and Lux HD 43 Detector are digital flat panel detectors. They support single-frame mode, with a TFT/PD image-sensor flat panel as the key component (active area: 35 cm x 43 cm for the Lux HD 35 Detector; 42.67 cm x 42.67 cm for the Lux HD 43 Detector). The difference between the two models is the overall dimensions of the image receptor. The sensor plate of the Lux HD 35 Detector and Lux HD 43 Detector is direct-deposited with a CsI scintillator to achieve the conversion from X-rays to visible photons. The visible photons are transformed into electron signals by the diode-capacitor array within the TFT panel; these signals are composed and processed by the scanning and readout electronics and transmitted to a PC through the user interface to form the image. The major function of the Lux HD 35 Detector and Lux HD 43 Detector is to convert X-rays into a digital image for high-resolution X-ray imaging. Both detectors are the key component of a DR system. The Digital Radiographic Imaging Acquisition Software Platform - DR is part of the system; it is used to acquire, enhance, and view images from the Lux HD 35 Detector and Lux HD 43 Detector.

    AI/ML Overview

    The provided text is a 510(k) summary for a medical device (Lux HD 35 Detector and Lux HD 43 Detector). It details the device's characteristics, intended use, and comparison to a predicate device to demonstrate substantial equivalence.

    However, the provided document does not contain information about acceptance criteria or a study demonstrating that the device meets specific performance criteria through a clinical evaluation involving human readers, ground-truth establishment, or sample sizes related to AI/algorithm performance.

    The "Non-clinical study" section primarily discusses:

    • Electrical Safety and EMC testing: Adherence to IEC/ES 60601-1 and IEC 60601-1-2.
    • Biological Evaluation: Evaluation of materials contacting operators/patients based on FDA guidance.
    • Non-clinical Considerations: Stating substantial equivalence to the predicate device for non-clinical aspects mentioned in FDA guidance for solid-state X-ray imaging devices.
    • Clinical Consideration: Stating that intended use, fundamental scientific technology, regulatory requirements, non-clinical performance, labeling, and quality-assurance programs are the same as the predicate device. It explicitly mentions: "There is no any negative change about clinical performance from predicate device."
    • Wireless testing: Compliance with ANSI IEEE C63.27-2017.
    • Cybersecurity testing: Compliance with section 524B(b)(2) of the Federal Food, Drug, and Cosmetics Act.

    This submission clearly relies on demonstrating substantial equivalence to a predicate device (Carestream Health, Inc. Focus HD 43 Detector, K213529) through technical and safety comparisons, rather than presenting a performance study with acceptance criteria and results for an AI/algorithm.

    Therefore, the table and the specific questions about acceptance criteria and the study proving the device meets them cannot be answered as they pertain to AI/algorithm performance, ground truth, expert adjudication, or MRMC studies. The document does not describe such a study.

    Detailed acceptance criteria and performance data for an AI algorithm are absent from the provided text. The document is a 510(k) summary for a hardware device (X-ray detector) and focuses on safety and performance equivalence to a predicate hardware device. It does not provide data on AI/algorithm performance against a clinical ground truth.

    If this device were to include an AI component that required such a study for its clearance, that information would typically be detailed in a separate section of the 510(k) submission, outlining the AI's intended use, performance metrics, validation strategy, and results against a defined ground truth. This document does not contain that.


    K Number: K244010
    Device Name: ExamVue Apex
    Date Cleared: 2025-02-24 (60 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    The ExamVue Apex flat panel x-ray detector system is indicated for use in general radiology including podiatry, orthopedic, and other specialties, and in mobile x-ray systems. The ExamVue Apex flat panel x-ray detector system is not indicated for use in mammography.

    Device Description

    The ExamVue Apex flat panel x-ray detector system consists of a line of 3 different models of solid state x-ray detectors, of differing size and characteristics, combined with a single controlling software designed for use by radiologists and radiology technicians for the acquisition of digital x-ray images. The ExamVue Apex flat panel x-ray detector system captures digital images of anatomy through the conversion of x-rays to electronic signals, eliminating the need for film or chemical processing to create a hard copy image. ExamVue Apex flat panel x-ray detector system incorporates the ExamVue Duo software, which performs the processing, presentation and storage of the image in DICOM format. All models of the ExamVue Apex flat panel x-ray detector system use a Si TFTD for the collection of light generated by a CsI scintillator, for the purpose of creating a digital x-ray image.
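    As a rough illustration of what processing and storing an image "in DICOM format" involves at the file level (a minimal pydicom sketch; the file names are hypothetical and this is not ExamVue Duo's actual implementation):

```python
import pydicom
from pydicom.uid import generate_uid

# Read an acquired digital radiography image (hypothetical file name).
ds = pydicom.dcmread("acquired_image.dcm")
print(ds.Modality, ds.Rows, ds.Columns)   # e.g. "DX", 3072, 3072
pixels = ds.pixel_array                   # raw pixel data as a NumPy array

# ... image processing and annotation would happen here ...

# Save the processed result as a new DICOM object with its own UID.
ds.SOPInstanceUID = generate_uid()
ds.save_as("processed_image.dcm")
```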

    The three available models are:

    EVA 14W, with a 14x17in (35x43cm) wireless cassette sized panel
    EVA 17W, with a 17x17in (43x43cm) wireless cassette sized panel
    EVA 10W, with a 10x12in (24x30cm) wireless cassette sized panel

    AI/ML Overview

    The provided text describes the regulatory clearance of a digital X-ray detector system, the "ExamVue Apex," and its substantial equivalence to a predicate device. However, it does not contain specific acceptance criteria or an analytical study proving the device meets those criteria, as one would typically find for an AI/ML medical device submission with defined performance metrics (e.g., sensitivity, specificity, AUC).

    Instead, the submission focuses on demonstrating substantial equivalence through:

    • Bench Testing: Comparing engineering specifications like resolution, sensitivity, and dynamic range to the predicate device.
    • Software Validation: Ensuring the software adheres to relevant standards (IEC 62304) and performs expected functions.
    • Clinical Testing: An ABR-certified radiologist visually evaluating image quality as equivalent or better than the predicate device.

    Therefore, many of the requested fields regarding a detailed statistical study (sample size, ground truth, expert adjudication, MRMC study, standalone performance) cannot be filled from the provided text because such a study, with quantitative acceptance criteria, does not appear to have been performed or reported in this 510(k) summary. The submission relies more on demonstrating equivalence through technical specifications and expert opinion on image quality rather than rigorous statistical performance criteria for an AI algorithm.

    Here's a breakdown of what can be extracted and what cannot:

    1. A table of acceptance criteria and the reported device performance

    The provided text does not define explicit quantitative acceptance criteria for device performance in the typical AI/ML sense (e.g., a target sensitivity of X% or specificity of Y%). Instead, the "acceptance criteria" are implied by demonstrating "similar or greater imaging characteristics" compared to the predicate device and that the software "performs the same required basic functions."

    Acceptance Criteria (Implied) | Reported Device Performance
    Bench Testing (comparison to predicate): |
    a. Resolution equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement). Specific comparison metrics for ExamVue Apex vs. predicate: pixel pitch (99 µm vs 143/140/143 µm), DQE @ 0 lp/mm (73% @ 6 μGy / 70% @ 2 μGy vs 57%/60%/58%), MTF @ 1 lp/mm (68% vs 63%/68%/65%); all indicate equal or improved performance for the Apex.
    b. Sensitivity equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement).
    c. Dynamic range in image acquisition equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement).
    d. Software performs the same basic functions | "software performs the same required basic functions as the predicate device."
    Software Validation: |
    a. Designed and developed according to IEC 62304 | "The software was designed and developed according to a software development process in compliance with IEC 62304."
    b. Performs all functions of the predicate's software | "tested to show that it performs all the functions of the software in the predicate device." The software "performs the same functions as the software for the predicate device with some additional features."
    Clinical Testing: |
    a. Image quality equivalent or better than predicate device | "The images were evaluated by an ABR certified radiologist who evaluated the image quality to be of equivalent or better to the predicate device."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size: Not specified. The clinical testing merely states "Clinical data was provided with the submission to demonstrate equivalence with the predicate device. This data includes images of all the relevant ROI." It doesn't quantify the number of images or patients.
    • Data Provenance: Not specified (country, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Number of Experts: "an ABR certified radiologist" (singular, implied to be one).
    • Qualifications: "ABR certified radiologist." No mention of years of experience.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Adjudication Method: Not specified. Since only one radiologist is mentioned, it's likely "none" in the sense of a consensus or adjudication process among multiple readers. The single ABR-certified radiologist provided the evaluation.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not explicitly stated or described. This submission is for a general X-ray detector system, not specifically an AI-powered diagnostic algorithm designed to assist human readers. The "AI" mentioned is the software components related to image acquisition, processing, and management, not a diagnostic AI.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Not applicable in the context of an AI diagnostic algorithm. This device is a digital X-ray detector system. Its "performance" refers to image quality and functionality, not a diagnostic output from an algorithm that would then require standalone performance metrics (e.g., sensitivity/specificity for disease detection). The software handles image processing, presentation, and storage, not automated diagnosis.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Type of Ground Truth: The "ground truth" for the clinical testing was the visual evaluation of image quality by an ABR-certified radiologist. It's not a diagnostic ground truth (like pathology for cancer detection) but rather an assessment of whether the images produced by the new device are diagnostically acceptable and equivalent/superior to those from the predicate device.

    8. The sample size for the training set

    • Sample Size for Training Set: Not applicable/not specified. This device is an X-ray detector and associated software for image acquisition and processing, not a deep learning AI model that requires a "training set" to learn features for interpretation. The software's development (as per IEC 62304) involves validation and verification, but not "training" in the machine learning sense.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set: Not applicable. (See #8).

    K Number: K243171
    Date Cleared: 2024-12-19 (80 days)
    Product Code: MQB
    Regulation Number: 892.1680
    Intended Use

    The Wireless (V14C, V14G, F14C, F14G, V17C, V17G)/Wired (V14C, V14G, F14C, F14G, V17C, V17G, V17Ge, V17Ce) Yushan X-Ray Flat Panel Detector is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector is not intended for mammography, fluoroscopy, tomography, and angiography applications. The use of this product is not recommended for pregnant women and the risk of radioactivity must be evaluated by a physician.

    Device Description

    The subject device Yushan X-Ray Flat Panel Detector, models V14C, V14G, V17C, V17G, F14C, and F14G, are portable (wireless/wired) detectors, while V17Ce and V17Ge are non-portable (wired) detectors. The Yushan X-Ray Flat Panel Detector is designed to be used in any environment that would typically use a radiographic cassette for examinations. Detectors can be placed in a wall bucky for upright exams, a table bucky for recumbent exams, or removed from the bucky for non-grid or free cassette exams. Additionally, it offers a rounded-edge design for easy handling, an image compression algorithm for faster image transfer, an LED design for easy detector identification, and extra protection against ingress of water.

    The Yushan series operates with the Xresta and DR Console software.

    Xresta is software running on a Windows PC that serves as the user interface for the radiologist to perform a general radiography exam. Its functions include:

      1. Detector status update
      2. X-ray exposure workflow
      3. Image viewer and measurement
      4. Post image processing and DICOM file I/O
      5. Image database: Xresta supports the necessary DICOM services to allow smooth integration into the clinical network (a minimal DICOM file I/O sketch follows this list)
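
    The DICOM file I/O and DICOM-services functions above are described only at a high level in the submission. Purely as an illustration (not the cleared Xresta/DR Console code), a minimal sketch of reading, inspecting, and re-saving a radiograph with the open-source pydicom library could look like the following; the file name and tag edits are hypothetical.

    ```python
    # Illustrative only: minimal DICOM file I/O with pydicom (not the cleared Xresta/DR Console code).
    import pydicom

    # Hypothetical path to an acquired radiograph exported by an acquisition console.
    ds = pydicom.dcmread("chest_pa.dcm")

    # Inspect a few standard tags used in a general-radiography workflow.
    print(ds.Modality)          # e.g. "DX" for digital radiography
    print(ds.PatientID)
    print(ds.Rows, ds.Columns)  # matrix size of the stored image

    # Access the pixel data as a NumPy array (requires numpy and a pixel-data handler).
    pixels = ds.pixel_array
    print(pixels.dtype, pixels.min(), pixels.max())

    # Annotate and save a copy, leaving the original file untouched.
    ds.SeriesDescription = "Post-processed copy (illustration)"
    ds.save_as("chest_pa_copy.dcm")
    ```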

    The DR Console is a software/app-based device; the software itself constitutes the device. The app runs on off-the-shelf (OTS) software, the iOS operating system (iOS 16 or above). The safety and effectiveness of this OTS were assessed through software compatibility testing and a summative usability evaluation, and all functions operate normally and successfully under this OTS framework.

    Its functions include:

      1. Imaging procedure review
      2. Worklist settings
      3. Detector connection settings
      4. Calibration
      5. Image processing
    AI/ML Overview

    The provided document is a 510(k) summary for the Yushan X-Ray Flat Panel Detector. It outlines the device's technical characteristics and compares it to predicate devices to establish substantial equivalence. However, it does not describe a clinical study that proves the device meets specific acceptance criteria in terms of diagnostic performance (e.g., sensitivity, specificity for a particular condition).

    Instead, the document focuses on non-clinical performance data to demonstrate substantial equivalence, primarily by showing that the device adheres to recognized voluntary standards and exhibits comparable physical and image quality characteristics to previously cleared devices.

    Here's an analysis based on the provided text, addressing your points where possible:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of acceptance criteria for diagnostic performance (e.g., a specific sensitivity or specificity target for a clinical task) and the device's performance against those criteria. It focuses on technical specifications and compliance with standards.

    The closest to "acceptance criteria" are the technical specifications listed for the subject device and the predicate devices, implying that meeting or being comparable to these specifications is considered acceptable for substantial equivalence.

    | Characteristic | Acceptance Criteria (Implied by Predicate) | Reported Device Performance (Subject Device) |
    | --- | --- | --- |
    | Indications for Use | "The Wireless (V14C, V14G, V17C, V17G)/Wired (V14C, V14G, V17C, V17G, V17Ge) Yushan X-Ray Flat Panel Detector with DROC is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector with DROC is not intended for mammography, fluoroscopy, tomography, and angiography applications." (Similar for all predicates) | "The Wireless (V14C, V14G, F14C, F14G, V17C, V17G)/Wired (V14C, V14G, F14C, F14G, V17C, V17G, V17Ge, V17Ce) Yushan X-Ray Flat Panel Detector is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional film/screen or CR systems may be used. The Yushan X-Ray Flat Panel Detector is not intended for mammography, fluoroscopy, tomography, and angiography applications." |
    | Pixel Pitch | 140 μm | 140 μm |
    | DQE (at 1 lp/mm, RQA5) | GOS: 0.27, CsI: 0.48 (predicate K201528) / GOS: 0.27, CsI: 0.50 (predicate K210988) / CsI: 0.48 (predicate K220510) | V series: GOS: 0.27, CsI: 0.48; F series: GOS: 0.27, CsI: 0.50 |
    | MTF (at 1 lp/mm, RQA5) | GOS: 0.52, CsI: 0.69 (predicate K201528) / GOS: 0.52, CsI: 0.63 (predicate K210988) / CsI: 0.69 (predicate K220510) | CsI: 0.64 (V series); GOS: 0.52, CsI: 0.63 (F series) |
    | Max. Resolution | 3.57 lp/mm (for both GOS and CsI in predicates) | 3.57 lp/mm (for both GOS and CsI in subject device) |
    | A/D Conversion | 16 bit | 16 bit |
    | Biological Safety | All materials in contact with patients are in accordance with ISO 10993. | All materials in contact with patients are in accordance with ISO 10993. |
    | EMC Emission | Satisfactory results from IEC 60601-1-2 testing (implied by non-clinical performance section) | Results were satisfactory per IEC 60601-1-2 testing. |
    | Image Quality | Substantially equivalent to predicate device (implied by non-clinical performance section) | Confirmed to be substantially equivalent to that of the predicate device. |
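
    The Pixel Pitch and Max. Resolution rows above are linked by the detector's sampling (Nyquist) limit, so the 3.57 lp/mm figure can be checked directly from the 140 μm pitch. A minimal arithmetic check (illustrative only, not a test from the submission):

    ```python
    # Nyquist limit implied by the stated 140 um pixel pitch.
    pixel_pitch_mm = 0.140                       # 140 um, as listed for both subject and predicate devices
    nyquist_lp_per_mm = 1.0 / (2.0 * pixel_pitch_mm)
    print(f"{nyquist_lp_per_mm:.2f} lp/mm")      # ~3.57 lp/mm, matching the "Max. Resolution" row
    ```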

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not applicable in the context of a clinical performance study. The document states: "No clinical study has been performed."
    • Data Provenance: Not applicable. The evaluation relies on non-clinical (laboratory/technical) testing.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable, as no clinical study with expert ground truth was performed.

    4. Adjudication method for the test set

    Not applicable, as no clinical study with a test set requiring adjudication was performed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC comparative effectiveness study was done. This device is an X-ray flat panel detector, not an AI software intended to assist human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Not applicable. This is a hardware device (X-ray detector) with associated software for image processing and display, not an AI algorithm. Its performance is inherent in its image acquisition capabilities. The "standalone" performance here refers to the detector's physical performance metrics (DQE, MTF, resolution).

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the image quality evaluation mentioned: The ground truth implicitly refers to the ideal image quality parameters as defined by industry standards (e.g., IEC standards for DQE, MTF) and the performance of the legally marketed predicate devices. It is not a clinical ground truth for diagnostic accuracy.
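
    For background (not a formula stated in the submission), the frequency-dependent DQE reported against such standards is conventionally computed from the presampling MTF, the normalized noise power spectrum (NNPS), and the incident photon fluence, along the lines of IEC 62220-1-style measurements:

    ```latex
    % Conventional frequency-dependent DQE definition (IEC 62220-1-style measurement)
    \mathrm{DQE}(f) \;=\; \frac{\mathrm{MTF}(f)^{2}}{q \cdot \mathrm{NNPS}(f)},
    \qquad
    \mathrm{NNPS}(f) \;=\; \frac{\mathrm{NPS}(f)}{\bar{S}^{\,2}}
    ```

    where \(\bar{S}\) is the mean linearized detector signal and \(q\) is the incident photon fluence per unit area for the stated beam quality (e.g., RQA5).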

    8. The sample size for the training set

    Not applicable. This device is not an AI/machine learning model that requires a distinct training set in the typical sense. Its "training" would be the engineering and manufacturing processes to ensure it meets its design specifications.

    9. How the ground truth for the training set was established

    Not applicable, as there is no training set in the context of an AI model for this device. The "ground truth" for the device's development would be its design requirements and engineering specifications validated through non-clinical testing.

    Summary of the Study that Proves the Device Meets Acceptance Criteria:

    The "study" referenced in the document is a compilation of non-clinical performance tests and adherence to voluntary standards.

    • Non-clinical Performance Data: The device conforms to voluntary standards such as AAMI/ANSI ES60601-1, IEC 60601-1, IEC 60601-1-2, IEC 62304, IEC 60601-1-6, ANSI AAMI IEC 62366-1, and ANSI/AAMI HE75.
    • FDA Guidance adherence: The "FDA's Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices" (September 1, 2016) was followed to describe detector characteristics. Guidance documents for software (June 14, 2023) and cybersecurity (September 27, 2023) were also followed.
    • Specific Tests Conducted:
      • Risk analysis, verification, and validation activities.
      • Load-bearing characteristics and protection against ingress of water (passed).
      • EMC emission testing (IEC 60601-1-2) – results satisfactory.
      • Biocompatibility testing (ISO 10993 series) for materials in contact with patients.
      • Image Quality Evaluation: Confirmed that the image quality of the Yushan X-Ray Flat Panel Detector is substantially equivalent to that of the predicate device. This is the key "performance" study, demonstrating that the new device's images are comparable to those produced by already-cleared devices, implying acceptable clinical usability for its stated indications.

    Conclusion stated in the document: Based on these non-clinical studies and comparisons, the manufacturer concluded that the device is "as safe and effective" as the legally marketed predicate devices and does not raise "different questions of safety and effectiveness."


    K Number
    K241319
    Device Name
    SKR 3000
    Date Cleared
    2024-11-21

    (195 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    MQB

    Intended Use

    The SKR 3000 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen system in general-purpose diagnostic procedures.

    The SKR 3000 is not indicated for use in mammography, fluoroscopy and angiography applications. The P-53 is for adult use only.

    Device Description

    The digital radiography SKR 3000 performs X-ray imaging of the human body using an X-ray planar detector that outputs a digital signal, which is then input into an image processing device; the acquired image is then transmitted to a filing system, printer, and image display device as diagnostic image data.

    • This device is not intended for use in mammography
    • This device is also used for carrying out exposures on children.

    The Console CS-7, which controls the receiving, processing, and output of image data, is required for operation. The CS-7 is software with a Basic documentation level. CS-7 implements the following image processing: gradation processing, frequency processing, dynamic range compression, smoothing, rotation, reversing, zooming, and grid removal processing/scattered radiation correction (Intelligent-Grid). The Intelligent-Grid was cleared in K151465.
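
    Gradation and frequency processing, as named above, are standard radiographic post-processing steps. The sketch below is a generic NumPy/SciPy illustration of a tone-curve (LUT) step and an unsharp-mask frequency step on a 16-bit image; it is not Konica Minolta's CS-7 implementation, and all parameters are arbitrary.

    ```python
    # Generic illustration of two processing steps named for the CS-7 console:
    # gradation (tone-curve / LUT) processing and frequency (unsharp-mask) processing.
    # This is NOT the CS-7 algorithm; parameters below are arbitrary.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gradation_lut(image_16bit: np.ndarray, gamma: float = 0.7) -> np.ndarray:
        """Apply a simple gamma-style tone curve via a 16-bit look-up table."""
        lut = (np.linspace(0.0, 1.0, 65536) ** gamma * 65535.0).astype(np.uint16)
        return lut[image_16bit]

    def unsharp_mask(image: np.ndarray, sigma: float = 3.0, amount: float = 0.8) -> np.ndarray:
        """Boost mid/high frequencies by adding back a fraction of the high-pass residue."""
        img = image.astype(np.float64)
        blurred = gaussian_filter(img, sigma=sigma)
        sharpened = img + amount * (img - blurred)
        return np.clip(sharpened, 0, 65535).astype(np.uint16)

    # Example on a synthetic 16-bit "radiograph".
    raw = (np.random.default_rng(0).random((512, 512)) * 65535).astype(np.uint16)
    processed = unsharp_mask(gradation_lut(raw))
    ```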

    This submission is to add the new flat-panel x-ray detector (FPD) P-53 to the SKR 3000. The new P-53 panel shows improved performance compared to the predicate device. The P-53 employs the same surface material infused with Silver ions (antibacterial properties) as the reference device.

    The FPDs used in the SKR 3000 can communicate with the image processing device through wired Ethernet and/or wireless LAN (IEEE 802.11a/n, FCC compliant). WPA2-PSK (AES) encryption is adopted to secure the wireless connection.

    The SKR 3000 is distributed under a commercial name AeroDR 3.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or the study used to prove the device meets those criteria in the typical format of a clinical trial or performance study report. Instead, it is an FDA 510(k) clearance letter and a 510(k) Summary for the Konica Minolta SKR 3000 (K241319).

    This document focuses on demonstrating substantial equivalence to a predicate device (K151465 - AeroDR System2) rather than providing a detailed report of a performance study with specific acceptance criteria, sample sizes, expert involvement, and ground truth establishment, as one might find for a novel AI/software medical device.

    The "Performance Data" section mentions "comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device" and "the predetermined acceptance criteria were met." However, it does not specify what those acceptance criteria were, what the reported performance was against those criteria, or the methodology of the "comparative image testing."

    Therefore, I cannot populate the table or answer most of your specific questions based on the provided text.

    Here's what can be extracted and inferred from the text:

    1. A table of acceptance criteria and the reported device performance:

    This information is not explicitly provided in the document. The document states: "The performance testing was conducted according to the 'Guidance for the Submission of 510(k)s for Solid State X-ray Imaging Devices.' The comparative image testing was conducted to demonstrate substantially equivalent image performance for the subject device. The other verification and validation including the items required by the risk analysis for the SKR 3000 were performed and the results showed that the predetermined acceptance criteria were met. The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    The comparison table on page 6 provides a comparison of specifications between the subject device (SKR 3000 with P-53) and the predicate device (AeroDR System2 with P-52), which might imply performance improvements that were part of the "acceptance criteria" for demonstrating substantial equivalence:

    | Feature | Subject Device (SKR 3000 / P-53) | Predicate Device (AeroDR System2 / P-52) | Implication (Potential "Performance") |
    | --- | --- | --- | --- |
    | Pixel size | 150 µm | 175 µm | Improved spatial resolution |
    | Max. Resolution | 2.5 lp/mm | 2.0 lp/mm | Higher resolution |
    | DQE (1.0 lp/mm) | 40% @ 1 mR | 35% @ 1 mR | Improved detective quantum efficiency |
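
    Taken at face value, the specifications above imply the following relative changes (simple arithmetic on the stated values, not a claim made in the submission):

    ```latex
    % Relative changes implied by the stated P-53 vs. P-52 specifications
    \frac{2.5 - 2.0}{2.0} = 25\% \ \text{higher limiting resolution},
    \qquad
    \frac{0.40 - 0.35}{0.35} \approx 14\% \ \text{relative DQE gain at } 1.0\ \text{lp/mm}
    ```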

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: Not specified. The document mentions "comparative image testing" but does not detail the number of images or patients in the test set.
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. Given that it's a 510(k) for an X-ray system rather than an AI diagnostic algorithm, the "ground truth" for image quality assessment would likely be based on physical phantom measurements and potentially visual assessment by qualified individuals, but the details are not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • Not done/Not specified. This is not an AI-assisted device subject to typical MRMC studies. The device is a digital radiography system itself. The document states, "The results of risk management did not require clinical studies to demonstrate the substantial equivalence of the proposed device."

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • Not applicable. This is a hardware device (X-ray detector and system), not a standalone algorithm.

    7. The type of ground truth used:

    • Inferred based on context: Likely objective physical measurements (e.g., resolution, DQE) and potentially qualitative image assessment against a known reference (predicate device or established norms). The phrase "comparative image testing" suggests direct comparison of images produced by the subject device vs. predicate device. Not explicitly stated to be expert consensus, pathology, or outcomes data.

    8. The sample size for the training set:

    • Not applicable / Not specified. This is a hardware device; typical "training sets" are associated with machine learning algorithms. Its design and manufacturing would be based on engineering principles and quality control, not a data-driven training set in the AI sense.

    9. How the ground truth for the training set was established:

    • Not applicable / Not specified. (See point 8)

    Summary of what is known/inferred:

    • Acceptance Criteria: "Predetermined acceptance criteria were met" for performance parameters related to image quality and safety. Specific numerical criteria are not detailed, but improved resolution and DQE over the predicate are highlighted.
    • Study Design: "Comparative image testing" and general "performance testing" were conducted according to FDA guidance for solid-state X-ray imaging devices.
    • Sample Size/Provenance/Experts/Adjudication/MRMC: Not specified, expected as this is a hardware 510(k) for substantial equivalence demonstrating non-inferiority/improvement in physical specifications rather than a diagnostic AI/CADe study.
    • Ground Truth: Likely objective physical performance metrics and visual comparison with a predicate, not clinical diagnoses or outcomes.
    • Training Set: Not applicable for a hardware device in the context of AI.

    K Number
    K241125
    Device Name
    VIVIX-S 1751S
    Manufacturer
    Date Cleared
    2024-11-15

    (206 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    MQB

    Intended Use

    VIVIX-S 1751S series is used for the general-purpose diagnostic procedures, and as well as intended to replace radiographic film/ screen systems. The VIVIX-S 1751S series is not intended for mammography applications.

    Device Description

    The VIVIX-S 1751S, available in models FXRD-1751SA and FXRD-1751SB, features a 17 x 51 inch imaging area. This device intercepts x-ray photons and uses a scintillator to emit visible-spectrum photons. The FXRD-1751SA model uses a CsI:Tl (thallium-doped caesium iodide) scintillator, while the FXRD-1751SB model uses a Gadox (gadolinium oxysulfide) scintillator. These photons illuminate an array of photodiode (a-Si) detectors, creating electrical signals that are then converted to digital values, which are processed by software to produce digital images displayed on monitors. The VIVIX-S 1751S must be integrated with an operating PC and an X-ray generator, and it can communicate with the generator via cable. It is designed for capturing and transferring digital x-ray images for radiography diagnosis. Note that the X-ray generator and imaging software are not included with the VIVIX-S 1751S.
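
    As a purely conceptual illustration of the indirect-conversion signal chain just described (scintillator light emission, a-Si photodiode signal, digitization), the toy model below uses arbitrary gains, noise levels, and a 16-bit ADC; none of the numbers are VIVIX-S 1751S specifications.

    ```python
    # Toy model of an indirect-conversion detector signal chain:
    # x-ray photons -> scintillator light -> a-Si photodiode charge -> 16-bit digital value.
    # All gains and noise levels are arbitrary illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    x_ray_quanta = rng.poisson(lam=5000, size=(256, 256))      # incident quanta per pixel (arbitrary fluence)
    light_photons = x_ray_quanta * 50                          # assumed light yield per absorbed quantum
    charge = light_photons * 0.6                               # assumed photodiode coupling efficiency
    charge_noisy = charge + rng.normal(0, 200, charge.shape)   # additive electronic noise (arbitrary)

    adc_gain = 0.05                                            # arbitrary charge-to-digital-number conversion
    digital = np.clip(charge_noisy * adc_gain, 0, 65535).astype(np.uint16)  # 16-bit A/D conversion

    print(digital.mean(), digital.std())
    ```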

    AI/ML Overview

    The document describes a 510(k) submission for the VIVIX-S 1751S digital X-ray detector. The acceptance criteria and the study proving the device meets these criteria are primarily demonstrated through a comparison to a predicate device (K190611) and performance testing based on established standards.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by demonstrating substantial equivalence to the predicate device in terms of technological characteristics and performance metrics, as well as compliance with recognized standards. The "performance" column shows the subject device's reported values relative to the predicate device.

    | Parameter | Acceptance Criteria (Predicate Device K190611, FXRD-1751SB) | Reported Device Performance (Subject Device K241125, FXRD-1751SA) | Equivalence |
    | --- | --- | --- | --- |
    | Technological Characteristics | Same as predicate device | Same as predicate device | Substantially Equivalent |
    | Intended Use | VIVIX-S 1751S series is used for general-purpose diagnostic procedures, and to replace radiographic film/screen systems. Not for mammography. | VIVIX-S 1751S series is used for general-purpose diagnostic procedures, and to replace radiographic film/screen systems. Not for mammography. | Equivalent |
    | Operating Principle | Same as predicate device | Same as predicate device | Equivalent |
    | Design Features | Same as predicate device | Same as predicate device | Equivalent |
    | Communication Method | Same as predicate device | Same as predicate device | Equivalent |
    | Resolution | Same as predicate device | Same as predicate device | Equivalent |
    | Scintillator Type | Gadox | CsI (FXRD-1751SA), Gadox (FXRD-1751SB) | Different scintillator for the FXRD-1751SA model, but performance shown to be comparable; FXRD-1751SB is identical. |
    | Performance (Optical / Imaging) | | | |
    | MTF (0.5 lp/mm) | ≥ 81 | ≥ 83 | Similar |
    | MTF (1 lp/mm) | ≥ 56 | ≥ 63 | Similar |
    | MTF (2 lp/mm) | ≥ 22 | ≥ 30 | Similar |
    | MTF (3 lp/mm) | ≥ 9 | ≥ 14 | Similar |
    | DQE (0.5 lp/mm) | ≥ 29 | ≥ 38 | Similar |
    | DQE (1 lp/mm) | ≥ 22 | ≥ 33 | Similar |
    | DQE (2 lp/mm) | ≥ 11 | ≥ 23 | Similar |
    | DQE (3 lp/mm) | ≥ 4 | ≥ 14 | Similar |
    | Compliance with Standards | Compliance with 21 CFR 1020.30, 21 CFR 1020.31, IEC 60601-1, CAN/CSA-C22.2 No. 60601-1, ANSI/AAMI ES60601-1, IEC 60601-1-2 | Complies with all listed standards | Compliant |
    | Diagnostic Capability | Equivalent to predicate device | Demonstrated equivalent diagnostic capability | Equivalent |
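
    The MTF/DQE rows above amount to a per-frequency "subject meets or exceeds predicate" comparison. A short script that simply re-encodes the table values and checks that pattern (illustrative only, not a measurement procedure):

    ```python
    # Re-encode the MTF/DQE rows of the comparison table and check the ">= predicate" pattern.
    frequencies_lp_mm = [0.5, 1.0, 2.0, 3.0]

    predicate = {"MTF": [81, 56, 22, 9],  "DQE": [29, 22, 11, 4]}    # FXRD-1751SB (K190611) thresholds
    subject   = {"MTF": [83, 63, 30, 14], "DQE": [38, 33, 23, 14]}   # FXRD-1751SA reported values

    for metric in ("MTF", "DQE"):
        for f, pred, subj in zip(frequencies_lp_mm, predicate[metric], subject[metric]):
            status = "meets/exceeds" if subj >= pred else "below"
            print(f"{metric} @ {f} lp/mm: subject {subj} vs predicate {pred} -> {status}")
    ```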

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document states, "A-Qualified Expert Evaluation study according to CDRH's Guidance for the Submission of 510(k)'s for Solid State X-ray Imaging Devices was conducted..." However, it does not specify the sample size of cases/images used in this clinical evaluation study.
    The data provenance (country of origin, retrospective/prospective) is not explicitly stated in the provided text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions an "Expert Evaluation study" but does not specify the number of experts or their exact qualifications (e.g., "radiologist with 10 years of experience"). It only indicates that it was a "Qualified Expert Evaluation study."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe the adjudication method used for the expert evaluation study.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    A multi-reader multi-case (MRMC) study was conducted as part of the "Qualified Expert Evaluation study." The primary goal of this study was to confirm that the subject device (VIVIX-S 1751S - FXRD-1751SA) provides images of equivalent diagnostic capability to the predicate device (VIVIX-S 1751S - FXRD-1751SB).

    The document does not mention the involvement of AI in this study, nor does it quantify any improvement of human readers with AI assistance. The study described is a comparison of two different X-ray detector technologies, not an AI-assisted reading study.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    This device is an X-ray detector, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance without a human-in-the-loop is not applicable in the same way it would be for an AI diagnostic software. The performance metrics reported (MTF, DQE) are physical image quality parameters of the detector itself, which could be considered "standalone" in this context as they characterize the device's inherent imaging capability.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the "clinical test" (Expert Evaluation study) was established by "Qualified Expert Evaluation" to assess "equivalent diagnostic capability" of the images. This suggests a form of expert consensus or individual expert readings to determine the diagnostic quality of the images produced by the subject device compared to the predicate. It does not mention pathology or outcomes data as the primary ground truth.

    8. The sample size for the training set

    The document does not mention a training set in the context of this device because it is a hardware device (X-ray detector), not an AI algorithm that requires a training phase.

    9. How the ground truth for the training set was established

    Since there is no mention of a training set for an AI algorithm, the question of how its ground truth was established is not applicable to this submission.

