
510(k) Data Aggregation

    K Number: K250788
    Date Cleared: 2025-08-28 (167 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    The Definium Tempo Select is intended to generate digital radiographic images of the skull, spinal column, chest, abdomen, extremities, and other body parts in patients of all ages. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position and the system is intended for use in all routine radiography exams. Optional image pasting function enables the operator to stitch sequentially acquired radiographs into a single image.

    This device is not intended for mammographic applications.

    Device Description

    The Definium Tempo Select Radiography X-ray System is designed as a modular system with components that include an Overhead Tube Suspension (OTS) with a tube, an auto collimator and a depth camera, an elevating table, a motorized wall stand, a cabinet with X-ray high voltage generator, a wireless access point and wireless detectors in the exam room, and a PC, monitor, and control box with hand-switch in the control room. The system generates diagnostic radiographic images which can be reviewed or managed locally and sent through a DICOM network for applications including review, storage, and printing.

    By leveraging platform components and design, the Definium Tempo Select is similar to the predicate device Discovery XR656 HD (K191699) and the reference device Definium Pace Select (K231892) with regard to the user interface layout, patient worklist refresh and selection, protocol selection, image acquisition, and image processing based on the raw image. This product introduces a new high voltage generator with the same key specifications as the predicate's. A wireless detector used in the reference device Definium Pace Select is introduced. Image Pasting is improved, with exposure parameters individually adjustable per image in both Table and Wall Stand modes. Tube auto angulation is added for better auto positioning, building on the existing auto-positioning. A Camera Workflow is introduced based on the existing depth camera. The OTS is changed to 4-axis motorization. An update was made to the Tissue Equalization feature previously cleared under K013481 to introduce a Deep Learning AI model that provides more consistent image presentations to the user, reducing the additional workflow needed to adjust image display parameters. Other minor changes include updates to the PC, Wall Stand, and Table.
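
    As general background on what "tissue equalization"-style processing aims to do (independent of this device's proprietary Deep Learning model, which is not described in detail in the submission), the sketch below shows a conventional, non-AI analogue: region-adaptive contrast enhancement with OpenCV's CLAHE. The synthetic image and parameters are illustrative assumptions, not values from the device.

```python
import numpy as np
import cv2

# Synthetic stand-in for a radiograph: a bright (thin/over-penetrated) background
# with a dark (thick/under-penetrated) block, plus noise.
rng = np.random.default_rng(0)
img = np.full((256, 256), 200, dtype=np.float32)
img[64:192, 64:192] = 40.0
img += rng.normal(0, 5, img.shape).astype(np.float32)
img8 = np.clip(img, 0, 255).astype(np.uint8)

# Contrast Limited Adaptive Histogram Equalization adjusts contrast per tile,
# so dense and thin regions are each stretched toward the usable display range.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(img8)

print(img8.std(), equalized.std())  # global contrast before and after
```

    The point of the sketch is only that dense and thin regions receive different local contrast adjustments; per the summary, the cleared device instead determines its equalization parameters per anatomy/view with an AI model.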

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the Definium Tempo Select offers some, but not all, of the requested information regarding the acceptance criteria and the study proving the device meets them. Notably, specific quantitative acceptance criteria for the AI Tissue Equalization feature are not explicitly stated.


    Here's a breakdown of the available information and the identified gaps:

    1. Table of Acceptance Criteria and Reported Device Performance

    Note: The 510(k) summary does not explicitly list quantitative acceptance criteria for the AI Tissue Equalization algorithm. Instead, it states that "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." Without specific performance metrics or thresholds, a direct comparison in a table format is not possible for the AI component.

    For the overall device, the acceptance criteria are implicitly performance metrics that ensure it functions comparably to the predicate device, as indicated by the "Equivalent" and "Identical" discussions in Table 1 (pages 7-11). However, these are primarily functional and technical equivalency statements rather than performance metrics for the AI feature.

    Therefore, this section will focus on the AI Tissue Equalization feature as it's the part that underwent specific verification using a clinical image dataset.

    AI Tissue Equalization Feature:

    Acceptance Criteria (Implied) | Reported Device Performance
    Provides more consistent image presentations to the user. | "The verification tests confirmed that the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." "The image processing algorithm uses artificial intelligence to dynamically estimate thick and thin regions to improve contrast and visibility in over-penetrated and under-penetrated regions." "The algorithm is the same but parameters per anatomy/view are determined by artificial intelligence to provide better consistence and easier user interface in the proposed device."
    Reduces additional workflow to adjust image display parameters. | Achieved (stated as a benefit of the AI model).
    Safety and efficacy are not affected. | Confirmed through verification tests.

    Missing Information:

    • Specific quantitative metrics (e.g., AUC, sensitivity, specificity, image quality scores, expert rating differences) that define "more consistent image presentations" are not provided.
    • The exact thresholds or target values for these metrics are not stated.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated as a number of images or cases. The document refers to "clinical images retrospectively collected across various anatomies...and Patient Sizes."
    • Data Provenance: Retrospective collection from locations in the US, Europe, and Asia.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    Missing Information. The document does not specify:

    • The number of experts involved in establishing ground truth.
    • Their qualifications (e.g., specific subspecialty, years of experience, board certification).
    • Whether experts were even used to establish ground truth for this verification dataset, as the purpose was to confirm the AI met performance criteria rather than to directly compare its diagnostic accuracy against human readers or a different ground truth standard.

    4. Adjudication Method for the Test Set

    Missing Information. No adjudication method (e.g., 2+1, 3+1) is described for the test set.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No. A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly mentioned or described in the provided document. The verification tests focused on the algorithm meeting performance criteria, not on comparing human reader performance with or without AI assistance.

    • Effect Size: Not applicable, as no MRMC study was described.

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, implicitly. The "AI Tissue Equalization algorithms verification dataset" was used to perform "verification tests" to confirm that "the algorithm meets the performance criteria, and the safety and efficacy of the device has not been affected." This suggests a standalone evaluation of the algorithm's output (image presentation consistency) against specific, albeit unstated, criteria. While human review of the output images was likely involved, the study's stated purpose was to verify the algorithm itself.


    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    Implied through image processing improvement, not diagnostic ground truth. For the AI Tissue Equalization feature, the "ground truth" is not in the traditional clinical diagnostic sense (e.g., disease presence confirmed by pathology). Instead, it appears to be related to the goal of "more consistent image presentations" and improving "contrast and visibility in over-penetrated and under-penetrated regions." This suggests the ground truth was an ideal or desired image presentation quality rather than a disease state. It's likely based on existing best practices for image processing and subjective assessment of image quality by experts, or perhaps a comparative assessment against the predicate's tissue equalization.

    Missing Information: The precise method or criteria for this ground truth (e.g., a panel of radiologists rating image quality, a quantitative metric for contrast/visibility) is not specified.


    8. The Sample Size for the Training Set

    Missing Information. The document describes the "verification dataset" (test set) but does not provide any information on the sample size or composition of the training set used to develop the Deep Learning AI model for Tissue Equalization.


    9. How the Ground Truth for the Training Set Was Established

    Missing Information. As the training set size and composition are not mentioned, neither is the method for establishing its ground truth. It can be inferred that the training process involved data labeled or optimized to achieve "more consistent image presentations" by dynamically estimating thick and thin regions, likely through expert-guided optimization or predefined image processing targets.


    K Number: K250790
    Device Name: INNOVISION-DXII
    Date Cleared: 2025-08-01 (140 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    INNOVISION-DXII is a stationary X-ray system intended for obtaining radiographic images of various anatomical parts of the human body, in both pediatric and adult patients, in a clinical environment. INNOVISION-DXII is not intended for mammography, angiography, interventional, or fluoroscopy use.

    Device Description

    INNOVISION-DXII is a stationary X-ray system using single- and three-phase power and consists of a tube, HVG (high voltage generator), ceiling-suspended X-ray tube support, floor-to-ceiling X-ray tube support, patient table, detector stand, and X-ray control console. The X-ray control console is Windows-based software that can display X-ray images; a mobile console mounted on an Android-based board only controls X-ray exposure and has no viewer function.

    After the control unit is turned on, the system delivers the set X-ray exposure at the exposure position, generating X-rays with an IGBT-based inverter generator. Components such as the X-ray tube supports and tables are supplied with power from the high voltage generator. When the inverter-type generator produces X-rays under the selected exposure conditions, the X-rays penetrate the patient's body. The detector's scintillator converts the X-ray information into visible light, which the photodiodes convert into an electric signal read out through the a-Si TFT array. This X-ray system is used with FDA-cleared X-ray detectors. The electric signal is amplified and converted into a digital signal to create image data. The image is transferred to the PC display over an Ethernet interface, where it can be adjusted.
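
    As a rough numerical illustration of the detection chain just described (attenuation, scintillation and photodiode conversion, amplification, digitization), here is a toy sketch; the fluence, attenuation coefficient, gain, and bit depth are made-up values, not specifications of the INNOVISION-DXII.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incident photon fluence per pixel and a made-up tissue thickness map (cm).
incident_photons = 5_000.0
thickness = np.clip(rng.normal(10.0, 3.0, size=(64, 64)), 1.0, 20.0)
mu = 0.2  # assumed effective linear attenuation coefficient, 1/cm

# Beer-Lambert attenuation, then Poisson counting noise at the scintillator.
expected = incident_photons * np.exp(-mu * thickness)
detected = rng.poisson(expected)

# Scintillator/photodiode conversion gain (signal units per detected photon)
# followed by 14-bit digitization in the readout electronics.
gain = 2.5
signal = detected * gain
adc = np.clip(signal, 0, 2**14 - 1).astype(np.uint16)

print(adc.min(), adc.max(), float(adc.mean()))
```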

    AI/ML Overview

    The FDA 510(k) clearance letter for INNOVISION-DXII explicitly states that clinical testing was not performed for this device. Therefore, there is no study described within this document that proves the device meets acceptance criteria related to clinical performance or human reader studies.

    The provided document focuses on non-clinical performance tests to demonstrate substantial equivalence to the predicate device.

    Here's an analysis based on the information provided, outlining what is and isn't available regarding acceptance criteria and studies:


    Acceptance Criteria and Device Performance (Non-Clinical)

    The acceptance criteria for the INNOVISION-DXII are implicitly the successful completion of the bench tests according to recognized international standards and demonstration that the differences from the predicate device do not raise new safety or effectiveness concerns. The "reported device performance" is the successful passing of these tests, indicating the device is safe and effective in its essential functions.

    Table 1: Acceptance Criteria and Reported Device Performance (Non-Clinical Bench Testing)

    Test Category | Specific Test | Acceptance Criteria (Implicit) | Reported Device Performance
    X-ray Tube, Collimator, HVG | Tube Voltage Accuracy | Meet specified accuracy standards (e.g., within a tolerance) | Passed
    X-ray Tube, Collimator, HVG | Accuracy of X-ray Tube Current | Meet specified accuracy standards | Passed
    X-ray Tube, Collimator, HVG | Reproducibility of the Radiation Output | Meet specified reproducibility standards | Passed
    X-ray Tube, Collimator, HVG | Linearity and Constancy in Radiography | Meet specified linearity and constancy standards | Passed
    X-ray Tube, Collimator, HVG | Half Value Layer (HVL) / Total Filtration | Meet specified HVL/filtration standards | Passed
    X-ray Tube, Collimator, HVG | Accuracy of Loading Time | Meet specified loading time accuracy | Passed
    Detector | System Instability | No unacceptable system instability observed | Passed
    Detector | Installation Error | No unacceptable installation errors | Passed
    Detector | System Error | No unacceptable system errors | Passed
    Detector | Image Loss, Deletion, and Restoration | Proper handling of image loss, deletion, and restoration | Passed
    Detector | Image Save Error | No unacceptable image save errors | Passed
    Detector | Image Information Error | No unacceptable image information errors | Passed
    Detector | Image Transmission and Reception | Reliable image transmission and reception | Passed
    Detector | Header Verification | Correct header verification | Passed
    Detector | Security | Meet specified security requirements | Passed
    Detector | Image Acquisition Test | Successful image acquisition | Passed
    Detector | Search Function | Functional search capability | Passed
    Detector | Application Function (ELUI S/W) | Functional application software | Passed
    Detector | Resolution | Meet specified resolution standards | Passed
    Mechanical Components (Support, Table) | Moving Distance | Accurate and controlled movement within specifications | Passed
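
    To make the kind of objective pass/fail criterion behind a test such as "Reproducibility of the Radiation Output" concrete, one commonly used formulation (e.g., the coefficient-of-variation limit in 21 CFR 1020.31 and corresponding IEC requirements) is that the coefficient of variation of repeated air kerma measurements at fixed technique factors not exceed 0.05. The check below is a sketch with hypothetical readings, not data from this submission.

```python
import statistics

# Hypothetical air kerma readings (mGy) from ten repeated exposures at fixed
# technique factors; illustrative numbers, not measurements from the device.
readings = [1.02, 1.01, 1.03, 0.99, 1.00, 1.02, 1.01, 1.00, 1.03, 1.01]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)  # sample standard deviation
cv = stdev / mean                   # coefficient of variation

limit = 0.05                        # commonly cited reproducibility limit
print(f"CV = {cv:.4f} -> {'Pass' if cv <= limit else 'Fail'}")
```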

    Study Details for Demonstrating Substantial Equivalence (Non-Clinical)

    The study described is a series of bench tests (functional tests) conducted to ensure the safety and essential performance effectiveness of the INNOVISION-DXII X-ray system.

    1. Sample size used for the test set and data provenance:

      • Sample Size: Not applicable. These are functional tests of the device itself rather than tests on a dataset. The "sample" refers to the physical device components and the system as a whole.
      • Data Provenance: Not applicable in the context of image data. The tests are performed on the device in a laboratory setting. The standards referenced are international (IEC).
    2. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

      • Not applicable. Ground truth in this context refers to the expected functional performance of the device according to engineering specifications and regulatory standards (IEC 60601 series). These standards define the "ground truth" for electrical safety, mechanical performance, and radiation emission/accuracy. Experts are involved in conducting and interpreting these standardized tests, but there isn't a "ground truth" established by a panel of medical experts as there would be for image interpretation.
    3. Adjudication method for the test set:

      • Not applicable. The tests are typically pass/fail based on objective measurements against predefined thresholds specified in the IEC standards. There is no subjective adjudication process mentioned.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "Clinical testing is not performed for the subject device as the detectors were already 510(k) cleared and the imaging software (Elui) is the same as the predicate device. There were no significant changes." This device is a stationary X-ray system, not an AI-assisted diagnostic tool for image interpretation.
    5. If a standalone (i.e. algorithm only without human-in-the loop performance) was done:

      • No, a standalone algorithm performance study was not done. This device is an X-ray imaging system; it does not feature a standalone diagnostic algorithm. While it includes an imaging software (Elui), its performance is assessed as part of the overall system's image acquisition and processing capabilities, not as an independent diagnostic algorithm.
    6. The type of ground truth used:

      • For the non-clinical bench tests, the "ground truth" is defined by the engineering specifications and the requirements of the referenced international standards (IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54). These standards specify acceptable ranges for parameters like tube voltage accuracy, radiation output linearity, image resolution, and system stability.
    7. The sample size for the training set:

      • Not applicable. This device is an X-ray system, not a machine learning algorithm that requires a training set of data.
    8. How the ground truth for the training set was established:

      • Not applicable, as there is no training set for this device.

    Summary of Clinical/AI-related information:
    The FDA 510(k) clearance for INNOVISION-DXII does not include any clinical studies or evaluations of AI performance, human reader performance, or diagnostic accuracy. The clearance is based purely on the non-clinical bench testing demonstrating that the device meets safety and essential performance standards and is substantially equivalent to its predicate device for obtaining radiographic images.


    K Number: K250738
    Device Name: YSIO X.pree
    Date Cleared: 2025-07-31 (142 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.

    The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.

    YSIO X.pree is not for mammography examinations.

    Device Description

    The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.

    The following modifications have been made to the cleared predicate device:

    • Updated generator
    • Updated collimator
    • Updated patient table
    • Updated Bucky Wall Stand
    • New X.wi-D 24 portable wireless detector
    • New virtual AEC selection
    • New status indicator lights

    AI/ML Overview

    The provided 510(k) clearance letter and summary for the YSIO X.pree device (K250738) indicate that the device is substantially equivalent to a predicate device (K233543). The submission primarily focuses on hardware and minor software updates, asserting that these changes do not impact the device's fundamental safety and effectiveness.

    However, the provided text does not contain the detailed information typically found in a clinical study report regarding acceptance criteria, sample sizes, ground truth establishment, or expert adjudication for an AI-enabled medical device. This submission appears to be for a conventional X-ray system with some "AI-based" features like auto-cropping and auto-collimation, which are presented as functionalities that assist the user rather than standalone diagnostic algorithms requiring extensive efficacy studies for regulatory clearance.

    Based on the provided document, here's an attempt to answer your questions, highlighting where information is absent or inferred:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, or image quality scores) with corresponding reported device performance values for the AI features. The "acceptance" appears to be qualitative and based on demonstrating equivalence to the predicate device and satisfactory usability/image quality.

    If we infer acceptance criteria from the "Summary of Clinical Tests" and "Conclusion as to Substantial Equivalence," the criteria seem to be:

    Acceptance Criteria (Inferred) | Reported Device Performance (as stated in document)
    Overall System: Intended use met, clinical needs covered, stability, usability, performance, and image quality are satisfactory. | "The clinical test results stated that the system's intended use was met, and the clinical needs were covered."
    New Wireless Detector (X.wi-D24): Images acquired are of adequate radiographic quality and sufficiently acceptable for radiographic usage. | "All images acquired with the new detector were adequate and considered to be of adequate radiographic quality." and "All images acquired with the new detector were sufficiently acceptable for radiographic usage."
    Substantial Equivalence: Safety and effectiveness are not affected by changes. | "The subject device's technological characteristics are same as the predicate device, with modifications to hardware and software features that do not impact the safety and effectiveness of the device." and "The YSIO X.pree, the subject of this 510(k), is similar to the predicate device. The operating environment is the same, and the changes do not affect safety and effectiveness."

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated as a number of cases or images. The "Customer Use Test (CUT)" was performed at two university hospitals.
    • Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg" in Augsburg, Germany, and "Klinikum rechts der Isar, Technische Universität München" in Munich, Germany. The document states "clinical image quality evaluation by a US board-certified radiologist" for the new detector, implying that the images themselves might have originated from the German sites but were reviewed by a US expert. The study design appears to be prospective in the sense that the new device was evaluated in a clinical setting in use rather than historical data being analyzed.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    • Number of Experts: For the overall system testing (CUT), it's not specified how many clinicians/radiologists were involved in assessing "usability," "performance," and "image quality." For the new wireless detector (X.wi-D24), it states "a US board-certified radiologist."
    • Qualifications of Experts: For the new wireless detector's image quality evaluation, the expert was a "US board-certified radiologist." No specific experience level (e.g., years of experience) is provided.

    4. Adjudication Method for the Test Set

    No explicit adjudication method (e.g., 2+1, 3+1 consensus) is described for the clinical evaluation or image quality assessment. The review of the new detector was done by a single US board-certified radiologist, not multiple independent readers with adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and what was the effect size of how much human readers improve with AI vs. without AI assistance.

    • MRMC Study: No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance was evaluated. The AI features mentioned (Auto Cropping, Auto Thorax Collimation, Auto Long-Leg/Full-Spine collimation) appear to be automatic workflow enhancements rather than diagnostic AI intended to directly influence reader diagnostic accuracy.
    • Effect Size: Not applicable, as no such study was conducted or reported.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done.

    The document does not describe any standalone performance metrics for the AI-based features (Auto Cropping, Auto Collimation). These features seem to be integrated into the device's operation to assist the user, rather than providing a diagnostic output that would typically be evaluated in a standalone study. The performance of these AI functions would likely be assessed as part of the overall "usability" and "performance" checks.

    7. The Type of Ground Truth Used

    • For the overall system and the new detector, the "ground truth" seems to be expert opinion/consensus (qualitative clinical assessment) on the system's performance, usability, and the adequacy of image quality for radiographic use. There is no mention of pathology, outcomes data, or other definitive "true" states related to findings on the images.

    8. The Sample Size for the Training Set

    The document does not provide any information about a training set size for the AI-based auto-cropping and auto-collimation features. This is typical for 510(k) submissions of X-ray systems where such AI features are considered ancillary workflow tools rather than primary diagnostic aids.

    9. How the Ground Truth for the Training Set was Established

    Since no training set information is provided, there is no information on how ground truth was established for any training data.


    In summary: The 510(k) submission for the YSIO X.pree focuses on demonstrating substantial equivalence for an updated X-ray system. The "AI-based" features appear to be workflow automation tools that were assessed as part of general system usability and image quality in a "Customer Use Test" and a limited clinical image quality evaluation for the new detector. It does not contain the rigorous quantitative performance evaluation data for AI software as might be seen for a diagnostic AI algorithm that requires a detailed clinical study for clearance.


    K Number: K242019
    Date Cleared: 2025-01-07 (181 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    The GXR-Series Diagnostic X-ray System is intended for use in obtaining human anatomical images for medical diagnosis by using X-rays.

    Device Description

    The GXR Series Diagnostic X-ray System consists of a combination of an X-ray generator and associated equipment such as a tube stand, patient table, and digital imaging system.

    AI/ML Overview

    Here's an analysis of the provided text regarding acceptance criteria and supporting studies for the GXR-series diagnostic x-ray system:

    It's important to note that the provided document is a 510(k) Summary, which is a regulatory filing for a medical device seeking clearance from the FDA based on substantial equivalence to a predicate device. It typically focuses on demonstrating that the new device is as safe and effective as a legally marketed device, rather than proving absolute performance against specific clinical acceptance criteria in a comprehensive clinical study. Therefore, the details provided often lean towards non-clinical testing and comparison with established standards.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of "acceptance criteria" for clinical performance. Instead, it relies on demonstrating adherence to recognized safety and performance standards, and comparison of technical characteristics to a predicate device. The "performance" is implicitly deemed acceptable if the device meets these standards and is substantially equivalent to the predicate.

    Here's a generalized interpretation based on the document's content, focusing on what would typically be implied performance requirements for an X-ray system:

    Acceptance Criteria (Implied) | Reported Device Performance
    Safety (Electrical, Mechanical, Radiation) | Meets international safety and EMC standards (IEC 60601-1, IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54, IEC 60601-1-2) and 21 CFR 1020.30. "No negative impact on safety or effectiveness" reported for differences.
    Essential Performance (X-ray Generation Parameters) | Output Power Rating (32kW-82kW) and Line Voltage (220-230VAC, 380/400/480VAC) are equivalent to or within acceptable ranges of the predicate.
    Image Quality (Digital Diagnostic X-ray System) | "State-of-the-art image quality," "excellent spatial resolution, MTF, DQE and stability based on fine pixel pitch" reported. Non-clinical performance data for new flat panel detectors.
    Software Functionality (Image Processing, User Interface) | System imaging software "RADMAX" updated GUI for "better visibility & faster workflow." Image Processing Module 4 added; "performance verification...concluded no impact on safety and effectiveness."
    Usability | Adheres to IEC 60601-1-6 (Usability). Operator control console designed to be "simple and user-friendly."
    Risk Management | Adheres to ISO 14971 (Risk Management).
    Software Life Cycle Processes | Adheres to IEC 62304 (Software Life Cycle Processes).
    Compliance with DICOM and Image Compression Standards | Adheres to NEMA PS 3.1-3.20 (DICOM) and ISO/IEC 10918-1 (Image Compression).
    Exposure Index of Digital X-ray Imaging Systems | Adheres to IEC 62494-1 (Exposure Index).
    Substantial Equivalence to Predicate Device (Overall) | "Substantially equivalent in the areas of technical characteristics, general function, application, and intended use," and "does not raise any new potential safety risks and is equivalent in performance to existing legally marketed devices."

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not specify a "test set" sample size in terms of clinical images or patient cases for performance evaluation against specific acceptance criteria. The testing discussed is primarily non-clinical, related to hardware and software verification and validation.
    • Data Provenance: The document implies that the testing data is generated from laboratory testing and verification during the development and modification of the device. There is no mention of clinical data or patient data being used for the performance evaluation in this 510(k) summary. Given the context of a 510(k), particularly for an X-ray system, the primary focus is on engineering and performance testing against standards, rather than large-scale clinical studies. The data is thus likely prospective in terms of being generated specifically for this submission but is non-clinical in nature.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    This information is not provided in the document. As the evaluation is non-clinical, there is no mention of "ground truth established by experts" in the context of diagnostic performance evaluation. The "ground truth" for non-clinical testing would typically be the expected technical output or adherence to a standard, rather than expert interpretation of images.

    4. Adjudication Method for the Test Set

    This information is not provided as the testing described and implied is non-clinical and does not involve expert adjudication of diagnostic findings.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and effect size

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or performed as part of this 510(k) submission. The document focuses on demonstrating substantial equivalence through technical comparisons and compliance with standards, not on proving improved reader performance with or without AI assistance.

    6. If a Standalone (algorithm only without human-in-the-loop performance) was done

    This is not applicable in the sense of an AI algorithm's standalone performance. The device is a diagnostic X-ray system, which is inherently designed to be used with a human interpreter (a medical professional). While it has image processing software ("RADMAX"), this software enhances images for human diagnosis, not to provide an automated diagnosis itself.

    7. The Type of Ground Truth Used

    The "ground truth" for the non-clinical testing is adherence to technical specifications and international standards. For example, for radiation output, the ground truth is that the device delivers the specified kVp and mA, and for electrical safety, that it meets the requirements of IEC 60601-1. For image quality, it refers to intrinsic properties like spatial resolution, MTF, and DQE, which are measured objectively, not subjective expert consensus on diagnostic findings.

    8. The Sample Size for the Training Set

    This information is not applicable and therefore not provided. The device is an X-ray imaging system, not an AI/ML diagnostic algorithm that requires a training set of medical images in the conventional sense. The "training" for the device would involve calibration and configuration during manufacturing and installation to ensure it meets its technical specifications. The "RADMAX" software has image processing modules, but the document does not suggest these are deep learning models trained on vast datasets.

    9. How the Ground Truth for the Training Set Was Established

    This information is not applicable for the same reasons as #8. If any parameters for the image processing modules are "learned" or optimized, the document does not elaborate on this process or the ground truth used for such optimization.


    K Number: K242499
    Date Cleared: 2025-01-06 (137 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    The Digital radiography X-ray system is used in hospitals, clinics, and medical practices.

    The Digital radiography X-ray system enables radiographic exposures of the whole body including: skull, chest, abdomen, and extremities and may be used on adult and bariatric patients. Exposures may be taken with the patient sitting, standing, or in the prone position. It is not intended for mammographic applications.

    The Digital radiography X-ray system uses digital detectors for generating diagnostic images by converting X-rays into image signals.

    Device Description

    The Digital Radiography X-ray System is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs. The software is of a Basic level of concern and is also based on a predicate device.

    AI/ML Overview

    The provided documents describe the Digital Radiography X-ray System (Sontu100-Rad(E), Sontu300-Mars(E)) and its substantial equivalence to predicate devices (K220919 and K213700). However, the document does not contain specific acceptance criteria, reported device performance metrics in numerical form, details about a clinical study for comparative effectiveness, or information regarding ground truth establishment beyond comparison to predicate device characteristics and phantom/patient images.

    Based on the information provided, here's what can be extracted:

    1. A table of acceptance criteria and the reported device performance

    The document mentions that "all the performance parameters of flat panel detector meet the requirements, and the performance parameters of proposed detector and predicate detector are the same." It also states that a "concurrence evaluation on a certain number of clinical images from the same phantom or patients" was performed, and the results "show the ability of the device to provide images with equivalent diagnostic capability to those of the cleared predicate devices."
    However, specific quantitative acceptance criteria and their corresponding reported performance values are not provided in the document. The document primarily focuses on demonstrating substantial equivalence through component and technical characteristic comparisons with predicate devices and compliance with relevant IEC and ASTM standards.

    2. Sample size used for the test set and the data provenance

    The document mentions "a certain number of clinical images from the same phantom or patients" for the concurrence evaluation.

    • Sample size: "a certain number" (specific number not provided).
    • Data provenance: "clinical images from the same phantom or patients". The country of origin is not specified, nor is whether the data is retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document.

    4. Adjudication method for the test set

    This information is not provided in the document.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC study: The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study.
    • AI assistance: The device described is a Digital Radiography X-ray System, which is hardware for generating images. There is no mention of AI assistance or an AI component in this device.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Since there is no mention of an AI algorithm or software component for image analysis (beyond the basic level of concern software managing the X-ray system), a standalone algorithm-only performance study like this is not applicable and not mentioned. The "performance data" refers to the overall X-ray system and its detector's image quality.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document indicates that the performance of the flat panel detectors was evaluated based on standard image quality metrics (Effect Image Area, Linear Dose Range, Linear Dynamic Range, Spatial Resolution, Low Contrast Resolution, Flat Uniformity, Modulation Transfer Function, Detective Quantum Efficiency, Artifact, Erasure Thoroughness) and comparison to predicate devices. For clinical images, a "concurrence evaluation" was performed to show "equivalent diagnostic capability to those of the cleared predicate devices." This suggests the ground truth was implied to be equivalent to the diagnostic output of the predicate devices, likely through comparison of images by human readers, but the specifics of how this diagnostic capability was established as ground truth (e.g., expert consensus) are not detailed.

    8. The sample size for the training set

    The document describes an X-ray imaging system, not a device that uses a training set for machine learning. Therefore, a "training set" in this context is not applicable and not mentioned.

    9. How the ground truth for the training set was established

    As there is no training set for this type of device, this information is not applicable.


    K Number: K242119
    Device Name: INNOVISION-EXII
    Date Cleared: 2025-01-03 (168 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    INNOVISION-EXII is a stationary X-ray system intended for obtaining radiographic images of various anatomical parts of the human body, in both pediatric and adult patients, in a clinical environment. INNOVISION-EXII is not intended for mammography, angiography, interventional, or fluoroscopy use.

    Device Description

    INNOVISION-EXII receives X-ray signals from X-ray irradiation and digitizes them into X-ray images, converting the digital images to DICOM format using the Elui imaging software. INNOVISION-EXII is a general radiography X-ray system and is not for mammography or fluoroscopy. In addition, the system must be operated by a user who is trained and licensed to handle a general radiography X-ray system and who meets the regulatory requirements for a Radiologic Technologist. Target areas for examination include the head, spine, chest, and abdomen for diagnostic screening of orthopedic, respiratory, or vertebral disc conditions. The system can image the patient in postures such as sitting, standing, or lying down. The system can be used for patients of all ages, but it should be used with care for pregnant women and infants. The INNOVISION-EXII system has no parts that directly touch the patient's body.

    AI/ML Overview

    The provided text describes a 510(k) summary for the INNOVISION-EXII stationary X-ray system, asserting its substantial equivalence to a predicate device (GXR-Series Diagnostic X-Ray System). However, the document does not contain information about acceptance criteria or a detailed study proving the device meets specific acceptance criteria related to its performance metrics for diagnostic imaging or AI assistance.

    The "Clinical testing" section on page 9 merely states: "Clinical image evaluation of INNOVISION-EXII has been performed. The evaluation results demonstrated that INNOVISION-EXII generated images are adequate and suitable for expressing contour and outlines. The image quality including contrast and density are appropriate and acceptable for diagnostic exams." This is a very general statement and does not provide specific acceptance criteria or detailed study results.

    Similarly, there are no details regarding AI performance (standalone or human-in-the-loop), sample sizes, ground truth establishment, or expert qualifications for such studies. The document focuses on establishing substantial equivalence based on intended use, technological characteristics, and compliance with various safety and performance standards (electrical safety, EMC, software validation, risk analysis).

    Therefore, based solely on the provided text, the requested information about acceptance criteria and a study proving the device meets these criteria cannot be extracted or inferred. The document is a 510(k) summary focused on demonstrating substantial equivalence, not a detailed clinical performance study report.

    Here is a breakdown of why each requested point cannot be addressed from the given text:

    1. A table of acceptance criteria and the reported device performance: Not present. The "clinical testing" section is too vague.
    2. Sample size used for the test set and the data provenance: Not present. No specific test set for clinical performance is detailed.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not present. No ground truth establishment process is described beyond a general "clinical image evaluation."
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not present.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not present. The document does not mention any AI component or MRMC study.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not present. No mention of an algorithm or standalone performance.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not detailed. Only a general "clinical image evaluation" is mentioned.
    8. The sample size for the training set: Not present. The document describes a medical imaging device, not a machine learning model requiring a training set.
    9. How the ground truth for the training set was established: Not applicable, as there's no mention of a training set or machine learning components.

    In summary, the provided FDA 510(k) summary largely focuses on engineering and regulatory compliance (electrical safety, EMC, software validation, comparison of technical specifications to a predicate device) to establish substantial equivalence, rather than detailed clinical performance metrics derived from a study with specific acceptance criteria and ground truth for diagnostic accuracy.


    K Number: K241068
    Device Name: uDR 780i
    Date Cleared: 2024-11-01 (196 days)
    Product Code: KPR
    Regulation Number: 892.1680

    Intended Use

    The uDR 780i Digital Medical X-ray system is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic exposures of the skull, spinal column, chest, abdomen, extremities, and other anatomic sites. Applications can be performed with the subject sitting, standing, or lying in the prone or supine position. Not for mammography.

    Device Description

    The uDR 780i is a digital radiography (DR) system that is designed to provide radiography examinations of sitting, standing or lying patients. It consists of the following components: Tube Ceiling Suspension with tube and collimator, Bucky Wall Stand, Elevating Table, High Voltage Generator, wireless flat panel detectors and an acquisition workstation. The system generates images which can be transferred through DICOM network for printing, review and storage.
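
    To give a concrete sense of what transferring images "through DICOM network" involves at the protocol level, here is a minimal sketch of a DICOM C-STORE push using the open-source pynetdicom library; the file path, host, port, and AE titles are placeholders, and this is not the uDR 780i's actual acquisition software.

```python
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

# Read a previously acquired DICOM file (path is hypothetical).
ds = dcmread("dx_image.dcm")

# Application Entity for the sending workstation; request the standard storage contexts.
ae = AE(ae_title="DR_CONSOLE")
ae.requested_contexts = StoragePresentationContexts

# Address and AE title of the receiving PACS are placeholders.
assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)  # C-STORE request
    print("C-STORE status:", getattr(status, "Status", None))
    assoc.release()
else:
    print("Could not associate with the PACS")
```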

    AI/ML Overview

    The provided text describes the 510(k) summary for the uDR 780i Digital Medical X-ray system. This document outlines the device's technical specifications and compares it to a predicate device (also named uDR 780i, K173953) to demonstrate substantial equivalence, rather than providing a detailed study proving the device meets specific acceptance criteria with quantifiable metrics.

    Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert qualifications, and ground truth establishment from a clinical trial or performance study cannot be found in this document. This summary focuses on demonstrating that the new device is functionally identical or improved in a way that doesn't raise new safety or effectiveness concerns compared to a previously cleared device.

    However, I can extract information related to the "Clinical Image Evaluation" which serves as the closest equivalent to a performance study in this regulatory context:


    1. A table of acceptance criteria and the reported device performance

    The document does not provide a table of specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy) for a diagnostic AI device, nor does it report specific device performance metrics against such criteria. The clinical image evaluation is qualitative.

    Acceptance Criteria (Implied) | Reported Device Performance (Qualitative)
    Image quality sufficient for clinical diagnosis | "Each image was reviewed with a statement indicating that image quality is sufficient for clinical diagnosis."

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document states: "Sample images of chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided..."

    • Sample Size: Not specified beyond "Sample images." The exact number of images for each body part or the total number of images is not given.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While the applicant is based in Shanghai, China, it's not stated where the clinical images originated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document states: "...provided with a board certified radiologist to evaluate the image quality in this submission."

    • Number of Experts: One
    • Qualifications of Expert(s): "board certified radiologist." Specific experience level (e.g., years) is not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: Not applicable. Only one radiologist evaluated the images; therefore, no adjudication method was used.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: No, an MRMC comparative effectiveness study was not done. The evaluation was a qualitative assessment of image quality by a single radiologist, not a study of human reader performance with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: No, a standalone algorithm-only performance study was not done. The evaluation focused on the clinical image quality of the output from the "uDR 780i Digital Medical X-ray system," not an AI algorithm within it. The system itself is a hardware device for capturing X-ray images.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: The "ground truth" concerning image quality was established by the qualitative assessment of a "board certified radiologist" who determined if the image quality was "sufficient for clinical diagnosis." This is a form of expert opinion/assessment rather than a definitive medical ground truth like pathology or patient outcomes.

    8. The sample size for the training set

    • Training Set Sample Size: Not applicable/not mentioned. The document describes an X-ray imaging system, not an AI software component that would typically involve a "training set."

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set Establishment: Not applicable/not mentioned, as there is no mention of an AI component with a training set.

    Product Code: KPR

    Intended Use

    The EXSYS DEXi is a diagnostic X-ray system intended for use in generating radiographic images of human anatomy for general purpose. The system obtains necessary information of patient's anatomical structure by an image processing (workstation) after process of examination using radiation exposure with DR. This system is not intended for mammography applications.

    Device Description

    The EXSYS DEXi is a diagnostic X-ray system intended for use in generating radiographic images of human anatomy for general purpose. The system obtains necessary information of patient's anatomical structure by an image processing (workstation) after process of examination using radiation exposure with DR. This system is not intended for mammography applications.

    The EXSYS DEXi is composed of an X-ray generator, tube, collimator, tube stand, bucky stand, patient table, flat panel detector, and console.

    AI/ML Overview

    This FDA 510(k) summary does not contain information about acceptance criteria and device performance as it pertains to AI/ML or image analysis aspects. The document focuses on the substantial equivalence of the modified EXSYS DEXi diagnostic X-ray system to a previously cleared predicate device (K233530) based on hardware and software updates, and compliance with general safety and performance standards for X-ray systems.

    Specifically, the document refers to non-clinical data and verification and validation testing demonstrating compliance with various international and FDA-recognized consensus standards (e.g., IEC 60601 series for medical electrical equipment, ISO 14971 for risk management, IEC 62304 for medical device software, UL ANSI 2900-1 and IEC 81001-5-1 for cybersecurity). It states, "The test results support that all the specifications have met the acceptance criteria. Verification and validation testing were found acceptable to support the claim of substantial equivalence." However, it does not provide a table of acceptance criteria with reported device performance metrics in an AI/ML context, nor does it describe specific studies that would typically prove such performance (e.g., standalone performance studies, MRMC studies, details on ground truth establishment for a diagnostic algorithm, sample sizes for test/training sets relevant to AI performance).

    The "technological characteristics" table (Table 1) compares design parameters of the subject device (new models of EXSYS DEXi) with the predicate device, highlighting additions like new collimators, mechanical parts, detectors, and software (EConsole2). The discussion column for these additions generally states, "The system has been tested and there is 'No negative impact on safety or efficacy' and there are no new potential or increased safety risks concerning this difference." This refers to overall system safety and performance in line with a general X-ray system, not specific AI/ML diagnostic performance.

    Therefore, based on the provided document, the following information cannot be extracted:

    1. A table of acceptance criteria and the reported device performance (for AI-specific functions): Not provided. The document states general compliance with standards and "test results support that all the specifications have met the acceptance criteria," but does not detail these criteria or performance metrics specific to an AI component's diagnostic accuracy, sensitivity, specificity, etc.
    2. Sample size used for the test set and the data provenance: Not provided for AI/ML performance evaluation.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not provided for AI/ML performance evaluation.
    4. Adjudication method for the test set: Not provided for AI/ML performance evaluation.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size: Not mentioned.
    6. If a standalone (algorithm only without human-in-the-loop performance) was done: Not mentioned.
    7. The type of ground truth used: Not specified for AI/ML performance evaluation.
    8. The sample size for the training set: Not provided for AI/ML performance evaluation.
    9. How the ground truth for the training set was established: Not provided for AI/ML performance evaluation.

    The document states, "Clinical studies are unnecessary to validate the safety and effectiveness of the Stationary x-ray system, EXSYS DEXi, the subject of this 510(k) notification," further indicating that specific performance data from clinical trials or detailed AI algorithm validation studies (which typically involve such criteria) are not included in this submission summary. The software updates mentioned (EConsole2) were previously cleared via K240243, suggesting that any specific performance data for that software might be found in its own 510(k) submission, but not in this document.


    K Number
    K242678
    Date Cleared
    2024-10-01

    (25 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    KPR

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The Definium Pace Select ET is intended to generate digital radiographic images of the skull, spinal column, chest, abdomen, extremities, and other body parts in patients of all ages. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position and the system is intended for use in all routine radiography exams. Optional image pasting function enables the operator to stitch sequentially acquired radiographs into a single image. This device is not intended for mammographic applications.

    Device Description

    The Definium Pace Select ET Radiography X-ray System is designed as a modular system with components that include a 2-axis motorized tube stand with tube and auto collimator assembled on an elevating table, a motorized wall stand, a cabinet with X-ray high voltage generator, a wireless access point, wireless detectors, an acquisition workstation including a monitor and control box with hand-switch. The system generates diagnostic radiographic images which can be reviewed or managed locally and sent through a DICOM network for reviewing, storage and printing.

    By leveraging platform components / design, Definium Pace Select ET is similar to the predicate Definium Pace Select (K231892) and the reference Discovery XR656 HD (K191699) with regards to the user interface layout, patient worklist refresh and selection, protocol selection, image acquisition, and image processing based on the raw image. This product introduces a motorized tube stand (vertical travel and tube angulation) in place of the predicate's manual tube stand. The high voltage generator is new and is backwards compatible with the predicate's high voltage generator. This product also introduces Image Pasting on Table and Wall Stand Mode, Auto Tracking for the Wall Stand, Auto Angulation, Camera Workflow, DAP software calculation, a Siemens LED collimator, and an LCD touch screen console. The other minor changes include updates to components due to obsolescence.
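
    For context on the "DAP software calculation" feature listed above: dose-area product (DAP) is conventionally approximated as the air kerma at a reference plane multiplied by the collimated field area at that same plane, which makes the quantity roughly independent of where along the beam it is measured. The following Python snippet is a minimal sketch of that arithmetic only; the function name, parameter names, and numbers are illustrative assumptions and are not taken from the submission.

    ```python
    def dap_ugy_cm2(air_kerma_ugy: float, field_width_cm: float, field_height_cm: float) -> float:
        """Approximate dose-area product: air kerma (uGy) x collimated field area (cm^2).

        Kerma falls off roughly as 1/r^2 while the field area grows as r^2,
        so the product is nearly independent of the measurement distance.
        """
        return air_kerma_ugy * field_width_cm * field_height_cm


    if __name__ == "__main__":
        # Invented example: 120 uGy air kerma over a 35 cm x 43 cm collimated field.
        dap = dap_ugy_cm2(120.0, 35.0, 43.0)
        print(f"DAP ~= {dap:.0f} uGy*cm^2 ({dap / 1e6:.3f} Gy*cm^2)")
    ```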

    AI/ML Overview

    The provided document is a 510(k) premarket notification for a medical device called "Definium Pace Select ET." It describes the device, its intended use, and compares its technological characteristics to a predicate device ("Definium Pace Select") and a reference device ("Discovery XR656 HD").

    However, this document does not contain any performance data or details of a clinical study that proves the device meets specific acceptance criteria based on AI or human reading performance.

    The "PERFORMANCE DATA" section explicitly states: "The Definium Pace Select ET does not contain clinical testing data." Instead, it lists non-clinical tests performed, such as Risk Analysis, Requirements Reviews, Design Reviews, and various levels of verification testing (unit, integration, performance, safety, simulated use). These non-clinical tests are aimed at confirming the safety and effectiveness of the device as it relates to changes from the predicate, rather than evaluating specific clinical diagnostic performance metrics with a test set, ground truth, and human readers.

    Therefore, I cannot fulfill your request to describe the acceptance criteria and the study that proves the device meets the acceptance criteria, as the provided input does not contain this information.


    K Number
    K242478
    Date Cleared
    2024-09-19

    (29 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    KPR

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    The GF85 Digital X-ray Imaging System is intended for use in generating radiographic images of human anatomy by a qualified/trained doctor or technician. This device is not intended for mammographic applications.

    Device Description

    The GF85 digital X-ray imaging system captures images by transmitting X-rays through a patient's body. The X-rays passing through the patient reach the detector and are converted into electrical signals. These signals undergo amplification and digital data conversion in the signal processing of S-station, the Operation Software (OS) of the Samsung Digital Diagnostic X-ray System, and are saved as DICOM files, the standard format for medical imaging. The captured images are post-processed by an Image Post-processing Engine (IPE) installed exclusively in S-station and sent to the Picture Archiving & Communication System (PACS) server for image reading. The GF85 Digital X-ray Imaging System includes two models, the GF85-3P and GF85-SP, which differ primarily in their High Voltage Generator (HVG) configurations. Specifically, the GF85-3P offers capacities of 80 kW, 65 kW, and 50 kW with three phases, whereas the GF85-SP provides a capacity of 40 kW with a single phase. All other specifications are consistent across both models.
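
    The description above amounts to a generic digital-radiography data path: detector signal → digitized pixel data → DICOM object → network transfer to PACS. Purely as an illustration of that path, the sketch below builds a minimal DICOM dataset and sends it with a DICOM C-STORE using the open-source pydicom and pynetdicom libraries; these libraries, the placeholder host/port/AE titles, and all metadata values are assumptions for illustration and are not part of the GF85 or its S-station/IPE software.

    ```python
    # Minimal sketch of a generic acquisition-to-PACS flow (NOT vendor software).
    import numpy as np
    from pydicom.dataset import Dataset, FileMetaDataset
    from pydicom.uid import ExplicitVRLittleEndian, generate_uid
    from pynetdicom import AE

    DX_FOR_PRESENTATION = "1.2.840.10008.5.1.4.1.1.1.1"  # Digital X-Ray Image Storage

    # 1. Detector output arrives as a 2-D array of digitized pixel values.
    pixels = np.random.randint(0, 2**14, size=(3000, 3000), dtype=np.uint16)

    # 2. Wrap the pixel data and minimal metadata in a DICOM dataset.
    ds = Dataset()
    ds.SOPClassUID = DX_FOR_PRESENTATION
    ds.SOPInstanceUID = generate_uid()
    ds.Modality = "DX"
    ds.Rows, ds.Columns = pixels.shape
    ds.SamplesPerPixel, ds.PixelRepresentation = 1, 0
    ds.BitsAllocated, ds.BitsStored, ds.HighBit = 16, 14, 13
    ds.PhotometricInterpretation = "MONOCHROME2"
    ds.PixelData = pixels.tobytes()
    ds.file_meta = FileMetaDataset()
    ds.file_meta.MediaStorageSOPClassUID = ds.SOPClassUID
    ds.file_meta.MediaStorageSOPInstanceUID = ds.SOPInstanceUID
    ds.file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

    # 3. Send the object to a PACS node via DICOM C-STORE (placeholder address).
    ae = AE(ae_title="ACQ_STATION")
    ae.add_requested_context(DX_FOR_PRESENTATION)
    assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
    if assoc.is_established:
        status = assoc.send_c_store(ds)
        print("C-STORE status:", status.Status if status else "no response")
        assoc.release()
    ```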

    AI/ML Overview

    The provided text describes the GF85 Digital X-ray Imaging System and its substantial equivalence to predicate devices (GC85A and GM85). The key study mentioned for demonstrating equivalence is a non-clinical phantom image evaluation.

    Here's a breakdown of the requested information based on the provided text, with "N/A" for information not present:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria in a table format. Instead, it relies on a comparison with predicate devices and general statements that "All test results were satisfying the standards" and that the phantom image evaluations were "found to be equivalent to the predicate device."

    Criterion Type | Acceptance Criteria (from document) | Reported Device Performance (from document)
    Dosimetric Performance | Similar characteristics in exposure output and Half Value Layer compared to the predicate devices. | "the proposed device has the similar characteristics in the exposure output and Half Value Layer even for different tube combinations."
    Phantom Image Evaluation | Image quality equivalent to the predicate device. | "The phantom image evaluations were performed in accordance with the FDA guidance for the submission of 510(k)'s for Solid State X-ray Image Devices and were evaluated by three different professional radiologists and found to be equivalent to the predicate device." "These reports show that the proposed device is substantially equivalent to the proposed devices."
    Electrical Safety | Compliance with ANSI AAMI ES60601-1. | "All test results were satisfying the standards."
    EMC | Compliance with IEC 60601-1-2. | "EMC testing was conducted in accordance with standard IEC 60601-1-2. All test results were satisfying the standards."
    Radiation Protection | Compliance with IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54, 21 CFR 1020.30 and 21 CFR 1020.31. | "All test results were satisfying the standards."
    Software/Cybersecurity | Compliance with the "Cybersecurity in Medical Devices" and "Content of Premarket Submissions for Device Software Functions" guidances. | Compliance with the guidances is listed, but specific cybersecurity or software performance results are not detailed beyond "All test results were satisfying the standards."
    Wireless Function | Verification in accordance with the FDA guidance on Radio Frequency Wireless Technology in Medical Devices. | "Wireless function was tested and verified followed by guidance, Radio frequency Wireless Technology in Medical Devices. All test results were satisfying the standards."
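
    For background on the Half Value Layer (HVL) comparison in the dosimetric row above: the HVL is the thickness of added attenuator (typically aluminum for diagnostic beams) that reduces the measured air kerma to half its open-beam value, and in practice it is interpolated from kerma readings taken behind increasing filter thicknesses. The sketch below is a generic illustration of that interpolation; the measurement values are invented and the procedure is not taken from the submission.

    ```python
    import math


    def half_value_layer(thickness_mm, kerma_reading):
        """Estimate HVL by log-linear interpolation between the two attenuator
        thicknesses whose readings bracket 50% of the open-beam reading.

        thickness_mm  -- attenuator thicknesses in mm (first entry must be 0)
        kerma_reading -- corresponding air-kerma readings (any consistent units)
        """
        half = kerma_reading[0] / 2.0
        for i in range(1, len(kerma_reading)):
            if kerma_reading[i] <= half:
                t0, t1 = thickness_mm[i - 1], thickness_mm[i]
                k0, k1 = kerma_reading[i - 1], kerma_reading[i]
                # Assume exponential attenuation between the bracketing points.
                return t0 + (t1 - t0) * math.log(k0 / half) / math.log(k0 / k1)
        raise ValueError("readings never fall below half of the open-beam value")


    if __name__ == "__main__":
        # Invented example: open beam plus readings behind 1-4 mm of added aluminum.
        mm_al = [0.0, 1.0, 2.0, 3.0, 4.0]
        kerma = [100.0, 78.0, 62.0, 50.5, 41.0]
        print(f"Estimated HVL ~= {half_value_layer(mm_al, kerma):.2f} mm Al")
    ```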

    2. Sample size used for the test set and the data provenance

    The document states "The phantom image evaluations were performed...".

    • Sample size for test set: The document does not specify the number of phantom images used in the evaluation.
    • Data provenance: The images are of test phantoms rather than human subjects. The country of origin of the data is not specified, but the submission is from Samsung Electronics Co., Ltd. in South Korea.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document states, "...and were evaluated by three different professional radiologists..."

    • Number of experts: Three.
    • Qualifications of experts: "professional radiologists." Specific experience levels (e.g., "10 years of experience") are not provided.

    4. Adjudication method for the test set

    The document states the images "were evaluated by three different professional radiologists and found to be equivalent to the predicate device." It does not specify a formal adjudication method (e.g., 2+1, 3+1 consensus). It implies a collective agreement on equivalence.
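
    To make the "2+1" convention mentioned above concrete: two primary readers rate each case independently, and a third reader is consulted only for the cases on which they disagree. The sketch below is a purely hypothetical illustration of that rule; no such procedure (and no such labels) appears in the submission.

    ```python
    def adjudicate_2_plus_1(reader1: str, reader2: str, reader3: str) -> str:
        """Generic '2+1' adjudication: accept the primary readers' label when they
        agree; otherwise the third (adjudicating) reader's label decides."""
        return reader1 if reader1 == reader2 else reader3


    # Illustrative ratings for three cases (labels are arbitrary placeholders).
    cases = [
        ("equivalent", "equivalent", "equivalent"),  # agreement -> no adjudication
        ("equivalent", "inferior", "equivalent"),    # disagreement -> reader 3 decides
        ("inferior", "inferior", "equivalent"),      # agreement -> reader 3 ignored
    ]
    for r1, r2, r3 in cases:
        print(adjudicate_2_plus_1(r1, r2, r3))
    ```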

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI assistance versus without

    • MRMC study: No. The study conducted was a non-clinical phantom image evaluation to compare the device's image quality to a predicate device, as evaluated by radiologists. It was not a comparative effectiveness study involving human readers' performance with and without AI assistance.
    • Effect size of human reader improvement with AI: N/A, as no such study was conducted or reported.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • The primary evaluation described (phantom image reviews) involved human radiologists assessing images generated by the device. The device itself is an X-ray imaging system, not an AI algorithm performing diagnostic interpretation. While the software features list "Lunit INSIGHT CXR Triage" as a new feature for the GF85, the provided text does not detail any standalone performance study specifically for this AI component or any other algorithmic component of the GF85. The "phantom image evaluations" are about the device's image quality, not an algorithm's diagnostic performance.

    7. The type of ground truth used

    For the phantom image evaluation, the ground truth is implicitly defined by the known characteristics and standards of the phantom images themselves, as well as the comparison against the predicate device. It is not expert consensus on pathology, or outcomes data.

    8. The sample size for the training set

    • Training set sample size: N/A. The document describes a traditional X-ray imaging system and its non-clinical evaluation for substantial equivalence. It does not refer to a machine learning model's training set. While there are "Software Features" like "Lunit INSIGHT CXR Triage" which would involve AI, the document does not discuss their training data.

    9. How the ground truth for the training set was established

    • Ground truth for training set: N/A, as no training set for a machine learning model is directly discussed for the GF85 system itself in the context of its substantial equivalence.
