
510(k) Data Aggregation

    K Number: K243081
    Manufacturer:
    Date Cleared: 2025-02-14 (137 days)
    Product Code:
    Regulation Number: 892.1750

    Reference & Predicate Devices
    Reference Devices: K231796

    Intended Use

    Green X 21 (Model: PHT-90CHO) is intended to produce panoramic, cephalometric, or 3D digital X-ray images. It provides diagnostic details of the dento-maxillofacial, sinus, TMJ, and ENT regions for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.

    Device Description

    Green X 21 (Model: PHT-90CHO) is an advanced 6-in-1 digital X-ray system specifically designed for 2D and 3D dental radiography. This system features six imaging modalities: PANO, CEPH (optional), DENTAL CT, ENT CT, MODEL, and FACE SCAN, all integrated into a single unit.

    Green X 21's imaging system is based on digital TFT detectors, accompanied by imaging viewers and an X-ray generator.

    The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS (K210329)). The subject device differs from the predicate device in the following ways: it is equipped with new X-ray detectors for CT/PANO and CEPH, a modification that results in a different maximum FOV provided in a single scan in CT mode compared to the predicate device. For the CEPH modality, the subject device utilizes a one-shot image capture method.

    Additionally, the subject device includes new modalities such as ENT CT and FACE SCAN with a face scanner, along with new software functions, including Auto Pano, Smart Focus, and Scout.

    AI/ML Overview

    The provided document describes a 510(k) premarket notification for the "Green X 21 (PHT-90CHO)" digital X-ray system. The core of the submission is to demonstrate substantial equivalence to a predicate device, the "Green X 18 (PHT-75CHS)". The document focuses heavily on comparing technical specifications and performance data to support this claim, particularly for the new components and features of the Green X 21.

    However, it's crucial to note that this document does not describe a study that proves the device meets specific acceptance criteria in the context of an AI/human-in-the-loop performance study. Instead, it focuses on engineering performance criteria related to image quality and safety for an X-ray imaging device, and comparing these to a predicate device. The "acceptance criteria" discussed are primarily technical specifications and performance benchmarks for the X-ray system components (detectors, imaging quality metrics), rather than clinical performance metrics in disease detection with AI assistance.

    Therefore, many of the requested items (e.g., sample size for test set, number of experts for ground truth, adjudication method, MRMC study, standalone performance) are not applicable or not detailed in this submission because the device in question is an X-ray imaging system, not an AI-based diagnostic tool. The "new software functions" mentioned (Auto Pano, Smart Focus, Scout) are described as image reconstruction or manipulation tools, not AI algorithms for clinical diagnosis.

    Here's a breakdown based on the information available, addressing the points where possible and noting when information is absent or not relevant to the provided text:

    Acceptance Criteria and Reported Device Performance

    Given that this is an X-ray imaging device and not an AI diagnostic tool, the acceptance criteria are generally related to image quality, safety, and functional performance, benchmarked against standards and the predicate device.

    Table of Acceptance Criteria and Reported Device Performance (as inferred from the document):

    | Acceptance Criteria Category | Specific Criteria (Inferred) | Reported Device Performance (Green X 21) |
    |---|---|---|
    | **I. Imaging Performance (New X-ray Detectors)** | | |
    | 1. CT/PANO Detector (Jupi1012X) | Modulation Transfer Function (MTF), Detective Quantum Efficiency (DQE), and Noise Power Spectrum (NPS) comparable or superior to predicate detector (Xmaru1524CF Master Plus OP) | Jupi1012X showed more stable or superior performance for DQE, MTF, and NPS, particularly better stability in high-frequency regions; it could distinguish up to 3.5 line pairs (MTF 10% criterion), compared to 2.5 line pairs for the predicate |
    | | Pixel size similar to predicate device | "Very similar" to predicate; image test patterns demonstrated test objects across the same spatial frequency range without aliasing |
    | 2. CEPH Detector (Venu1012VD) | MTF, DQE, and NPS comparable or superior to predicate (Xmaru2602CF) | Venu1012VD exhibited superior performance in DQE, MTF, and NPS (except the predicate's NPS, which benefits from lower noise); higher MTF values indicate sharper images |
    | | Pixel size similar to predicate device | Similar to predicate's 100 µm (non-binning); image test patterns demonstrated test objects across the same spatial frequency range without aliasing |
    | 3. Overall Diagnostic Image Quality | Equivalent to or better than predicate device | "Equivalently or better than the predicate device in overall image quality" |
    | **II. Compliance with Standards and Guidelines** | | |
    | 1. IEC 61223-3-5 (CT) | Quantitative testing for noise, contrast, CNR, MTF 10% | All parameters met the specified standards |
    | 2. IEC 61223-3-4 (Dental X-ray) | Quantitative assessment of line-pair resolution and low-contrast performance (PANO/CEPH) | All parameters met the specified standards |
    | 3. Software/Firmware (Basic Documentation Level) | Adherence to FDA guidance "Content of Premarket Submissions for Device Software Functions" | Software verification and validation conducted and documented |
    | 4. Safety & EMC | Compliance with IEC 60601-1, IEC 60601-1-3, and IEC 60601-2-63 for electrical, mechanical, and environmental safety and performance; IEC 60601-1-2 for EMC | Testing performed and results were satisfactory |
    | **III. Functional Equivalence / Performance of New Modalities/Functions** | | |
    | 1. ENT CT Modality | Meet image quality standards of Dental CT; limit radiation exposure to the ENT region | "Adheres to the same image quality standards as the Dental CT modality"; specifically designed to limit radiation exposure to the ENT region |
    | 2. FACE SCAN Modality | Intended for aesthetic consultation, not clinical diagnosis; meets internal criteria | "Not designed for clinical diagnostic purposes"; "meets the internally established criteria and has been designed to perform as intended" |
    | 3. Auto Pano, Smart Focus, Scout Modes | Function as intended, similar to previously cleared devices (Green X 12, K231796) | Evaluated according to IEC 61223-3-4 and IEC 61223-3-5; "both standard requirements were met" |
    | **IV. Dosimetric Performance (DAP)** | | |
    | 1. PANO Modality DAP | DAP similar to predicate device under the same exposure conditions | "DAP measurement results are similar" when tested under the same exposure conditions (High Resolution Mode) |
    | 2. CEPH Modality DAP | Balance between increased DAP (one-shot capture) and reduced exposure time/motion artifacts | DAP "more than twice that of the predicate device" (one-shot vs. scan type), but the one-shot type operates "with approximately one-fourth the exposure time... This reduces motion artifacts" |
    | 3. CT Modality DAP | Overall DAP performance balanced against FOV and image quality | "Slight increase in DAP for the subject device" for most comparable/equivalent FOVs; however, the "maximum FOV provided by the subject device demonstrated a reduced radiation dose compared to the predicate device" |
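
The "MTF 10%" figures above come from reading how fine a line-pair pattern remains visible before modulation falls below 10% of its low-frequency value. As a rough sketch of how such a resolution limit is read off a measured MTF curve (the sample points below are illustrative, not the submission's actual data):

```python
def resolution_at_mtf(curve, threshold=0.10):
    """Given a measured MTF curve as (frequency_lp_per_mm, mtf) pairs
    with MTF descending, return the spatial frequency at which the MTF
    first drops to `threshold`, linearly interpolated between samples."""
    for (f0, m0), (f1, m1) in zip(curve, curve[1:]):
        if m0 >= threshold >= m1:
            if m0 == m1:
                return f0
            return f0 + (m0 - threshold) * (f1 - f0) / (m0 - m1)
    return None  # curve never reaches the threshold

# Illustrative (hypothetical) MTF samples for a detector:
mtf_curve = [(0.0, 1.00), (0.5, 0.85), (1.0, 0.65), (1.5, 0.45),
             (2.0, 0.30), (2.5, 0.20), (3.0, 0.13), (3.5, 0.10), (4.0, 0.07)]
print(resolution_at_mtf(mtf_curve))  # ≈ 3.5 lp/mm
```

With this hypothetical curve, the MTF 10% criterion yields the 3.5 lp/mm figure reported for the Jupi1012X detector.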

    Study Details (as far as applicable and available)

    1. Sample size used for the test set and the data provenance:

      • Test Set Description: The "test set" in this context refers to data collected through bench testing using phantoms for image quality assessment, and potentially clinical images for subjective comparison, rather than a clinical trial cohort.
      • Sample Size: Not specified in terms of number of patient images. The testing was conducted on the device itself using phantom studies (e.g., test patterns for MTF, line pairs for resolution, phantoms for low contrast).
      • Data Provenance: The bench tests were conducted in a laboratory ("in a laboratory using the same test protocol as the predicate device"). The "Clinical consideration" section for image quality evaluation implies some clinical image review, but the origin (e.g., country, retrospective/prospective) of these potential clinical images is not detailed.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not Applicable in the traditional sense of an AI study. Ground truth for an X-ray imaging device's performance is typically established through quantitative measurements using phantoms and technical specifications, not expert consensus on medical findings.
      • The document mentions "Image Quality Evaluation Report and Clinical consideration" and concludes "the subject device performed equivalently or better than the predicate device in overall image quality." This implies some form of expert review for subjective image quality, but the number or qualifications of these "experts" are not detailed.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not Applicable. As noted above, this is not an AI diagnostic study with a human-in-the-loop component requiring adjudication of disease findings. The evaluations are primarily technical and quantitative measurements comparing physical properties and image output.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No. An MRMC study was not described. This is an X-ray imaging system, not an AI diagnostic software that assists human readers. The new software functions (Auto Pano, Smart Focus, Scout) described are image manipulation/reconstruction features, not AI for diagnostic assistance.
    5. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop) was done:

      • Not applicable in the context of an AI algorithm. The device is an X-ray system; its "standalone" performance refers to its ability to produce images according to technical specifications, which was assessed through bench testing (MTF, DQE, NPS, etc.). There is no mention of a diagnostic AI algorithm that operates standalone.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Quantitative Phantom Measurements and Technical Specifications: For image quality (MTF, DQE, NPS, line pair resolution, contrast, noise, CNR), the ground truth is established by physical measurements using standardized phantoms and reference values as per IEC standards.
      • Predicate Device Performance: A key "ground truth" for substantial equivalence is the established performance of the predicate device. The new device's performance is compared against this benchmark.
      • Internal Criteria: For functionalities like FACE SCAN, "internally established criteria" were used as a benchmark.
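
Several of those phantom metrics (contrast, noise, CNR) reduce to simple ROI statistics. As a generic sketch only, here is the textbook contrast-to-noise ratio between two phantom ROIs; note that IEC 61223-3-5 prescribes its own ROI placement and measurement protocol, and the submission does not publish its raw values:

```python
from statistics import mean, stdev

def cnr(signal_roi, background_roi):
    """Textbook contrast-to-noise ratio between two regions of interest:
    absolute mean difference divided by the background noise (sample
    standard deviation). Definitions vary across standards."""
    return abs(mean(signal_roi) - mean(background_roi)) / stdev(background_roi)

# Hypothetical pixel values sampled from two phantom ROIs:
signal = [120, 122, 118, 121, 119]
background = [100, 101, 99, 100, 100]
print(round(cnr(signal, background), 1))  # → 28.3
```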
    7. The sample size for the training set:

      • Not Applicable. This document describes an X-ray imaging device, not an AI model that requires a training set. The software functions mentioned (Auto Pano, Smart Focus, Scout) are described as computational algorithms for image reconstruction or enhancement, not machine learning models that learn from a training dataset.
    8. How the ground truth for the training set was established:

      • Not Applicable. No training set for an AI model is described.

    Reference Devices: K231796

    Intended Use

    The unit is intended to produce panoramic or cephalometric digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by physicians, dentists, and x-ray technicians.

    Device Description

    The device is an advanced 2-in-1 digital X-ray imaging system that incorporates PANO and CEPH (optional) imaging capabilities into a single system and acquires 2D diagnostic image data in conventional panoramic and cephalometric modes.
    The device is not intended for CBCT imaging.
    VistaPano S is the panoramic-only model corresponding to VistaPano S Ceph.
    ProVecta S-Pan Ceph and ProVecta S-Pan are alternative models for VistaPano S Ceph and VistaPano S, respectively.
    The subject device has different model names designated for different US distributors:

    • VistaPano S Ceph, VistaPano S: DÜRR DENTAL
    • ProVecta S-Pan Ceph, ProVecta S-Pan: AIR TECHNIQUES

    Key components of the device:
      1. VistaPano S Ceph 2.0 (Model: VistaPano S Ceph), VistaPano S 2.0 (Model: VistaPano S) digital X-ray equipment (alternate: ProVecta S-Pan Ceph 2.0 (Model: ProVecta S-Pan Ceph), ProVecta S-Pan 2.0 (Model: ProVecta S-Pan))
      2. SSXI detector: Xmaru1501CF-PLUS, Xmaru2602CF
      3. X-ray generator
      4. PC system
      5. Imaging software
    AI/ML Overview

    The provided text describes the substantial equivalence of the new VATECH X-ray imaging systems (VistaPano S Ceph 2.0, VistaPano S 2.0, ProVecta S-Pan Ceph 2.0, ProVecta S-Pan 2.0) to their predicate device (PaX-i Plus/PaX-i Insight, K170731).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as numerical metrics in a table. Instead, the document focuses on demonstrating substantial equivalence to a predicate device. This is a common approach for medical device clearance, where the new device is shown to be as safe and effective as a legally marketed device. The "performance" in this context refers to demonstrating that the new device functions similarly or better than the predicate, especially for the new "non-binning" mode.

    The study aims to show that the new device's performance is equivalent or better than the predicate device, particularly for the new "HD mode (non-binning)" in CEPH imaging. The primary comparison points are:

    | Acceptance Criteria (Implied by Substantial Equivalence Goal) | Reported Device Performance (vs. Predicate) |
    |---|---|
    | PANO mode image quality: equivalent to predicate | "Similar" (implied "equivalent") |
    | CEPH mode image quality (SD / 2x2 binning): equivalent to predicate | "Same" (as the predicate's Fast mode) |
    | CEPH mode image quality (HD / non-binning): equivalent or better than predicate | "Better performance"; "performed better or equivalent in line pair resolution than the predicate device" |
    | Dosimetric performance (DAP): similar to predicate | "DAP measurement in the PANO mode of each device under the same X-ray exposure conditions... was similar"; in SD mode, the "X-ray exposure conditions (exposure time, tube voltage, tube current) are the same with the Fast mode of the predicate device" |
    | Biocompatibility of components: meets ISO 10993-1 | "Biocompatibility testing results showed that the device's accessory part[s] are biocompatible and safe for its intended use" |
    | Software functionality and safety: meets FDA guidance for a "moderate" level of concern | "Software verification and validation were conducted and documented... The software for this device was considered as a 'moderate' level of concern"; cybersecurity guidance was also applied |
    | Electrical, mechanical, environmental safety & EMC: conforms to relevant IEC standards | "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1... IEC 60601-1-3... IEC 60601-2-63... EMC testing were conducted in accordance with standard IEC 60601-1-2... All test results were satisfactory" |
    | Conformity to EPRC standards | "The manufacturing facility is in conformance with the relevant EPRC standards... and the records are available for review" |
    | DICOM conformity | "The device conforms to the provisions of NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set" |
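
The DICOM conformance claim covers the full NEMA PS3.1-3.18 set; at the file level, a PS3.10 object is recognizable by its 128-byte preamble followed by the ASCII magic "DICM". A minimal sketch of that check (the helper name is ours, not from any DICOM toolkit):

```python
def is_part10_dicom(data: bytes) -> bool:
    """Check the DICOM PS3.10 file signature: a 128-byte preamble
    (content unspecified) followed by the four characters 'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Fabricated header bytes, for illustration only:
print(is_part10_dicom(b"\x00" * 128 + b"DICM"))   # → True
print(is_part10_dicom(b"JFIF image, not DICOM"))  # → False
```

Full conformance is of course far broader (information object definitions, service classes, transfer syntaxes); this only illustrates the file-format entry point.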

    Study Details:

    The provided document is a 510(k) summary, not a detailed study report. Therefore, some specific details about the study methodology (like expert qualifications or full sample sizes for clinical images) are not granularly described. However, we can infer some information:

    1. Sample sizes used for the test set and the data provenance:

      • The document states "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, the exact sample size for this clinical image evaluation (the "test set" in AI/ML terms) is not specified.
      • The data provenance is implied to be from a retrospective collection of images, likely from VATECH's own testing/development or existing clinical sites that used the predicate device and potentially early versions of the subject device. The country of origin for the clinical images is not explicitly stated, but given the manufacturer is based in Korea ("VATECH Co., Ltd. Address: 13, Samsung 1-ro 2-gil, Hwaseong-si, Gyeonggi-do, 18449, Korea"), it's reasonable to infer some data might originate from there.
      • For the bench testing, the sample size is also not specified, but it would involve phantoms.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document mentions "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, it does not specify the number of experts, their qualifications, or how they established "ground truth" for these clinical images. The evaluation is described in general terms, implying a qualitative assessment of general image quality ("general image quality of the subject device is equivalent or better than the predicate device").
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The document does not describe any formal adjudication method for the clinical image evaluation. It simply states "evaluated and compared."
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC study was NOT done. This device is an X-ray imaging system, not an AI-assisted diagnostic tool for interpretation. The study focused on demonstrating the image quality of the system itself (hardware and associated basic image processing software) as being substantially equivalent or better than a predicate system, not on improving human reader performance with AI assistance. The "VisionX 3.0" software is an image viewing program, not an AI interpretation tool.
    5. If a standalone performance assessment (i.e., algorithm only, without human-in-the-loop) was done:

      • This is not applicable in the sense of a diagnostic AI algorithm. The performance evaluation is inherently about the "algorithm" and physics of the X-ray system itself (detector, X-ray generator, image processing pipeline) without human interaction for image generation, but humans are integral for image interpretation. The device's performance (image quality, resolution, DAP) is measured directly, similar to a standalone evaluation of a sensor's capabilities.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the bench testing, the ground truth was based on physical phantom measurements (e.g., line pair resolution, contrast using phantoms).
      • For the clinical image evaluation, the "ground truth" or reference was implicitly the subjective assessment of "general image quality" by unspecified evaluators, compared to images from the predicate device. There is no mention of an objective clinical ground truth like pathology or patient outcomes.
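
Phantom line-pair readings like these are bounded above by the detector's sampling (Nyquist) limit, 1/(2 × pixel pitch); for example, the 100 µm non-binning pixel size cited for the CEPH detector caps resolution at 5 lp/mm. A minimal sketch of that arithmetic:

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Sampling-limited resolution of a digital detector:
    Nyquist frequency = 1 / (2 * pixel pitch), in line pairs per mm."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# 100 um pitch (the non-binning CEPH pixel size):
print(nyquist_lp_per_mm(100.0))  # → 5.0 lp/mm
```

Measured MTF-based resolution will always sit at or below this limit, which is why the phantom results are compared against the predicate rather than against the theoretical maximum.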
    7. The sample size for the training set:

      • The document describes an X-ray imaging system, not a device incorporating a machine learning model that requires a "training set" in the conventional sense. Therefore, there is no mention of a training set sample size. The software mentioned (VisionX 3.0) is a general image viewing program, not a deep learning model requiring a specific training dataset.
    8. How the ground truth for the training set was established:

      • Not applicable, as no external "training set" for a machine learning model is described.