K Number: K162660
Manufacturer: VATECH Co., Ltd.
Date Cleared: 2016-10-20 (24 days)
Product Code:
Regulation Number: 892.1750
Panel: RA
Reference & Predicate Devices:
Intended Use

PHT-35LHS is a computed tomography x-ray system intended to produce panoramic, cephalometric or cross-sectional images of the oral anatomy by computer reconstruction of x-ray image data from the same axial plane taken at different angles. It provides diagnostic details of the maxillofacial areas for dental treatment in adult and pediatric dentistry. The system also utilizes carpal images for orthodontic treatment. The device is operated and used by physicians, dentists and x-ray technicians.
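The phrase "computer reconstruction of x-ray image data from the same axial plane taken at different angles" refers to tomographic reconstruction from projections. Purely as an illustration of that general principle, the sketch below shows textbook parallel-beam filtered back-projection in NumPy; the ramp filter, nearest-neighbor back-projection, and scaling are standard simplifications, and this is not the reconstruction algorithm actually used in PHT-35LHS, which is not disclosed here.

```python
# Illustrative parallel-beam filtered back-projection (NOT the PHT-35LHS algorithm).
import numpy as np

def filtered_back_projection(sinogram, angles_deg):
    """Reconstruct one axial slice.

    sinogram   : (n_angles, n_detectors) array of parallel-beam projections.
    angles_deg : projection angles in degrees, one per sinogram row.
    Returns an (n_detectors, n_detectors) reconstructed slice.
    """
    n_angles, n_det = sinogram.shape

    # Ramp-filter each projection in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Smear (back-project) each filtered projection across the image grid.
    center = n_det // 2
    coords = np.arange(n_det) - center
    X, Y = np.meshgrid(coords, coords)
    recon = np.zeros((n_det, n_det))
    for projection, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate for each pixel
        idx = np.clip(np.rint(t + center).astype(int), 0, n_det - 1)
        recon += projection[idx]

    return recon * np.pi / (2 * n_angles)
```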

Device Description

Green Smart (PHT-35LHS) is an advanced 5-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT, MODEL Scan and 3D PHOTO (optional) imaging capabilities into a single system. Green Smart (PHT-35LHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists. Specifically designed for dental radiography, Green Smart (PHT-35LHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head and neck for oral surgery, implant and orthodontic treatment. With the Auto Pano function, it also reconstructs the 3D CT data and produces 2D panoramic images without an additional X-ray scan. Green Smart (PHT-35LHS) can also acquire 2D diagnostic image data in conventional panoramic and cephalometric imaging.
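The Auto Pano claim, producing a 2D panoramic view from already-acquired 3D CT data, can be pictured as projecting the volume through a slab that follows the dental arch. The minimal sketch below is a generic illustration under that assumption; the `synthesize_panorama` helper, the ray-sum (mean) projection, the slab width, and the hand-supplied arch curve are hypothetical choices, and VATECH's actual method is not described in the submission.

```python
# Generic slab projection of a CBCT volume along a dental arch curve
# (illustrative only; not VATECH's Auto Pano implementation).
import numpy as np

def synthesize_panorama(volume, arch_xy, slab_mm=20.0, voxel_mm=0.3, n_samples=64):
    """Ray-sum projection through a CBCT volume along normals of an arch curve.

    volume  : 3D array indexed as (z, y, x), axial slices stacked in z.
    arch_xy : (N, 2) array of (x, y) voxel coordinates tracing the dental arch.
    Returns a 2D panoramic image of shape (volume.shape[0], N).
    """
    nz, ny, nx = volume.shape
    half = (slab_mm / voxel_mm) / 2.0

    # Unit tangents along the arch, then in-plane normals (rotate 90 degrees).
    tangents = np.gradient(arch_xy.astype(float), axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-12
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

    pano = np.zeros((nz, arch_xy.shape[0]), dtype=np.float32)
    offsets = np.linspace(-half, half, n_samples)      # sample positions across the slab
    for i, (p, n) in enumerate(zip(arch_xy, normals)):
        xs = np.clip(np.rint(p[0] + offsets * n[0]).astype(int), 0, nx - 1)
        ys = np.clip(np.rint(p[1] + offsets * n[1]).astype(int), 0, ny - 1)
        # Average attenuation across the slab for every axial slice at once.
        pano[:, i] = volume[:, ys, xs].mean(axis=1)
    return pano
```

A production implementation would derive the arch curve from the volume itself and use proper interpolation rather than nearest-neighbor sampling; the sketch only shows the geometric idea.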

AI/ML Overview

The provided text is a 510(k) Summary for a medical device (Green Smart, Model PHT-35LHS) seeking FDA clearance by demonstrating substantial equivalence to a predicate device. The document focuses on showing performance similarity rather than establishing new clinical effectiveness with human readers. Therefore, several of the requested sections (such as MRMC studies, number of experts for ground truth, adjudication methods, and training-set details for AI) are not applicable to this type of regulatory submission, because no human-in-the-loop AI model is the subject of this 510(k). The "device" in question is an X-ray imaging system, not an AI algorithm.

Here is a breakdown of the available information for each requested item:

Acceptance Criteria and Device Performance (as demonstrated for Substantial Equivalence)

The document implicitly defines acceptance criteria by comparing the performance parameters of the subject device (Green Smart) to those of its predicate device (PaX-i3D Smart). The goal is to show the new device is "equivalent or better" than the predicate in key imaging performance metrics.

Table of Acceptance Criteria and Reported Device Performance:

| Performance Parameter | Acceptance Criteria (Implicit: Equivalent to or Better than Predicate) | Reported Subject Device Performance | Notes |
|---|---|---|---|
| Xmaru1404CF-Plus (CBCT/PANO Detector) | | | |
| Imaging Patterns | No aliasing throughout the same spatial frequency range as the predicate | No aliasing phenomenon | The CMOS panel of the new detector is reported as "exactly same" as the predicate's; testing showed similar image patterns. |
| DQE | Similar to or better than the predicate | Performed similarly to the predicate | |
| MTF | Similar to or better than the predicate | Performed similarly to the predicate | |
| NPS | Similar to or better than the predicate | Performed similarly to the predicate | |
| Xmaru2602CF (Cephalometric Detector) | | | |
| MTF | Better than the predicate detector (Xmaru2301CF) | Better performance parameters | The new CMOS panel generates "better image quality." |
| DQE | Better than the predicate detector (Xmaru2301CF) | Better performance parameters | |
| NPS | Better than the predicate detector (Xmaru2301CF) | Better performance parameters | |
| General CT Image Quality (Iterative Reconstruction) | | | |
| Contrast | Equivalent to or better than the predicate | Demonstrated equivalency or better | Measured with iterative reconstruction, indicating the overall imaging system performs well. |
| Noise | Equivalent to or better than the predicate | Demonstrated equivalency or better | |
| CNR | Equivalent to or better than the predicate | Demonstrated equivalency or better | |
| MTF | Equivalent to or better than the predicate | Demonstrated equivalency or better | |
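For orientation only, the sketch below shows one common way phantom metrics such as contrast, noise, and CNR are derived from region-of-interest (ROI) statistics. The `roi_stats` and `contrast_noise_cnr` helpers, the ROI placement, and the definition of noise as the background standard deviation are generic textbook assumptions, not the sponsor's test protocol.

```python
# Generic ROI-based contrast, noise, and CNR calculation on a phantom image
# (illustrative only; not the protocol used in this 510(k)).
import numpy as np

def roi_stats(image, center, size):
    """Mean and standard deviation inside a square ROI centered at (row, col)."""
    r, c = center
    half = size // 2
    roi = image[r - half:r + half, c - half:c + half]
    return float(roi.mean()), float(roi.std())

def contrast_noise_cnr(image, insert_roi, background_roi, size=32):
    """Contrast = |mean_insert - mean_background|, noise = background std,
    CNR = contrast / noise."""
    mean_insert, _ = roi_stats(image, insert_roi, size)
    mean_bg, std_bg = roi_stats(image, background_roi, size)
    contrast = abs(mean_insert - mean_bg)
    noise = std_bg
    return contrast, noise, contrast / noise
```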

Study Details:

  1. Sample size used for the test set and the data provenance:

    • The document does not specify a sample size in terms of patient images or subjects for the performance evaluations. Instead, it refers to "Non-Clinical Test results" and reports on the performance parameters of the device's components (detectors) and the overall system.
    • The data provenance is a laboratory setting, as indicated by the statement that "The sponsor tested the subject device in a laboratory and provided a non-clinical performance report." The country of origin for the data is not explicitly stated, but the manufacturer (VATECH Co., Ltd.) is based in Korea, and this is part of a regulatory submission to the US FDA. The studies discussed (device performance parameters) are inherently prospective in the sense that the new device was built and then tested against a set of standards and against the predicate.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. This submission focuses on the physical performance metrics of an X-ray imaging device (e.g., DQE, MTF, NPS, contrast, noise), not on diagnostic accuracy established by human readers interpreting images. Therefore, expert involvement for ground truth on image interpretation is not a component of this specific type of testing for substantial equivalence.
  3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not applicable. As the performance data pertains to technical specifications and physical image quality metrics rather than human interpretation accuracy, no adjudication method for diagnostic outcomes is described or required.
  4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No. An MRMC comparative effectiveness study was not conducted as this is a 510(k) submission for a conventional X-ray imaging system, not an AI-based diagnostic tool. The document describes the system and its imaging capabilities, not an AI-assisted interpretation workflow.
  5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • No. This is a hardware device (X-ray system) with associated viewing software. There is no standalone external algorithm being proposed for independent performance evaluation in this submission. The "algorithm" here refers to the iterative reconstruction algorithm within the CT system itself, and its impact is evaluated through standard image quality metrics (Contrast, Noise, CNR, MTF).
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • The "ground truth" for this type of technical performance testing is established through physical measurements and phantom studies using established standards and methodologies (e.g., IEC 61223-3-4, IEC 61223-3-5, 21 CFR 1020.33). These standards define how metrics like MTF, DQE, noise, and contrast are objectively measured using specialized test objects and equipment, not clinical data or expert interpretations. A generic sketch of an edge-based MTF measurement follows this list.
  7. The sample size for the training set:

    • Not applicable. This is a hardware device clearance, not an AI model requiring a training set.
  8. How the ground truth for the training set was established:

    • Not applicable. As this is not an AI model, there is no training set or associated ground truth establishment for it.
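As referenced in item 6 above, physical image-quality metrics are measured with test objects rather than expert reads. The sketch below illustrates the general edge-based MTF idea (edge spread function, then its derivative, then the Fourier magnitude); the perfectly vertical edge, the `mtf_from_edge` helper, and the omission of slanted-edge angle correction, binning, and windowing are simplifying assumptions, not the exact procedure of any standard or of the Xmaru detector testing.

```python
# Simplified edge-based MTF estimate: ESF -> LSF -> |FFT| (illustrative only).
import numpy as np

def mtf_from_edge(edge_image, pixel_pitch_mm):
    """Estimate MTF from an image of a sharp, near-vertical edge.

    Returns (spatial_frequencies_per_mm, mtf_values).
    """
    # Average rows to get a low-noise edge spread function (assumes the edge
    # is aligned with the columns; real protocols use a slanted edge).
    esf = edge_image.mean(axis=0)

    # Line spread function = derivative of the ESF.
    lsf = np.gradient(esf)

    # MTF = magnitude of the Fourier transform of the LSF, normalized at zero frequency.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]

    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles per mm
    return freqs, mtf
```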

§ 892.1750 Computed tomography x-ray system.

(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.