Intended Use

The unit is intended to produce panoramic or cephalometric digital x-ray images. It provides diagnostic detail of the dento-maxillofacial region, sinuses, and TMJ for adult and pediatric patients. The system also acquires carpal images for orthodontic treatment. The device is to be operated by physicians, dentists, and x-ray technicians.

Device Description

The device is an advanced 2-in-1 digital X-ray imaging system that incorporates PANO and CEPH (Optional) imaging capabilities into a single system and acquires 2D diagnostic image data in conventional panoramic and cephalometric modes.
The device is not intended for CBCT imaging.
VistaPano S is the panoramic-only model corresponding to VistaPano S Ceph.
ProVecta S-Pan Ceph and ProVecta S-Pan are alternative model names for VistaPano S Ceph and VistaPano S, respectively.
The subject device has different model names designated for different US distributors:

  • VistaPano S Ceph, VistaPano S: DÜRR DENTAL
  • ProVecta S-Pan Ceph, ProVecta S-Pan: AIR TECHNIQUES

Key components of the device:
    1. VistaPano S Ceph 2.0 (Model: VistaPano S Ceph), VistaPano S 2.0 (Model: VistaPano S) digital x-ray equipment (Alternate: ProVecta S-Pan Ceph 2.0 (Model: ProVecta S-Pan Ceph), ProVecta S-Pan 2.0 (Model: ProVecta S-Pan))
    2. SSXI detector: Xmaru1501CF-PLUS, Xmaru2602CF
    3. X-ray generator
    4. PC system
    5. Imaging software
AI/ML Overview

The provided text describes the substantial equivalence of the new VATECH X-ray imaging systems (VistaPano S Ceph 2.0, VistaPano S 2.0, ProVecta S-Pan Ceph 2.0, ProVecta S-Pan 2.0) to their predicate device (PaX-i Plus/PaX-i Insight, K170731).

Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

Acceptance Criteria and Reported Device Performance

The acceptance criteria are not explicitly stated as numerical metrics in a table. Instead, the document focuses on demonstrating substantial equivalence to a predicate device. This is a common approach for medical device clearance, in which the new device is shown to be as safe and effective as a legally marketed device. "Performance" in this context means demonstrating that the new device functions similarly to, or better than, the predicate, especially for the new "non-binning" mode.

The study aims to show that the new device's performance is equivalent to or better than that of the predicate device, particularly for the new "HD mode (non-binning)" in CEPH imaging. The primary comparison points are:

Acceptance Criteria (Implied by Substantial Equivalence Goal) and Reported Device Performance (vs. Predicate):

  • PANO mode image quality: equivalent to predicate. Reported: "similar" (implied "equivalent").
  • CEPH mode image quality (SD, 2x2 binning): equivalent to predicate. Reported: "same" (as the predicate's Fast mode).
  • CEPH mode image quality (HD, non-binning): equivalent to or better than predicate. Reported: "better performance"; the device "performed better or equivalent in line pair resolution than the predicate device."
  • Dosimetric performance (DAP): similar to predicate. Reported: "DAP measurement in the PANO mode of each device under the same X-ray exposure conditions... was similar"; in SD mode, the exposure conditions (exposure time, tube voltage, tube current) "are the same with the Fast mode of the predicate device."
  • Biocompatibility of components: meets ISO 10993-1. Reported: "biocompatibility testing results showed that the device's accessory part are biocompatible and safe for its intended use."
  • Software functionality and safety: meets FDA guidance for a "moderate" level of concern. Reported: "Software verification and validation were conducted and documented... The software for this device was considered as a 'moderate' level of concern." Cybersecurity guidance was also applied.
  • Electrical, mechanical, environmental safety and EMC: conforms to relevant IEC standards. Reported: "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1... IEC 60601-1-3... IEC 60601-2-63... EMC testing were conducted in accordance with standard IEC 60601-1-2... All test results were satisfactory."
  • Conformity to EPRC standards. Reported: "The manufacturing facility is in conformance with the relevant EPRC standards... and the records are available for review."
  • DICOM conformity. Reported: "The device conforms to the provisions of NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set."
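
The comparison above distinguishes the predicate-matched SD mode (2x2 pixel binning) from the new HD mode (non-binning). As a rough illustration of why a non-binned readout can support higher line-pair resolution, the following sketch shows how 2x2 binning doubles the effective pixel pitch and therefore halves the Nyquist-limited resolution. This is illustrative only, not the manufacturer's image pipeline, and the pixel-pitch value and detector frame are placeholder assumptions.

```python
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    """Average-bin a detector frame by 2x2 (illustrative of an SD-style binned readout)."""
    h, w = image.shape
    h2, w2 = h - h % 2, w - w % 2                      # crop to even dimensions
    return image[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))

def nyquist_lp_per_mm(pixel_pitch_mm: float) -> float:
    """Nyquist-limited resolution in line pairs per mm for a given sampling pitch."""
    return 1.0 / (2.0 * pixel_pitch_mm)

pitch_mm = 0.099                                       # placeholder pitch, not a device spec
frame = np.random.default_rng(0).random((3000, 3000))  # stand-in for raw detector data

hd = frame                                             # HD mode: native (non-binned) sampling
sd = bin_2x2(frame)                                    # SD mode: 2x2-binned sampling

print(f"HD limit: {nyquist_lp_per_mm(pitch_mm):.1f} lp/mm")      # ~5.1 lp/mm
print(f"SD limit: {nyquist_lp_per_mm(2 * pitch_mm):.1f} lp/mm")  # ~2.5 lp/mm
```

In practice, measured line-pair resolution also depends on focal-spot size, scatter, and detector MTF, which is why the submission relies on phantom measurements rather than the Nyquist figure alone.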

Study Details:

The provided document is a 510(k) summary, not a detailed study report. Therefore, some specifics of the study methodology (such as expert qualifications or full sample sizes for clinical images) are not reported. However, some information can be inferred:

  1. Sample sizes used for the test set and the data provenance:

    • The document states "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, the exact sample size for this clinical image evaluation (the "test set" in AI/ML terms) is not specified.
    • The data provenance is implied to be from a retrospective collection of images, likely from VATECH's own testing/development or existing clinical sites that used the predicate device and potentially early versions of the subject device. The country of origin for the clinical images is not explicitly stated, but given the manufacturer is based in Korea ("VATECH Co., Ltd. Address: 13, Samsung 1-ro 2-gil, Hwaseong-si, Gyeonggi-do, 18449, Korea"), it's reasonable to infer some data might originate from there.
    • For the bench testing, the sample size is also not specified, but it would involve phantoms.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document mentions "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, it does not specify the number of experts, their qualifications, or how they established "ground truth" for these clinical images. The evaluation is described in general terms, implying a qualitative assessment of general image quality ("general image quality of the subject device is equivalent or better than the predicate device").
  3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • The document does not describe any formal adjudication method for the clinical image evaluation. It simply states "evaluated and compared."
  4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:

    • No, an MRMC study was NOT done. This device is an X-ray imaging system, not an AI-assisted diagnostic tool for interpretation. The study focused on demonstrating the image quality of the system itself (hardware and associated basic image processing software) as being substantially equivalent or better than a predicate system, not on improving human reader performance with AI assistance. The "VisionX 3.0" software is an image viewing program, not an AI interpretation tool.
  5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • This is not applicable in the sense of a diagnostic AI algorithm. The performance evaluation is inherently about the "algorithm" and physics of the X-ray system itself (detector, X-ray generator, image processing pipeline) without human interaction for image generation, but humans are integral for image interpretation. The device's performance (image quality, resolution, DAP) is measured directly, similar to a standalone evaluation of a sensor's capabilities.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the bench testing, the ground truth was based on physical phantom measurements (e.g., line pair resolution and contrast measured with phantoms); a generic sketch of how such phantom measurements are typically quantified follows this list.
    • For the clinical image evaluation, the "ground truth" or reference was implicitly the subjective assessment of "general image quality" by unspecified evaluators, compared to images from the predicate device. There is no mention of an objective clinical ground truth like pathology or patient outcomes.
  7. The sample size for the training set:

    • The document describes an X-ray imaging system, not a device incorporating a machine learning model that requires a "training set" in the conventional sense. Therefore, there is no mention of a training set sample size. The software mentioned (VisionX 3.0) is a general image viewing program, not a deep learning model requiring a specific training dataset.
  8. How the ground truth for the training set was established:

    • Not applicable, as no external "training set" for a machine learning model is described.
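
As noted in item 6, the bench ground truth came from phantom measurements (line-pair resolution and contrast). The sketch below is a generic example of how such a phantom comparison is commonly quantified, here as a contrast-to-noise ratio (CNR) between a signal ROI and a background ROI. It is not drawn from the submission; the phantom array, noise levels, and ROI coordinates are placeholders.

```python
import numpy as np

def roi_mean_std(image: np.ndarray, rows: slice, cols: slice) -> tuple[float, float]:
    """Mean and standard deviation of pixel values inside a rectangular ROI."""
    roi = image[rows, cols]
    return float(roi.mean()), float(roi.std())

def contrast_to_noise(image: np.ndarray, signal: tuple[slice, slice],
                      background: tuple[slice, slice]) -> float:
    """CNR = |mean_signal - mean_background| / std_background."""
    m_s, _ = roi_mean_std(image, *signal)
    m_b, s_b = roi_mean_std(image, *background)
    return abs(m_s - m_b) / s_b

# Placeholder phantom image: uniform noisy background with a higher-attenuation insert.
rng = np.random.default_rng(1)
phantom = rng.normal(100.0, 5.0, size=(512, 512))
phantom[200:300, 200:300] += 20.0            # simulated contrast insert

cnr = contrast_to_noise(phantom,
                        signal=(slice(200, 300), slice(200, 300)),
                        background=(slice(50, 150), slice(50, 150)))
print(f"CNR: {cnr:.1f}")                     # roughly 4 for these placeholder numbers
```

Comparing figures of merit like this between the subject and predicate devices under matched exposure settings is the usual way a "better or equivalent" bench claim is supported.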

§ 872.1800 Extraoral source x-ray system.

(a) Identification. An extraoral source x-ray system is an AC-powered device that produces x-rays and is intended for dental radiographic examination and diagnosis of diseases of the teeth, jaw, and oral structures. The x-ray source (a tube) is located outside the mouth. This generic type of device may include patient and equipment supports and component parts.

(b) Classification. Class II.