510(k) Data Aggregation
(144 days)
VATECH Co., Ltd
Green X 12 SE (Model : PHT-40CHS) is intended to produce panoramic, cephalometric, or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus, and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X 12 SE (Model : PHT-40CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT and MODEL Scan imaging capabilities into a single system. Green X 12 SE (Model : PHT-40CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists and is designed explicitly for dental radiography. Green X 12 SE (Model : PHT-40CHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant and orthodontic treatment. Green X 12 SE (Model : PHT-40CHS) can also acquire 2D diagnostic image data in conventional PANO and CEPH modes.
The provided document does not contain information regarding a study that proves the device meets specific acceptance criteria in the context of AI performance, clinical trials, or comparative effectiveness studies with human readers. The document describes a Computed Tomography X-Ray System named Green X 12 SE (PHT-40CHS) and its substantial equivalence to a predicate device (Green X 12 (PHT-75CHS)).
The performance data section primarily focuses on:
- Substantial Equivalence: Demonstrating that the new device is equivalent to the predicate device despite minor changes in detector technology and some feature deletions.
- Technical Performance Testing: Verification against international standards (IEC 61223-3-5) for general image quality indicators like Contrast, Noise, CNR, and MTF, and dosimetric performance (DAP).
- Safety and EMC: Compliance with relevant IEC standards for electrical, mechanical, environmental safety, and electromagnetic compatibility.
- Software Verification and Validation: Adherence to FDA guidance for device software functions.
Therefore, many of the requested categories for AI-related studies and acceptance criteria are not applicable based on the provided text.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state "acceptance criteria" in a table format for AI performance. Instead, it refers to equivalence to a predicate device and compliance with international performance standards for Computed Tomography X-ray systems.
Criterion Type (Inferred from Text) | Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|---|
Image Quality | Performed equivalently to the predicate device according to IEC 61223-3-5. | Contrast, Noise, CNR, and MTF were measured and demonstrated that the subject device performed equivalently to the predicate device in general image quality. |
Dosimetric Performance | DAP measurements should be the same as the predicate device under identical exposure conditions. | DAP measurements in PANO, CEPH, and CBCT modes were the same as the predicate device under the same X-ray exposure conditions (exposure time, tube voltage, tube current). |
Software Functionality | Compliance with FDA guidance for device software functions. Software criticality assessed as "basic documentation." | Software verification and validation were conducted and documented as recommended by FDA guidance. The Green X 12 SE provides EzDent-i (K241114) for 2D viewing and Ez3D-i (K231757) for 3D viewing, both previously cleared. |
Safety and EMC | Compliance with IEC 60601-1:2005+AMD1:2012+AMD2:2020, IEC 60601-1-3:2008+AMD1:2013+AMD2:2021, IEC 60601-2-63:2012+AMD1:2017+AMD2:2021, and IEC 60601-1-2:2014+AMD1:2020. | Electrical, mechanical, environmental safety, and EMC testing were performed per specified IEC standards. All test results were satisfactory. The device also conforms to NEMA PS 3.1-3.18 (DICOM Set). |
Manufacturing Standards | Conformance with 21 CFR Part 1020.30, 1020.31, 1020.33, and 21 CFR 820.30. | The manufacturing facility is in conformance with relevant EPRC standards. Adequate design and development controls (according to 21 CFR 820.30) were in place. |
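For reference on the image quality indicators in the table above (Contrast, Noise, CNR): these are typically computed from region-of-interest (ROI) statistics on phantom images acquired per IEC 61223-3-5. The sketch below illustrates one common convention in Python; the phantom, ROI coordinates, and function names are hypothetical and are not taken from the submission.

```python
import numpy as np

def roi_stats(image, rows, cols):
    """Mean and standard deviation of a rectangular ROI given row/column slices."""
    roi = image[rows, cols]
    return roi.mean(), roi.std()

def contrast_noise_cnr(image, insert_roi, background_roi):
    """Contrast, noise, and CNR between an insert ROI and a background ROI.

    Definitions vary between QA protocols; this uses a simple
    signal-difference-over-background-noise convention.
    """
    mean_insert, _ = roi_stats(image, *insert_roi)
    mean_bg, std_bg = roi_stats(image, *background_roi)
    contrast = mean_insert - mean_bg      # signal difference
    noise = std_bg                        # background standard deviation
    cnr = abs(contrast) / noise if noise > 0 else float("inf")
    return contrast, noise, cnr

# Hypothetical uniform phantom slice with a brighter low-contrast insert.
rng = np.random.default_rng(0)
phantom = rng.normal(100.0, 5.0, size=(256, 256))
phantom[100:140, 100:140] += 20.0

contrast, noise, cnr = contrast_noise_cnr(
    phantom,
    insert_roi=(slice(105, 135), slice(105, 135)),
    background_roi=(slice(20, 60), slice(20, 60)),
)
print(f"contrast={contrast:.1f}  noise={noise:.1f}  CNR={cnr:.1f}")
```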
2. Sample size used for the test set and the data provenance: Not applicable. The document discusses bench testing and comparison to a predicate device, not a test set of patient cases for AI evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. This information is relevant for AI (CAD) performance studies, which are not described here.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: No MRMC study was described. The device is an X-ray imaging system, not an AI-assisted diagnostic tool in the sense of a CADe/CADx system.
6. Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done: Not applicable. The device is a physical imaging system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable in the context of AI performance. For the technical performance aspects, the "ground truth" would be established by the specifications and measurements according to IEC standards.
8. The sample size for the training set: Not applicable. This device is an imaging system, not an AI algorithm requiring a training set in the conventional sense.
9. How the ground truth for the training set was established: Not applicable.
(137 days)
VATECH Co., Ltd
Green X 21 (Model: PHT-90CHO) is intended to produce panoramic, cephalometric, or 3D digital X-ray images. It provides diagnostic details of the dento-maxillofacial, sinus, TMJ, and ENT for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X 21 (Model : PHT-90CHO) is an advanced 6-in-1 digital X-ray system specifically designed for 2D and 3D dental radiography. This system features six imaging modalities: PANO, CEPH (optional), DENTAL CT, ENT CT, MODEL, and FACE SCAN, all integrated into a single unit.
Green X 21's imaging system is based on digital TFT detectors, accompanied by imaging viewers and an X-ray generator.
The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS (K210329)). The subject device differs from the predicate device in the following ways: it is equipped with new X-ray detectors for CT/PANO and CEPH, and this modification results in a different maximum FOV provided in a single scan for CT mode compared to the predicate device. For the CEPH modality, the subject device utilizes a one-shot imaging capture method.
Additionally, the subject device includes new modalities such as ENT CT and FACE SCAN with a face scanner, along with new software functions, including Auto Pano, Smart Focus, and Scout.
The provided document describes a 510(k) premarket notification for the "Green X 21 (PHT-90CHO)" digital X-ray system. The core of the submission is to demonstrate substantial equivalence to a predicate device, the "Green X 18 (PHT-75CHS)". The document focuses heavily on comparing technical specifications and performance data to support this claim, particularly for the new components and features of the Green X 21.
However, it's crucial to note that this document does not describe a study that proves the device meets specific acceptance criteria in the context of an AI/human-in-the-loop performance study. Instead, it focuses on engineering performance criteria related to image quality and safety for an X-ray imaging device, and comparing these to a predicate device. The "acceptance criteria" discussed are primarily technical specifications and performance benchmarks for the X-ray system components (detectors, imaging quality metrics), rather than clinical performance metrics in disease detection with AI assistance.
Therefore, many of the requested items (e.g., sample size for test set, number of experts for ground truth, adjudication method, MRMC study, standalone performance) are not applicable or not detailed in this submission because the device in question is an X-ray imaging system, not an AI-based diagnostic tool. The "new software functions" mentioned (Auto Pano, Smart Focus, Scout) are described as image reconstruction or manipulation tools, not AI algorithms for clinical diagnosis.
Here's a breakdown based on the information available, addressing the points where possible and noting when information is absent or not relevant to the provided text:
Acceptance Criteria and Reported Device Performance
Given that this is an X-ray imaging device and not an AI diagnostic tool, the acceptance criteria are generally related to image quality, safety, and functional performance, benchmarked against standards and the predicate device.
Table of Acceptance Criteria and Reported Device Performance (as inferred from the document):
Acceptance Criteria Category | Specific Criteria (Inferred) | Reported Device Performance (Green X 21) |
---|---|---|
I. Imaging Performance (New X-ray Detectors) | ||
1. CT/PANO Detector (Jupi1012X) | - Modulation Transfer Function (MTF) & Detective Quantum Efficiency (DQE) & Noise Power Spectrum (NPS): Performance comparable or superior to predicate (Xmaru1524CF Master Plus OP). | - MTF (CT/PANO): Jupi1012X showed more stable or superior performance for DQE, MTF, and NPS, particularly better stability in high-frequency regions. Jupi1012X could distinguish up to 3.5 line pairs (MTF 10% criterion), compared to 2.5 line pairs for predicate. |
| | - Pixel Size: Similar to predicate device. | - Pixel Size (CT/PANO): "Very similar" to predicate. Image test patterns demonstrated test objects across the same spatial frequency range without aliasing. |
2. CEPH Detector (Venu1012VD) | - MTF, DQE, NPS: Performance comparable or superior to predicate (Xmaru2602CF), despite predicate having lower NPS (noise). | - MTF (CEPH): Venu1012VD exhibited superior performance in DQE, MTF, and NPS (except predicate's better NPS due to lower noise). Higher MTF values indicate sharper images. |
| | - Pixel Size: Similar to predicate device. | - Pixel Size (CEPH): Similar to predicate's 100 µm (non-binning). Image test patterns demonstrated test objects across the same spatial frequency range without aliasing. |
3. Overall Diagnostic Image Quality | - Equivalent to or better than predicate device. | - "Equivalently or better than the predicate device in overall image quality." |
II. Compliance with Standards and Guidelines | ||
1. IEC 61223-3-5 (CT) | - Quantitative testing for noise, contrast, CNR, MTF 10%. | - All parameters met the standards specified. |
2. IEC 61223-3-4 (Dental X-ray) | - Quantitative assessment for line pair resolution and low contrast performance (PANO/CEPH). | - All parameters met the standards specified. |
3. Software/Firmware (Basic Documentation Level) | - Adherence to FDA Guidance "Content of Premarket Submissions for Device Software Functions." | - Software verification and validation conducted and documented. |
4. Safety & EMC | - Compliance with IEC 60601-1, IEC 60601-1-3, IEC 60601-2-63 standards for electrical, mechanical, environmental safety and performance; IEC 60601-1-2 for EMC. | - Testing performed and results were satisfactory. |
III. Functional Equivalence / Performance of New Modalities/Functions | ||
1. ENT CT Modality | - Meet image quality standards of Dental CT; limit radiation exposure to ENT region. | - "Adheres to the same image quality standards as the Dental CT modality." Specifically designed to limit radiation exposure to ENT region. |
2. FACE SCAN Modality | - Intended for aesthetic consultations, not clinical diagnostic; meets internal criteria. | - "Not designed for clinical diagnostic purposes." "Meets the internally established criteria and has been designed to perform as intended." |
3. Auto Pano, Smart Focus, Scout Modes | - Should function as intended, similar to previously cleared devices (Green X 12, K231796). | - Evaluated according to IEC 61223-3-4 and IEC 61223-3-5; "both standard requirements were met." |
IV. Dosimetric Performance (DAP) | ||
1. PANO Modality DAP | - Similar DAP to predicate device under same exposure conditions. | - "DAP measurement results are similar" when tested under same exposure conditions (High Resolution Mode). |
2. CEPH Modality DAP | - Performance balance between increased DAP (due to one-shot type) and reduced exposure time/motion artifacts. | - DAP "more than twice that of the predicate device" (due to one-shot vs. scan-type), but "utilizes a one-shot type, operating with approximately one-fourth the exposure time... This reduces motion artifacts." |
3. CT Modality DAP | - Overall DAP performance balanced with FOV and image quality. | - "Slight increase in DAP for the subject device" for most comparable/equivalent FOVs. However, "maximum FOV provided by the subject device demonstrated a reduced radiation dose compared to the predicate device." |
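For background on how the MTF, NPS, and DQE figures above relate to one another: DQE describes how efficiently a detector transfers the input signal-to-noise ratio at each spatial frequency, and is commonly expressed in terms of the MTF and the normalized noise power spectrum (NNPS). This is general detector physics offered as context, not a formula quoted from the submission:

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
\;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},
\qquad
\mathrm{NNPS}(f) \;=\; \frac{\mathrm{NPS}(f)}{\bar{S}^{2}}
```

Here $\bar{q}$ is the incident photon fluence per unit area and $\bar{S}$ is the mean large-area detector signal. Higher MTF and lower normalized noise power at a given frequency both raise DQE, which is why the three metrics are reported together in the detector comparison above.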
Study Details (as far as applicable and available)
- Sample size used for the test set and the data provenance:
- Test Set Description: The "test set" in this context is referring to data collected through bench testing using phantoms for image quality assessment, and potentially clinical images for subjective comparison, rather than a clinical trial cohort.
- Sample Size: Not specified in terms of number of patient images. The testing was conducted on the device itself using phantom studies (e.g., test patterns for MTF, line pairs for resolution, phantoms for low contrast).
- Data Provenance: The bench tests were conducted in a laboratory ("in a laboratory using the same test protocol as the predicate device"). The "Clinical consideration" section for image quality evaluation implies some clinical image review, but the origin (e.g., country, retrospective/prospective) of these potential clinical images is not detailed.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not Applicable in the traditional sense of an AI study. Ground truth for an X-ray imaging device's performance is typically established through quantitative measurements using phantoms and technical specifications, not expert consensus on medical findings.
- The document mentions "Image Quality Evaluation Report and Clinical consideration" and concludes "the subject device performed equivalently or better than the predicate device in overall image quality." This implies some form of expert review for subjective image quality, but the number or qualifications of these "experts" are not detailed.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not Applicable. As noted above, this is not an AI diagnostic study with a human-in-the-loop component requiring adjudication of disease findings. The evaluations are primarily technical and quantitative measurements comparing physical properties and image output.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No. An MRMC study was not described. This is an X-ray imaging system, not an AI diagnostic software that assists human readers. The new software functions (Auto Pano, Smart Focus, Scout) described are image manipulation/reconstruction features, not AI for diagnostic assistance.
- Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not applicable in the context of an AI algorithm. The device is an X-ray system; its "standalone" performance refers to its ability to produce images according to technical specifications, which was assessed through bench testing (MTF, DQE, NPS, etc.). There is no mention of a diagnostic AI algorithm that operates standalone.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Quantitative Phantom Measurements and Technical Specifications: For image quality (MTF, DQE, NPS, line pair resolution, contrast, noise, CNR), the ground truth is established by physical measurements using standardized phantoms and reference values as per IEC standards.
- Predicate Device Performance: A key "ground truth" for substantial equivalence is the established performance of the predicate device. The new device's performance is compared against this benchmark.
- Internal Criteria: For functionalities like FACE SCAN, "internally established criteria" were used as a benchmark.
- The sample size for the training set:
- Not Applicable. This document describes an X-ray imaging device, not an AI model that requires a training set. The software functions mentioned (Auto Pano, Smart Focus, Scout) are described as computational algorithms for image reconstruction or enhancement, not machine learning models that learn from a training dataset.
- How the ground truth for the training set was established:
- Not Applicable. No training set for an AI model is described.
(294 days)
VATECH Co., Ltd
The unit is intended to produce panoramic or cephalometric digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by physicians, dentists, and x-ray technicians.
The device is an advanced 2-in-1 digital X-ray imaging system that incorporates PANO and CEPH (Optional) imaging capabilities into a single system and acquires 2D diagnostic image data in conventional panoramic and cephalometric modes.
The device is not intended for CBCT imaging.
VistaPano S is the panoramic-only model corresponding to VistaPano S Ceph.
ProVecta S-Pan Ceph and ProVecta S-Pan are alternative models for VistaPano S Ceph and VistaPano S, respectively.
The subject device has different model names designated for different US distributors:
- VistaPano S Ceph, VistaPano S: DÜRR DENTAL
- ProVecta S-Pan Ceph, ProVecta S-Pan: AIR TECHNIQUES
Key components of the device:
- VistaPano S Ceph 2.0 (Model: VistaPano S Ceph), VistaPano S 2.0 (Model: VistaPano S) digital x-ray equipment (Alternate: ProVecta S-Pan Ceph 2.0 (Model: ProVecta S-Pan Ceph), ProVecta S-Pan 2.0 (Model: ProVecta S-Pan))
- SSXI detector: Xmaru1501CF-PLUS, Xmaru2602CF
- X-ray generator
- PC system
- Imaging software
The provided text describes the substantial equivalence of the new VATECH X-ray imaging systems (VistaPano S Ceph 2.0, VistaPano S 2.0, ProVecta S-Pan Ceph 2.0, ProVecta S-Pan 2.0) to their predicate device (PaX-i Plus/PaX-i Insight, K170731).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as numerical metrics in a table. Instead, the document focuses on demonstrating substantial equivalence to a predicate device. This is a common approach for medical device clearance, where the new device is shown to be as safe and effective as a legally marketed device. The "performance" in this context refers to demonstrating that the new device functions similarly or better than the predicate, especially for the new "non-binning" mode.
The study aims to show that the new device's performance is equivalent or better than the predicate device, particularly for the new "HD mode (non-binning)" in CEPH imaging. The primary comparison points are:
Acceptance Criteria (Implied by Substantial Equivalence Goal) | Reported Device Performance (vs. Predicate) |
---|---|
PANO Mode Image Quality: Equivalent to predicate. | "similar" (implied "equivalent") |
CEPH Mode Image Quality (SD/2x2 binning): Equivalent to predicate. | "same" (as predicate's Fast mode) |
CEPH Mode Image Quality (HD/non-binning): Equivalent or better than predicate. | "better performance" and "performed better or equivalent in line pair resolution than the predicate device." |
Dosimetric Performance (DAP): Similar to predicate. | "DAP measurement in the PANO mode of each device under the same X-ray exposure conditions... was similar." and "SD mode... same X-ray exposure conditions (exposure time, tube voltage, tube current) are the same with the Fast mode of the predicate device." |
Biocompatibility of Components: Meets ISO 10993-1 standard. | "biocompatibility testing results showed that the device's accessory part are biocompatible and safe for its intended use." |
Software Functionality and Safety: Meets FDA guidance for "moderate" level of concern. | "Software verification and validation were conducted and documented... The software for this device was considered as a 'moderate' level of concern." Cybersecurity guidance was also applied. |
Electrical, Mechanical, Environmental Safety & EMC: Conforms to relevant IEC standards. | "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1... IEC 60601-1-3... IEC 60601-2-63... EMC testing were conducted in accordance with standard IEC 60601-1-2... All test results were satisfactory." |
Conformity to EPRC standards: | "The manufacturing facility is in conformance with the relevant EPRC standards... and the records are available for review." |
DICOM Conformity: | "The device conforms to the provisions of NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set." |
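For context on the SD (2x2 binning) and HD (non-binning) modes compared above: binning combines neighboring detector pixels, trading spatial resolution for lower per-pixel noise. A minimal sketch of 2x2 averaging follows; the array sizes and function name are hypothetical and not taken from the submission.

```python
import numpy as np

def bin_pixels(image, factor=2):
    """Average non-overlapping factor x factor blocks of detector pixels.

    Binning reduces the effective sampling frequency (halved for factor=2)
    while lowering per-pixel noise, which is the usual SD-vs-HD trade-off.
    """
    h, w = image.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# Hypothetical raw CEPH frame at native ("HD", non-binning) sampling.
raw = np.random.default_rng(1).normal(1000.0, 30.0, size=(2600, 1500))
binned = bin_pixels(raw, factor=2)                 # "SD" (2x2 binned) image
print(raw.shape, "->", binned.shape)               # (2600, 1500) -> (1300, 750)
print("noise HD %.1f vs SD %.1f" % (raw.std(), binned.std()))
```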
Study Details:
The provided document is a 510(k) summary, not a detailed study report. Therefore, some specific details about the study methodology (like expert qualifications or full sample sizes for clinical images) are not granularly described. However, we can infer some information:
- Sample sizes used for the test set and the data provenance:
- The document states "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, the exact sample size for this clinical image evaluation (the "test set" in AI/ML terms) is not specified.
- The data provenance is implied to be from a retrospective collection of images, likely from VATECH's own testing/development or existing clinical sites that used the predicate device and potentially early versions of the subject device. The country of origin for the clinical images is not explicitly stated, but given the manufacturer is based in Korea ("VATECH Co., Ltd. Address: 13, Samsung 1-ro 2-gil, Hwaseong-si, Gyeonggi-do, 18449, Korea"), it's reasonable to infer some data might originate from there.
- For the bench testing, the sample size is also not specified, but it would involve phantoms.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document mentions "Clinical images obtained from the subject and predicate devices are evaluated and compared." However, it does not specify the number of experts, their qualifications, or how they established "ground truth" for these clinical images. The evaluation is described in general terms, implying a qualitative assessment of general image quality ("general image quality of the subject device is equivalent or better than the predicate device").
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document does not describe any formal adjudication method for the clinical image evaluation. It simply states "evaluated and compared."
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, an MRMC study was NOT done. This device is an X-ray imaging system, not an AI-assisted diagnostic tool for interpretation. The study focused on demonstrating the image quality of the system itself (hardware and associated basic image processing software) as being substantially equivalent or better than a predicate system, not on improving human reader performance with AI assistance. The "VisionX 3.0" software is an image viewing program, not an AI interpretation tool.
- Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done:
- This is not applicable in the sense of a diagnostic AI algorithm. The performance evaluation is inherently about the "algorithm" and physics of the X-ray system itself (detector, X-ray generator, image processing pipeline) without human interaction for image generation, but humans are integral for image interpretation. The device's performance (image quality, resolution, DAP) is measured directly, similar to a standalone evaluation of a sensor's capabilities.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the bench testing, the ground truth was based on physical phantom measurements (e.g., line pair resolution, contrast using phantoms).
- For the clinical image evaluation, the "ground truth" or reference was implicitly the subjective assessment of "general image quality" by unspecified evaluators, compared to images from the predicate device. There is no mention of an objective clinical ground truth like pathology or patient outcomes.
- The sample size for the training set:
- The document describes an X-ray imaging system, not a device incorporating a machine learning model that requires a "training set" in the conventional sense. Therefore, there is no mention of a training set sample size. The software mentioned (VisionX 3.0) is a general image viewing program, not a deep learning model requiring a specific training dataset.
- How the ground truth for the training set was established:
- Not applicable, as no external "training set" for a machine learning model is described.
(29 days)
Vatech Co., Ltd
Green X 12 (Model : PHT-75CHS) is intended to produce panoramic, cephalometric or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X 12 (Model : PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT and MODEL Scan imaging capabilities into a single system. Green X 12 (Model : PHT-75CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists and is designed explicitly for dental radiography. Green X 12 (Model : PHT-75CHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector.
The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant and orthodontic treatment.
Green X 12 (Model : PHT-75CHS) can also acquire 2D diagnostic image data in conventional PANO and CEPH modes.
The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS (K201627)). The difference from the predicate device is that a new CBCT/PANO detector provides the user with a different maximum FOV. Also, new software functions (Auto Pano, Smart Focus, Scout) have been added.
The provided document details the 510(k) submission for the "Green X 12 (Model: PHT-75CHS)" dental X-ray imaging system. The submission aims to demonstrate substantial equivalence to a predicate device, the "Green X (Model: PHT-75CHS)" (K201627), and references another device, "Green Smart (Model: PHT-35LHS)" (K162660).
The primary changes in the subject device compared to the predicate device are:
- New detector: Equipped with the Xmaru1404CF-PLUS detector (cleared with K162660).
- New software functions: Auto Pano, Smart Focus, and Scout.
The document describes non-clinical performance evaluations rather than clinical studies with human readers.
Here's the breakdown of the information based on your request:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly provide a table of acceptance criteria with specific numerical thresholds and corresponding reported performance values for each criterion in an acceptance study. Instead, it states that the device's performance was compared to the predicate device and relevant standards.
However, based on the text, we can infer some criteria and the general findings:
Acceptance Criterion (Inferred) | Reported Device Performance |
---|---|
General Image Quality (CT) | Measured Contrast, Noise, CNR (Contrast-to-Noise Ratio), and MTF (Modulation Transfer Function) with FDK (back projection) and CS (iterative) reconstruction algorithms. Results demonstrated equivalent performance to the predicate device. |
Dosimetric Performance (DAP) | In PANO mode, DAP (Dose Area Product) measurements were the same as the predicate device under identical FDD, exposure area, exposure time, tube voltage, and tube current. |
| | In CEPH mode, DAP measurements were the same as the predicate device under identical FDD, detector specifications, and exposure conditions. |
| | In CBCT mode (at common FOVs 80x80 / 80x50 / Endo 40x40 mm), the DAP of the subject device was lower than the predicate device. |
Clinical Image Quality | Evaluation Report demonstrated that the general image quality of the subject device is equivalent to the predicate device in PANO/CBCT mode. |
Software V&V | Software verification and validation were conducted according to FDA guidance. Considered "moderate" level of concern. |
Cybersecurity | Applied in compliance with FDA guidance. |
Safety, EMC, Performance | Electrical, mechanical, environmental safety, and performance testing conducted per standards (IEC 60601-1, IEC 60601-1-3, IEC 60601-2-63, IEC 60601-1-2). All test results were satisfactory. |
DICOM Conformance | Conforms to NEMA PS 3.1-3.18. |
Added Software Functions | - Auto Pano: Already cleared in reference device (K162660). |
| | - Smart Focus: FOV 40x40 mm images were clinically evaluated by a US licensed dentist. |
| | - Image Quality (new software): Performed in compliance with IEC 61223-3-4 and IEC 61223-3-5. Both standard requirements were satisfied. |
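For context on the DAP rows above: dose-area product is the air kerma integrated over the beam cross-section, so matching the technique factors (tube voltage, tube current, exposure time) and beam geometry is expected to yield matching DAP, while a smaller field of view at comparable kerma lowers it. As a general definition (not the submission's measurement protocol):

```latex
\mathrm{DAP} \;=\; \int_{A} K_{\mathrm{air}}(x,y)\,\mathrm{d}x\,\mathrm{d}y \;\approx\; \bar{K}_{\mathrm{air}} \cdot A
```

where $\bar{K}_{\mathrm{air}}$ is the mean air kerma over the irradiated area $A$, typically reported in mGy·cm².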
2. Sample Size Used for the Test Set and Data Provenance
The document describes non-clinical performance and image quality evaluations rather than a test set of patient cases.
- Sample Size for Test Set: Not applicable in the context of human patient data. The "test set" was described as physical measurements on the device itself and phantoms. No specific number of cases or images are mentioned for the "clinical evaluation" of Smart Focus or the image quality assessment of new functions beyond complying with IEC standards.
- Data Provenance: Not applicable in terms of country of origin or retrospective/prospective for patient data. The tests were laboratory-based and non-clinical. The "clinical evaluation" for Smart Focus was conducted by a "US licensed dentist," implying evaluation of generated images rather than a broad clinical trial.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: For the "Smart Focus" mode, "a US licensed dentist" performed the clinical evaluation. This indicates at least one expert. For other image quality evaluations, the primary assessment relies on compliance with IEC standards and comparison to the predicate device, which would involve technical experts in radiology and medical imaging rather than medical specialists establishing "ground truth" on patient cases.
- Qualifications of Experts: For Smart Focus, "a US licensed dentist" is specified. No specific years of experience are listed. For other evaluations, the experts are implied to be qualified in medical device testing, radiology physics, and engineering.
4. Adjudication Method for the Test Set
Not applicable. The evaluations described are primarily non-clinical measurements and comparisons against a predicate device or standards, rather than a diagnostic performance study requiring adjudication of expert interpretations of patient cases. The "clinical evaluation" of Smart Focus involved a single "US licensed dentist," suggesting no adjudication process was needed for this part.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. The submission focuses on demonstrating substantial equivalence through non-clinical performance testing and comparison to a predicate device, not on assessing human reader performance with or without AI assistance.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
The document primarily describes an imaging system, not an independent AI algorithm. However, the evaluation of "general image quality" (Contrast, Noise, CNR, MTF) for the CT reconstruction algorithms (FDK and CS) can be considered a standalone performance assessment of the image generation and processing aspects of the device, separate from human interpretation. The "new software functions" (Auto Pano, Smart Focus, Scout) are features of the device, and their performance was evaluated for image quality and clinical utility, effectively in a "standalone" manner in terms of the algorithm producing the image/feature.
7. The Type of Ground Truth Used
- For the non-clinical image quality metrics (Contrast, Noise, CNR, MTF), the "ground truth" is based on physical measurements using phantoms and established metrology for X-ray imaging systems (e.g., as per IEC standards).
- For the clinical evaluation of the Smart Focus mode, the "ground truth" is implied to be the expert opinion/assessment of a US licensed dentist regarding the quality and diagnostic utility of the 40x40 mm images.
- For the new software functions' image quality evaluation, adherence to IEC 61223-3-4 and IEC 61223-3-5 standards serves as the ground truth/benchmark.
8. The Sample Size for the Training Set
Not applicable. This document describes the 510(k) submission for a medical imaging device (CT X-ray system) and its software functions, not a machine learning or AI algorithm that requires a separate training set for model development. The "Auto Pano" function, while a software feature, is not described as an AI algorithm that learns from data; it reconstructs 3D CBCT data into 2D panoramic images, a known image processing technique.
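To make the preceding point concrete, reconstructing a 2D panoramic view from 3D CBCT data is typically done by sampling the volume along a dental-arch curve and averaging a thin slab along the local normal to that curve. The sketch below is a simplified, hypothetical illustration of that general technique; it is not VATECH's Auto Pano implementation, and every name, shape, and parameter here is an assumption made for illustration.

```python
import numpy as np

def panoramic_from_volume(volume, arch_xy, slab_half_width=5):
    """Project a CBCT volume onto a curved 'panoramic' plane.

    volume          : 3D array indexed as [z, y, x] (stacked axial slices).
    arch_xy         : (N, 2) array of (x, y) points tracing the dental arch.
    slab_half_width : half-thickness (in voxels) of the slab averaged along
                      the local normal of the arch curve.
    Returns a 2D image of shape (num_z, N).
    """
    nz, ny, nx = volume.shape
    pano = np.zeros((nz, len(arch_xy)), dtype=np.float32)

    # Tangent (finite differences) and unit normal of the arch at each point.
    tangents = np.gradient(arch_xy.astype(np.float64), axis=0)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-9

    offsets = np.arange(-slab_half_width, slab_half_width + 1)
    for i, ((x0, y0), n) in enumerate(zip(arch_xy, normals)):
        # Nearest-neighbour samples across the slab, clipped to the volume.
        xs = np.clip(np.round(x0 + offsets * n[0]).astype(int), 0, nx - 1)
        ys = np.clip(np.round(y0 + offsets * n[1]).astype(int), 0, ny - 1)
        pano[:, i] = volume[:, ys, xs].mean(axis=1)   # average through the slab
    return pano

# Hypothetical volume and a parabolic arch, purely for demonstration.
vol = np.random.default_rng(2).random((80, 200, 200)).astype(np.float32)
t = np.linspace(-1.0, 1.0, 300)
arch = np.stack([100 + 80 * t, 60 + 100 * (1 - t**2)], axis=1)
print(panoramic_from_volume(vol, arch).shape)          # (80, 300)
```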
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set for a machine learning model is mentioned in the document.
(26 days)
Vatech Co., Ltd
The EzRay Air 2 Wall (Model Name: VEX-S350W) is a dental X-ray system intended for use by a trained and qualified dentist or dental technician for both adult and pediatric subjects for producing diagnostic dental radiographs for the treatment of diseases of the teeth, jaw, and other oral structures using intra-oral image receptors.
The EzRay Air 2 Wall (Model: VEX-S350W) is a dental X-ray system intended for intra-oral imaging. It consists of an X-ray generator, X-ray controller, beam limiting device, operation panel, and mechanical arm. The X-ray controller allows for accurate exposure control, and the adjustable mechanical arm allows for easy positioning. The functions of the VEX-S350W intra-oral system are supported by software (firmware). The software is based on the predicate device and is of Moderate level of concern. The system can be used with an imaging system.
The provided text describes a 510(k) premarket notification for the EzRay Air 2 Wall (Model: VEX-S350W) dental X-ray system. The submission focuses on demonstrating substantial equivalence to a predicate device (EzRay Air Wall (Model: VEX-S300W) / K163705) rather than providing a detailed study proving the device meets specific acceptance criteria in the context of an AI/ML medical device.
Therefore, many of the requested details regarding acceptance criteria for AI/ML performance, study design (sample size, ground truth, expert adjudication, MRMC studies), training set specifics, and effect sizes of human reader improvement with AI assistance are not present in this document. The document primarily addresses the safety and performance of an X-ray imaging system itself, not an AI component.
However, based on the information provided regarding the device's performance testing for its X-ray imaging capabilities, here's an attempt to extract and reframe the information to fit the structure of your request where possible, acknowledging the limitations due to the nature of the submitted document:
Device: EzRay Air 2 Wall (Model: VEX-S350W) - Dental X-ray System
The provided document describes the acceptance criteria and performance study for the EzRay Air 2 Wall, a dental X-ray system. It focuses on the device's imaging capabilities, not an integrated AI component. Therefore, the details requested for AI/ML device validation are not applicable or available in this submission.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for this X-ray system are based on compliance with Federal Standards and International Standards related to X-ray performance and safety. The performance is demonstrated through various tests.
Acceptance Criteria (Standard / Requirement) | Reported Device Performance (EzRay Air 2 Wall (VEX-S350W)) |
---|---|
Minimum Source to Skin Distance | Longer than the minimum length of 20 cm |
Accuracy of Loading Factors | Met essential performance requirements (e.g., |
(107 days)
Vatech Co., Ltd
EzRay M18 (Model: VMX-P400) is a portable X-ray system, intended for use by a qualified/trained physician or technician to acquire X-ray images of the desired parts of a patient's anatomy on adult and pediatric patients (including head, chest, abdomen, cervical spine, and extremities).
The device may be used for handheld diagnostic imaging of body extremities.
The system is subject to the following limitations of use when stand-mounted:
- The device may be used for diagnostic imaging of the head, abdomen, cervical spine, or extremities.
- The device may be used for imaging of the chest when used without a grid.
This device is not intended for mammography.
EzRay M18 (Model: VMX-P400), a medical portable X-ray system, operates on 57.6 Vdc supplied by a rechargeable Li-ion battery pack. The system is composed of an X-ray generating part including an X-ray tube, a device controller, a power controller, a user interface, a beam limiting part, and two optional components: remote controller and stand. The device software supports the EzRay M18 system, and the software is of Moderate level of concern. The device is designed for the human body using image receptors. The image detectors, a necessary component for a fully-functional x-ray system, are not part of this submission.
The provided document is a 510(k) premarket notification for the EzRay M18 (Model: VMX-P400) mobile X-ray system. It establishes substantial equivalence by comparing the device to a predicate device (MinXray, Inc. TR90BH, K182207) and a reference device (OSKO, Inc. XR5, K150663).
However, the document does not contain information about an AI/algorithm component. Instead, it focuses on the hardware and its compliance with established safety and performance standards for X-ray systems. Therefore, many of the requested details related to AI performance, ground truth establishment, expert adjudication, and MRMC studies are not applicable or cannot be extracted from this document.
Here's the information that can be extracted, based on the non-clinical testing performed for this traditional 510(k) submission:
Acceptance Criteria and Reported Device Performance (Non-AI Device)
This device is an X-ray system, not an AI-powered diagnostic tool. The acceptance criteria for such a device primarily revolve around safety, electrical performance, and image quality characteristics, rather than diagnostic accuracy metrics like sensitivity or specificity for a specific condition.
Acceptance Criteria Category | Reported Device Performance (as described in the document) |
---|---|
Safety and Electrical Performance | - Complies with IEC 60601-1:2005 (Medical electrical equipment - General requirements for basic safety and essential performance). |
| | - Complies with IEC 60601-1-2:2014 (Electromagnetic compatibility - Requirements and tests). |
| | - Complies with IEC 60601-2-28:2017 (Medical Electrical Equipment - Exposure conditions - Diagnostic X-ray equipment). |
| | - Complies with IEC 60601-2-54:2009, AMD1:2015, AMD2:2018 (X-ray equipment for radiography and radioscopy). |
| | - Complies with IEC 62133-2:2017 (Secondary cells and batteries containing alkaline or other non-acid electrolytes – Safety requirements for portable sealed secondary cells, and for batteries made from them, for use in portable applications). |
| | - Conforms to 21 CFR 1020 Subchapter J (Performance Standards for Ionizing Radiation Emitting Products), specifically 21 CFR 1020.30 (Diagnostic x-ray system) and 21 CFR 1020.31 (Radiographic Equipment). |
Image Quality Performance Parameters | - MTF (Modulation Transfer Function) results compared with reference device (XR5, K150663) and found equivalent. |
| | - Spatial Frequency results compared with reference device and found equivalent. |
| | - DQE (Detective Quantum Efficiency) results compared with reference device and found equivalent. |
| | - NPS (Noise Power Spectrum) results compared with reference device and found equivalent. |
Functional Equivalence | - Capable of setting higher mA (20 mA vs 15 mA max for predicate) under same maximum tube voltage (90kV), allowing capture of radiographic images for the same body parts as the predicate device. |
| | - User interface (soft touch push buttons) and collimator (continuously adjustable light beam type) are similar to the predicate. |
| | - Energy source (rechargeable Li-ion battery pack) is the same as the predicate. |
Intended Indications for Use Adherence | - Intended for use to acquire X-ray images of desired parts of a patient's anatomy on adult and pediatric patients (head, chest, abdomen, cervical spine, extremities). |
| | - May be used for handheld diagnostic imaging of body extremities. |
| | - Stand-mounted use limitations: diagnostic imaging of head, abdomen, cervical spine, or extremities; chest imaging without a grid. |
| | - Not intended for mammography. (All align with predicate and intended use.) |
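For background on the MTF comparison above: a common bench method estimates MTF from an edge image by differentiating the edge spread function (ESF) into a line spread function (LSF) and taking the magnitude of its Fourier transform. The sketch below applies that general idea to a synthetic 1D edge profile; it is an illustrative sketch under stated assumptions, not the test protocol used for this device.

```python
import numpy as np

def mtf_from_edge_profile(esf, pixel_pitch_mm):
    """Estimate MTF from a 1D edge spread function sampled at pixel_pitch_mm.

    Returns (spatial frequencies in lp/mm, MTF normalized to 1 at zero frequency).
    """
    lsf = np.gradient(esf)                       # line spread function
    lsf = lsf * np.hanning(lsf.size)             # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalize DC to 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# Hypothetical blurred edge: a smooth step standing in for a measured ESF.
x = np.arange(256)
edge = 1.0 / (1.0 + np.exp(-(x - 128) / 2.0))
freqs, mtf = mtf_from_edge_profile(edge, pixel_pitch_mm=0.1)

# Frequency where MTF first drops below 10%, a commonly quoted summary figure.
f10 = freqs[np.argmax(mtf < 0.1)]
print("MTF10 ~ %.2f lp/mm" % f10)
```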
Study Details (Relevant to this Traditional 510(k) X-ray System Submission)
- Sample size used for the test set and data provenance:
- For image performance testing, the document states: "Image performance testing was performed on the EzRay M18 in comparison with a reference device XR5 (K150663) using a FDA-cleared flat-panel detector, 1417WCC (K171418)."
- The document does not specify the sample size (e.g., number of images, phantoms, or subjects) used for this image performance testing.
- The data provenance (country of origin, retrospective/prospective) is not mentioned. These tests are typically bench tests using phantoms or controlled images, not patient data in the context of an X-ray machine clearance.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This is not applicable as the study described is a technical performance comparison of an X-ray machine's image characteristics (MTF, DQE, etc.) with a reference device, rather than a diagnostic accuracy study requiring expert human reads for ground truth. Ground truth for these technical measurements is derived from the physical properties of the test phantoms or from established engineering principles and measurements.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable. There's no human reader interpretation or adjudication described, as the study focuses on the physical image output quality metrics.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- Not applicable. This submission is for a mobile X-ray system hardware, not an AI diagnostic algorithm. Therefore, no MRMC study, AI assistance, or human reader improvement associated with AI is discussed or performed.
- Whether a standalone (i.e. algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not applicable. There is no AI algorithm being submitted. The device is purely an imaging acquisition system.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the image performance testing, the "ground truth" for metrics like MTF, DQE, spatial frequency, and NPS would be derived from the physical and mathematical properties of the test phantoms or technical measurement standards used in a controlled laboratory setting. It is essentially comparing objective technical measurements against those of a known, legally marketed device.
- The sample size for the training set:
- Not applicable. This device does not involve a "training set" as it is a hardware device, not a machine learning model.
- How the ground truth for the training set was established:
- Not applicable. As above, there is no training set mentioned or implied for this device.
(14 days)
Vatech Co., Ltd
Green X 18 (Model : PHT-75CHS) is intended to produce panoramic, cephalometric or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X 18 (Model : PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT and MODEL Scan imaging capabilities into a single system. Green X 18 (Model : PHT-75CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists and is designed explicitly for dental radiography. Green X 18 (Model : PHT-75CHS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant and orthodontic treatment. The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm of the subject device are the same as those of the predicate device (PHT-75CHS (K201627)). The difference from the predicate device is that it is equipped with a new CBCT/PANO detector to provide users with a larger CBCT FOV.
The medical device in question is the Green X 18 (Model: PHT-75CHS), a digital X-ray imaging system for panoramic, cephalometric, and 3D dental imaging. The study described focuses on demonstrating substantial equivalence to a predicate device, the Green X (Model: PHT-75CHS, K201627), particularly concerning a new detector, Xmaru1524CF Master Plus OP.
Based on the provided text, the acceptance criteria and study information can be summarized as follows:
1. Table of Acceptance Criteria and Reported Device Performance:
The document primarily focuses on demonstrating equivalence to the predicate device rather than setting specific numeric acceptance criteria for unique features or diagnostic accuracy. Instead, the acceptance is based on the new detector performing "equivalently or better" than the predicate.
Acceptance Criterion (Implicit) | Reported Device Performance (Subject Device vs. Predicate Device) |
---|---|
Technological Characteristics | The fundamental technological characteristics of the subject and predicate device are identical. Similar imaging modes (PANO, CEPH (Optional), CBCT, and 3D MODEL Scan). The materials, safety characteristics, X-ray source, indications for use, and image reconstruction/MAR (Metal Artifact Reduction) algorithm are the same as the predicate device. The difference is the new CBCT/PANO detector for a larger CBCT FOV. |
Pixel Resolution (New Detector vs. Predicate Detector) | New Detector (Xmaru1524CF Master Plus OP): 5 lp/mm (2x2 binning), 2.5 lp/mm (4x4 binning) for CT&PANO. |
| | Predicate Detector (Xmaru1314CF): 5 lp/mm (2x2 binning), 2.5 lp/mm (4x4 binning). Test patterns of the new sensor images show the test subjects without aliasing throughout the same spatial frequency as the predicate device. |
DQE Performance (New Detector vs. Predicate Detector) | New Detector (Xmaru1524CF Master Plus OP): Similarly or better overall DQE performance. At a low spatial frequency (~0.5 lp/mm), DQE of 41% (4x4 binning). |
| | Predicate Detector (Xmaru1314CF): DQE of 36% (4x4 binning) at ~0.5 lp/mm. |
MTF and NPS Performance (New Detector vs. Predicate Detector) | The new sensor also exhibits similar performances in terms of MTF and NPS. |
Image Quality (Contrast, Noise, CNR, MTF in CT mode) | The subject device performed equivalently or better than the predicate device in the general image quality, measured with FDK (back projection) and CS (iterative) reconstruction algorithm. |
Dosimetric Performance (DAP) | PANO mode: DAP measurement was the same under identical FDD, exposure area, X-ray exposure time, tube voltage, and tube current. |
| | CEPH mode: DAP measurement was the same under identical FDD, detector specifications, X-ray exposure conditions (exposure time, tube voltage, tube current). |
| | CBCT mode: DAP measurements compared at different FOV sizes (12x9/8x8/8x5/5x5 cm) were equivalent under identical FDD and exposure conditions. |
General Clinical Image Quality (PANO/CBCT mode) | The Clinical consideration and Image Quality Evaluation Report further demonstrated that the general image quality of the subject device is equivalent or better than the predicate device. |
Compliance with Standards (Non-Clinical) | The acceptance test was performed according to the requirements of 21 CFR Part 1020.30, 1020.33 and IEC 61223-3-5. The device passed these tests. Non-clinical consideration report according to FDA Guidance "Guidance for the submissions of 510(k)'s for Solid State X-ray Imaging Devices" was provided. Bench testing according to FDA Guidance "Format for Traditional and Abbreviated 510(k)s, Performance Testing – Bench" were performed. Acceptance test and Image evaluation report according to IEC 61223-3-4 and IEC 61223-3-5 were also performed. All test results were satisfactory. |
Software Verification and Validation | Software verification and validation were conducted and documented as recommended by FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." The software was considered a "moderate" level of concern. The viewing programs EzDent-i (K202116) and Ez3D-i (K200178) were previously cleared. |
Safety, EMC, and Performance Standards (Electrical, Mechanical, Environmental) | Electrical, mechanical, environmental safety and performance testing according to IEC 60601-1:2005+AMD1:2012(Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1) were performed. EMC testing was conducted in accordance with IEC 60601-1-2:2014 (Edition 4). Manufacturing facility conforms with relevant EPRC standards (21 CFR 1020.30, 31, and 33). Conforms to NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set. All test results were satisfactory. |
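The line-pair figures in the table follow from the detector sampling pitch: the highest spatial frequency that can be represented without aliasing (the Nyquist limit) is the reciprocal of twice the effective pixel pitch, and binning multiplies the effective pitch by the binning factor. As a worked illustration assuming a 50 µm native pitch (an assumed value used only for illustration, not one stated in this summary):

```latex
f_{\mathrm{Nyq}} = \frac{1}{2\,p_{\mathrm{eff}}}, \qquad p_{\mathrm{eff}} = b \cdot p_{\mathrm{native}}
```

With $p_{\mathrm{native}} = 0.05$ mm: 2x2 binning gives $p_{\mathrm{eff}} = 0.1$ mm and $f_{\mathrm{Nyq}} = 5$ lp/mm, while 4x4 binning gives $p_{\mathrm{eff}} = 0.2$ mm and $f_{\mathrm{Nyq}} = 2.5$ lp/mm, consistent with the 5 lp/mm and 2.5 lp/mm values reported above.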
2. Sample Size for the Test Set and Data Provenance:
- Sample Size: Not explicitly stated as a number of patient cases or images. The performance testing was conducted in a laboratory setting using test protocols and phantoms, not a clinical test set of patient images.
- Data Provenance: The testing was "non-clinical" and "in a laboratory." It compared the performance of the new detector and the subject device against the predicate device. This implies retrospective comparison against previously established performance data for the predicate, and possibly prospective bench testing on the new device itself. The data is likely from the manufacturer's internal testing facilities (presumably in Korea, given the manufacturer's address).
3. Number of Experts and Qualifications for Ground Truth of Test Set:
- There is no mention of human experts being used to establish ground truth for a test set of clinical images. The provided information describes non-clinical performance testing using quantitative metrics (DQE, MTF, NPS, Contrast, Noise, CNR) and comparison to the predicate device, as well as a "Clinical consideration and Image Quality Evaluation Report" which "demonstrated that the general image quality of the subject device is equivalent or better than the predicate device in PANO/CBCT mode." However, details on how this "Clinical consideration" was performed (e.g., blinded review by experts, number of experts, their qualifications, or what "ground truth" they used) are not provided in this summary.
4. Adjudication Method for the Test Set:
- Not applicable as the testing described is primarily non-clinical and does not involve human readers or a clinical test set requiring adjudication in the context of diagnostic accuracy.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned. The study focuses on equivalence through non-clinical performance metrics and comparison to a predicate device, not on how human readers' performance improves with or without the device.
6. Standalone Performance:
- Yes, performance data for the standalone device/detector was done. The document outlines bench testing of the Xmaru1524CF Master Plus OP detector and the Green X 18 system itself, measuring metrics such as pixel size, DQE, MTF, NPS, Contrast, Noise, CNR, and DAP. These measurements represent the algorithm-only/device-only performance in a controlled environment.
7. Type of Ground Truth Used:
- For the non-clinical performance testing, the "ground truth" was established by physical standards and quantitative measurements in a laboratory setting, comparing the device's performance against industry standards (e.g., 21 CFR Part 1020.30, 1020.33, IEC 61223-3-5, IEC 61223-3-4) and the performance of the predicate device.
- For the "Clinical consideration and Image Quality Evaluation Report," the method for establishing ground truth is not detailed, but it would presumably involve expert review of images obtained from the device.
8. Sample Size for the Training Set:
- No training set information is provided, as the submission describes a medical imaging device (X-ray system), not an AI algorithm that requires a training set of images. The "image reconstruction/MAR(Metal Artifact Reduction) algorithm" is mentioned as being the same as the predicate device, implying it was developed prior and is not a new algorithm requiring a new training study for this submission.
9. How the Ground Truth for the Training Set Was Established:
- Not applicable, as this is an imaging device and not an AI algorithm requiring a training set with established ground truth.
(48 days)
Vatech Co., Ltd
EzRay M (Model: VMX-P300) is a portable general-purpose X-ray system that users can operate with one hand. The device uses a fixed tube current and voltage (kVp) and, therefore, is limited to taking diagnostic X-rays of extremities. It is intended to be used by a qualified and trained clinician on adult patients. It is not intended to replace a radiographic system with variable tube current and voltage (kVp), which may be required for full optimization of image quality and radiation exposure for different exam types.
EzRay M (Model: VMX-P300), a medical portable X-ray system, operates on 21.6 Vdc supplied by a rechargeable Li-ion battery pack. The system is designed for medical examination and composed of an X-ray generating part with an X-ray tube including a device controller, a power controller, a user interface, a beam limiting part, and optional items. It is intended to be used by a qualified and trained clinician on adult patients. The device is intended to assist the diagnosis of bones and tissues through X-ray exposure using an imaging receptor. The image receptor (an integral part of a complete diagnostic system) is not part of this submission.
This is a 510(k) premarket notification for a mobile X-ray system, EzRay M (Model: VMX-P300). The document does not describe acceptance criteria, or a study demonstrating that the device meets such criteria, in the context of AI/ML performance. Instead, it focuses on demonstrating substantial equivalence to a predicate device (NOMAD MD Handheld X-ray System, K140723) based on device characteristics, indications for use, and compliance with international and federal safety standards for X-ray equipment.
Therefore, I cannot provide the requested information about acceptance criteria and a study proving device performance in the context of AI/ML, as this document does not contain that type of information. It is a regulatory submission for a traditional medical device (X-ray system), not an AI/ML powered device.
(28 days)
Vatech Co., Ltd
vatech A9 (Model : PHT-30CSS) is intended to produce panoramic, cone beam computed tomography, or cephalometric digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus, and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
vatech A9 (Model : PHT-30CSS) is an advanced 3-in-1 digital X-ray imaging system that incorporates PANO, CEPH (Optional), and CBCT scan imaging capabilities into a single system. vatech A9 (Model : PHT-30CSS), a digital radiography imaging system, is specially designed to take X-ray images of patients on the chair and assist dentists. Designed explicitly for dental radiography, vatech A9 (Model : PHT-30CSS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator, and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant, and orthodontic treatment.
Here's an analysis of the acceptance criteria and study information for the Vatech A9 (Model: PHT-30CSS) based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly list numerical acceptance criteria in a table format. Instead, it states that the device's performance was compared to a predicate device and international standards. The general acceptance criterion appears to be "equivalent or better than the predicate device" in terms of image quality and meeting relevant international standards for X-ray systems.
Acceptance Criteria (Inferred) | Reported Device Performance (as stated) |
---|---|
Image Quality: Equivalent or better than the predicate device (Green16/Green18, K170066) in terms of Contrast, Noise, CNR, and MTF for CBCT, PANO, and CEPH images. | "The results demonstrated that the general image quality of the subject device is equivalent or better than the predicate device." (Applies to Contrast, Noise, CNR, MTF in CT; also stated generically for PANO/CEPH/CBCT images.) |
Dosimetry Performance (DAP): For Panoramic mode, dose to be in line with the predicate device. For Cephalometric mode, DAP measurements to be the same as the predicate device under identical conditions. For CBCT mode, similar performance to the predicate device considering FOV differences. | Panoramic Mode: "The mA setting for the subjective device was increased to be in line with the DAP of the predicate device in the Normal Panoramic mode." CEPH Mode: "The CEPH mode for the subject device and the predicate device has the same FDD... the same DAP measurement under the same X-ray exposure conditions." CBCT Mode: "the outcome result confirmed that the CBCT mode for both devices performed similarly." |
Compliance with International Standards: Meeting requirements of 21 CFR Part 1020.30, 1020.33, IEC 61223-3-5, IEC 60601-1:2005+AMD1:2012 (Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1), IEC 60601-1-2:2014 (Edition 4), NEMA PS 3.1-3.18. | "The acceptance test was performed according to the requirements of 21 CFR Part 1020.30. 1020.33 and IEC 61223-3-5..." "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1:2005+AMD1:2012(Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1) were performed, and EMC testing were conducted in accordance with standard IEC 60601-1-2:2014 (Edition 4)." "The vatech A9 (Model : PHT-30CSS) conforms to the provisions of NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set." |
Software: "Moderate" level of concern, with existing cleared viewing programs. | "Software verification and validation were conducted and documented as recommended by FDA's Guidance... The software for this device was considered as a 'moderate' level of concern..." "vatech A9 (Model: PHT-30CSS) provides the following imaging viewer programs; -2D Image viewing program: EzDent-i(K202116) -3D Image viewing program: Ez3D-i(K200178)" |
X-Ray Source (D-054SB): Specifications (max rating, emission & filament characteristics) equivalent to the predicate device's D-052SB. | "The specification for both D-054SB and D-052SB x-ray source (tube) is the same as confirmed by the maximum rating charts, emission & filament characteristics." |
Detector (Xmaru1404CF-PLUS): Previously cleared. | "The subject device is equipped with the Xmaru1404CF-PLUS detector which has been cleared with previous 510k submissions, PCH-30CS (K170731)." |
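For context on the DAP rows in the table above: dose-area product is, to a first approximation, the air kerma at the measurement plane multiplied by the exposed beam area, which is why identical exposure settings and collimation are expected to yield matching values on the subject and predicate devices. A minimal sketch of that arithmetic, using hypothetical numbers rather than values from the submission:

```python
def dose_area_product(air_kerma_mgy, field_width_cm, field_height_cm):
    """Approximate DAP (mGy*cm^2) for a uniform rectangular beam:
    DAP = air kerma at the measurement plane x exposed area.
    """
    return air_kerma_mgy * field_width_cm * field_height_cm

# Hypothetical cephalometric exposure (illustrative values only, not from the 510(k))
subject_dap = dose_area_product(air_kerma_mgy=0.8, field_width_cm=24.0, field_height_cm=30.0)
predicate_dap = dose_area_product(air_kerma_mgy=0.8, field_width_cm=24.0, field_height_cm=30.0)
print(subject_dap, predicate_dap)  # identical inputs -> identical DAP (576.0 mGy*cm^2 each)
```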
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify the numerical sample size (number of images or cases) used for the performance testing or image quality evaluations. It mentions that "the same test protocol was used to test the performance of the subject and the predicate device for comparison."
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. Given that the testing involved comparing the subject device with a predicate device and was conducted in a laboratory, it appears to be bench testing/non-clinical performance testing rather than testing on patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document mentions that "PANO/CEPH/CBCT images from the subject and predicate device are evaluated in the Image Quality Evaluation Report." However, it does not specify the number of experts who performed this evaluation, nor does it provide their qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for the image quality evaluation or performance testing. It simply states the images "are evaluated."
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described or indicated. The evaluation mentioned is an "Image Quality Evaluation Report" comparing the subject and predicate device, but it doesn't detail a study involving multiple human readers to assess improvement with or without AI assistance. The device is an X-ray imaging system, not an AI diagnostic aid.
6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done
This question is not directly applicable in the typical sense for this device. The Vatech A9 is itself an imaging device, an X-ray system, not an AI algorithm that generates a diagnosis or interpretation in a standalone manner. Its performance (image quality, dose) is evaluated as a standalone system. The software components (viewing programs EzDent-i and Ez3D-i) are also cleared, indicating their standalone functionality in displaying images.
7. The Type of Ground Truth Used
The "ground truth" for the performance evaluation appears to be based on:
- Physical Measurements and Standards: For Contrast, Noise, CNR, MTF, and Dosimetry (DAP), these are objective physical measurements taken with phantoms or test protocols.
- Comparison to a Predicate Device: The performance of the subject device was directly compared to the performance of the legally marketed predicate device (Green16/Green18, K170066) using the "same test protocol."
- Expert Evaluation: For the "Image Quality Evaluation Report," the "ground truth" implicitly relies on expert subjective assessment of the images, although details are missing.
8. The Sample Size for the Training Set
The document does not describe a training set. This is because the device is a medical imaging hardware system (CT X-ray system), not an AI algorithm that requires a training set for machine learning. The viewing software (EzDent-i, Ez3D-i) is separate and was cleared through previous 510(k) submissions.
9. How the Ground Truth for the Training Set was Established
Not applicable, as there is no mention of a training set for an AI algorithm.
(133 days)
Vatech Co., Ltd
Green X (Model : PHT-75CHS) is intended to produce panoramic, cephalometric, or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus, and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X (Model : PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT, and MODEL Scan imaging capabilities into a single system. Green X (Model : PHT-75CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists. Designed explicitly for dental radiography, Green X is a complete digital X-ray system equipped with imaging viewers, an X-ray generator, and a dedicated SSXI detector.
The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant and orthodontic treatment. Green X (Model : PHT-75CHS) can also acquire 2D diagnostic image data in conventional PANO and CEPH modes.
The provided text describes the Green X (Model: PHT-75CHS) dental X-ray imaging system and its substantial equivalence to a predicate device. However, it does not contain detailed information about a study proving the device meets acceptance criteria for an AI feature with specific performance metrics such as sensitivity, specificity, or AUC calculated on a test set, nor does it describe an MRMC study.
The document discusses improvements and additions to the device, including "Endo mode," "Double Scan function," "Insight PAN 2.0," and the availability of FDK and CS reconstruction algorithms. It mentions some quantitative evaluations for these features, primarily focusing on image quality metrics and stitching accuracy, but not clinical performance metrics typical for AI algorithms (e.g., detection of specific pathologies).
Based on the provided text, here's an attempt to answer the questions, highlighting where information is missing for AI-specific criteria:
Acceptance Criteria and Device Performance (Based on available information):
Feature/Metric | Acceptance Criteria (Stated) | Reported Device Performance |
---|---|---|
Endo Mode | Quantitative evaluation satisfied IEC 61223-3-5 standard criteria for Noise, Contrast, CNR, MTF 10%. Clinical images demonstrated "sufficient diagnostic quality." | MTF (@10%): 3.4 lp/mm. Clinical images demonstrated "sufficient diagnostic quality to provide accurate information of the size and location of the periapical lesion and root apex in relation to structure for endodontic surgical procedure." |
Double Scan Function (Stitching Accuracy) | Average SSIM. RMSE less than 1 voxel (0.3mm). Clinical evaluation confirmed "no sense of heterogeneity." | Average SSIM: 0.9674. RMSE: 0.0027 (less than 1 voxel (0.3mm)). Clinical efficacy confirmed "without any sense of heterogeneity." |
Insight PAN 2.0 | Image quality factors (line pair resolution, low contrast resolution) satisfy IEC 61223-3-4 standard criteria. Clinical evaluation confirmed adequacy for specific diagnostic cases. | Image quality factors satisfied IEC 61223-3-4. Clinically evaluated and found adequate for challenging diagnostic cases (multi-root diagnosis, pericoronitis, dens in dente, apical root shape). |
FDK/CS Algorithms | Measured values for the evaluated image-quality parameters (Noise, CNR, MTF 10%) satisfy IEC 61223-3-5 standard criteria. | Values for Noise, CNR, MTF 10% satisfied IEC 61223-3-5 for both FDK and CS reconstruction images. |
General Image Quality | Equivalent or better than the predicate device. | Demonstrated to be equivalent or better than the predicate device (based on CT Image Quality Evaluation Report). |
Dosimetry (DAP) | Equivalent to predicate device in PANO/CEPH. For CBCT, FOV 12x9 mode DAP equivalent to predicate. | DAP in CEPH/PANO was the same. DAP of FOV 12x9 CBCT mode was equivalent to predicate. |
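The Endo mode row above reports the spatial frequency at which the MTF drops to 10% (3.4 lp/mm). The submission does not describe how that figure was extracted, but a generic approach is to interpolate a measured MTF curve for the frequency at which it crosses the 0.1 threshold; the sketch below assumes such a tabulated curve and uses made-up sample points.

```python
import numpy as np

def mtf_at_threshold(frequencies_lp_mm, mtf_values, threshold=0.10):
    """Linearly interpolate the spatial frequency at which a monotonically
    decreasing MTF curve crosses the given threshold (e.g. 10%)."""
    freqs = np.asarray(frequencies_lp_mm, dtype=float)
    mtf = np.asarray(mtf_values, dtype=float)
    # np.interp needs increasing x-values, so flip the (decreasing) curve and
    # interpolate frequency as a function of MTF.
    return float(np.interp(threshold, mtf[::-1], freqs[::-1]))

# Made-up MTF samples for illustration only (not measured device data)
freqs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
mtf   = [0.92, 0.78, 0.60, 0.43, 0.28, 0.16, 0.09, 0.05]
print(round(mtf_at_threshold(freqs, mtf), 2), "lp/mm")
```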
1. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated for any of the described evaluations (Endo mode, Double Scan, Insight PAN 2.0, FDK/CS algorithms, or general image quality). The evaluations seem to be based on a limited number of clinical images/test phantoms rather than large-scale patient datasets.
- Data Provenance: Not specified. It indicates "clinical images generated in Endo mode" and "3D clinical consideration" for Double Scan, and "clinical evaluation" for Insight PAN 2.0. There is no mention of country of origin or whether the data was retrospective or prospective.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Endo Mode: "A US licensed dentist" evaluated the clinical images. The number of dentists is not specified (could be one or multiple). No specific years of experience or sub-specialty are mentioned beyond "US licensed dentist."
- Double Scan Function: "3D clinical consideration and evaluation" was performed. No specific number or qualifications of experts are mentioned.
- Insight PAN 2.0: "Clinical evaluation was performed." No specific number or qualifications of experts are mentioned.
- Other evaluations: The document refers to "satisfying standard criteria" (IEC 61223-3-5, IEC 61223-3-4) and measurements on phantoms, which typically do not involve expert ground truth in the same way clinical AI performance studies do.
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- None specified. The evaluations appear to involve a single "US licensed dentist" for Endo mode, and "clinical evaluation" without detailing the adjudication process for other features. This is not a typical AI performance study setup where multiple readers independently review and a consensus process might be employed.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC comparative effectiveness study was not done. The document describes performance evaluations of the device's features (e.g., image quality, stitching accuracy, clinical utility) but not a comparative study where human readers' performance with and without AI assistance is measured. Thus, no effect size for human improvement is reported.
5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- This is not explicitly an AI-only device where the "algorithm" performs diagnostic tasks autonomously. The features described (Endo mode, Double Scan, Insight PAN 2.0, reconstruction algorithms) are functionalities of an X-ray imaging system that produce images for human interpretation. The "evaluations" described are largely for image quality metrics and technical performance, not for algorithmic detection or classification of disease. Therefore, a standalone performance study in the context of an AI diagnostic aid is not applicable in the way it might be for, say, an algorithm that flags lesions.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Primary Ground Truth:
- Phantom Measurements: For quantitative image quality metrics (Noise, Contrast, CNR, MTF, line pair resolution, low contrast resolution) according to IEC standards.
- Calculated Metrics: For stitching accuracy (SSIM, RMSE); a generic computation sketch follows this list.
- Clinical Evaluation: For confirming "diagnostic quality" (Endo mode) and "clinical efficacy" (Double Scan, Insight PAN 2.0), which relies on expert judgement of the generated images, rather than independent pathology or outcomes data. It functions more as a qualitative assessment of the image's utility.
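As noted above, the Double Scan stitching accuracy was reported as an average SSIM and an RMSE below one voxel. The submission does not describe the computation pipeline, so the following is only a generic sketch of how a stitched volume might be compared against a reference acquisition; the `stitching_agreement` helper and the placeholder volumes are hypothetical.

```python
import numpy as np
from skimage.metrics import structural_similarity

def stitching_agreement(stitched, reference):
    """Compare a stitched volume against a reference volume over a common region.

    Returns (ssim, intensity_rmse). Note that an intensity RMSE is not the same
    as the sub-voxel geometric error quoted in the summary; a registration-based
    landmark displacement would be needed to express error in voxels or mm.
    """
    stitched = stitched.astype(float)
    reference = reference.astype(float)
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(stitched, reference, data_range=data_range)
    rmse = float(np.sqrt(np.mean((stitched - reference) ** 2)))
    return ssim, rmse

# Placeholder volumes standing in for overlapping scan regions (not device data)
rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, size=(32, 32, 32))
stitched = reference + rng.normal(0.0, 0.01, size=reference.shape)
print(stitching_agreement(stitched, reference))
```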
7. The sample size for the training set:
- Not applicable/Not provided. The document describes a traditional X-ray imaging system with new features, some of which might involve algorithms (e.g., stitching algorithm, reconstruction algorithms) but doesn't explicitly state that these features are "AI" in the sense of requiring a large, labeled training dataset of images to learn to perform a diagnostic task. If these features involve machine learning (e.g., for image enhancement or reconstruction), the training data for those specific algorithms is not detailed.
8. How the ground truth for the training set was established:
- Not applicable/Not provided for the reasons stated above.
Summary of the Device and Evaluation Context:
The FDA 510(k) clearance process for the Green X (Model: PHT-75CHS) system focuses on demonstrating substantial equivalence to a predicate device. The performance evaluations described are primarily related to the physical and technical performance of the X-ray imaging system and its new functionalities (Endo mode, Double Scan, Insight PAN 2.0, FDK/CS algorithms). These evaluations confirm that the device produces images of sufficient quality, that spatial and contrast resolutions meet standards, and that new features like image stitching are accurate.
Crucially, this is not a submission for an AI/ML-driven diagnostic medical device that would typically involve large, diverse test sets, multiple expert readers, detailed ground truth establishment (like pathology or clinical outcomes), and comparative effectiveness studies to measure how much AI improves human reader performance for a specific diagnostic task (e.g., detecting a particular disease from the image). The "performance data" provided relates to the image acquisition capabilities and processing algorithms of the imaging system itself, which are fundamental to any diagnostic interpretation by a human professional rather than an algorithmic diagnosis or detection.