510(k) Data Aggregation
vatech A9 (Model : PHT-30CSS) is intended to produce panoramic, cone beam computed tomography, or cephalometric digital x-ray images. It provides diagnostic details of the dento-maxillofacial, sinus, and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
vatech A9 (Model : PHT-30CSS) is an advanced 3-in-1 digital X-ray imaging system that incorporates PANO, CEPH (Optional), and CBCT scan imaging capabilities into a single system. vatech A9 (Model : PHT-30CSS), a digital radiography imaging system, is specially designed to take X-ray images of patients on the chair and assist dentists. Designed explicitly for dental radiography, vatech A9 (Model : PHT-30CSS) is a complete digital X-ray system equipped with imaging viewers, an X-ray generator, and a dedicated SSXI detector. The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant, and orthodontic treatment.
Here's an analysis of the acceptance criteria and study information for the Vatech A9 (Model: PHT-30CSS) based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly list numerical acceptance criteria in a table format. Instead, it states that the device's performance was compared to a predicate device and international standards. The general acceptance criterion appears to be "equivalent or better than the predicate device" in terms of image quality and meeting relevant international standards for X-ray systems.
Acceptance Criteria (Inferred) | Reported Device Performance (as stated) |
---|---|
Image Quality: Equivalent or better than the predicate device (Green16/Green18, K170066) in terms of Contrast, Noise, CNR, and MTF for CBCT, PANO, and CEPH images. | "The results demonstrated that the general image quality of the subject device is equivalent or better than the predicate device." (Applies to Contrast, Noise, CNR, and MTF in CT; also stated generically for PANO/CEPH/CBCT images.) |
Dosimetry Performance (DAP): Panoramic-mode dose in line with the predicate device; Cephalometric-mode DAP the same as the predicate under identical conditions; CBCT-mode performance similar to the predicate, accounting for FOV differences. | Panoramic mode: "The mA setting for the subjective device was increased to be in line with the DAP of the predicate device in the Normal Panoramic mode." CEPH mode: "The CEPH mode for the subject device and the predicate device has the same FDD... the same DAP measurement under the same X-ray exposure conditions." CBCT mode: "the outcome result confirmed that the CBCT mode for both devices performed similarly." |
Compliance with International Standards: Meeting the requirements of 21 CFR 1020.30 and 1020.33, IEC 61223-3-5, IEC 60601-1:2005+AMD1:2012 (Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1), IEC 60601-1-2:2014 (Edition 4), and NEMA PS 3.1-3.18. | "The acceptance test was performed according to the requirements of 21 CFR Part 1020.30, 1020.33 and IEC 61223-3-5..." "Electrical, mechanical, environmental safety and performance testing according to standard IEC 60601-1:2005+AMD1:2012 (Edition 3.1), IEC 60601-1-3:2008+AMD1:2013 (Edition 2.1), IEC 60601-2-63:2012+AMD1:2017 (Edition 1.1) were performed, and EMC testing was conducted in accordance with standard IEC 60601-1-2:2014 (Edition 4)." "The vatech A9 (Model : PHT-30CSS) conforms to the provisions of NEMA PS 3.1-3.18, Digital Imaging and Communications in Medicine (DICOM) Set." |
Software: "Moderate" level of concern, with existing cleared viewing programs. | "Software verification and validation were conducted and documented as recommended by FDA's Guidance... The software for this device was considered as a 'moderate' level of concern..." "vatech A9 (Model: PHT-30CSS) provides the following imaging viewer programs; 2D image viewing program: EzDent-i (K202116); 3D image viewing program: Ez3D-i (K200178)" |
X-Ray Source (D-054SB): Specifications (maximum rating, emission and filament characteristics) equivalent to those of the predicate device's D-052SB. | "The specification for both D-054SB and D-052SB x-ray source (tube) is the same as confirmed by the maximum rating charts, emission & filament characteristics." |
Detector (Xmaru1404CF-PLUS): Previously cleared. | "The subject device is equipped with the Xmaru1404CF-PLUS detector which has been cleared with previous 510(k) submissions, PCH-30CS (K170731)." |
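The Contrast, Noise, and CNR figures in the table above are objective phantom measurements. As an illustrative sketch only, using common ROI-based definitions (signal-minus-background contrast, background standard deviation as noise) rather than Vatech's documented test protocol:

```python
import numpy as np

def roi_stats(image, r0, r1, c0, c1):
    """Mean and standard deviation inside a rectangular ROI."""
    roi = image[r0:r1, c0:c1]
    return float(roi.mean()), float(roi.std())

def contrast_noise_cnr(image, signal_roi, background_roi):
    """ROI-based contrast, noise, and CNR. These definitions are
    common conventions, not the submission's actual protocol."""
    mu_s, _ = roi_stats(image, *signal_roi)
    mu_b, sigma_b = roi_stats(image, *background_roi)
    contrast = mu_s - mu_b   # signal difference
    noise = sigma_b          # background standard deviation
    return contrast, noise, contrast / noise

# Synthetic phantom slice: noisy uniform background plus a brighter insert.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, size=(64, 64))
img[20:40, 20:40] += 30.0  # contrast insert

contrast, noise, cnr = contrast_noise_cnr(
    img, signal_roi=(22, 38, 22, 38), background_roi=(2, 18, 2, 18))
```

In a real acceptance test the ROIs would be placed on defined inserts of a physical phantom per IEC 61223-3-5; the synthetic image here only demonstrates the arithmetic.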
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify the numerical sample size (number of images or cases) used for the performance testing or image quality evaluations. It mentions that "the same test protocol was used to test the performance of the subject and the predicate device for comparison."
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. Given that the testing involved comparing the subject device with a predicate device and was conducted in a laboratory, it appears to be bench testing/non-clinical performance testing rather than testing on patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document mentions that "PANO/CEPH/CBCT images from the subject and predicate device are evaluated in the Image Quality Evaluation Report." However, it does not specify the number of experts who performed this evaluation, nor does it provide their qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for the image quality evaluation or performance testing. It simply states the images "are evaluated."
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described or indicated. The evaluation mentioned is an "Image Quality Evaluation Report" comparing the subject and predicate device, but it doesn't detail a study involving multiple human readers to assess improvement with or without AI assistance. The device is an X-ray imaging system, not an AI diagnostic aid.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This question is not directly applicable in the typical sense for this device. The Vatech A9 is itself an imaging device, an X-ray system, not an AI algorithm that generates a diagnosis or interpretation in a standalone manner. Its performance (image quality, dose) is evaluated as a standalone system. The software components (viewing programs EzDent-i and Ez3D-i) are also cleared, indicating their standalone functionality in displaying images.
7. The Type of Ground Truth Used
The "ground truth" for the performance evaluation appears to be based on:
- Physical Measurements and Standards: For Contrast, Noise, CNR, MTF, and Dosimetry (DAP), these are objective physical measurements taken with phantoms or test protocols.
- Comparison to a Predicate Device: The performance of the subject device was directly compared to the performance of the legally marketed predicate device (Green16/Green18, K170066) using the "same test protocol."
- Expert Evaluation: For the "Image Quality Evaluation Report," the "ground truth" implicitly relies on expert subjective assessment of the images, although details are missing.
8. The Sample Size for the Training Set
The document does not describe a training set. This is because the device is a medical imaging hardware system (CT X-ray system), not an AI algorithm that requires a training set for machine learning. The viewing software (EzDent-i, Ez3D-i) is separate and was cleared through previous 510(k) submissions.
9. How the Ground Truth for the Training Set was Established
Not applicable, as there is no mention of a training set for an AI algorithm.
Green X (Model : PHT-75CHS) is intended to produce panoramic, cephalometric or 3D digital x-ray images. It provides diagnostic details of the dento-maxillofacial, ENT, sinus and TMJ for adult and pediatric patients. The system also utilizes carpal images for orthodontic treatment. The device is to be operated by healthcare professionals.
Green X (Model : PHT-75CHS) is an advanced 4-in-1 digital X-ray imaging system that incorporates PANO, CEPH (optional), CBCT and MODEL Scan imaging capabilities into a single system. Green X (Model : PHT-75CHS), a digital radiographic imaging system, acquires and processes multi-FOV diagnostic images for dentists. Designed explicitly for dental radiography, Green X is a complete digital X-ray system equipped with imaging viewers, an X-ray generator and a dedicated SSXI detector.
The digital CBCT system is based on a CMOS digital X-ray detector. The CMOS CT detector is used to capture 3D radiographic images of the head, neck, oral surgery, implant and orthodontic treatment. Green X (Model : PHT-75CHS) can also acquire 2D diagnostic image data in conventional PANO and CEPH modes.
The provided text describes the Green X (Model: PHT-75CHS) dental X-ray imaging system and its substantial equivalence to a predicate device. However, it does not contain detailed information about a study proving the device meets acceptance criteria for an AI feature with specific performance metrics such as sensitivity, specificity, or AUC calculated on a test set, nor does it describe an MRMC study.
The document discusses improvements and additions to the device, including "Endo mode," "Double Scan function," "Insight PAN 2.0," and the availability of FDK and CS reconstruction algorithms. It mentions some quantitative evaluations for these features, primarily focusing on image quality metrics and stitching accuracy, but not clinical performance metrics typical for AI algorithms (e.g., detection of specific pathologies).
Based on the provided text, here's an attempt to answer the questions, highlighting where information is missing for AI-specific criteria:
Acceptance Criteria and Device Performance (Based on available information):
Feature/Metric | Acceptance Criteria (Stated) | Reported Device Performance |
---|---|---|
Endo Mode | Quantitative evaluation satisfied IEC 61223-3-5 standard criteria for Noise, Contrast, CNR, and MTF 10%. Clinical images demonstrated "sufficient diagnostic quality." | MTF (@10%): 3.4 lp/mm. Clinical images demonstrated "sufficient diagnostic quality to provide accurate information of the size and location of the periapical lesion and root apex in relation to structure for endodontic surgical procedure." |
Double Scan Function (Stitching Accuracy) | Average SSIM; RMSE less than 1 voxel (0.3 mm). Clinical evaluation confirmed "no sense of heterogeneity." | Average SSIM: 0.9674. RMSE: 0.0027 (less than 1 voxel, 0.3 mm). Clinical efficacy confirmed "without any sense of heterogeneity." |
Insight PAN 2.0 | Image quality factors (line pair resolution, low contrast resolution) satisfy IEC 61223-3-4 standard criteria. Clinical evaluation confirmed adequacy for specific diagnostic cases. | Image quality factors satisfied IEC 61223-3-4. Clinically evaluated and found adequate for challenging diagnostic cases (multi-root diagnosis, pericoronitis, dens in dente, apical root shape). |
FDK/CS Algorithms | Measured values for four parameters (Noise, Contrast, CNR, MTF 10%) satisfy IEC 61223-3-5 standard criteria. | Values for Noise, Contrast, CNR, and MTF 10% satisfied IEC 61223-3-5 for both FDK and CS reconstruction images. |
General Image Quality | Equivalent or better than the predicate device. | Demonstrated to be equivalent or better than the predicate device (based on the CT Image Quality Evaluation Report). |
Dosimetry (DAP) | Equivalent to the predicate device in PANO/CEPH; for CBCT, FOV 12x9 mode DAP equivalent to the predicate. | DAP in CEPH/PANO was the same. DAP of the FOV 12x9 CBCT mode was equivalent to the predicate. |
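The stitching-accuracy metrics reported for the Double Scan function (SSIM and RMSE) are standard full-reference image-similarity measures. A simplified sketch, using a single-window (global) SSIM rather than the usual sliding-window form; the volumes and noise level below are synthetic, not the submission's data:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two arrays (same units as input)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM over the whole array: a simplification of the
    usual sliding-window SSIM, shown here only to illustrate the formula."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

# Synthetic "stitched" volume: the reference plus a small perturbation.
rng = np.random.default_rng(1)
vol = rng.random((16, 16, 16))
stitched = vol + rng.normal(0.0, 0.01, vol.shape)

similarity = global_ssim(vol, stitched)  # close to 1 for near-identical volumes
error = rmse(vol, stitched)              # small for sub-voxel disagreement
```

An RMSE below the voxel pitch (0.3 mm in the submission) is what "less than 1 voxel" expresses: the stitched halves disagree by less than one sample spacing on average.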
1. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated for any of the described evaluations (Endo mode, Double Scan, Insight PAN 2.0, FDK/CS algorithms, or general image quality). The evaluations seem to be based on a limited number of clinical images/test phantoms rather than large-scale patient datasets.
- Data Provenance: Not specified. It indicates "clinical images generated in Endo mode" and "3D clinical consideration" for Double Scan, and "clinical evaluation" for Insight PAN 2.0. There is no mention of country of origin or whether the data was retrospective or prospective.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Endo Mode: "A US licensed dentist" evaluated the clinical images. The number of dentists is not specified (could be one or multiple). No specific years of experience or sub-specialty are mentioned beyond "US licensed dentist."
- Double Scan Function: "3D clinical consideration and evaluation" was performed. No specific number or qualifications of experts are mentioned.
- Insight PAN 2.0: "Clinical evaluation was performed." No specific number or qualifications of experts are mentioned.
- Other evaluations: The document refers to "satisfying standard criteria" (IEC 61223-3-5, IEC 61223-3-4) and measurements on phantoms, which typically do not involve expert ground truth in the same way clinical AI performance studies do.
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- None specified. The evaluations appear to involve a single "US licensed dentist" for Endo mode, and "clinical evaluation" without detailing the adjudication process for other features. This is not a typical AI performance study setup where multiple readers independently review and a consensus process might be employed.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No, an MRMC comparative effectiveness study was not done. The document describes performance evaluations of the device's features (e.g., image quality, stitching accuracy, clinical utility) but not a comparative study where human readers' performance with and without AI assistance is measured. Thus, no effect size for human improvement is reported.
5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- This is not explicitly an AI-only device where the "algorithm" performs diagnostic tasks autonomously. The features described (Endo mode, Double Scan, Insight PAN 2.0, reconstruction algorithms) are functionalities of an X-ray imaging system that produce images for human interpretation. The "evaluations" described are largely for image quality metrics and technical performance, not for algorithmic detection or classification of disease. Therefore, a standalone performance study in the context of an AI diagnostic aid is not applicable in the way it might be for, say, an algorithm that flags lesions.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Primary Ground Truth:
- Phantom Measurements: For quantitative image quality metrics (Noise, Contrast, CNR, MTF, line pair resolution, low contrast resolution) according to IEC standards.
- Calculated Metrics: For stitching accuracy (SSIM, RMSE).
- Clinical Evaluation: For confirming "diagnostic quality" (Endo mode) and "clinical efficacy" (Double Scan, Insight PAN 2.0), which relies on expert judgement of the generated images, rather than independent pathology or outcomes data. It functions more as a qualitative assessment of the image's utility.
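The MTF 10% value reported for Endo mode (3.4 lp/mm) is the spatial frequency at which the modulation transfer function falls to 10%. A small sketch of reading that cutoff off a measured MTF curve by linear interpolation; the toy curve below is made up, and real MTF data would come from an edge or slit phantom:

```python
import numpy as np

def mtf_cutoff(freqs, mtf, level=0.10):
    """Spatial frequency (lp/mm) at which a monotonically decreasing MTF
    curve crosses `level`, found by linear interpolation between the two
    bracketing samples. Hypothetical helper for illustration."""
    freqs = np.asarray(freqs, dtype=float)
    mtf = np.asarray(mtf, dtype=float)
    below = np.where(mtf <= level)[0]
    if below.size == 0:
        return None  # curve never drops to the requested level
    i = below[0]
    if i == 0:
        return float(freqs[0])
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return float(f0 + (m0 - level) * (f1 - f0) / (m0 - m1))

# Toy MTF curve that crosses 10% between 3 and 4 lp/mm.
freqs = [0.0, 1.0, 2.0, 3.0, 4.0]
mtf = [1.0, 0.8, 0.5, 0.2, 0.05]
f10 = mtf_cutoff(freqs, mtf)
```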
7. The sample size for the training set:
- Not applicable/Not provided. The document describes a traditional X-ray imaging system with new features, some of which might involve algorithms (e.g., stitching algorithm, reconstruction algorithms) but doesn't explicitly state that these features are "AI" in the sense of requiring a large, labeled training dataset of images to learn to perform a diagnostic task. If these features involve machine learning (e.g., for image enhancement or reconstruction), the training data for those specific algorithms is not detailed.
8. How the ground truth for the training set was established:
- Not applicable/Not provided for the reasons stated above.
Summary of the Device and Evaluation Context:
The FDA 510(k) clearance process for the Green X (Model: PHT-75CHS) system focuses on demonstrating substantial equivalence to a predicate device. The performance evaluations described are primarily related to the physical and technical performance of the X-ray imaging system and its new functionalities (Endo mode, Double Scan, Insight PAN 2.0, FDK/CS algorithms). These evaluations confirm that the device produces images of sufficient quality, that spatial and contrast resolutions meet standards, and that new features like image stitching are accurate.
Crucially, this is not a submission for an AI/ML-driven diagnostic medical device that would typically involve large, diverse test sets, multiple expert readers, detailed ground truth establishment (like pathology or clinical outcomes), and comparative effectiveness studies to measure how much AI improves human reader performance for a specific diagnostic task (e.g., detecting a particular disease from the image). The "performance data" provided relates to the image acquisition capabilities and processing algorithms of the imaging system itself, which are fundamental to any diagnostic interpretation by a human professional rather than an algorithmic diagnosis or detection.