Ez3D-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.
Ez3D-i is intended for use as software to load, view and save DICOM images from CT, panorama, cephalometric and intraoral imaging equipment and to provide 3D visualization, 2D analysis, and various MPR (Multi-Planar Reconstruction) functions.
Ez3D-i is 3D viewing software for dental CT images in DICOM format with a host of useful functions including MPR, 2-dimensional analysis and 3-dimensional image reformation. It provides advanced simulation functions such as Implant Simulation, Drawing Canal, and Implant Environ Bone Density for effective doctor-patient communication and precise treatment planning.
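MPR in general means resampling a reconstructed 3D CT volume along orthogonal (or oblique) planes. As a rough illustration of the orthogonal case only, not of Ez3D-i's actual implementation, the following sketch extracts axial, coronal, and sagittal slices from a volume stored as a NumPy array (the axis naming convention here is an assumption):

```python
import numpy as np

def mpr_slices(volume, i, j, k):
    """Extract three orthogonal planes from a 3D volume.

    `volume` is assumed to be indexed as (axial, coronal, sagittal);
    this convention is illustrative, not taken from Ez3D-i.
    """
    axial = volume[i, :, :]     # fix the first index
    coronal = volume[:, j, :]   # fix the second index
    sagittal = volume[:, :, k]  # fix the third index
    return axial, coronal, sagittal

# Toy 4x5x6 voxel volume
vol = np.arange(4 * 5 * 6).reshape(4, 5, 6)
a, c, s = mpr_slices(vol, 2, 3, 1)
print(a.shape, c.shape, s.shape)  # (5, 6) (4, 6) (4, 5)
```

Real MPR implementations additionally handle anisotropic voxel spacing and oblique reslicing via interpolation, which this sketch omits.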
This FDA 510(k) summary for Ewoosoft Co., Ltd.'s Ez3D-i/E3 device (K211791) focuses on demonstrating substantial equivalence to a previous version of the same device (K200178). As such, it does not provide detailed acceptance criteria and a study proving the device meets those criteria in the way one might expect for a novel device or a significantly modified one. Instead, the performance data section states that "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria." This indicates that the study performed was primarily a verification and validation study to ensure the new version performed as expected and was equivalent to the predicate.
Given the information provided, I will extract and present the available details while noting where specific information, such as detailed acceptance criteria and comprehensive study results, is not present in this type of submission.
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) submission does not provide specific quantitative acceptance criteria or detailed reported device performance metrics for a clinical study comparing the device to ground truth. Instead, it relies on demonstrating substantial equivalence to a predicate device (Ez3D-i/E3, K200178) through software verification and validation, and measurement accuracy tests. The performance is reported in terms of passing pre-determined Pass/Fail criteria.
Note: For this type of submission, detailed performance metrics like sensitivity, specificity, or AUC are not typically required if substantial equivalence is being claimed for minor software updates where the core diagnostic functionality remains unchanged and validated.
| Metric/Characteristic | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Software Functionality | All specified functions (e.g., MPR, 3D visualization, 2D analysis, implant simulation) performed as intended. | Passed all tests based on pre-determined Pass/Fail criteria. |
| Measurement Accuracy | Measurements (e.g., length, angle, volume) performed accurately. | Passed all tests based on pre-determined Pass/Fail criteria. |
| Reliability | Software operated reliably without critical errors or crashes. | Passed all tests based on pre-determined Pass/Fail criteria. |
| Equivalence to Predicate | Overall performance and safety equivalent to predicate device (K200178). | Deemed substantially equivalent; differences do not raise new safety or effectiveness questions. |
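The submission does not disclose what the "pre-determined Pass/Fail criteria" for measurement accuracy were. Purely as a hedged sketch, a length-accuracy check of this kind typically compares a software-computed distance against a known phantom dimension within a tolerance; the voxel size, phantom length, and 0.5 mm tolerance below are all assumed example values, not figures from the 510(k):

```python
import math

def check_length_accuracy(p1, p2, voxel_mm, expected_mm, tol_mm=0.5):
    """Pass/Fail check: measured point-to-point distance vs. a known
    phantom length. The 0.5 mm tolerance is an assumed example value."""
    dist_voxels = math.dist(p1, p2)          # Euclidean distance in voxels
    measured_mm = dist_voxels * voxel_mm     # convert to millimeters
    return abs(measured_mm - expected_mm) <= tol_mm

# Two voxel coordinates 50 voxels apart at 0.2 mm/voxel -> 10.0 mm measured
print(check_length_accuracy((0, 0, 0), (50, 0, 0), 0.2, 10.0))  # True
```

A real verification protocol would run many such cases (lengths, angles, volumes) and record each result against its criterion.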
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a "test set" in the context of a clinical performance study with human subjects. The performance data refers to "SW verification/validation and the measurement accuracy test." These typically involve testing the software against pre-defined test cases, simulated data, or existing (potentially de-identified) DICOM images, rather than a prospective clinical dataset.
- Sample Size (Test Set): Not specified.
- Data Provenance: Not specified. Given that this is a software verification/validation, the data would likely be a mix of internal test datasets and, potentially, de-identified DICOM images used for functionality testing. The country of origin and retrospective/prospective nature are not mentioned.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided because the submission does not detail a clinical study where ground truth was established by experts for a specific test set. The validation performed was software-centric. The "Indications for Use" statement does, however, mention the intended users: "trained medical professionals such as radiologist and dentist."
4. Adjudication Method for the Test Set
This information is not provided as the submission does not describe a clinical study with expert adjudication of a test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The document does not mention any MRMC study. The submission focuses on demonstrating substantial equivalence to a predicate device through non-clinical performance data (software verification/validation).
- Effect size of human reader improvement with AI vs. without AI assistance: Not applicable, as no MRMC study was conducted or reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? No, not in the typical sense of evaluating diagnostic accuracy of an AI algorithm against ground truth. The device is a "Medical image management and processing system" that provides tools for human interpretation, not an AI diagnostic algorithm meant to operate standalone. The performance data describes "SW verification/validation and the measurement accuracy test" for the software's functionality and reliability, which is a standalone evaluation of the software components but not in the context of diagnostic accuracy.
7. Type of Ground Truth Used
For the "SW verification/validation and the measurement accuracy test," the "ground truth" would likely be:
- Pre-defined expected outputs/behaviors for various software functions.
- Known measurements or anatomical landmarks in test images used for accuracy checks.
- Industry standards for DICOM compliance and image processing.
This is distinct from clinical ground truth such as pathology or outcomes data, which would be expected for a diagnostic AI system.
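Verification against "pre-defined expected outputs" usually takes the form of unit-style tests on synthetic inputs whose correct answer is known by construction. A minimal sketch, assuming a hypothetical angle-measurement function tested against geometrically known values (none of this is from the actual Ez3D-i test protocol):

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b formed by 2D points a-b-c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Pre-defined expected output: a right angle on a synthetic test figure
expected = 90.0
measured = angle_deg((1, 0), (0, 0), (0, 1))
assert abs(measured - expected) < 1e-6
```

Here the geometry itself supplies the ground truth, which is exactly the sense in which software verification can proceed without clinical ground truth such as pathology.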
8. Sample Size for the Training Set
Not applicable. The Ez3D-i/E3 device is described as "3D viewing software for dental CT images" that provides diagnostic tools and image manipulation functions. It is not explicitly stated to be an AI/machine learning device that requires a "training set" in the context of supervised learning for a specific diagnostic task. The software's functionality is based on image processing algorithms and user interface design.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no "training set" for AI/machine learning is described.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).