K Number
K222069
Device Name
Ez3D-i/E3
Manufacturer
Date Cleared
2022-09-06 (54 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Ez3D-i is dental imaging software that is intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

Ez3D-i is intended for use as software to load, view and save DICOM images from CT, panorama, cephalometric and intraoral imaging equipment and to provide 3D visualization, 2D analysis, and various MPR (Multi-Planar Reconstruction) functions.

Device Description

Ez3D-i is 3D viewing software for dental CT images from CT, panorama, cephalometric and intraoral imaging equipment in DICOM format, with a host of useful functions including MPR, 2-dimensional analysis and 3-dimensional image reformation. It provides advanced simulation functions such as Implant Simulation, Drawing Canal, and Implant Environ Bone Density for effective doctor-patient communication and precise treatment planning.

Ez3D-i's main functions are:

  • Image adaptation through various rendering methods such as Teeth/Bone/Soft tissue/MIP
  • Versatile 3D image viewing via MPR Rotating and Curve modes
  • "Sculpt" for deleting unnecessary parts so that only the region of interest is displayed
  • Implant Simulation for efficient treatment planning and effective patient consultation
  • Canal Draw to trace the alveolar canal and its geometric orientation relative to the teeth
  • "Bone Density" test to measure bone density around the site of an implant
  • Various utilities such as Measurement, Annotation, Gallery, and Report
  • 3D Volume function to transform the image into a 3D Panorama, with a tab optimized for Implant Simulation
  • TMJ support providing the Axial View of the TMJ, Condyle/Fossa images in 3D, and Section images, with functions to separate the Condyle/Fossa and display bone density
  • STO/VTO Simulation to predict orthodontic treatment/surgery results with a 3D Photo image
  • Segmentation function to extract tooth segmentation data from CT, label each segmented tooth as an object, and use the objects in simulations such as tooth extraction and implant simulation
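The MPR functions listed above reconstruct orthogonal viewing planes from a CT volume. As a generic illustration of the concept only (not the device's actual implementation), the three standard planes can be extracted from a volume array with NumPy; the volume dimensions and pivot voxel below are synthetic assumptions:

```python
import numpy as np

# Synthetic CT volume indexed as (z, y, x): 40 slices of 64x64 pixels.
volume = np.arange(40 * 64 * 64, dtype=np.int16).reshape(40, 64, 64)

def mpr_slices(vol, z, y, x):
    """Return the three orthogonal MPR planes through voxel (z, y, x)."""
    axial = vol[z, :, :]      # top-down plane
    coronal = vol[:, y, :]    # front-back plane
    sagittal = vol[:, :, x]   # left-right plane
    return axial, coronal, sagittal

axial, coronal, sagittal = mpr_slices(volume, z=20, y=32, x=10)
print(axial.shape, coronal.shape, sagittal.shape)  # (64, 64) (40, 64) (40, 64)
```

All three planes intersect at the chosen voxel, which is what lets an MPR viewer keep its crosshairs synchronized across views.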
AI/ML Overview

The provided text describes a 510(k) summary for the Ez3D-i/E3 device, primarily focusing on proving its substantial equivalence to a predicate device (K211791) rather than detailing specific acceptance criteria and a comprehensive study proving the device meets them.

The filing states: "The SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria." However, it does not provide the specific acceptance criteria, the detailed results of these tests, or the methodology of the study.

Therefore, many of the requested details cannot be extracted from the given text.

Based on the information provided, here's what can be extracted and what is missing:


Acceptance Criteria and Device Performance

The document does not explicitly state specific acceptance criteria in a quantitative manner or provide a table of reported device performance against such criteria. It generally states that the device "passed all of the tests based on pre-determined Pass/Fail criteria," but these criteria are not detailed.


Study Details

Given the context of a 510(k) for a software update (from Ez3D-i v5.3, the predicate, to v5.4), the studies conducted appear to be software verification and validation (V&V) and measurement accuracy tests. These are typically internal tests to ensure the new version functions as intended and maintains the performance of the previous version, rather than large-scale clinical trials.
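A measurement accuracy test of this kind typically compares a software measurement against a known reference and applies a pre-determined pass/fail tolerance. The sketch below shows the general shape of such a check; the phantom length, voxel spacing, and tolerance are illustrative assumptions, not values from the filing:

```python
import math

def measure_distance_mm(p1, p2, spacing_mm):
    """Euclidean distance between two voxel coordinates, scaled to mm."""
    return math.dist(
        [c * s for c, s in zip(p1, spacing_mm)],
        [c * s for c, s in zip(p2, spacing_mm)],
    )

# Assumed phantom: two markers a known 20.0 mm apart, 0.2 mm isotropic voxels.
known_length_mm = 20.0
spacing = (0.2, 0.2, 0.2)
measured = measure_distance_mm((10, 10, 10), (10, 10, 110), spacing)

tolerance_mm = 0.5  # illustrative pre-determined pass/fail criterion
result = "PASS" if abs(measured - known_length_mm) <= tolerance_mm else "FAIL"
print(measured, result)  # 20.0 PASS
```

The filing reports only that such pre-determined criteria existed and were met; the actual tolerances were not disclosed.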

1. Sample size used for the test set and the data provenance:

  • Not explicitly stated for the "measurement accuracy test" or "SW verification/validation." The document mentions the device processes DICOM images from CT, panorama, cephalometric, and intraoral imaging equipment. The data provenance (country of origin, retrospective/prospective) is also not stated.

2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Not stated. For a software update focusing on functionality and measurement accuracy of an image viewer, ground truth might be established through technical specifications or comparison to known accurate measurements rather than expert consensus on diagnostic interpretations. The document mentions the software is "meant to be used by trained medical professionals such as radiologist and dentist," but it doesn't specify if these professionals were involved in establishing ground truth for testing.

3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

  • Not stated. This method is typically relevant for human-in-the-loop studies where multiple readers interpret images to establish consensus. For software V&V and measurement accuracy, it's unlikely to be applicable in the traditional sense.

4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance:

  • No, an MRMC study was not done or reported. The device is described as "dental imaging software that is intended to provide diagnostic tools" and is used by professionals as "an adjunctive to standard radiology practices for diagnosis." It is not presented as an AI-assisted diagnostic tool that directly improves human reader performance in the way an AI algorithm for disease detection might be. The focus of the submission is on software functionality and substantial equivalence.

5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

  • Not explicitly detailed as a "standalone performance study" in terms of diagnostic accuracy. The "measurement accuracy test" could be considered a form of standalone testing for specific functions, but no specific metrics (e.g., sensitivity, specificity for diagnostic tasks) are provided. The device is not an AI algorithm making diagnostic predictions in the absence of a human.

6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

  • Not explicitly stated. For "measurement accuracy," ground truth would likely be established by known physical dimensions or validated measurements. For general software verification/validation, ground truth often relates to the expected output from a given input based on design specifications.
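"Expected output from a given input based on design specifications" can be checked mechanically for viewer functions. As a hedged sketch only, assume a hypothetical window/level function (the bone window values below are illustrative, not the device's presets); verification then asserts spec-defined outputs for known inputs:

```python
def apply_window(hu_value, center, width):
    """Map a Hounsfield value to an 8-bit display value for a given window."""
    lo, hi = center - width / 2, center + width / 2
    if hu_value <= lo:
        return 0
    if hu_value >= hi:
        return 255
    return round((hu_value - lo) / (hi - lo) * 255)

# Verification against the (assumed) design specification:
bone = (500, 2000)  # illustrative bone window (center, width)
assert apply_window(-1000, *bone) == 0     # air clamps to black
assert apply_window(1500, *bone) == 255    # dense bone clamps to white
assert apply_window(500, *bone) == 128     # window center maps to mid-gray
```

Ground truth here is the specification itself, not an expert reading, which is consistent with the V&V framing of the submission.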

7. The sample size for the training set:

  • Not applicable/Not stated. This device is described as a medical image management and processing system, not an AI/ML device that requires a training set. While it performs "3D visualization, 2D analysis, in various MPR (Multi-Planar Reconstruction) functions," these are standard image processing techniques, not algorithms that learn from data.

8. How the ground truth for the training set was established:

  • Not applicable/Not stated. As it's not described as an AI/ML device, there's no "training set."

Summary of what is known from the document:

  • Device Name: Ez3D-i/E3 (K222069)
  • Intended Use: Dental imaging software for maxillofacial radiographic imaging, providing diagnostic tools to view and interpret DICOM images from various dental imaging equipment, offering 3D visualization, 2D analysis, and MPR functions. Used by trained medical professionals (radiologists and dentists).
  • Predicate Device: Ez3D-i/E3 v.5.3 (K211791)
  • Studies Conducted: Software verification/validation and measurement accuracy tests.
  • Conclusion: The device passed all tests based on pre-determined Pass/Fail criteria, leading to a conclusion of substantial equivalence to the predicate device.

What is demonstrably missing from the provided text:

  • Specific, quantitative acceptance criteria.
  • Detailed reported performance data against those criteria.
  • Specific sample sizes for the test set.
  • Data provenance (country of origin, retrospective/prospective).
  • Details on experts and ground truth establishment methodologies for the test set.
  • Adjudication methods for the test set.
  • Any MRMC study details or effect sizes related to AI assistance.
  • Detailed standalone performance metrics (e.g., diagnostic accuracy metrics).
  • Ground truth type beyond general "measurement accuracy."
  • Training set information (as it's not an AI/ML device).

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).