K Number: K223820
Manufacturer: Ewoosoft
Date Cleared: 2023-02-17 (58 days)
Product Code:
Regulation Number: 892.2050
Panel: RA (Radiology)
Intended Use

EzDent-i is dental imaging software intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

EzDent-i is intended for use as software to acquire, view, and save 2D image files, and to load DICOM project files from panoramic, cephalometric, and intra-oral imaging equipment.

Device Description

EzDent-i v3.4 is a device that provides various features to acquire, transfer, edit, display, store, and digitally process medical images. EzDent-i is patient and image management software specifically for digital dental radiography. It also provides a server/client model so that users can upload and download clinical diagnostic images and patient information from any workstation in the network environment.

EzDent-i supports general image formats such as JPG and BMP for 2D image viewing, as well as the DICOM format. For 3D image management, it provides uploading and downloading support for dental CT images in DICOM format. It interfaces with Ez3D-i (K131616, K150761, K161246, K163539, K173863, K190791, K200178, K211791, K222069), 3D imaging software made by the same company, but EzDent-i itself does not view, transfer, or process 3D radiographs. None of the changes to the predicate software are related to the 3D functions.
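The file formats named above can each be recognized by their standard magic bytes: a DICOM Part 10 file carries a 128-byte preamble followed by the four characters "DICM", a BMP file starts with "BM", and a JPEG file starts with the bytes 0xFF 0xD8. The sketch below is purely illustrative of that distinction and is not Ewoosoft's implementation; the function name is hypothetical.

```python
# Illustrative sketch (not the vendor's code): telling apart the image
# formats the summary lists (DICOM, BMP, JPG) by their magic bytes.
# DICOM Part 10: 128-byte preamble + b"DICM"; BMP: b"BM"; JPEG: 0xFF 0xD8.

def detect_image_format(data: bytes) -> str:
    """Classify a byte stream as DICOM, BMP, JPEG, or unknown."""
    if len(data) >= 132 and data[128:132] == b"DICM":
        return "DICOM"
    if data[:2] == b"BM":
        return "BMP"
    if data[:2] == b"\xff\xd8":
        return "JPEG"
    return "unknown"

# Minimal synthetic DICOM Part 10 header: zeroed preamble + magic.
fake_dicom = b"\x00" * 128 + b"DICM"
print(detect_image_format(fake_dicom))     # DICOM
print(detect_image_format(b"BM\x00\x00"))  # BMP
```

A real viewer would go on to parse the DICOM data set after the magic; this sketch stops at format identification.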

EzDent-i supports the acquisition of dental images by interfacing with the OpenCV library to import intra-oral camera images. It also supports the acquisition of CT/panoramic/cephalometric/intra-oral sensor/intra-oral scanner images by interfacing with X-ray capture software.

The software level of concern is Moderate.

AI/ML Overview

The provided document is a 510(k) summary for the EzDent-i / E2 / Prora View / Smart M Viewer software. It describes the device, its intended use, and argues for its substantial equivalence to a predicate device.

However, the document does not contain the detailed information necessary to answer all of the questions below, specifically regarding acceptance criteria, reported device performance, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, or training set details.

This 510(k) submission primarily focuses on demonstrating that the updated software version (v3.4) is substantially equivalent to a previous cleared version (v3.3) by highlighting that the changes are for "user convenience and do not affect the device safety or effectiveness". It therefore does not provide a comprehensive study report with quantified performance metrics against specific acceptance criteria for diagnostic accuracy, which would typically be found in direct performance studies for devices with diagnostic claims.

Here's what can be extracted and what is missing:


1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified | Not specified |
(The document states "The device passed all of the tests based on pre-determined Pass/Fail criteria." but does not elaborate on what these criteria or the test results were in terms of specific performance metrics.)

2. Sample size used for the test set and the data provenance

  • Sample Size for Test Set: Not specified.
  • Data Provenance (e.g., country of origin of the data, retrospective or prospective): Not specified.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not specified.
  • Qualifications of Experts: Not specified.
    • The Indications for Use state the device is "meant to be used by trained medical professionals such as radiologists and dentists." This describes the target users, but not the experts who would establish ground truth.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Adjudication Method: Not specified.

5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?

  • MRMC Study: No, an MRMC comparative effectiveness study is not mentioned. The device is described as "dental imaging software that is intended to provide diagnostic tools" and its results are "dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." This suggests it is a viewing and processing tool, not a diagnostic AI that would typically undergo an MRMC study to show improvement in human reader performance.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Standalone Performance Study: No, a standalone performance study is not described. Given the device's function as an image management and processing system used "as an adjunctive to standard radiology practices for diagnosis," it is not presented as an AI algorithm providing standalone diagnoses.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • Type of Ground Truth: Not specified, as specific performance tests against diagnostic ground truth are not detailed. The performance data section broadly mentions "SW verification/validation and the measurement accuracy test were conducted," implying functional and technical testing rather than a clinical performance study with established ground truth for diagnostic accuracy.

8. The sample size for the training set

  • Sample Size for Training Set: Not applicable/Not specified. The document describes software with various image processing and management features, rather than a machine learning or AI algorithm that would require a distinct "training set."

9. How the ground truth for the training set was established

  • Ground Truth for Training Set: Not applicable/Not specified, as no training set for a machine learning model is mentioned.

Summary of what the document implies about performance testing:

The "Performance Data" section states: "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria."

This indicates that Ewoosoft performed internal validation testing to ensure the software met its specified functional and technical requirements (e.g., correct image display, accurate measurements for linear distance and angle, proper functioning of new features like "Image Share" and "IO sensor image Preview"). However, these are not detailed clinical performance metrics for diagnostic efficacy or accuracy that would typically be associated with AI-driven diagnostic devices. Since the device is presented as substantially equivalent to a predicate device for managing and processing images, the focus of the 510(k) is on the safety and effectiveness of the software updates rather than demonstrating novel diagnostic performance.
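The "measurement accuracy test" for linear distance and angle can be illustrated with a toy calculation: pixel coordinates are converted to millimetres using the image's pixel spacing (in DICOM, the Pixel Spacing attribute, tag (0028,0030)), and angles come from the dot product of the two rays at a vertex. This is a minimal sketch of that kind of measurement under an assumed isotropic pixel spacing, not Ewoosoft's actual algorithm; the function names are hypothetical.

```python
import math

# Hypothetical sketch of the linear-distance and angle measurements the
# summary says were accuracy-tested; not the vendor's implementation.
# Pixel coordinates are converted to mm via an assumed isotropic pixel
# spacing (cf. DICOM Pixel Spacing, tag (0028,0030)).

def distance_mm(p, q, pixel_spacing_mm):
    """Euclidean distance between two pixel coordinates, in millimetres."""
    dx = (q[0] - p[0]) * pixel_spacing_mm
    dy = (q[1] - p[1]) * pixel_spacing_mm
    return math.hypot(dx, dy)

def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by the rays toward a and b, in degrees."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

d = distance_mm((0, 0), (6, 8), 0.5)   # 3-4-5 triangle scaled: 5.0 mm
a = angle_deg((0, 0), (1, 0), (0, 1))  # perpendicular rays: ~90 degrees
print(d, a)
```

An accuracy test of the kind the summary describes would compare such computed values against known phantom geometry and apply pre-determined pass/fail tolerances.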

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).