Ez3D-i is dental imaging software intended to provide diagnostic tools for maxillofacial radiographic imaging. These tools are available to view and interpret a series of DICOM-compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.
Ez3D-i is intended for use as software to load, view and save DICOM images from CT, panorama, cephalometric and intraoral imaging equipment, and to provide 3D visualization and 2D analysis through various MPR (Multi-Planar Reconstruction) functions.
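As a rough illustration of the load step described above (a minimal sketch assuming the pydicom and numpy packages, not Ez3D-i's actual implementation), a DICOM CT series can be read and stacked into a Hounsfield-unit volume like this:

```python
# Minimal sketch: load a DICOM CT series into a 3D volume in Hounsfield units.
# Assumes pydicom and numpy; the directory layout and function name are hypothetical.
from pathlib import Path

import numpy as np
import pydicom

def load_ct_volume(series_dir: str) -> np.ndarray:
    """Read every DICOM slice in a directory and stack into a (z, y, x) volume."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices by their position along the patient z-axis.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Convert stored pixel values to Hounsfield units via the DICOM rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept
```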
Ez3D-i is 3D viewing software for prompt and accurate diagnosis of dental CT images in DICOM format, with a host of useful functions including MPR, 2-dimensional analysis and 3-dimensional image reformation. It provides advanced simulation functions such as Implant Simulation, Drawing Canal, and Implant Environ Bone Density for the benefit of effective doctor-patient communication and precise treatment planning. Ez3D-i makes diagnosis and analysis easier by processing 3D images through a simple and convenient user interface. Ez3D-i's main functions are:
- Image adaptation through various rendering methods such as Teeth/Bone/Soft tissue/MIP (see the sketch after this list)
- Versatile 3D image viewing via MPR Rotating and Curve modes
- "Sculpt" for deleting unnecessary parts so that only the region of interest is viewed
- Implant Simulation for efficient treatment planning and effective patient consultation
- Canal Draw to trace the alveolar canal and its geometric orientation relative to the teeth
- "Bone Density" test to measure bone density around an implant site
- Various utilities such as Measurement, Annotation, Gallery, and Report
- 3D Volume function to transform the image into a 3D Panorama, with its tab optimized for Implant Simulation
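To make three of the listed functions concrete, the sketch below shows orthogonal MPR slicing, a maximum-intensity projection (MIP), and a simple spherical bone-density probe over a Hounsfield-unit volume. This is an illustrative numpy sketch under assumed conventions; the function names are hypothetical and do not reflect Ez3D-i's internals.

```python
import numpy as np

def mpr_slices(volume: np.ndarray, z: int, y: int, x: int):
    """Return the axial, coronal, and sagittal planes through voxel (z, y, x)."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum-intensity projection: keep the brightest voxel along one axis."""
    return volume.max(axis=axis)

def mean_hu_in_sphere(volume: np.ndarray, center, radius_vox: float) -> float:
    """Mean Hounsfield value inside a spherical neighborhood -- a crude
    stand-in for a bone-density probe around an implant site."""
    zz, yy, xx = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    cz, cy, cx = center
    mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= radius_vox ** 2
    return float(volume[mask].mean())
```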
This document is a 510(k) premarket notification for the Ez3D-i / E3 dental imaging software. The purpose of this document is to demonstrate "substantial equivalence" to a predicate device, not necessarily to provide full clinical trial results as would be required for a PMA (Pre-Market Approval). Therefore, much of the requested information regarding detailed acceptance criteria, specific study designs, and performance metrics (like sensitivity, specificity, reader improvement, etc.) is not present in this document.
Here's a breakdown of the available information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of quantitative acceptance criteria or specific performance metrics (like accuracy, sensitivity, specificity, etc.) for diagnostic performance. The “Performance Data” section broadly states:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Pre-determined Pass/Fail criteria for verification, validation, and testing activities. | The device passed all of the tests based on these pre-determined Pass/Fail criteria. |
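For context, a hypothetical example (not the vendor's actual test suite) of how a pre-determined Pass/Fail criterion is typically encoded in software verification is a measurement-accuracy check against a known geometry with a fixed tolerance:

```python
# Hypothetical verification test in the pytest style: a distance-measurement
# tool is checked against a known phantom geometry with a pre-set tolerance.
# The voxel size and tolerance are assumed values for illustration only.
import math

VOXEL_MM = 0.3          # assumed isotropic voxel size
TOLERANCE_MM = 0.5      # pre-determined pass/fail criterion

def measure_distance_mm(p1, p2, voxel_mm=VOXEL_MM):
    """Euclidean distance between two voxel coordinates, in millimetres."""
    return voxel_mm * math.dist(p1, p2)

def test_distance_measurement_accuracy():
    # Two landmarks 100 voxels apart along one axis -> expected 30.0 mm.
    measured = measure_distance_mm((0, 0, 0), (0, 0, 100))
    assert abs(measured - 30.0) <= TOLERANCE_MM
```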
2. Sample Size Used for the Test Set and Data Provenance
This information is not provided in the document. The document refers to "system level validation tests" but does not specify the number of cases or the nature of the data (e.g., specific patient scans, simulated data). The provenance of any data (country, retrospective/prospective) is also not mentioned.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
This information is not provided. The document states that the software's results "are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." However, it does not detail how ground truth was established for the validation tests mentioned.
4. Adjudication Method for the Test Set
This information is not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study is not mentioned in the document. The document focuses on demonstrating substantial equivalence to a predicate device, which typically involves showing that the new device performs as intended and is safe, rather than proving improved human reader performance with AI assistance.
6. Standalone Performance Study
A standalone performance study (i.e., algorithm-only performance, without a human in the loop, reported with metrics like sensitivity and specificity) is not explicitly detailed in the document. The "Performance Data" section broadly states that "Verification, validation and testing activities were conducted to establish the performance, functionality and reliability characteristics of the modified devices" and that it "passed all of the tests." However, specific metrics for standalone diagnostic accuracy are not provided.
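For readers unfamiliar with these metrics, a standalone study would typically report sensitivity and specificity computed from case-level ground-truth labels and algorithm outputs. The sketch below is purely illustrative; no such data appear in this submission.

```python
# Illustrative computation of sensitivity and specificity from binary
# ground-truth labels and algorithm predictions (1 = positive, 0 = negative).
def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```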
7. Type of Ground Truth Used
The type of ground truth used for validation (e.g., expert consensus, pathology, outcomes data) is not specified in the document.
8. Sample Size for the Training Set
The document is about a modified device and its validation. It does not mention a training set sample size, as the submission is not focused on the de novo development of a machine learning model from scratch but rather on a software update/modification.
9. How Ground Truth for the Training Set Was Established
Since no training set is mentioned or detailed, the method for establishing its ground truth is not provided.
Summary of what is available and what is not:
- Available: General statement that the device passed pre-determined Pass/Fail criteria in validation tests.
- Not Available:
- Specific quantitative acceptance criteria (e.g., sensitivity, specificity thresholds).
- Detailed performance metrics against those criteria.
- Sample size and provenance of test data.
- Details on ground truth establishment (number/qualifications of experts, adjudication method, type of ground truth).
- Information on MRMC comparative effectiveness studies or standalone diagnostic performance metrics (e.g., AUC, sensitivity, specificity).
- Information on training set size or ground truth establishment for a training set.
This level of detail is typical for a 510(k) submission for certain types of software modifications, where the focus is on maintaining substantial equivalence to an existing (predicate) device, rather than proving novel clinical efficacy or superior diagnostic performance with detailed clinical studies. The "performance data" referred to likely pertains to software verification and validation, ensuring functions work as designed and that new features (like 3D Panorama View, Navigator, and Collision Detection) do not introduce new safety or effectiveness issues.
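As an aside, "Collision Detection" in implant planning typically flags an implant that encroaches on a safety margin around the drawn canal. The sketch below is a hypothetical illustration of such a check; the function names, geometry, and the 2 mm margin are assumptions, not taken from the submission.

```python
# Hypothetical implant-to-canal collision check: flag an implant whose tip
# comes closer to the drawn canal polyline than a safety margin.
import numpy as np

def min_distance_to_polyline(point: np.ndarray, polyline: np.ndarray) -> float:
    """Smallest distance from a point to any segment of a 3D polyline."""
    best = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        # Project the point onto segment [a, b], clamped to the segment.
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, float(np.linalg.norm(point - (a + t * ab))))
    return best

def implant_collides(implant_tip_mm, canal_polyline_mm, margin_mm=2.0) -> bool:
    """True when the implant tip violates the safety margin around the canal."""
    return min_distance_to_polyline(np.asarray(implant_tip_mm, dtype=float),
                                    np.asarray(canal_polyline_mm, dtype=float)) < margin_mm
```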
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).