Dolphin Imaging software is designed for use by specialized dental practices for capturing, storing and presenting patient images and assisting in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners.
The software contains eight major components (modules) for achieving its basic functionalities: ImagingPlus™, Ceph Tracing, Treatment Simulation, Arnett/Gunson FAB, McLaughlin Dental VTO, Implanner™, Dolphin 3D, and the Dolphin Letter System.
ImagingPlus™ is the foundation of the Dolphin product suite. It allows the user to capture, organize, edit, print, store, and present patient image records.
Ceph Tracing allows the user to digitize landmarks on a patient's radiograph, trace cephalometric structures, view cephalometric measurements, and superimpose images for analysis.
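To make the underlying geometry concrete, here is a minimal sketch, assuming simple 2D landmark coordinates, of how a cephalometric angle such as SNA can be derived from digitized points. This is illustrative only, not Dolphin's actual implementation; the coordinates are made up.

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` formed by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norms = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norms))

# Hypothetical landmark positions (pixels) digitized on a lateral ceph:
sella = (100.0, 50.0)     # S
nasion = (160.0, 60.0)    # N
a_point = (165.0, 120.0)  # A point

# SNA is the angle measured at nasion between the rays to sella and A point.
print(f"SNA = {angle_at(nasion, sella, a_point):.1f} degrees")
```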
The Treatment Simulation module provides a tool to simulate orthodontic and surgical treatment results using a Visual Treatment Objective (VTO).
Arnett/Gunson FAB Analysis performs face, airway, and bite analysis and simulates treatment for orthodontic and surgical cases based on the methodologies of Dr. William Arnett.
The McLaughlin Dental VTO is an interactive treatment-planning and case presentation software program based on the theories of Dr. Richard P. McLaughlin, a renowned clinician, author and lecturer. It analyzes and evaluates tooth positions and dental treatment options, which assists clinicians in planning precise, quantifiable movement of dentition using clinical examination and treatment planning values.
The Dolphin Implanner module is used for planning implant procedures; it allows simulated dental implants to be placed on a patient's lateral or panoramic x-ray images.
The Dolphin 3D module contains features for generating a multitude of views of the volumetric data, including simulated x-ray views and 3D-rendered views of the volume.
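As a rough illustration of what "simulated x-ray views" of volumetric data involve (a sketch of the general technique, not Dolphin's code), a parallel-projection pseudo-radiograph can be approximated by integrating voxel intensities along one axis, and a maximum intensity projection (MIP) by taking the per-ray maximum:

```python
import numpy as np

# Synthetic 64x64x64 volume standing in for CBCT attenuation data.
rng = np.random.default_rng(seed=0)
volume = rng.random((64, 64, 64)).astype(np.float32)

# Simulated x-ray: approximate the line integral of attenuation along
# the projection axis by summing voxels along axis 0.
xray_view = volume.sum(axis=0)

# Maximum intensity projection, another common 3D-derived view.
mip_view = volume.max(axis=0)

print(xray_view.shape, mip_view.shape)  # both (64, 64)
```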
The Dolphin Letter System generates letters that include the images and diagnostic questionnaire data entered by the user; the user can choose a pre-defined letter template or create a custom template.
The provided text does not contain detailed information about specific acceptance criteria or a dedicated study proving the device meets these criteria. The document is primarily a 510(k) premarket notification for the Dolphin Imaging software, focusing on its intended use, description of modules, and demonstrating substantial equivalence to predicate devices.
However, based on the general information provided, we can infer some aspects and highlight what is not present.
Inferred Acceptance Criteria (Based on typical software validation in medical devices):
- Functional Correspondence: The software's features and output should align with its described functionalities (e.g., image acquisition, storage, manipulation, cephalometric tracing, treatment simulation, 3D visualization, reporting).
- Accuracy/Consistency of Measurements: If the software performs measurements (e.g., cephalometric measurements), these should be consistent and accurate when compared to manual methods or established standards.
- Image Quality/Integrity: Images should be acquired, stored, and displayed without degradation.
- Data Integrity: Patient data and image records should be stored and retrieved accurately and securely.
- Usability/User Interface: The software should be intuitive and easy for trained practitioners to use as intended.
- Compatibility: Interoperability with various imaging device interfaces (TWAIN, DICOM) and standard image file formats (see the DICOM-reading sketch after this list).
- Performance/Reliability: The software should perform its functions reliably without crashes or errors.
- Safety: The software should not introduce new risks to patient safety.
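For instance, DICOM compatibility generally means the software can parse DICOM objects and extract pixel data and metadata. The following is a generic sketch using the third-party pydicom library; the file path is hypothetical, and the document does not describe Dolphin's internal DICOM handling:

```python
import pydicom  # third-party library: pip install pydicom

# Read a DICOM object and access standard tags plus the image pixels.
ds = pydicom.dcmread("example.dcm")  # hypothetical file path
print(ds.Modality, ds.Rows, ds.Columns)
pixels = ds.pixel_array  # decoded image as a NumPy array
```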
Reported Device Performance (Implicit from the document):
The document states: "Dolphin Imaging has successfully completed integration testing/verification testing and Beta validation. In addition, potential hazards have been evaluated and controlled to an acceptable level."
This indicates that internal testing validated the software's performance against its design specifications and risk controls, but specific performance metrics (e.g., numerical accuracy, sensitivity, specificity, processing speed) are not reported.
Detailed Breakdown of Information (Based on the provided text):
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Inferred) | Reported Device Performance (Implicit) |
|---|---|
| Functional Correspondence: All modules (ImagingPlus™, Ceph Tracing, Treatment Simulation, Arnett/Gunson FAB, McLaughlin Dental VTO, Implanner™, Dolphin 3D, Dolphin Letter System) function as described. | The software was "designed, developed, tested, and validated according to written procedures" and "successfully completed integration testing/verification testing and Beta validation." This implies functional requirements were met. |
| Accuracy/Consistency of Measurements: Cephalometric measurements are accurate. | Not explicitly stated with quantifiable metrics. The software performs calculations based on digitized landmarks, but no study is detailed to quantify the accuracy of these measurements against a gold standard. The document emphasizes that "Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners." |
| Image Quality/Integrity: Acquired, stored, and displayed images maintain quality. | Not explicitly stated with quantifiable metrics. The software allows images to be cropped, rotated, enhanced, and otherwise manipulated, implying image processing capabilities. |
| Data Integrity and Storage: Patient records, images, and derived data (e.g., altered cephalometric data from VTO) are stored and retrieved accurately. | The software manages "patient image records," the "Dolphin Data Storage," and a "proprietary XML-based data format" for reports; reports can be output or stored in the Dolphin Data Storage. This implies successful storage and retrieval. |
| Compatibility: Compatibility with TWAIN, DICOM, and standard image file formats. | Explicitly stated: Dolphin Imaging is TWAIN compatible with 2D functionalities similar to VistaDent AT Complete, and DICOM compatible with 3D functionalities similar to VistaDent 3D, for communication of images with other medical imaging devices. |
| Safety: Device operation does not introduce new safety concerns. | "Minor technological differences do not raise any new questions regarding safety or effectiveness of the device." "Potential hazards have been evaluated and controlled to an acceptable level." |
| Effectiveness Claim (Overall): Substantially equivalent to predicate devices for intended use. | The FDA review found the device "substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices" for "Picture archiving and communications system." This is the primary effectiveness claim for regulatory clearance. The document also notes: "The result of these operations is a morphed x-ray or photograph of the simulated post-treatment patient. This result is to be viewed as a guideline for the medical professional when making his or her treatment decisions, not as advice or a guaranteed outcome." This clarifies the device's role as an aid, not a definitive diagnostic or treatment tool. |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified. The document mentions "integration testing/verification testing and Beta validation," but does not provide details on the number of cases or images used in these tests.
- Data Provenance: Not specified. It's likely internal testing data, but no explicit mention of country of origin, retrospective/prospective nature, or type of patients (e.g., specific conditions, age groups).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not specified. The document mentions "trained and licensed practitioners" for interpretation and "medical professional" for decision-making, but does not detail expert involvement in ground truth establishment for testing. Given the clearance year (2011), formal ground truth establishment involving multiple experts for an AI study was less common for this type of device.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not specified. This level of detail is not present in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- No, an MRMC comparative effectiveness study was not done or reported. The document describes the software as assisting in treatment planning and diagnosis and states that results depend on practitioner interpretation. It does not claim improvement of human readers with AI assistance, nor does it provide any effect sizes. The clearance is based on substantial equivalence to predicate devices, not on a demonstrated improvement in reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- A standalone performance study of an "algorithm" as typically defined for AI was not reported. The device is a software suite with various tools, not presented as a single standalone diagnostic algorithm. Its functionalities (like cephalometric tracing) involve algorithms, but their performance is not reported in isolation with metrics like sensitivity/specificity compared to a ground truth. Rather, the software is consistently described as a tool for a "trained and licensed practitioner" whose interpretation is key.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not specified. Given the nature of the device (image management and analysis tools for dental practitioners), "ground truth" might pertain to the accuracy of digitized anatomical landmarks, consistency of measurements, or correctness of image manipulation functions rather than a definitive medical diagnosis. However, the exact methods are not detailed.
8. The sample size for the training set
- Not applicable / Not specified. The document does not describe the use of machine learning models that would require a separate "training set" in the modern sense. The software's functionalities are based on established algorithms (e.g., for image processing, geometric calculations for cephalometry) rather than data-driven AI models that are "trained."
9. How the ground truth for the training set was established
- Not applicable / Not specified. As no machine learning training set is described, there's no information on how its ground truth would have been established.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).