Dolphin Imaging 12.0
Dolphin Imaging 12.0 software is designed for use by specialized dental practices for capturing, storing and presenting patient images and assisting in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners.
Dolphin Imaging 12.0 software provides imaging, diagnostics, and case presentation capabilities for dental specialty professionals. The Dolphin Imaging 12.0 suite is a collection of modules that together provide a comprehensive toolset for the dental specialty practitioner. Users can manage 2D/3D images and x-rays; diagnose and plan treatment; communicate and present cases to patients; and collaborate with colleagues on multidisciplinary cases. The following functionalities make up the medical device modules:
- Cephalometric Tracing: Digitize landmarks on a patient's radiograph, trace cephalometric structures, view cephalometric measurements (see the sketch after this list), superimpose images for analysis, and perform custom analyses.
- Treatment Simulation (VTO): Simulate orthodontic and surgical treatment results using Visual Treatment Objective (VTO) and growth features.
- Arnett/Gunson FAB Analyses: Perform face, airway, bite (FAB) analysis and simulate treatment for orthodontic and surgical cases based on the methodologies of Dr. William Arnett.
- McLaughlin Dental VTO: Analyze and evaluate orthodontic and surgical visual treatment objectives (VTO) based on the theories of Dr. Richard McLaughlin.
- Implanner™: Plan dental implant procedures in 2D.
- Dolphin 3D: Plan, diagnose, and present orthodontic and surgical cases in 3D, including airway analysis, study models, implant planning, and surgical treatment simulation.
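To make the cephalometric measurement idea concrete, here is a minimal sketch of computing one standard measurement (the SNA angle) from digitized 2D landmarks. The coordinates, the `angle_at` helper, and the choice of SNA are illustrative assumptions; the 510(k) does not describe Dolphin's internal computations.

```python
import math

# Hypothetical 2D landmark coordinates (in mm) digitized from a lateral
# cephalogram; the points below are illustrative, not clinical data.
landmarks = {
    "S": (0.0, 0.0),     # Sella
    "N": (65.0, 25.0),   # Nasion
    "A": (72.0, -20.0),  # A-point (subspinale)
}

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# SNA: the angle at Nasion between Sella and A-point, a standard
# cephalometric measurement (roughly 82 degrees in a typical adult).
sna = angle_at(landmarks["N"], landmarks["S"], landmarks["A"])
print(f"SNA = {sna:.1f} degrees")
```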
The provided text is a 510(k) Premarket Notification from the FDA for "Dolphin Imaging 12.0." The document primarily focuses on establishing substantial equivalence to a predicate device (Dolphin Imaging 11.5) rather than presenting detailed acceptance criteria and a study proving device performance against such criteria, as would be expected for a novel AI/ML medical device.
Therefore, the requested information regarding acceptance criteria and a study proving the device meets those criteria, particularly in the context of AI/ML performance metrics (like accuracy, sensitivity, specificity, MRMC studies, standalone performance, and ground truth establishment), is not present in the provided document.
The document states: "No clinical testing was required to support substantial equivalence." This indicates that no new performance studies (clinical or non-clinical, beyond basic software and system testing) were conducted or needed for this specific 510(k) clearance, as the changes were deemed moderate and the device maintained the same "key medical device functionality" as its predicate.
The "Non-Clinical Performance Testing" section lists general software and system testing (Performance Testing, Manual Testing/Integration Testing, System and Regression testing) and adherence to recognized standards (Usability, Software Life Cycle, DICOM, Risk Management). These are standard validation practices for software modifications, not performance studies as typically understood for AI/ML devices with specific numerical acceptance criteria.
To directly answer your request based on the provided text:
Acceptance Criteria and Study Proving Device Meets Acceptance Criteria
No specific acceptance criteria related to a numerical performance metric (e.g., accuracy, sensitivity, AUC) for a diagnostic AI/ML algorithm are mentioned or detailed in this 510(k) summary.
No study proving the device meets specific performance-based acceptance criteria (as would be typical for an AI/ML algorithm) is described. The 510(k) submission primarily relies on demonstrating substantial equivalence to a previously cleared predicate device due to software usability enhancements and system updates, rather than a new AI-driven diagnostic capability.
However, if we were to infer the closest thing to "acceptance criteria" and "proof" from the document's context of substantial equivalence and safe and effective functionality, it would be:
- Acceptance Criteria (Implied): The Dolphin Imaging 12.0 software operates with the same core medical device functionalities (listed in the "Medical Device Features" table) as the predicate (Dolphin Imaging 11.5) without introducing new safety or efficacy concerns. Usability enhancements are functional and do not degrade existing performance. Compliance with specified industry standards (IEC, DICOM, ISO) is met.
- Study Proving Acceptance (Implied): The "Non-Clinical Performance Testing" which included "Performance Testing," "Manual Testing/Integration Testing," and "System and Regression testing," alongside adherence to recognized standards, served to demonstrate that the updated software continued to function as intended and comparably to the predicate, with the added usability enhancements.
Responding to your specific numbered points, recognizing that this document is not for a novel AI/ML algorithm performance study:
1. A table of acceptance criteria and the reported device performance:
   - Acceptance Criteria: Not explicitly stated as numerical performance targets. Implicitly, the device must maintain the same "Medical Device Features" as the predicate and perform comparably, without new issues.
   - Reported Device Performance: No quantitative performance metrics (e.g., accuracy, sensitivity) are reported. "Performance" is demonstrated through verification that the software functions as expected and complies with relevant standards.

   | Acceptance Criterion (Implied) | Reported Device Performance (as evident from clearance) |
   |---|---|
   | Maintains all "key medical device functionality" of the predicate. | Confirmed via internal testing and the substantial equivalence claim. |
   | Usability enhancements function as intended. | Confirmed via internal testing. |
   | No new safety or efficacy concerns compared to the predicate. | Concluded by FDA based on the submission. |
   | Compliance with IEC 62366 (Usability Engineering). | Stated as compliant. |
   | Compliance with ANSI/AAMI/IEC 62304 (Software Life Cycle). | Stated as compliant. |
   | Compliance with NEMA PS 3.1-3.20 (DICOM). | Stated as compliant. |
   | Compliance with ISO 14971 (Risk Management). | Stated as compliant. |

2. Sample size used for the test set and the data provenance: Not applicable/not provided. This was a software upgrade submission, not an AI/ML diagnostic performance study requiring a test set of patient data.
3. Number of experts used to establish the ground truth for the test set and their qualifications: Not applicable/not provided. No ground truth was established from clinical data for a diagnostic performance study.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable/not provided.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance: No MRMC study was performed or required. The device is described as "assisting in treatment planning and case diagnosis," with results "dependent on the interpretation of trained and licensed practitioners," implying a human-in-the-loop system, but no study of human performance improvement with the updated software is detailed.
6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done: No standalone performance study was performed or required.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable/not provided for a diagnostic performance study. The "ground truth" for the software's functionality was its adherence to specifications and its comparable behavior to the predicate device.
8. The sample size for the training set: Not applicable/not provided. This is not an AI/ML model trained on a dataset.
9. How the ground truth for the training set was established: Not applicable/not provided.