
Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K222072
    Device Name: Dolphin Imaging 12.0
    Date Cleared: 2022-08-08 (25 days)
    Regulation Number: 892.2050

    Intended Use

    Dolphin Imaging 12.0 software is designed for use by specialized dental practices for capturing, storing and presenting patient images and assisting in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners.

    Device Description

    Dolphin Imaging 12.0 software provides imaging, diagnostics, and case presentation capabilities for dental specialty professionals. The Dolphin Imaging 12.0 suite of software products is a collection of modules that together provide a comprehensive toolset for the dental specialty practitioner. Users can manage 2D/3D images and x-rays; diagnose and plan treatment; communicate and present cases to patients; and work efficiently with colleagues on multidisciplinary cases. The following functionalities make up the medical device modules:
    Cephalometric Tracing: Digitize landmarks on a patient's radiograph, trace cephalometric structures, view cephalometric measurements, superimpose images for analysis and perform custom analysis (a minimal angle-computation sketch follows this list).
    Treatment Simulation (VTO): Simulate orthodontic and surgical treatment results using Visual Treatment Objective (VTO) and growth features.
    Arnett/Gunson FAB Analyses: Perform face, airway, bite (FAB) analysis and simulate treatment for orthodontic and surgical cases based on the methodologies of Dr. William Arnett.
    McLaughlin Dental VTO: Analyze and evaluate orthodontic and surgical visual treatment objective (VTO) based on the theories of Dr. Richard McLaughlin.
    Implanner™: Plan dental implant procedures in 2D.
    Dolphin 3D: Plan, diagnose and present orthodontic and surgical cases, airway analysis, study models, implant planning and surgery treatment simulation in 3D.
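
    As referenced in the Cephalometric Tracing item above, such measurements reduce to geometry on digitized landmark coordinates. The following Python sketch is an illustration only, not Dolphin's implementation: it computes the SNA angle (the angle at Nasion between rays toward Sella and A-point) from hypothetical pixel coordinates.

```python
import numpy as np

def angle_at_vertex(vertex, p1, p2):
    """Angle in degrees at `vertex` formed by the rays toward p1 and p2."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(vertex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(vertex, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clamp to [-1, 1] to guard against floating-point drift before arccos.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical landmark coordinates (pixels) digitized on a lateral ceph.
sella   = (210.0, 180.0)
nasion  = (340.0, 170.0)
a_point = (330.0, 300.0)

sna = angle_at_vertex(nasion, sella, a_point)
print(f"SNA = {sna:.1f} degrees")  # ~81 degrees for these invented points
```

    In a real tracing module the coordinates would come from the user's digitization and be calibrated against the radiograph's scale; the values above are invented purely to make the computation concrete.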

    AI/ML Overview

    The provided text is a 510(k) Premarket Notification from the FDA for "Dolphin Imaging 12.0." This document primarily focuses on establishing substantial equivalence to a predicate device (Dolphin Imaging 11.5) rather than presenting detailed acceptance criteria and a study proving device performance against such criteria for a novel AI/ML medical device.

    Therefore, the requested information regarding acceptance criteria and a study proving the device meets those criteria, particularly in the context of AI/ML performance metrics (like accuracy, sensitivity, specificity, MRMC studies, standalone performance, and ground truth establishment), is not present in the provided document.

    The document states: "No clinical testing was required to support substantial equivalence." This indicates that no new performance studies (clinical or non-clinical, beyond basic software and system testing) were conducted or needed for this specific 510(k) clearance, as the changes were deemed moderate and the device maintained the same "key medical device functionality" as its predicate.

    The "Non-Clinical Performance Testing" section lists general software and system testing (Performance Testing, Manual Testing/Integration Testing, System and Regression testing) and adherence to recognized standards (Usability, Software Life Cycle, DICOM, Risk Management). These are standard validation practices for software modifications, not performance studies as typically understood for AI/ML devices with specific numerical acceptance criteria.

    To directly answer your request based on the provided text:

    Acceptance Criteria and Study Proving Device Meets Acceptance Criteria

    No specific acceptance criteria related to a numerical performance metric (e.g., accuracy, sensitivity, AUC) for a diagnostic AI/ML algorithm are mentioned or detailed in this 510(k) summary.

    No study proving the device meets specific performance-based acceptance criteria (as would be typical for an AI/ML algorithm) is described. The 510(k) submission primarily relies on demonstrating substantial equivalence to a previously cleared predicate device, since the changes were software usability enhancements and system updates rather than a new AI-driven diagnostic capability.


    However, if we were to infer the closest thing to "acceptance criteria" and "proof" from the document's context of substantial equivalence and safe and effective functionality, it would be:

    • Acceptance Criteria (Implied): The Dolphin Imaging 12.0 software operates with the same core medical device functionalities (listed in the "Medical Device Features" table) as the predicate (Dolphin Imaging 11.5) without introducing new safety or efficacy concerns. Usability enhancements are functional and do not degrade existing performance. Compliance with specified industry standards (IEC, DICOM, ISO) is met.
    • Study Proving Acceptance (Implied): The "Non-Clinical Performance Testing" which included "Performance Testing," "Manual Testing/Integration Testing," and "System and Regression testing," alongside adherence to recognized standards, served to demonstrate that the updated software continued to function as intended and comparably to the predicate, with the added usability enhancements.

    Responding to your specific numbered points, recognizing that this document is not for a novel AI/ML algorithm performance study:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria: Not explicitly stated as numerical performance targets. Implicitly, the device must maintain the same "Medical Device Features" as the predicate and perform comparably, without issues.
      • Reported Device Performance: No quantitative performance metrics (e.g., accuracy, sensitivity) are reported. The "performance" is demonstrated through verification that the software functions as expected and complies with relevant standards.
      Acceptance Criterion (Implied) → Reported Device Performance (as evident from clearance)
      • Maintains all "key medical device functionality" of the predicate → Confirmed via internal testing and the substantial equivalence claim.
      • Usability enhancements function as intended → Confirmed via internal testing.
      • No new safety or efficacy concerns compared to the predicate → Concluded by FDA based on the submission.
      • Compliance with IEC 62366 (Usability Engineering) → Stated as compliant.
      • Compliance with ANSI/AAMI/IEC 62304 (Software Life Cycle) → Stated as compliant.
      • Compliance with NEMA PS 3.1-3.20 (DICOM) → Stated as compliant.
      • Compliance with ISO 14971 (Risk Management) → Stated as compliant.
    2. Sample size used for the test set and the data provenance: Not applicable/not provided. This was a software upgrade submission, not an AI/ML diagnostic performance study requiring a test set of patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable/not provided. No ground truth was established from clinical data for a diagnostic performance study.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable/not provided.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was performed or required. The device is described as "assisting in treatment planning and case diagnosis," with results "dependent on the interpretation of trained and licensed practitioners," implying a human-in-the-loop system, but no study of human performance improvement with the updated software is detailed.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done: No standalone performance study was performed or required.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable/not provided for a diagnostic performance study. The "ground truth" for the software's functionality was its adherence to specifications and its comparable behavior to the predicate device.

    8. The sample size for the training set: Not applicable/not provided. This is not an AI/ML model being trained on a dataset.

    9. How the ground truth for the training set was established: Not applicable/not provided.


    K Number: K110430
    Device Name: DOLPHIN IMAGING
    Date Cleared: 2011-05-11 (86 days)
    Regulation Number: 892.2050

    Intended Use

    Dolphin Imaging software is designed for use by specialized dental practices for capturing, storing and presenting patient images and assisting in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners.

    Device Description

    The software contains eight major components (modules) for achieving its basic functionalities. They are ImagingPlus™, Ceph Tracing, Treatment Simulation, Arnett/Gunson FAB, McLaughlin Dental VTO, Implanner™, Dolphin 3D, and Dolphin Letter System.

    ImagingPlus™ is the foundation of the Dolphin product suites. It allows the user to capture, organize, edit, print, store and present patient image records.

    Ceph Tracing allows the user to digitize landmarks on a patient's radiograph, trace cephalometric structures, view cephalometric measurements, and superimpose images for analysis.

    Treatment Simulation (VTO) Module provides a tool to simulate orthodontic and surgical treatment results using Visual Treatment Objective (VTO).

    Arnett/Gunson FAB Analyses performs face, airway, bite (FAB) analysis and simulates treatment for orthodontic and surgical cases based on the methodologies of Dr. William Arnett.

    The McLaughlin Dental VTO is an interactive treatment-planning and case presentation software program based on the theories of Dr. Richard P. McLaughlin, a renowned clinician, author and lecturer. It analyzes and evaluates tooth positions and dental treatment options, which assists clinicians in planning precise, quantifiable movement of dentition using clinical examination and treatment planning values.

    Dolphin Implanner Module is for planning implant procedures. Using this module, simulated dental implants can be placed on a patient's lateral or panoramic x-ray images.

    The Dolphin 3D module contains features for generating a multitude of views of the volumetric data, including simulated x-ray views and 3D-rendered views of the volume.
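
    A common way to produce a "simulated x-ray view" from volumetric (CBCT) data is to project voxel intensities along a viewing axis. The numpy sketch below uses a synthetic volume and is only a conceptual illustration, not Dolphin 3D's actual rendering pipeline.

```python
import numpy as np

# Synthetic stand-in for a CBCT volume: (slices, rows, cols) of
# attenuation-like values; a real volume would come from the scanner.
rng = np.random.default_rng(seed=0)
volume = rng.random((256, 256, 256)).astype(np.float32)

# Projecting along one axis collapses the volume to a 2D image:
# an average-intensity projection approximates a radiograph, while a
# maximum-intensity projection (MIP) highlights the densest structures.
lateral_aip = volume.mean(axis=2)  # simulated lateral x-ray (average)
lateral_mip = volume.max(axis=2)   # MIP along the same axis

print(lateral_aip.shape, lateral_mip.shape)  # (256, 256) (256, 256)
```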

    Dolphin Letter System generates letters that include the images and diagnostic questionnaire data entered by the user; the user can choose a pre-defined letter template or create a custom template.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or a dedicated study proving the device meets these criteria. The document is primarily a 510(k) premarket notification for the Dolphin Imaging software, focusing on its intended use, description of modules, and demonstrating substantial equivalence to predicate devices.

    However, based on the general information provided, we can infer some aspects and highlight what is not present.

    Inferred Acceptance Criteria (Based on typical software validation in medical devices):

    • Functional Correspondence: The software's features and output should align with its described functionalities (e.g., image acquisition, storage, manipulation, cephalometric tracing, treatment simulation, 3D visualization, reporting).
    • Accuracy/Consistency of Measurements: If the software performs measurements (e.g., cephalometric measurements), these should be consistent and accurate when compared to manual methods or established standards (see the sketch after this list for what such an agreement check might look like).
    • Image Quality/Integrity: Images should be acquired, stored, and displayed without degradation.
    • Data Integrity: Patient data and image records should be stored and retrieved accurately and securely.
    • Usability/User Interface: The software should be intuitive and easy for trained practitioners to use as intended.
    • Compatibility: Compatibility with standard imaging interfaces (TWAIN, DICOM) and standard image file formats.
    • Performance/Reliability: The software should perform its functions reliably without crashes or errors.
    • Safety: The software should not introduce new risks to patient safety.
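
    To make the measurement-accuracy criterion above concrete, here is a sketch of a Bland-Altman-style agreement check between software-computed and manually traced cephalometric angles. The paired values are hypothetical; no such dataset appears in the 510(k).

```python
import numpy as np

# Hypothetical paired measurements (degrees) of one cephalometric angle.
manual   = np.array([81.2, 79.8, 83.1, 80.5, 82.0, 78.9, 84.3, 80.0, 81.7, 82.5])
software = np.array([81.0, 80.1, 82.8, 80.9, 81.6, 79.2, 84.0, 80.4, 81.9, 82.1])

diff = software - manual
bias = diff.mean()            # systematic offset between methods
sd = diff.std(ddof=1)         # sample standard deviation of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:+.2f} deg, 95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}] deg")
```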

    Reported Device Performance (Implicit from the document):

    The document states: "Dolphin Imaging has successfully completed integration testing/verification testing and Beta validation. In addition, potential hazards have been evaluated and controlled to an acceptable level."

    This indicates that internal testing validated the software's performance against its design specifications and risk controls, but specific performance metrics (e.g., numerical accuracy, sensitivity, specificity, processing speed) are not reported.


    Detailed Breakdown of Information (Based on the provided text):

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Inferred) → Reported Device Performance (Implicit)
    • Functional Correspondence: All modules (ImagingPlus™, Ceph Tracing, Treatment Simulation, Arnett/Gunson FAB, McLaughlin Dental VTO, Implanner™, Dolphin 3D, Dolphin Letter System) function as described. → The software was "designed, developed, tested, and validated according to written procedures" and "successfully completed integration testing/verification testing and Beta validation," implying the functional requirements were met.
    • Accuracy/Consistency of Measurements: Cephalometric measurements are accurate. → Not explicitly stated with quantifiable metrics. The software performs calculations based on digitized landmarks, but no study is detailed that quantifies the accuracy of these measurements against a gold standard. The document emphasizes that "Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners."
    • Image Quality/Integrity: Acquired, stored, and displayed images maintain quality. → Not explicitly stated with quantifiable metrics. The software allows users to crop, rotate, enhance, and otherwise manipulate images, implying image-processing capabilities.
    • Data Integrity and Storage: Patient records, images, and derived data (e.g., altered cephalometric data from VTO) are stored and retrieved accurately. → The software manages "patient image records," "Dolphin Data Storage," and a "proprietary XML-based data format" for reports; "Reports can be output...or storage in the Dolphin Data Storage." This implies successful storage and retrieval.
    • Compatibility: Compatibility with TWAIN, DICOM, and standard image file formats (a minimal DICOM-read sketch follows this table). → Explicitly stated: Dolphin Imaging is TWAIN compatible with 2D functionality similar to VistaDent AT Complete, and DICOM compatible with 3D functionality similar to VistaDent 3D for communicating images with other medical imaging devices.
    • Safety: Device operation does not introduce new safety concerns. → "Minor technological differences do not raise any new questions regarding safety or effectiveness of the device"; "potential hazards have been evaluated and controlled to an acceptable level."
    • Effectiveness Claim (Overall): Substantially equivalent to predicate devices for the intended use. → The FDA review found the device "substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices" for "Picture archiving and communications system." This is the primary effectiveness claim for regulatory clearance. The document notes, "The result of these operations is a morphed x-ray or photograph of the simulated post-treatment patient. This result is to be viewed as a guideline for the medical professional when making his or her treatment decisions, not as advice or a guaranteed outcome," clarifying the device's role as an aid rather than a definitive diagnostic or treatment tool.
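
    For reference on the Compatibility row above, reading a DICOM object programmatically looks like the following. This sketch uses the open-source pydicom library and a hypothetical file path; it illustrates what DICOM compatibility entails rather than anything in Dolphin's own codebase.

```python
import pydicom

# Hypothetical path to a DICOM file exported by an imaging device.
ds = pydicom.dcmread("ceph_lateral.dcm")

print(ds.Modality)          # e.g. "DX" for digital radiography
print(ds.Rows, ds.Columns)  # stored image dimensions
pixels = ds.pixel_array     # decoded pixel data as a numpy array
print(pixels.dtype, pixels.shape)
```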

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not specified. The document mentions "integration testing/verification testing and Beta validation," but does not provide details on the number of cases or images used in these tests.
    • Data Provenance: Not specified. It's likely internal testing data, but no explicit mention of country of origin, retrospective/prospective nature, or type of patients (e.g., specific conditions, age groups).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not specified. The document mentions "trained and licensed practitioners" for interpretation and "medical professional" for decision-making, but does not detail expert involvement in ground truth establishment for testing. Given the clearance year (2011), formal ground truth establishment involving multiple experts for an AI study was less common for this type of device.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Not specified. This level of detail is not present in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not explicitly done or reported. The document describes the software as assisting in treatment planning and diagnosis, with results dependent on practitioner interpretation. It does not claim improvement of human readers with AI assistance, nor does it provide any effect sizes. The clearance is based on substantial equivalence to predicate devices, not on a demonstrated improvement in reader performance.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

    • A standalone performance study of an "algorithm" as typically defined for AI was not reported. The device is a software suite with various tools, not presented as a single standalone diagnostic algorithm. Its functionalities (like cephalometric tracing) involve algorithms, but their performance is not reported in isolation with metrics like sensitivity/specificity compared to a ground truth. Rather, the software is consistently described as a tool for a "trained and licensed practitioner" whose interpretation is key.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not specified. Given the nature of the device (image management and analysis tools for dental practitioners), "ground truth" might pertain to the accuracy of digitized anatomical landmarks, consistency of measurements, or correctness of image manipulation functions rather than a definitive medical diagnosis. However, the exact methods are not detailed.

    8. The sample size for the training set

    • Not applicable / Not specified. The document does not describe the use of machine learning models that would require a separate "training set" in the modern sense. The software's functionalities are based on established algorithms (e.g., for image processing, geometric calculations for cephalometry) rather than data-driven AI models that are "trained."

    9. How the ground truth for the training set was established

    • Not applicable / Not specified. As no machine learning training set is described, there's no information on how its ground truth would have been established.
