
510(k) Data Aggregation

    K Number: K242437
    Device Name: Smile Dx®
    Manufacturer:
    Date Cleared: 2025-05-14 (271 days)
    Product Code:
    Regulation Number: 892.2070
    Reference & Predicate Devices
    Predicate For: N/A

    Intended Use

    Smile Dx® is a computer-assisted detection (CADe) software designed to aid dentists in the review of digital files of bitewing and periapical radiographs of permanent teeth. It is intended to aid in the detection and segmentation of suspected dental findings which include: caries, periapical radiolucencies (PARL), restorations, and dental anatomy.

    Smile Dx® is also intended to aid dentists in the measurement (in millimeter and percentage measurements) of mesial and distal bone levels associated with each tooth.

    The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Smile Dx® supports both digital and phosphor sensors.

    Device Description

    Smile Dx® is a computer-assisted detection (CADe) device indicated for use by licensed dentists as an aid in their assessment of bitewing and periapical radiographs of secondary dentition in adult patients. Smile Dx® utilizes machine learning to produce annotations for the following findings:

    • Caries
    • Periapical radiolucencies
    • Bone level measurements (mesial and distal)
    • Normal anatomy (enamel, dentin, pulp, and bone)
    • Restorations

    AI/ML Overview

    The FDA 510(k) clearance letter for Smile Dx® outlines the device's acceptance criteria and the studies conducted to demonstrate that it meets them.

    Acceptance Criteria and Device Performance

    The acceptance criteria are implicitly defined by the performance metrics reported in the "Performance Testing" section: Dice coefficient, sensitivity, specificity, and mean absolute error (MAE) for standalone testing, and wAFROC figure-of-merit, sensitivity, and specificity changes for the human-in-the-loop (MRMC) evaluation.

    Here's a table summarizing the reported device performance against the implied acceptance criteria:

    Table 1: Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | Standalone Testing: | | |
    | Caries Detection | High Dice, Sensitivity | Dice: 0.74 [0.72, 0.76]; Sensitivity (overall): 88.3% [83.5%, 92.6%] |
    | Periapical Radiolucency (PARL) Detection | High Dice, Sensitivity | Dice: 0.77 [0.74, 0.80]; Sensitivity: 86.1% [80.2%, 91.9%] |
    | Bone Level Detection (Bitewing) | High Sensitivity, Specificity; Low MAE | Sensitivity: 95.5% [94.3%, 96.7%]; Specificity: 94.0% [91.1%, 96.6%]; MAE: 0.30 mm [0.29 mm, 0.32 mm] |
    | Bone Level Detection (Periapical) | High Sensitivity, Specificity; Low MAE (percentage) | Sensitivity: 87.3% [85.4%, 89.2%]; Specificity: 92.1% [89.9%, 94.1%]; MAE: 2.6% [2.4%, 2.8%] |
    | Normal Anatomy Detection | High Dice, Sensitivity, Specificity | Dice: 0.84 [0.83, 0.85]; Sensitivity (pixel-level): 86.1% [85.4%, 86.8%]; Sensitivity (contour-level): 95.2% [94.5%, 96.0%]; Specificity (contour-level): 93.5% [91.6%, 95.8%] |
    | Restorations Detection | High Dice, Sensitivity, Specificity | Dice: 0.87 [0.85, 0.90]; Sensitivity (pixel-level): 83.1% [80.3%, 86.4%]; Sensitivity (contour-level): 90.9% [88.2%, 93.9%]; Specificity (contour-level): 99.6% [99.3%, 99.8%] |
    | MRMC Clinical Evaluation: Reader Improvement | | |
    | Caries Detection (wAFROC Δθ) | Statistically significant improvement | +0.127 [0.081, 0.172] (p < 0.001) |
    | PARL Detection (wAFROC Δθ) | Statistically significant improvement | +0.098 [0.061, 0.135] (p < 0.001) |
    | Caries Detection (Sensitivity Improvement) | Increased sensitivity with device assistance | +19.6% [12.8%, 26.4%] (from 64.3% to 83.9%) |
    | PARL Detection (Sensitivity Improvement) | Increased sensitivity with device assistance | +19.1% [13.6%, 24.7%] (from 70.7% to 89.8%) |
    | Caries Detection (Specificity Improvement) | Maintained or improved specificity with device assistance | +16.7% [13.5%, 19.9%] (from 73.6% to 90.2%) |
    | PARL Detection (Specificity Improvement) | Maintained or improved specificity with device assistance | +4.7% [3.0%, 6.4%] (from 92.6% to 97.3%) |
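
    The letter does not publish evaluation code, so for orientation only, here is a minimal sketch of the standard definitions behind the pixel-level metrics in Table 1 (Dice, sensitivity, specificity, MAE), using boolean NumPy masks; all names and the toy arrays below are illustrative, not from the submission.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|P ∩ T| / (|P| + |T|) over boolean masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Pixel-level sensitivity TP/(TP+FN) and specificity TN/(TN+FP)."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

def mae(pred_vals: np.ndarray, truth_vals: np.ndarray) -> float:
    """Mean absolute error for bone-level measurements (mm or percent)."""
    return float(np.mean(np.abs(pred_vals - truth_vals)))

# Toy 2x2 masks: two true positives, one false positive, one false negative.
pred = np.array([[1, 0], [1, 1]], dtype=bool)
truth = np.array([[1, 0], [0, 1]], dtype=bool)
print(dice(pred, truth))  # 0.8
```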

    Study Details Supporting Device Performance

    1. Sample Sizes for the Test Set and Data Provenance

    • Standalone Testing:
      • Caries and Periapical Radiolucency Detection: 867 cases.
      • Bone Level Detection and Bone Loss Measurement: 352 cases.
      • Normal Anatomy and Restorations: 200 cases.
    • MRMC Clinical Evaluation: 352 cases.
    • Data Provenance: All test sets were collected from "multiple U.S. sites." The data are retrospective; the MRMC evaluation is described as a "retrospective study." Sub-group analyses covered imaging hardware and patient demographics (i.e., age, sex, race), indicating diversity in the data.

    2. Number of Experts and Qualifications for Ground Truth

    • Standalone Testing: The document does not state how ground truth for the standalone test sets was established; it was likely derived from expert consensus, as in the MRMC study.
    • MRMC Clinical Evaluation: Ground truth was established by the "consensus labels of four US licensed dentists."

    3. Adjudication Method for the Test Set

    • MRMC Clinical Evaluation: Ground truth was established by the "consensus labels of four US licensed dentists," implying a form of consensus adjudication in which all four experts reviewed and agreed on the findings. The specific method (e.g., majority vote, 2+1, 3+1) is not detailed beyond "consensus labels"; a minimal majority-vote sketch follows below for illustration. For standalone testing, the adjudication method is not specified.
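
    Because the letter says only "consensus labels," any implementation detail is conjecture. As one illustration of what consensus adjudication could look like, here is a simple majority-vote sketch over four annotators' boolean masks; this is not the method actually used.

```python
import numpy as np

def majority_vote(annotations: list[np.ndarray]) -> np.ndarray:
    """Consensus mask: a pixel is positive when a strict majority of
    annotators (e.g., 3 of 4 dentists) marked it positive."""
    votes = np.stack(annotations).sum(axis=0)  # per-pixel vote count
    return votes > len(annotations) / 2        # strict majority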

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance:
      • Caries Detection:
        • wAFROC Δθ: +0.127 [0.081, 0.172] (p < 0.001)
        • Sensitivity Improvement: 19.6% increase (from 64.3% without device to 83.9% with device).
        • Specificity Improvement: 16.7% increase (from 73.6% without device to 90.2% with device).
      • Periapical Radiolucency (PARL) Detection:
        • wAFROC Δθ: +0.098 [0.061, 0.135] (p < 0.001)
        • Sensitivity Improvement: 19.1% increase (from 70.7% without device to 89.8% with device).
        • Specificity Improvement: 4.7% increase (from 92.6% without device to 97.3% with device).
    • The study used a "fully-crossed, multiple-reader multiple-case (MRMC) evaluation method" calling for at least 13 US licensed dentists; the Smile Dx® study had 14 readers. Half of the data set contained unannotated images, and the other half contained radiographs that had been processed through the CADe device. Radiographs were presented to readers in alternating groups across two sessions separated by a washout period; a sketch of such a crossover schedule follows below.
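
    The exact randomization is not described in the letter. The following sketch shows one way the alternating-group, two-session crossover could be organized; the function name, case naming, and seed handling are assumptions for illustration only.

```python
import random

def crossover_schedule(case_ids: list[str], seed: int = 0) -> dict[str, list[tuple[str, str]]]:
    """Two-session crossover schedule for a fully-crossed MRMC design:
    every reader sees every case under both conditions, with the
    aided/unaided assignment swapped between sessions (washout between)."""
    rng = random.Random(seed)
    cases = case_ids[:]
    rng.shuffle(cases)
    half = len(cases) // 2
    group_a, group_b = cases[:half], cases[half:]
    return {
        "session_1": [(c, "unaided") for c in group_a] + [(c, "aided") for c in group_b],
        "session_2": [(c, "aided") for c in group_a] + [(c, "unaided") for c in group_b],
    }

# Example: 352 cases, matching the MRMC test set size reported above.
schedule = crossover_schedule([f"case_{i}" for i in range(352)])
```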

    5. Standalone Performance Study (Algorithm Only)

    • Yes, a standalone performance study was done.
    • It evaluated the algorithm's performance for:
      • Caries and Periapical Radiolucency Detection (Dice, Sensitivity)
      • Bone Level Detection and Bone Loss Measurement (Sensitivity, Specificity, Mean Absolute Error)
      • Normal Anatomy and Restorations Detection (Dice, Pixel-level Sensitivity, Contour-level Sensitivity, Contour-level Specificity); the pixel- vs. contour-level distinction is sketched below
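
    The letter reports both pixel-level and contour-level figures without defining the matching rule. A common convention, assumed here purely for illustration, treats a ground-truth contour as detected when some predicted contour overlaps it above an IoU threshold; the 0.5 threshold below is an assumption, not a value from the submission.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def contour_level_sensitivity(truth_contours: list[np.ndarray],
                              pred_contours: list[np.ndarray],
                              threshold: float = 0.5) -> float:
    """Fraction of ground-truth contours matched by at least one
    predicted contour under the assumed IoU matching threshold."""
    hits = sum(any(iou(t, p) >= threshold for p in pred_contours)
               for t in truth_contours)
    return hits / len(truth_contours) if truth_contours else 0.0
```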

    6. Type of Ground Truth Used

    • MRMC Study (explicitly stated): "Consensus labels of four US licensed dentists." Expert consensus therefore served as ground truth for the human-in-the-loop evaluation, and likely for the standalone evaluation as well. There is no mention of pathology or outcomes data.

    7. Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions the test set sizes.

    8. How Ground Truth for Training Set Was Established

    • The document does not state how ground truth for the training set was established. It mentions that the model "utilizes machine learning to produce annotations" and that "training data" is used, but gives no details on the annotation process. Expert annotation, as with the test set, is plausible but not confirmed in the provided text.