510(k) Data Aggregation

    K Number: K221921
    Manufacturer:
    Date Cleared: 2023-03-28 (270 days)
    Product Code:
    Regulation Number: 892.2070
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    DTX Studio Clinic is a computer assisted detection (CADe) device that analyses intraoral radiographs to identify and localize dental findings, which include caries, calculus, periapical radiolucency, root canal filling deficiency, discrepancy at margin of an existing restoration and bone loss.

    The DTX Studio Clinic CADe functionality is indicated for the concurrent review of bitewing and periapical radiographs of permanent teeth in patients 15 years of age or older.

    Device Description

    DTX Studio Clinic features an AI-powered Focus Area Detection algorithm which analyzes intraoral radiographs for potential dental findings or image artifacts. The detected focus areas can then be converted into diagnostic findings after approval by the user. The following dental findings can be detected by the device: caries, discrepancy at margin of an existing restoration, periapical radiolucency, root canal filling deficiency, bone loss, and calculus.

    AI/ML Overview

    The provided text describes the acceptance criteria for DTX Studio Clinic 3.0 and the study that demonstrates the device meets those criteria.

    Here's the breakdown:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a pass/fail threshold, but rather presents the performance results from the standalone (algorithm only) and clinical (human-in-the-loop) studies. The acceptance is implicitly based on these results demonstrating clinical benefit and safety.

    Standalone Performance (Algorithm-Only)

    Dental Finding Type                  | Metric      | Reported Performance (95% CI)
    Caries                               | Sensitivity | 0.70 [0.65, 0.75]
                                         | Mean IoU    | 58.6 [56.2, 60.9]%
                                         | Mean Dice   | 71.9 [69.9, 74.0]%
    Periapical Radiolucency              | Sensitivity | 0.68 [0.59, 0.77]
                                         | Mean IoU    | 48.9 [44.9, 52.9]%
                                         | Mean Dice   | 63.7 [59.9, 67.5]%
    Root Canal Filling Deficiency        | Sensitivity | 0.95 [0.91, 0.99]
                                         | Mean IoU    | 51.9 [49.3, 54.6]%
                                         | Mean Dice   | 66.9 [64.3, 69.4]%
    Discrepancy at Restoration Margin    | Sensitivity | 0.82 [0.77, 0.87]
                                         | Mean IoU    | 48.4 [46.0, 50.7]%
                                         | Mean Dice   | 63.5 [61.3, 65.8]%
    Bone Loss                            | Sensitivity | 0.78 [0.75, 0.81]
                                         | Mean IoU    | 44.8 [43.4, 46.3]%
                                         | Mean Dice   | 60.1 [58.7, 61.6]%
    Calculus                             | Sensitivity | 0.80 [0.76, 0.84]
                                         | Mean IoU    | 55.5 [53.7, 57.3]%
                                         | Mean Dice   | 70.1 [68.4, 71.7]%
    Overall                              | Sensitivity | 0.79 [0.74, 0.84]
                                         | Precision   | 0.45 [0.40, 0.50]
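
    For context, the sketch below shows how IoU, Dice, sensitivity, and precision of the kind reported above are conventionally computed from binary masks and matched instances. It is an illustrative reconstruction, not the manufacturer's code; the toy masks, the greedy matching, and the 0.5 IoU match threshold are assumptions.

```python
# Illustrative sketch only: conventional IoU / Dice / sensitivity / precision
# computations for segmentation-style detections. Not the submission's code.
import numpy as np


def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 0.0


def detection_counts(pred_masks, gt_masks, match_iou=0.5):
    """Greedily match predicted instances to ground-truth instances.

    Returns (TP, FP, FN); sensitivity = TP / (TP + FN),
    precision = TP / (TP + FP). The 0.5 threshold is an assumption.
    """
    matched, tp = set(), 0
    for p in pred_masks:
        best_j, best_score = None, 0.0
        for j, g in enumerate(gt_masks):
            if j in matched:
                continue
            score = iou(p, g)
            if score > best_score:
                best_j, best_score = j, score
        if best_j is not None and best_score >= match_iou:
            matched.add(best_j)
            tp += 1
    fp = len(pred_masks) - tp
    fn = len(gt_masks) - len(matched)
    return tp, fp, fn


if __name__ == "__main__":
    # Toy 8x8 masks with substantial overlap.
    gt = np.zeros((8, 8), dtype=bool)
    gt[2:6, 2:6] = True
    pred = np.zeros((8, 8), dtype=bool)
    pred[2:6, 3:7] = True
    print(f"IoU={iou(pred, gt):.2f}  Dice={dice(pred, gt):.2f}")
    tp, fp, fn = detection_counts([pred], [gt])
    print(f"sensitivity={tp / (tp + fn):.2f}  precision={tp / (tp + fp):.2f}")
```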

    Clinical Performance (Human-in-the-Loop)

    Metric                                                 | Reported Performance (95% CI)
    Overall AUC Increase (Aided vs. Unaided)               | 8.7% [6.5, 10.9]%
    Caries AUC Increase                                    | 6.1%
    Periapical Radiolucency AUC Increase                   | 10.2%
    Root Canal Filling Deficiency AUC Increase             | 13.5%
    Discrepancy at Restoration Margin AUC Increase         | 10.1%
    Bone Loss AUC Increase                                 | 5.6%
    Calculus AUC Increase                                  | 7.2%
    Overall Instance Sensitivity Increase                  | 22.4% [20.1, 24.7]%
    Caries Sensitivity Increase                            | 19.6%
    Bone Loss Sensitivity Increase                         | 23.5%
    Calculus Sensitivity Increase                          | 18.1%
    Discrepancy at Restoration Margin Sensitivity Increase | 28.5%
    Periapical Radiolucency Sensitivity Increase           | 20.6%
    Root Canal Filling Deficiency Sensitivity Increase     | 27.4%
    Overall Image Level Specificity Decrease               | 8.7% [6.6, 10.7]%
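
    A note on the two metric families in this table: sensitivity is counted per finding instance, while specificity is counted per image (an image contributes only if it contains no finding of a given type). The minimal sketch below, using made-up labels and a hypothetical record format, illustrates the distinction; it is not the study's analysis code.

```python
# Illustrative sketch: instance-level sensitivity vs. image-level specificity.
# The record format (dicts with per-image counts) is a hypothetical assumption.

def instance_level_sensitivity(images: list[dict]) -> float:
    """Fraction of ground-truth finding instances that the reader detected."""
    detected = sum(im["detected_findings"] for im in images)
    total = sum(im["gt_findings"] for im in images)
    return detected / total if total else float("nan")


def image_level_specificity(images: list[dict]) -> float:
    """Fraction of finding-free images on which the reader marked nothing."""
    negatives = [im for im in images if im["gt_findings"] == 0]
    clean = [im for im in negatives if im["reader_marks"] == 0]
    return len(clean) / len(negatives) if negatives else float("nan")


example = [
    {"gt_findings": 2, "detected_findings": 2, "reader_marks": 3},  # positive image
    {"gt_findings": 0, "detected_findings": 0, "reader_marks": 1},  # false-positive mark
    {"gt_findings": 0, "detected_findings": 0, "reader_marks": 0},  # true negative image
]
print(instance_level_sensitivity(example))  # 1.0
print(image_level_specificity(example))     # 0.5
```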

    2. Sample Size and Data Provenance

    • Test Set (Standalone Performance):

      • Sample Size: 452 adult intraoral radiograph (IOR) images (bitewings and periapical radiographs).
      • Data Provenance: Not explicitly stated, but implicitly retrospective as they were "assembled" and "ground-truthed" for the study.
    • Test Set (Clinical Performance Assessment - MRMC Study):

      • Sample Size: 216 periapical and bitewing IOR images.
      • Data Provenance: Acquired in US-based dental offices by either sensor or photostimulable phosphor plates. This suggests retrospective collection from real-world clinical settings in the US.

    3. Number of Experts and Qualifications for Ground Truth

    • Test Set (Standalone Performance):

      • Number of Experts: A group of 10 dental practitioners followed by an additional expert review.
      • Qualifications: "Dental practitioners" and "expert review" (no further details on experience or specialized qualifications are provided for this set).
    • Test Set (Clinical Performance Assessment - MRMC Study):

      • Number of Experts: 4 ground truthers.
      • Qualifications: All ground truthers have "at least 20 years of experience in reading of dental x-rays."

    4. Adjudication Method for the Test Set

    • Test Set (Standalone Performance): "ground-truthed by a group of 10 dental practitioners followed by an additional expert review." - The specific consensus method (e.g., majority vote) is not explicitly stated.

    • Test Set (Clinical Performance Assessment - MRMC Study): Ground truth was defined by 4 ground truthers, with a finding accepted into the reference standard when at least 3 of the 4 agreed. This is an explicit 3-out-of-4 consensus adjudication method.
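
    As a concrete illustration of the 3-out-of-4 rule, the sketch below applies the consensus to per-finding binary annotations. The annotation format is an assumption for the example, not the submission's actual workflow.

```python
# Illustrative sketch of a 3-out-of-4 consensus rule for ground truthing.
# The list-of-booleans annotation format is an assumption for the example.

def consensus_3_of_4(annotator_labels: list[bool]) -> bool:
    """True if at least 3 of the 4 ground truthers marked the finding present."""
    if len(annotator_labels) != 4:
        raise ValueError("expects exactly four annotators")
    return sum(annotator_labels) >= 3


# One candidate finding reviewed by four annotators: three say it is present.
print(consensus_3_of_4([True, True, True, False]))   # True  -> enters reference standard
print(consensus_3_of_4([True, True, False, False]))  # False -> excluded
```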

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a MRMC comparative effectiveness study was done.
    • Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
      • The primary endpoint, overall Area Under the Curve (AUC), showed a statistically significant increase of 8.7% (CI [6.5, 10.9], p<0.001) for all dental finding types combined in the aided reading arm compared to the control (unaided) arm.
      • Instance-level sensitivity (reader's ability to detect existing dental findings) improved substantially by 22.4% (CI [20.1, 24.7]) overall.
      • Specific AUC increases per finding type ranged from 5.6% (bone loss) to 13.5% (root canal filling deficiency).
      • Specific sensitivity increases per finding type ranged from 18.1% (calculus) to 28.5% (discrepancy at restoration margin).
      • A mild decrease in image-level specificity of 8.7% (CI [6.6, 10.7]) was observed.
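
    For intuition only, the sketch below computes an aided-vs-unaided AUC difference of the kind reported as the primary endpoint, using simulated reader scores. A real MRMC analysis additionally accounts for reader and case variability (e.g., with Obuchowski-Rockette-style methods); this example only shows the basic per-arm AUC comparison.

```python
# Illustrative sketch with simulated data, not the study's analysis:
# compare per-arm AUC for unaided vs. AI-aided reading of the same cases.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=200)              # 1 = finding present on the image
unaided = 0.5 * truth + rng.normal(0, 0.45, 200)  # reader confidence without AI
aided = 0.7 * truth + rng.normal(0, 0.45, 200)    # reader confidence with AI overlay

auc_unaided = roc_auc_score(truth, unaided)
auc_aided = roc_auc_score(truth, aided)
print(f"unaided AUC = {auc_unaided:.3f}")
print(f"aided AUC   = {auc_aided:.3f}")
print(f"increase    = {(auc_aided - auc_unaided) * 100:.1f} percentage points")
```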

    6. Standalone (Algorithm Only) Performance

    • Yes, a standalone performance assessment was done.
    • This assessment measured the performance of the AI-powered Focus Area Detection algorithm "by itself, in the absence of any interaction with a dentist." The results (Overall sensitivity, precision, and Mean IoU/Dice scores per finding type) are detailed in the table in section 1 above.

    7. Type of Ground Truth Used

    • Expert Consensus:
      • For the standalone performance study, ground truth was established by "a group of 10 dental practitioners followed by an additional expert review."
      • For the clinical performance (MRMC) study, ground truth was defined by "4 ground truthers with a 3 out of 4 consensus," all having "at least 20 years of experience in reading of dental x-rays."

    8. Sample Size for the Training Set

    • The document does not specify the sample size for the training set. It only describes the test sets.

    9. How Ground Truth for the Training Set Was Established

    • The document does not describe how the ground truth for the training set was established. It only refers to the training process generically as "supervised machine learning" and focuses on the ground truthing of the test sets.