Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K221921
    Date Cleared: 2023-03-28 (270 days)
    Regulation Number: 892.2070
    Matched on Device Name: DTX Studio Clinic 3.0

    Intended Use

    DTX Studio Clinic is a computer assisted detection (CADe) device that analyses intraoral radiographs to identify and localize dental findings, which include caries, calculus, periapical radiolucency, root canal filling deficiency, discrepancy at margin of an existing restoration and bone loss.

    The DTX Studio Clinic CADe functionality is indicated for the concurrent review of bitewing and periapical radiographs of permanent teeth in patients 15 years of age or older.

    Device Description

    DTX Studio Clinic features an AI-powered Focus Area Detection algorithm which analyzes intraoral radiographs for potential dental findings or image artifacts. The detected focus areas can be converted afterwards to diagnostic findings after approval by the user. The following dental findings can be detected by the device: Caries, Discrepancy at margin of an existing restoration, Periapical radiolucency, Root canal filling deficiency, Bone loss, Calculus.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device meets those criteria for the DTX Studio Clinic 3.0.

    Here's the breakdown:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a pass/fail threshold, but rather presents the performance results from the standalone (algorithm only) and clinical (human-in-the-loop) studies. The acceptance is implicitly based on these results demonstrating clinical benefit and safety.

    Standalone Performance (Algorithm-Only)

    | Dental Finding Type | Metric | Reported Performance (95% CI) |
    | --- | --- | --- |
    | Caries | Sensitivity | 0.70 [0.65, 0.75] |
    | Caries | Mean IoU | 58.6 [56.2, 60.9]% |
    | Caries | Mean Dice | 71.9 [69.9, 74.0]% |
    | Periapical Radiolucency | Sensitivity | 0.68 [0.59, 0.77] |
    | Periapical Radiolucency | Mean IoU | 48.9 [44.9, 52.9]% |
    | Periapical Radiolucency | Mean Dice | 63.7 [59.9, 67.5]% |
    | Root Canal Filling Deficiency | Sensitivity | 0.95 [0.91, 0.99] |
    | Root Canal Filling Deficiency | Mean IoU | 51.9 [49.3, 54.6]% |
    | Root Canal Filling Deficiency | Mean Dice | 66.9 [64.3, 69.4]% |
    | Discrepancy at Restoration Margin | Sensitivity | 0.82 [0.77, 0.87] |
    | Discrepancy at Restoration Margin | Mean IoU | 48.4 [46.0, 50.7]% |
    | Discrepancy at Restoration Margin | Mean Dice | 63.5 [61.3, 65.8]% |
    | Bone Loss | Sensitivity | 0.78 [0.75, 0.81] |
    | Bone Loss | Mean IoU | 44.8 [43.4, 46.3]% |
    | Bone Loss | Mean Dice | 60.1 [58.7, 61.6]% |
    | Calculus | Sensitivity | 0.80 [0.76, 0.84] |
    | Calculus | Mean IoU | 55.5 [53.7, 57.3]% |
    | Calculus | Mean Dice | 70.1 [68.4, 71.7]% |
    | Overall | Sensitivity | 0.79 [0.74, 0.84] |
    | Overall | Precision | 0.45 [0.40, 0.50] |
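
The summary does not define how the overlap metrics were computed, but for localization tasks IoU and Dice are standard measures of agreement between a predicted region and the reference region. As a minimal sketch only (assuming one binary mask per finding; this is not code from the submission):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """IoU and Dice between a predicted and a reference binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 0.0
    dice = 2 * inter / total if total else 0.0  # per mask, Dice = 2*IoU / (1 + IoU)
    return float(iou), float(dice)
```

Per mask, Dice = 2*IoU / (1 + IoU), so Dice is always at least as large as IoU, which matches the pattern in the table. The reported figures are means over individual findings, so the tabulated mean Dice values need not follow exactly from the mean IoU values.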

    Clinical Performance (Human-in-the-Loop)

    | Metric | Reported Performance (95% CI) |
    | --- | --- |
    | Overall AUC Increase (Aided vs. Unaided) | 8.7% [6.5, 10.9]% |
    | Caries AUC Increase | 6.1% |
    | Periapical Radiolucency AUC Increase | 10.2% |
    | Root Canal Filling Deficiency AUC Increase | 13.5% |
    | Discrepancy at Restoration Margin AUC Increase | 10.1% |
    | Bone Loss AUC Increase | 5.6% |
    | Calculus AUC Increase | 7.2% |
    | Overall Instance Sensitivity Increase | 22.4% [20.1, 24.7]% |
    | Caries Sensitivity Increase | 19.6% |
    | Bone Loss Sensitivity Increase | 23.5% |
    | Calculus Sensitivity Increase | 18.1% |
    | Discrepancy at Restoration Margin Sensitivity Increase | 28.5% |
    | Periapical Radiolucency Sensitivity Increase | 20.6% |
    | Root Canal Filling Deficiency Sensitivity Increase | 27.4% |
    | Overall Image Level Specificity Decrease | 8.7% [6.6, 10.7]% |
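
The summary reports aided-versus-unaided AUC deltas but not the underlying analysis model. The sketch below shows only the naive per-reader AUC difference, assuming each reader gives a per-case confidence score; a real MRMC analysis (e.g., Obuchowski-Rockette) additionally models reader and case variance to produce the confidence intervals shown above:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auc_gain(labels, unaided, aided):
    """Average per-reader AUC improvement (aided minus unaided).

    labels:  (n_cases,) binary reference standard per case
    unaided: (n_readers, n_cases) confidence scores without AI assistance
    aided:   (n_readers, n_cases) confidence scores with AI assistance
    """
    gains = [roc_auc_score(labels, a) - roc_auc_score(labels, u)
             for u, a in zip(unaided, aided)]
    return float(np.mean(gains))
```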

    2. Sample Size and Data Provenance

    • Test Set (Standalone Performance):

      • Sample Size: 452 adult intraoral radiograph (IOR) images (bitewings and periapical radiographs).
      • Data Provenance: Not explicitly stated, but implicitly retrospective as they were "assembled" and "ground-truthed" for the study.
    • Test Set (Clinical Performance Assessment - MRMC Study):

      • Sample Size: 216 periapical and bitewing IOR images.
      • Data Provenance: Acquired in US-based dental offices by either sensor or photostimulable phosphor plates. This suggests retrospective collection from real-world clinical settings in the US.
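
The tables above report 95% confidence intervals, but the submission does not state which interval method was used or the per-finding counts behind each estimate. Purely as an illustration (not the sponsor's method), a Wilson score interval is a common default for a proportion such as sensitivity:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% interval for a binomial proportion (e.g., sensitivity)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

Note that the interval width is driven by the number of reference findings of each type, not directly by the 452-image count.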

    3. Number of Experts and Qualifications for Ground Truth

    • Test Set (Standalone Performance):

      • Number of Experts: A group of 10 dental practitioners followed by an additional expert review.
      • Qualifications: "Dental practitioners" and "expert review" (no further details on experience or specialized qualifications are provided for this set).
    • Test Set (Clinical Performance Assessment - MRMC Study):

      • Number of Experts: 4 ground truthers.
      • Qualifications: All ground truthers have "at least 20 years of experience in reading of dental x-rays."

    4. Adjudication Method for the Test Set

    • Test Set (Standalone Performance): images were "ground-truthed by a group of 10 dental practitioners followed by an additional expert review." The specific consensus method (e.g., majority vote) is not explicitly stated.

    • Test Set (Clinical Performance Assessment - MRMC Study): Ground truth was defined by 4 ground truthers with a 3-out-of-4 consensus, i.e., an explicit 3-out-of-4 adjudication method (sketched below).
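
The summary does not say at what granularity the 3-out-of-4 agreement was counted (per finding or per pixel). A minimal pixel-level sketch under that assumption:

```python
import numpy as np

def consensus_mask(annotations: np.ndarray, min_agree: int = 3) -> np.ndarray:
    """Combine binary masks from several ground truthers.

    annotations: (n_readers, H, W) binary masks, one per reader.
    A pixel joins the reference standard when at least `min_agree`
    readers marked it (3 of 4 in the MRMC study described above).
    """
    return annotations.sum(axis=0) >= min_agree
```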

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
      • The primary endpoint, overall Area Under the Curve (AUC), showed a statistically significant increase of 8.7% (95% CI [6.5, 10.9]).

    K Number: K213562
    Date Cleared: 2022-03-25 (136 days)
    Regulation Number: 892.2050
    Matched on Device Name: DTX Studio Clinic 3.0

    Intended Use

    DTX Studio Clinic is a software program for the acquisition, management, transfer and analysis of dental and craniomaxillofacial image information, and can be used to provide design input for dental restorative solutions. It displays and enhances digital images from various sources to support the diagnostic process and treatment planning. It stores and provides these images within the system or across computer systems at different locations.

    Device Description

    DTX Studio Clinic is a software interface for dental/medical practitioners used to analyze 2D and 3D imaging data, in a timely fashion, for the treatment of dental, craniomaxillofacial and related conditions. DTX Studio Clinic displays and processes imaging data from different devices (e.g., intraoral X-rays, (CB)CT scanners, intraoral scanners, intraoral and extraoral cameras).

    AI/ML Overview

    This document is a 510(k) Premarket Notification for the DTX Studio Clinic 3.0. It primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed technical study report with specific acceptance criteria and performance metrics for novel functionalities.

    Therefore, the requested information regarding detailed acceptance criteria, specific performance data (e.g., accuracy metrics), sample sizes for test sets, data provenance, expert qualifications, and ground truth establishment for the automatic annotation of mandibular canals is not explicitly detailed in the provided text.

    The document states that "Automatic annotation of the mandibular canals" is a new feature in DTX Studio Clinic 3.0, and it is compared to the reference device InVivoDental (K123519) which has "Creation and visualization of the nerve manually or by using the Automatic Nerve feature." However, it does not provide the specific study details for validating this new feature within DTX Studio Clinic 3.0. It only broadly states that "Software verification and validation testing was conducted on the subject device."

    Based on the provided text, I cannot fulfill most of the requested information directly because it is not present. The document's purpose is to establish substantial equivalence based on the overall device function and safety, not to detail the rigorous validation of a specific AI/ML component with numerical acceptance criteria.

    However, I can extract the available information and highlight what is missing.


    Acceptance Criteria and Study for DTX Studio Clinic 3.0's Automatic Mandibular Canal Annotation (Information extracted from the document):

    Given the provided text, the specific, quantitative acceptance criteria and detailed study proving the device meets these criteria for the automatic annotation of the mandibular canal are not explicitly described. The document focuses on a broader claim of substantial equivalence and general software validation.

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Feature/Metric | Acceptance Criteria | Reported Device Performance | Source/Methodology (if available in text) |
    | --- | --- | --- | --- |
    | Automatic annotation of mandibular canals | Not explicitly stated in quantitative terms. Implied acceptance is that the functionality is "similar as in the reference device InVivoDental (K123519)" and that the user can "manually indicate or adjust the mandibular canal." | No specific performance metrics (e.g., accuracy, precision, recall, Dice coefficient) are provided. The text states: "The software automatically segments the mandibular canal based on the identification of the mandibular foramen and the mental foramen. This functionality is similar as in the reference device InVivoDental (K123519). The user can also manually indicate or adjust the mandibular canal." | Comparison to the reference device and user adjustability. Software verification and validation testing was conducted, but details are not provided. |
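
The public summary says only that the canal is segmented "based on the identification of the mandibular foramen and the mental foramen"; the actual algorithm is not disclosed. Purely as an illustration of one classical landmark-seeded approach (not the device's method), a minimum-cost path can be traced between the two foramina through a precomputed cost volume:

```python
import numpy as np
from skimage.graph import route_through_array  # scikit-image

def trace_canal(cost_volume: np.ndarray,
                mandibular_foramen: tuple[int, int, int],
                mental_foramen: tuple[int, int, int]) -> np.ndarray:
    """Illustrative landmark-seeded canal tracing through a CBCT-derived volume.

    cost_volume: voxel costs in which low values mark likely canal voxels
    (e.g., an inverted tubular-structure filter response). The two landmark
    arguments are voxel indices of the detected foramina.
    """
    path, _cost = route_through_array(
        cost_volume, start=mandibular_foramen, end=mental_foramen,
        fully_connected=True, geometric=True)
    return np.asarray(path)  # ordered voxel coordinates along the canal
```

Any such traced path would still be presented to the user for manual adjustment, consistent with the human-oversight workflow described in the summary.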

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified for the automatic mandibular canal annotation feature. The document states "Software verification and validation testing was conducted on the subject device," but provides no numbers.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Adjudication Method: Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • MRMC Study: Not mentioned or detailed. The document primarily makes a substantial equivalence claim based on the device's overall functionality and features, not a comparative effectiveness study involving human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Standalone Performance: Not explicitly detailed. The document describes the automatic segmentation functionality and mentions that the user can manually adjust, implying a human-in-the-loop scenario. No standalone performance metrics are provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Type of Ground Truth: Not specified for the automatic mandibular canal annotation. Given the context of a dental/maxillofacial imaging device, it would likely involve expert annotations on CBCT scans, but this is not confirmed in the text.

    8. The sample size for the training set:

    • Training Set Sample Size: Not specified. This document is a 510(k) submission, which focuses on validation, not the development or training process.

    9. How the ground truth for the training set was established:

    • Ground Truth Establishment for Training Set: Not specified.

    Summary of what can be inferred/not inferred from the document regarding the mandibular canal annotation:

    • New Feature: Automatic annotation of mandibular canals is a new feature in DTX Studio Clinic 3.0 that was not present in the primary predicate (DTX Studio Clinic 2.0).
    • Comparison to Reference Device: This new feature's "functionality is similar as in the reference device InVivoDental (K123519)", which has "Creation and visualization of the nerve manually or by using the Automatic Nerve feature."
    • Human Oversight: The user has the ability to "manually indicate or adjust the mandibular canal," suggesting that the automatic annotation is an aid to the diagnostic process, not a definitive, unreviewable output. This is typical for AI/ML features in medical imaging devices that are intended to support, not replace, clinical judgment.
    • Validation Claim: The submission states that "Software verification and validation testing was conducted on the subject device and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices'." This implies that the validation was performed in accordance with regulatory guidelines, but the specific details of that validation for this particular feature are not disclosed in this public summary.