510(k) Data Aggregation

    K Number: K210187
    Manufacturer: Overjet, Inc.
    Date Cleared: 2021-05-19 (114 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Intended Use

    Overjet Dental Assist is a radiological semi-automated image processing software device intended to aid dental professionals in the measurements of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs.

    It should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis. The system is to be used by trained professionals, including, but not limited to, dentists and dental hygienists.

    Device Description

    Overjet Dental Assist, developed by Overjet, Inc., is a radiological semi-automated image processing software device intended to aid dental professionals in the measurement of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs.

    Overjet Dental Assist is a cloud-native Software as a Medical Device (SaMD) that allows users to automate the measurement of interproximal bone levels on bitewing and periapical radiographs, review the associated radiographs, and view and modify annotations.

    AI/ML Overview

    Here is an analysis of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria | Reported Performance (Bitewing Radiographs) | Reported Performance (Periapical Radiographs) |
    |---|---|---|---|
    | Sensitivity | >85% | 98.7% | 88.94% |
    | Specificity | >85% | 95.0% | 95.96% |
    | Mean Absolute Difference (Bone Level Measurement) | <1.5 mm | 0.307 mm | 0.353 mm |
    | Mean Absolute Difference (Periapical Root Length Measurement) | <1.5 mm (implied) | Not applicable | 0.567 mm |

    Note regarding "Periapical Root Length": While acceptance criteria for sensitivity, specificity, and mean absolute difference are explicitly stated for bone level measurements, the document also reports a mean absolute difference for "Periapical Root Length." It is reasonable to infer that a similar <1.5 mm acceptance criterion applies to this measurement type.
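
    As an illustrative reference only (not the sponsor's code), the sketch below shows how sensitivity, specificity, and mean absolute difference of the kind reported above could be computed and checked against the stated acceptance criteria. All function and variable names are hypothetical assumptions.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)
    from per-site binary labels (1 = bone loss present)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def mean_absolute_difference(gt_mm, pred_mm):
    """Mean absolute difference (mm) between device and ground-truth measurements."""
    return float(np.mean(np.abs(np.asarray(gt_mm) - np.asarray(pred_mm))))

def meets_acceptance_criteria(sens, spec, mad_mm):
    """Thresholds mirror the table above: sensitivity and specificity > 85%, MAD < 1.5 mm."""
    return sens > 0.85 and spec > 0.85 and mad_mm < 1.5
```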

    2. Sample Sizes and Data Provenance

    • Test Set (Clinical Testing):

      • Sample Size: 65 bitewing and 96 periapical radiographs from 63 subjects.
      • Data Provenance: Retrospective clinical subject data from patients in the United States, 22 years old or older, without primary teeth. No information about ethnicity was available.
    • Test Set (Bench Testing):

      • Sample Size: 2234 bitewing radiographs and 6543 periapical radiographs.
      • Data Provenance: Not explicitly stated, beyond being described as a "ground truth data set utilizing Object Keypoint Similarity assessment" (see the sketch after this list). The context suggests this is also retrospective imaging data.
    • Training Set (not explicitly called out as such here, but implied as distinct from the validation sets):

      • Sample Size: Not explicitly stated in the provided text.
      • Data Provenance: Not explicitly stated.
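
    The document does not define how the Object Keypoint Similarity (OKS) assessment mentioned under bench testing was computed. For orientation only, the sketch below implements the standard COCO-style OKS formulation; using an annotated area as the scale term and per-keypoint falloff constants are assumptions, not details from the submission.

```python
import numpy as np

def object_keypoint_similarity(pred_xy, gt_xy, visible, area, k):
    """COCO-style Object Keypoint Similarity (illustrative only).

    pred_xy, gt_xy : (N, 2) predicted and labeled keypoint coordinates in pixels
    visible        : (N,) boolean mask of keypoints that were actually labeled
    area           : object scale term (e.g., area of the annotated region)
    k              : (N,) per-keypoint falloff constants
    """
    d2 = np.sum((np.asarray(pred_xy, float) - np.asarray(gt_xy, float)) ** 2, axis=1)
    sim = np.exp(-d2 / (2.0 * area * np.asarray(k, float) ** 2 + np.finfo(float).eps))
    visible = np.asarray(visible, dtype=bool)
    return float(np.mean(sim[visible])) if visible.any() else 0.0
```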

    3. Number of Experts to Establish Ground Truth for Test Set & Qualifications

    • Number of Experts: 3 US licensed dentists for initial labeling, plus 2 US Dental Radiologists for adjudication.
    • Qualifications of Experts: US licensed dentists; US Dental Radiologists. No specific years of experience are mentioned.

    4. Adjudication Method for the Test Set

    • Clinical Testing: The adjudication method was "initial measurements by three US licensed dentists, which were then adjudicated by two US Dental Radiologists." This can be interpreted as a form of expert consensus and review; it is not a standard 2+1 or 3+1 scheme, but a multi-expert review process (a hypothetical sketch of such a workflow follows below).
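
    Since the mechanics of that adjudication are not described, the following is a purely hypothetical illustration of one way a multi-reader consensus could work: the three initial measurements for a site are combined by median, and sites where the readers disagree beyond a tolerance are flagged for radiologist review. The function name and the 1.0 mm tolerance are illustrative assumptions, not details from the submission.

```python
from statistics import median

def consensus_bone_level(reader_mm, tolerance_mm=1.0):
    """Combine three readers' bone-level measurements (mm) for one tooth surface.

    Returns (consensus_mm, needs_adjudication): the median measurement, plus a flag
    marking the site for radiologist review when readers disagree beyond tolerance.
    Hypothetical workflow; not taken from the 510(k) submission.
    """
    consensus = median(reader_mm)
    needs_adjudication = (max(reader_mm) - min(reader_mm)) > tolerance_mm
    return consensus, needs_adjudication

# Example: two readers agree closely, one differs; the site is flagged for review.
print(consensus_bone_level([2.1, 2.3, 3.6]))  # (2.3, True)
```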

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No explicit MRMC comparative effectiveness study was done to quantify the improvement of human readers with AI assistance versus without. The study focuses on standalone performance against expert-derived ground truth. The document does state, "The overall sensitivity of 88% was consistent with the performance of the three ground truth dentists," which indirectly compares the device's sensitivity to human performance on a specific metric, but this is not a formal MRMC study of human reading with and without AI.

    6. Standalone Performance Study

    • Yes, a standalone study was done. The entire "Clinical Testing" section describes the device's performance (sensitivity, specificity, mean absolute difference) when compared against an expert-established ground truth. The device results are reported directly without human intervention in the reported metrics.

    7. Type of Ground Truth Used

    • Expert Consensus. For the clinical testing, the ground truth was established by three US licensed dentists using a measurement tool, whose measurements were then adjudicated by two US Dental Radiologists. For the bench testing, it mentions "labeled keypoints," implying expert labeling.

    8. Sample Size for the Training Set

    • Not explicitly stated in the provided text. The document refers to "the Overjet met acceptable performance criteria with the following results:" and then lists results for "Bench Testing" and "Clinical Testing," which are typically considered validation/test sets, not the training set itself.

    9. How Ground Truth for Training Set Was Established

    • Not explicitly stated in the provided text, as the size and provenance of the training set itself are not detailed. It can be inferred that similar expert labeling processes would have been used to establish ground truth for training data, but this is not confirmed.