
510(k) Data Aggregation

    K Number: K250264


    Device Name: SugarBug (1.x)
    Manufacturer:
    Date Cleared: 2025-11-07 (282 days)
    Product Code:
    Regulation Number: 892.2070

    Reference & Predicate Devices
    Predicate For: N/A
    Flags: AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · is PCCP Authorized · Third-party · Expedited review
    Intended Use

    SugarBug is a radiological, automated, concurrent-read, computer-assisted detection software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. SugarBug is intended to be used on patients 18 years and older. The device is not intended as a replacement for a complete dentist's review or their clinical judgment, which takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Device Description

    SugarBug is a software as a medical device (SaMD) that uses machine learning to label features that the reader should examine for evidence of decay. SugarBug uses a convolutional neural network to perform a semantic segmentation task. The algorithm goes through every pixel in an image and assigns to it a probability that it contains decay. A threshold is then used to determine which pixels are labeled in the device's output. The software reads the selected image using local processing; images are not imported or sent to a cloud server at any time during routine use.
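The per-pixel probability-plus-threshold output described above can be sketched as follows. This is an illustrative sketch only, not the device's implementation; the function name and the 0.5 threshold are assumptions.

```python
import numpy as np

def segment_caries(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert a per-pixel decay-probability map into a binary mask.

    prob_map  : HxW array of values in [0, 1], one probability per pixel
                (the CNN output described above).
    threshold : operating point; 0.5 is a placeholder, not the device's
                actual threshold.
    """
    return prob_map >= threshold

# Toy 2x3 probability map
probs = np.array([[0.1, 0.7, 0.4],
                  [0.9, 0.2, 0.6]])
mask = segment_caries(probs, threshold=0.5)
print(mask.astype(int))  # [[0 1 0]
                         #  [1 0 1]]
```

Pixels at or above the threshold are marked in the overlay; lowering the threshold trades higher sensitivity for more false marks per image.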

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the SugarBug (1.x) device, based on the provided FDA 510(k) clearance letter:

    1. Acceptance Criteria and Reported Device Performance

    Acceptance criteria are not explicitly stated in a quantitative table for this device. However, based on the clinical study results and the stated objectives, the implicit acceptance criteria would have been:

    • A statistically significant improvement in overall diagnostic performance (wAFROC-AUC) for aided readers compared to unaided readers.
    • A demonstrated improvement in lesion-level sensitivity for aided readers.
    • Maintained or improved lesion annotation quality (DICE scores) with aid.
    • Standalone performance metrics (sensitivity, FPPI, DICE coefficient) within an acceptable range.

    Here's a table summarizing the reported device performance against these implicit criteria:

    MRMC Study (Aided vs. Unaided):

    | Metric | Acceptance Criterion (Implicit) | Unaided Readers | Aided Readers | Difference (Aided vs. Unaided) | Statistical Significance |
    |---|---|---|---|---|---|
    | wAFROC-AUC (primary endpoint) | Statistically significant improvement with aid | 0.659 (0.611, 0.707) | 0.725 (0.683, 0.767) | 0.066 (0.030, 0.102) | p = 0.001 (significant) |
    | Lesion-level sensitivity | Statistically significant improvement with aid | 0.540 (0.445, 0.621) | 0.674 (0.615, 0.728) | 0.134 (0.066, 0.206) | Significant |
    | Mean FPPI | Maintain or improve (small or negative difference) | 0.328 (0.102, 0.331) | 0.325 (0.128, 0.310) | -0.003 (-0.103, 0.086) | Not statistically significant (small improvement) |
    | Mean DICE score (readers) | Improvement in lesion delineation | 0.695 (0.688, 0.702) | 0.740 (0.733, 0.747) | 0.045 (0.035, 0.055) | N/A (modest improvement) |

    Standalone Study:

    | Metric | Acceptance Criterion (Implicit) | Standalone Device Performance |
    |---|---|---|
    | Lesion-level sensitivity | Acceptable range | 0.686 (0.655, 0.717) |
    | Mean FPPI | Acceptable range | 0.231 (0.111, 0.303) |
    | DICE coefficient (vs. ground truth) | Acceptable range | 0.746 (0.724, 0.768) |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set (MRMC Study): 300 bitewing radiographic images.
    • Sample Size for Test Set (Standalone Study): 400 de-identified images.
    • Data Provenance: Retrospectively collected from routine dental examinations of patients aged 18 and older in the US. The images were sampled to be representative of a range of x-ray sensor types (Vatech HD: 29%; iSensor H2: 11%; Schick 33: 45%; Dexis Platinum: 15%).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: 3 US licensed general dentists.
    • Qualifications: Mean of 27 years of clinical experience.

    4. Adjudication Method for the Test Set

    • Adjudication Method (Ground Truth): Consensus labels of the 3 US licensed general dentists.
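The clearance letter says only that consensus labels of the three dentists were used, without specifying the procedure. One common way to form pixel-level consensus from multiple annotators is a per-pixel majority vote; the sketch below illustrates that approach and is an assumption, not the study's documented method.

```python
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """Per-pixel majority vote over an odd number of binary annotator masks.

    One common consensus scheme; the 510(k) letter does not specify
    the actual adjudication procedure used.
    """
    stack = np.stack([m.astype(int) for m in masks])
    return stack.sum(axis=0) > (len(masks) // 2)

# Three hypothetical annotator masks for a 2x2 region
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 1], [0, 1]])
c = np.array([[0, 0], [1, 0]])
print(majority_vote([a, b, c]).astype(int))  # [[1 0]
                                             #  [1 1]]
```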

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was a MRMC study done? Yes.
    • Effect size of human readers improvement with AI vs. without AI assistance:
      • wAFROC-AUC Improvement: 0.066 (0.030, 0.102) with a p-value of 0.001.
      • Lesion-Level Sensitivity Improvement: 0.134 (0.066, 0.206).

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes.
    • Performance metrics:
      • Lesion-level sensitivity: 0.686 (0.655, 0.717)
      • Mean FPPI: 0.231 (0.111, 0.303)
      • DICE coefficient versus ground truth: 0.746 (0.724, 0.768)
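Lesion-level sensitivity and FPPI are simple ratios over the test set: sensitivity is the fraction of ground-truth lesions the algorithm detected, and FPPI is the average number of false marks per image. A sketch with hypothetical per-image counts (none of these numbers come from the study):

```python
def detection_metrics(per_image_results):
    """Lesion-level sensitivity and mean false positives per image (FPPI).

    per_image_results: list of (true_positives, false_positives,
    total_lesions) tuples, one per image. Counts here are hypothetical.
    """
    tp = sum(r[0] for r in per_image_results)
    fp = sum(r[1] for r in per_image_results)
    lesions = sum(r[2] for r in per_image_results)
    sensitivity = tp / lesions if lesions else 0.0
    fppi = fp / len(per_image_results)
    return sensitivity, fppi

# 4 hypothetical images: (detected lesions, false marks, lesions present)
results = [(2, 0, 3), (1, 1, 1), (0, 0, 0), (3, 1, 4)]
sens, fppi = detection_metrics(results)
print(round(sens, 3), round(fppi, 2))  # 0.75 0.5
```

Note that the two metrics trade off against each other through the device's operating threshold, which is why the study reports them together.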

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (established by 3 US licensed general dentists).

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size used for the training set. It only describes the test sets.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set was established. It notes only that the standalone testing data were "collected and labeled in the same procedure as the MRMC study," implying that expert consensus was used for that test set, but it does not specify the procedure for the training data.