K Number
K241725
Date Cleared
2025-03-11

(270 days)

Product Code
Regulation Number
892.2070
Panel
RA
Reference & Predicate Devices
Intended Use

Better Diagnostics Caries Assist is a radiological, automated, concurrent-read CADe software intended to identify and localize carious lesions on bitewing and periapical radiographs acquired from patients aged 18 years or older. Better Diagnostics Caries Assist is indicated for use by licensed dentists. The device is not intended as a replacement for a complete dentist's review or their clinical judgment, which takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.

Device Description

Better Diagnostics Caries Assist (BDCA) Version 1.0 is a computer-aided detection (CADe) software designed for the automated detection of carious lesions in bitewing and periapical dental radiographs. The software offers supplementary information to assist clinicians in their diagnosis of potentially carious tooth surfaces. BDCA v1.0 is not meant to replace a comprehensive clinical evaluation by a clinician, which should consider other pertinent information from the image, the patient's medical history, and clinical examination. The software is intended for use in identifying carious lesions in permanent teeth of patients who are 18 years or older.

BDCA v1.0 does not make treatment recommendations or provide a diagnosis. Dentists should review images annotated by BDCA v1.0 concurrently with the original, unannotated images before making the final diagnosis on a case. BDCA v1.0 is an adjunct tool and does not replace the role of the dentist; the CADe-generated output should not be used as the primary interpretation. BDCA v1.0 is designed to detect only one condition: caries.

BDCA v1.0 comprises four main components:

  • Presentation Layer: This component includes the "AI Results Screen," a web-based user interface that allows users to view AI-marked annotations. It is custom code provided by Better Diagnostics AI Corp to dental practice management software (PMS) customers. The user interface uses Angular.js and Node.js to display images on the "AI Results Screen." The system can process PNG, BMP, and JPG images; all images are converted to JPEG format for processing. The computer vision models return AI annotations and coordinates to the business layer, which sends the coordinates to the presentation layer, where bounding boxes are drawn on the image by custom Angular.js and Node.js code. Dentists can view, accept, or reject each annotation based on their own evaluation. Better Diagnostics AI provides the UI code to customers, e.g., dental practice management software and imaging firms, for use of the BDCA v1.0 software.
  • Application Programming Interface (API): APIs are a set of definitions and protocols for building and integrating application software, sometimes described as a contract between an information provider and an information user. The BDCA v1.0 APIs connect the dental PMS with the business layer: the API receives image input from the dental PMS and passes it to the business layer, then receives annotations and coordinates from the business layer and passes them to the presentation layer hosted by the dental PMS.
  • Business Layer: Receives the image from the API gateway and passes it to the computer vision models. It also receives the bounding-box coordinates from the models and retrieves images from cloud storage, then sends all of this information to the "AI Results Screen" to display rectangular bounding boxes.
  • Computer Vision Models (CV Models): These models are hosted on a cloud computing platform and are responsible for image processing. They provide a binary indication of whether carious findings are present. If carious findings are detected, the software outputs the coordinates of a bounding box for each finding. If no carious lesions are found, the output contains no bounding boxes and instead carries the message "No Suspected: Caries Detected".
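The four-layer flow above can be sketched as a minimal pipeline. This is an illustrative sketch only: all function and field names here are hypothetical, not Better Diagnostics' actual code.

```python
# Illustrative sketch of the BDCA v1.0 request flow described above.
# All names are hypothetical.

def run_cv_model(image_bytes: bytes) -> list[dict]:
    """Placeholder for the cloud-hosted computer vision models.

    Returns one dict per suspected carious finding, each carrying
    bounding-box coordinates. An empty list means no findings.
    """
    return []  # stub: a real deployment would call the hosted models


def business_layer(image_bytes: bytes, model=run_cv_model) -> dict:
    """Receives an image from the API gateway, invokes the CV models,
    and packages the result for the presentation layer."""
    findings = model(image_bytes)
    if findings:
        return {"status": "findings", "boxes": findings}
    # Per the description, the no-findings output carries a message
    # instead of bounding boxes.
    return {"status": "clear", "message": "No Suspected: Caries Detected"}


# e.g. a PNG bitewing forwarded by the dental PMS:
result = business_layer(b"\x89PNG...")
```

The presentation layer would then either draw the returned boxes or display the no-findings message.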

AI models have three parts:

  • Pre-Processing Module: Standardizes the image to a specific height and width for consistency of AI-model input, and determines the image type (IOPA, bitewing, or other). BDCA v1.0 can only process bitewing and IOPA images for patients over age 18; all other image types are rejected.
  • Core Module: Provides carious-lesion annotations and the coordinates used to draw bounding boxes.
  • Post-Processing Module: Includes a cleanup process that removes outliers/incorrect annotations from the images.
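The pre- and post-processing steps can be sketched as below. The target size, the confidence threshold, and the field names are assumptions made for the sketch; the summary does not disclose the actual values.

```python
# Illustrative pre- and post-processing, mirroring the modules above.

TARGET_W, TARGET_H = 512, 256          # hypothetical standard size
ACCEPTED_TYPES = {"bitewing", "iopa"}  # all other image types are rejected


def pre_process(image_type: str, width: int, height: int):
    """Reject unsupported image types; otherwise return the scale
    factors needed to standardize the image to TARGET_W x TARGET_H."""
    if image_type.lower() not in ACCEPTED_TYPES:
        raise ValueError(f"unsupported image type: {image_type}")
    return TARGET_W / width, TARGET_H / height


def post_process(detections, min_confidence=0.5):
    """Drop outlier/low-confidence annotations, as in the cleanup step
    described for the Post-Processing Module."""
    return [d for d in detections if d["confidence"] >= min_confidence]


sx, sy = pre_process("Bitewing", 1024, 512)   # -> (0.5, 0.5)
kept = post_process([{"box": (10, 20, 40, 60), "confidence": 0.91},
                     {"box": (0, 0, 5, 5), "confidence": 0.12}])
```

A real implementation would also resample the pixel data; here only the scale factors are computed to keep the sketch dependency-free.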
AI/ML Overview

Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Better Diagnostics Caries Assist (BDCA) Version 1.0.

1. Acceptance Criteria and Reported Device Performance

The acceptance criteria are not explicitly stated as pass/fail thresholds in a dedicated table; they are instead embedded in the "predefined performance goals" mentioned in the standalone testing section. The reported device performance is presented against these implicit goals.

| Metric (Level) | Predefined Performance Goal (Implicit Acceptance Criteria) | Reported Device Performance (BDCA v1.0) |
|---|---|---|
| BW Surface Sensitivity | > 0.74 | 89.2% (CI: [86.15%, 92.13%]) |
| BW Surface Specificity | > 0.95 | 99.5% (CI: [99.32%, 99.57%]) |
| IOPA Surface Sensitivity | > 0.76 | 88.2% (CI: [85.27%, 90.78%]) |
| IOPA Surface Specificity | > 0.95 | 99.1% (CI: [98.88%, 99.31%]) |
| IOPA Image Sensitivity (Optimistic) | > 0.75 | 91.8% (CI: [88.54%, 94.42%]) |

Note: For image-level sensitivity and specificity, the report provides both "Conservative" and "Optimistic" definitions as per FDA guidance. The "Optimistic" sensitivity for IOPA images had a specific goal. For other image-level metrics and conservative sensitivities, while performance is reported as "robust" and exceeding "performance thresholds," a specific numerical goal isn't explicitly listed in the text. However, the reported values consistently exceed the surface-level goals, implying sufficient performance.

2. Sample Size Used for the Test Set and Data Provenance

From the Standalone Testing section:

  • BW Images: 614 images (310 with cavities, 304 without). Within these, 15,687 surfaces were examined (585 positively identified with cavities, 15,102 negative).
  • IOPA Images: 684 images (367 with cavities, 317 without). Within these, 9,253 surfaces were examined (618 showed positive results for cavities, 8,635 negative).
  • Data Provenance: The document does not explicitly state the country of origin. It indicates that "A patient had at most one BW and one IOPA image included in the analysis dataset." The study states that "Twenty-nine United States (US) licensed dentists" participated in the MRMC study, which may suggest the data is from the US, but this is not definitively stated for the standalone test set. The study type is retrospective, as existing images were analyzed.
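Given counts like those above, surface-level sensitivity and specificity with an approximate 95% CI can be computed as below. The Wilson score interval is an assumption here; the summary does not state which CI method was used, so these bounds will not exactly match the reported ones, and the TP/FN and TN/FP splits shown are hypothetical values consistent with the 585 positive and 15,102 negative BW surfaces.

```python
import math


def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half


def sensitivity(tp: int, fn: int):
    """Point estimate and CI for sensitivity = TP / (TP + FN)."""
    return tp / (tp + fn), wilson_ci(tp, tp + fn)


def specificity(tn: int, fp: int):
    """Point estimate and CI for specificity = TN / (TN + FP)."""
    return tn / (tn + fp), wilson_ci(tn, tn + fp)


# Hypothetical splits of the 585 positive / 15,102 negative BW surfaces:
sens, sens_ci = sensitivity(tp=522, fn=63)     # 522/585 ≈ 0.892
spec, spec_ci = specificity(tn=15026, fp=76)   # 15026/15102 ≈ 0.995
```

The same helpers apply unchanged to the IOPA surface counts (618 positive, 8,635 negative).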

From the Clinical Evaluation-Reader Improvement (MRMC) Study section:

  • Total Radiographs: 328 (108 BW and 220 IOPA).
  • BW Images with Cavities: 72 out of 108.
  • IOPA Images with Cavities: 91 out of 220.
  • BW Surfaces with Cavities: 221 out of 2716.
  • IOPA Surfaces with Cavities: 160 out of 2967.
  • Data Provenance: The document does not explicitly state the country of origin of the data itself, only that US licensed dentists interpreted them. The study setup implies a retrospective analysis of existing radiographs.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

  • Number of Experts: Three experienced, licensed dentists.
  • Qualifications: Each with over 10 years of professional experience.

4. Adjudication Method for the Test Set

  • Adjudication Method: Consensus of two out of three experts (2+1).
    • "Ground truth was determined through the consensus of two out of three experienced, licensed dentists... agreeing on the final labels for analysis when at least two dentists identified a surface as carious."
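The 2+1 rule is simple to express: a surface is labeled carious when at least two of the three readers mark it. A minimal sketch, with the reader votes as plain booleans:

```python
def consensus_label(reader_marks: list[bool], threshold: int = 2) -> bool:
    """Ground-truth label under the 2-of-3 rule: positive when at
    least `threshold` readers mark the surface as carious."""
    return sum(reader_marks) >= threshold


# Three licensed dentists, one vote each per surface:
consensus_label([True, True, False])   # -> True  (2 of 3 agree)
consensus_label([True, False, False])  # -> False
```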

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Was an MRMC study done? Yes, a multi-reader, multi-case (MRMC) study was conducted.

  • Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:

    • AFROC Score (AUC):
      • BW Images: Aided AUC = 0.848, Unaided AUC = 0.806. (Improvement = +0.042)
      • IOPA Images: Aided AUC = 0.845, Unaided AUC = 0.807. (Improvement = +0.038)
      • The differences were statistically significant.
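The reported effect size is the aided-minus-unaided difference in reader-averaged AUC. A simplified sketch of that comparison follows; the per-reader AUC values are hypothetical, and the study's AFROC figure of merit is not reproduced by this plain paired average.

```python
def mean_auc_improvement(aided: list[float], unaided: list[float]) -> float:
    """Average paired per-reader AUC difference (aided minus unaided)."""
    assert len(aided) == len(unaided), "one aided/unaided pair per reader"
    diffs = [a - u for a, u in zip(aided, unaided)]
    return sum(diffs) / len(diffs)


# Hypothetical per-reader AUCs for illustration only:
aided = [0.86, 0.84, 0.85]
unaided = [0.82, 0.80, 0.81]
delta = mean_auc_improvement(aided, unaided)   # ≈ 0.04
```

In the actual study, significance of the difference would be assessed with an MRMC method (e.g., an Obuchowski-Rockette style analysis), not a bare average as sketched here.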

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.