K Number: K233738
Manufacturer:
Date Cleared: 2024-03-04 (103 days)
Product Code:
Regulation Number: 892.2070
Panel: RA
Reference & Predicate Devices:
Intended Use

Overjet Caries Assist-Pediatric (OCA-Ped) is a radiological, automated, concurrent-read, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device provides additional information for the clinician to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.

The intended patient population of the device is patients aged 4-11 years who have primary or permanent teeth (primary or mixed dentition) and are indicated for dental radiographs.

Device Description

Overjet Caries Assist-Pediatric (OCA-Ped) is a radiological, automated, concurrent-read, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, or actual in vivo clinical assessment.

OCA-Ped is a software-only device that operates in three layers: a Network Layer, a Presentation Layer, and a Decision Layer. Images are pulled in from a clinic or dental office, the machine learning model creates predictions in the Decision Layer, and the results are pushed to a dashboard in the Presentation Layer.
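
The summary names the three layers but not their interfaces. Below is a rough sketch of how such a flow could be wired; every name in it (NetworkLayer, DecisionLayer, PresentationLayer, Detection, and their methods) is a hypothetical illustration, not Overjet's actual design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    tooth_id: int      # tooth the finding is attached to
    surface: str       # suspected carious surface
    confidence: float  # model score

class NetworkLayer:
    """Pulls radiographs in from the clinic/dental office system."""
    def fetch_image(self, study_id: str) -> bytes:
        return b"...raw radiograph bytes..."  # stand-in for a real transfer

class DecisionLayer:
    """Runs the machine learning model to create caries predictions."""
    def predict(self, image: bytes) -> List[Detection]:
        return [Detection(tooth_id=19, surface="distal", confidence=0.91)]

class PresentationLayer:
    """Pushes results to the clinician-facing dashboard."""
    def push(self, study_id: str, detections: List[Detection]) -> None:
        for d in detections:
            print(f"{study_id}: tooth {d.tooth_id}, {d.surface} ({d.confidence:.2f})")

# One study flowing through the three layers:
net, decide, present = NetworkLayer(), DecisionLayer(), PresentationLayer()
present.push("study-001", decide.predict(net.fetch_image("study-001")))
```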

AI/ML Overview

The provided document describes the Overjet Caries Assist-Pediatric (OCA-Ped) device, a computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs for patients aged 4-11 years old.

Here's the breakdown of the acceptance criteria and study details:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state "acceptance criteria" in a separate section with specific numerical thresholds. However, it presents the performance metrics from the MRMC reader study and standalone testing, which can be interpreted as demonstrating the device's acceptable performance.

| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
| --- | --- | --- |
| MRMC Reader Study (Aided vs. Unaided) | Improvement in diagnostic accuracy | |
| AUC of wAFROC (improvement) | Not explicitly defined, but positive | 7.5% improvement (95% CI: 0.062, 0.088) |
| Tooth-level Sensitivity (improvement) | Not explicitly defined, but positive | 11.8% improvement (95% CI: 0.102, 0.137) |
| Tooth-level Specificity (change) | Not explicitly defined, but minimal decrease | -0.011 (95% CI: -0.015, -0.008) |
| Standalone Performance | Sufficient diagnostic accuracy | |
| Tooth-level Sensitivity | Not explicitly defined | 83.9% (95% CI: 0.816, 0.860) |
| Tooth-level Specificity | Not explicitly defined | 97.5% (95% CI: 0.971, 0.979) |
| Standalone Dice | Not explicitly defined | 79.0% (95% CI: 0.784, 0.797) |
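
The CIs above are reported without the underlying counts or the interval method. One common choice for a binomial proportion such as sensitivity or specificity is the Wilson score interval; the sketch below assumes hypothetical counts (839 true positives out of 1000 carious surfaces), chosen only to land near the reported standalone sensitivity.

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_ci(839, 1000)  # hypothetical counts, not from the filing
print(f"sensitivity 95% CI: ({lo:.3f}, {hi:.3f})")  # -> roughly (0.815, 0.860)
```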

2. Sample Size Used for the Test Set and Data Provenance

  • MRMC Test Set: 636 images, each from a unique patient.
  • Standalone Test Set: 1190 bitewing and periapical images.
  • Data Provenance: "Images were obtained from male and female patients aged 4-11 years." For the standalone testing, images "were obtained from male and female patients, from a range of distinctly different geographic regions." The document does not state whether the data was retrospective or prospective, nor the country of origin; the participation of "US licensed dentists" in the MRMC study and the reference to distinctly different geographic regions suggest US-centric data.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

  • Number of Experts: 3
  • Qualifications of Experts: General dentists; the document states only "3 general dentists" and gives no specific experience level (e.g., years in practice).

4. Adjudication Method for the Test Set

  • Adjudication Method: Consensus ground truth established by 3 general dentists. The exact process of reaching consensus (e.g., silent read, discussion, majority vote) is not detailed, though the term implies agreement among the three experts; a common majority-vote scheme is sketched below.
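
Since the filing does not describe the consensus procedure, a common scheme for three annotators, a 2-of-3 majority vote per tooth surface, is sketched here with toy labels. This is an assumption for illustration, not a claim about Overjet's actual method.

```python
import numpy as np

# Rows: annotators; columns: tooth surfaces (1 = carious, 0 = sound).
# These labels are toy data for illustration.
annotations = np.array([
    [1, 0, 1, 0, 1],  # dentist A
    [1, 0, 0, 0, 1],  # dentist B
    [1, 1, 1, 0, 0],  # dentist C
])

# A surface enters the reference standard as carious when at least
# 2 of the 3 dentists marked it.
consensus = (annotations.sum(axis=0) >= 2).astype(int)
print(consensus)  # -> [1 0 1 0 1]
```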

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance

  • Yes, an MRMC comparative effectiveness study was done.
  • Effect Size of Improvement with AI Assistance:
    • AUC of wAFROC: Averaged across all readers, there was a 7.5% improvement (95% CI: 0.062, 0.088) in assisted readers compared to unassisted readers.
    • Tooth-level Sensitivity: Averaged across all readers, sensitivity increased by 11.8% (95% CI: 0.102, 0.137) when compared to unassisted readers.
    • Tooth-level Specificity: a slight decrease of 0.011 (95% CI for the change: -0.015, -0.008) for assisted relative to unassisted readers.
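
"Averaged across all readers" means each effect size is the mean of the per-reader aided-minus-unaided differences. A toy illustration with synthetic per-reader sensitivities follows; the summary reports neither reader-level results nor the number of readers.

```python
import numpy as np

# Synthetic per-reader tooth-level sensitivities, for illustration only.
unaided = np.array([0.61, 0.58, 0.66, 0.63, 0.60])
aided   = np.array([0.74, 0.69, 0.78, 0.75, 0.71])

# The reported effect size is the reader-averaged difference.
delta = aided - unaided
print(f"mean improvement: {delta.mean():.3f}")  # -> 0.118, the scale of the reported 11.8%
```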

6. If a Standalone Study (i.e., Algorithm-Only Performance, Without Human-in-the-Loop) Was Done

  • Yes, standalone performance testing was done.
    • Tooth-level standalone sensitivity was 83.9% (95% CI: 0.816, 0.860).
    • Tooth-level standalone specificity was 97.5% (95% CI: 0.971, 0.979).
    • Mean standalone Dice was 79.0% (95% CI: 0.784, 0.797).
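
The Dice coefficient quantifies spatial overlap between the algorithm's segmentation and the reference mask: 2|A∩B| / (|A| + |B|). A minimal NumPy sketch with toy binary masks (the study's masks are not public):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred  = np.array([[0, 1, 1],
                  [0, 1, 0]])
truth = np.array([[0, 1, 0],
                  [1, 1, 0]])
print(f"Dice: {dice(pred, truth):.3f}")  # 2*2 / (3+3) -> 0.667
```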

7. The Type of Ground Truth Used

  • Ground Truth Type: Expert consensus. Specifically, "consensus reference standard established by 3 general dentists."

8. The Sample Size for the Training Set

  • The document does not provide the sample size for the training set. It only describes the test sets used for MRMC and standalone performance evaluation.

9. How the Ground Truth for the Training Set Was Established

  • The document does not describe how the ground truth for the training set was established. It only details the ground truth establishment for the test sets.

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.