K Number: K243893
Manufacturer: (not listed)
Date Cleared: 2025-05-05 (138 days)
Product Code: (not listed)
Regulation Number: 892.2070
Panel: RA
Reference & Predicate Devices: (not listed)
Intended Use

Second Opinion® Pediatric is a computer-aided detection ("CADe") software to aid in the detection of caries in bitewing and periapical radiographs.

The intended patient population of the device is patients aged 4 years and older who have primary or permanent teeth (primary or mixed dentition) and are indicated for dental radiographs.

Device Description

Second Opinion Pediatric is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection and segmentation of caries on bitewing and periapical radiographs. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.

Second Opinion Pediatric consists of three parts:

  • Application Programming Interface ("API")
  • Machine Learning Modules ("ML Modules")
  • Client User Interface (UI) ("Client")

The processing sequence for an image is as follows:

  1. Images are sent for processing via the API
  2. The API routes images to the ML modules
  3. The ML modules produce detection output
  4. The UI renders the detection output

The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules: it sends imagery to the ML modules for processing and receives back the metadata they generate, which is then passed to the interface for rendering.

Second Opinion® Pediatric uses machine learning to detect caries. Images received by the ML modules are processed to yield detections, which are represented as metadata. The final output is made accessible to the API for sending to the UI for visualization. Detected caries are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected caries that may require clinical review. The clinician can toggle the overlays on the image to highlight a potential condition for viewing, and can hover over a detected caries to show an information box containing the segmentation of the caries in the form of percentages.
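The four-step processing sequence and metadata hand-off described above can be sketched as follows. Everything here is hypothetical: the clearance letter does not publish the API, module, or client interfaces, and the tooth label, polygon format, and `coverage_pct` field are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    tooth: str                          # illustrative tooth label only
    polygon: list[tuple[float, float]]  # overlay vertices in image coordinates
    coverage_pct: float                 # hypothetical "segmentation percentage" for the hover box

def ml_module(image_bytes: bytes) -> list[Detection]:
    """Stand-in for the caries-detection ML modules (returns a fixed example)."""
    return [Detection("LR6", [(10.0, 12.0), (18.0, 12.0), (14.0, 20.0)], 42.0)]

def api_process(image_bytes: bytes,
                module: Callable[[bytes], list[Detection]]) -> list[dict]:
    """Steps 1-3: the API routes the image to the ML module and converts
    its detections into metadata for the client."""
    return [{"tooth": d.tooth, "polygon": d.polygon, "coverage_pct": d.coverage_pct}
            for d in module(image_bytes)]

def render(metadata: list[dict]) -> list[str]:
    """Step 4: the UI draws one polygonal overlay per detection."""
    return [f"overlay on {m['tooth']} ({len(m['polygon'])} vertices)"
            for m in metadata]

overlays = render(api_process(b"fake-radiograph-bytes", ml_module))
```

The point of the sketch is the division of responsibility: the ML modules never talk to the UI directly, and the UI only ever sees metadata, never raw model output.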

AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for the Second Opinion® Pediatric device, based on the provided FDA 510(k) clearance letter:

Acceptance Criteria and Reported Device Performance

Acceptance Criteria: Primary endpoint: Second Opinion® Pediatric sensitivity for caries detection > 75% for bitewing and periapical images.

Reported Device Performance: Lesion-level sensitivity of 0.87 (87%), with a 95% Confidence Interval (CI) of (0.84, 0.90). The test for sensitivity > 70% was statistically significant (p-value: 0.70).

Study Details

  1. Sample sizes used for the test set and the data provenance:

    • Test Set Sample Size: 1182 radiographic images, containing 1085 caries lesions on 549 abnormal images.
    • Data Provenance: Country of origin not specified. The document describes the evaluation as a "standalone retrospective study," so the data were retrospective.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not specified. The document only mentions "Ground Truth," but details on the experts who established it are absent.
  3. Adjudication method for the test set:

    • Adjudication Method: Not explicitly stated. The document refers to "Ground Truth" but does not detail how potential disagreements among experts (if multiple were used) were resolved. It mentions a "consensus truthing method" for the predicate device's study, which might imply a similar approach, but this is not confirmed for the subject device.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • MRMC Study: No, an MRMC comparative effectiveness study was not performed for the Second Opinion® Pediatric device (the subject device). The provided text states, "The effectiveness of Second Opinion® Pediatric was evaluated in a standalone performance assessment to validate the CAD." The predicate device description mentions its purpose is to "aid dental health professionals... as a second reader," which implies an assistive role, but no MRMC data on human reader improvement with AI assistance is provided for either the subject or predicate device.
  5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Standalone Study: Yes, a standalone performance assessment was explicitly conducted for the Second Opinion® Pediatric device. The study "assessed the sensitivity of caries detection of Second Opinion® Pediatric compared to the Ground Truth."
  6. The type of ground truth used:

    • Ground Truth Type: Expert consensus is implied, as the study compared the device's performance against "Ground Truth" typically established by expert review. For the predicate, it explicitly mentions "consensus truthing method." It does not specify if pathology or outcomes data were used.
  7. The sample size for the training set:

    • Training Set Sample Size: Not specified in the provided document. The document focuses on the validation study.
  8. How the ground truth for the training set was established:

    • Training Set Ground Truth Establishment: Not specified in the provided document.

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.
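For reference, the diagnostic accuracy measures named in paragraph (b)(1)(iii) can all be computed from a 2x2 confusion matrix. The counts below are invented for illustration and are not data from this submission.

```python
def diagnostic_measures(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Standard 2x2 confusion-matrix accuracy measures."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),     # positive predictive value
        "npv": tn / (tn + fn),     # negative predictive value
        "DLR+": sensitivity / (1 - specificity),  # positive diagnostic likelihood ratio
        "DLR-": (1 - sensitivity) / specificity,  # negative diagnostic likelihood ratio
    }

# Invented counts for illustration only.
m = diagnostic_measures(tp=87, fp=10, fn=13, tn=90)
```

Note that a standalone sensitivity study, like the one summarized above, fixes tp and fn at the lesion level; specificity and the predictive values additionally require a count of normal cases and false marks.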