
510(k) Data Aggregation

    K Number: K250525
    Manufacturer: (not listed)
    Date Cleared: 2025-11-14 (266 days)
    Product Code: (not listed)
    Regulation Number: 892.2070

    Reference & Predicate Devices
    Predicate For: N/A

    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-Party Review, Expedited Review
    Intended Use

    Second Opinion® Panoramic is a radiological automated image processing software device intended to identify and mark regions, in panoramic radiographs, in relation to suspected dental findings which include: Caries, Periapical radiolucency, and Impacted third molars.

    It is designed to aid dental health professionals to review panoramic radiographs of permanent teeth in patients 16 years of age or older as both a concurrent and second reader.

    Device Description

    Second Opinion® PR is a radiological automated image processing software device intended to identify and mark regions, in panoramic radiographs, in relation to suspected dental findings which include: caries, periapical radiolucency, and impacted third molars. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.

    It is designed to aid dental health professionals to review panoramic radiographs of permanent teeth in patients 16 years of age or older as a concurrent and second reader.

    Second Opinion® PR consists of three parts:

    • Application Programming Interface ("API")
    • Machine Learning Modules ("ML Modules")
    • Client User Interface (UI) ("Client")

    The processing sequence for an image is as follows:

    1. Images are sent for processing via the API
    2. The API routes images to the ML modules
    3. The ML modules produce detection output
    4. The UI renders the detection output

    The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. It sends imagery to the ML modules for processing and receives the metadata they generate, which it passes to the user interface for rendering.

    Second Opinion® PR uses machine learning to detect regions of interest. Images received by the ML modules are processed, yielding detections represented as metadata. The final output is made accessible to the API, which sends it to the UI for visualization. Detected regions of interest are displayed as mask overlays atop the original radiograph, indicating to the practitioner which regions contain detected potential conditions that may require clinical review. The clinician can toggle overlays on the image to highlight a potential condition for viewing.
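    The three-part architecture and the four-step processing sequence described above can be sketched as follows. This is a minimal illustration under stated assumptions: `ApiConduit`, `Detection`, and `render_overlays` are hypothetical names invented for this sketch, not the device's actual interfaces.

```python
from dataclasses import dataclass, field

# Conditions named in the intended use statement.
CONDITIONS = ("caries", "periapical_radiolucency", "impacted_third_molar")

@dataclass
class Detection:
    """One detected region of interest, returned as metadata by an ML module."""
    condition: str    # one of CONDITIONS
    mask: list        # polygon vertices (x, y) outlining the region
    confidence: float # model score in [0, 1]

@dataclass
class ApiConduit:
    """Sketch of the API layer: routes imagery to the ML modules and
    collects the resulting detection metadata for the client UI."""
    modules: dict = field(default_factory=dict)  # condition -> detector fn

    def process(self, image) -> list:
        # Steps 1-3: image in via API, routed to each ML module,
        # detections gathered as metadata.
        detections = []
        for condition, detect in self.modules.items():
            for mask, score in detect(image):
                detections.append(Detection(condition, mask, score))
        return detections

def render_overlays(detections, visible_conditions):
    """Step 4 (UI): keep only the overlays the practitioner has toggled on."""
    return [d for d in detections if d.condition in visible_conditions]
```

    A caller would register one detector per condition, pass a radiograph to `process`, and hand the returned metadata to the UI layer for overlay rendering.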

    AI/ML Overview

    Below is a detailed breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) clearance letter:


    1. A table of acceptance criteria and the reported device performance

    | Performance Metric | Acceptance Criteria (Pre-specified Performance Threshold) | Reported Device Performance (Standalone Study) |
    |---|---|---|
    | Impacted Third Molars | | |
    | wAFROC FOM | > 0.78 | 0.9788 |
    | Lesion-level Sensitivity | Not explicitly stated (implied high) | 99% |
    | Dice | Not explicitly stated (implied high) | ≥ 0.68 (overall for segmentation) |
    | Jaccard Index | Not explicitly stated (implied high) | ≥ 0.62 (overall for segmentation) |
    | Periapical Radiolucency | | |
    | wAFROC FOM | > 0.71 | 0.8113 |
    | Lesion-level Sensitivity | Not explicitly stated (implied high) | 82% |
    | Dice | Not explicitly stated (implied high) | ≥ 0.68 (overall for segmentation) |
    | Jaccard Index | Not explicitly stated (implied high) | ≥ 0.62 (overall for segmentation) |
    | Caries | | |
    | wAFROC FOM | > 0.70 | 0.7211 |
    | Lesion-level Sensitivity | Not explicitly stated (implied high) | 77% |
    | Dice | Not explicitly stated (implied high) | ≥ 0.68 (overall for segmentation) |
    | Jaccard Index | Not explicitly stated (implied high) | ≥ 0.62 (overall for segmentation) |
    | General (Across all features) | | |
    | Statistical Significance (p-value) | < 0.0001 (implied for exceeding thresholds) | < 0.0001 (for all wAFROC values) |
    | Segmentation (Dice & Jaccard) | Not explicitly stated (implied high) | Dice ≥ 0.68, Jaccard ≥ 0.62 |
    | MRMC (Improvement with AI) | | |
    | Periapical Radiolucency (Lesion wAFROC difference) | Statistically significant increase | 0.0705 (p < 0.00001) |
    | Periapical Radiolucency (Lesion Sensitivity gain) | Statistically significant increase | 0.2045 (p < 0.00001) |
    | Caries (Lesion wAFROC difference) | Statistically significant increase | 0.0306 (p = 0.0195) |
    | Caries (Lesion Sensitivity gain) | Statistically significant increase | 0.1169 |
    | Impacted Teeth (Lesion wAFROC difference) | Statistically significant increase | 0.0093 (p = 0.0326) |
    | Impacted Teeth (Lesion Sensitivity gain) | Statistically significant increase | 0.0192 |
    | FPPI or Specificity | Not increasing/reducing significantly | Stable FPPI, high specificity (≥ 0.97) maintained |
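    The Dice and Jaccard segmentation metrics reported above (Dice ≥ 0.68, Jaccard ≥ 0.62) measure overlap between a predicted mask and the ground-truth mask. A generic sketch of their computation, assuming masks are represented as sets of (x, y) pixel coordinates (this is not the study's actual scoring code):

```python
def dice_jaccard(pred, truth):
    """Overlap between two binary masks given as sets of (x, y) pixels.
    Returns (Dice, Jaccard); both defined as 1.0 when both masks are empty."""
    union = len(pred | truth)
    if union == 0:
        return 1.0, 1.0
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))  # 2|A∩B| / (|A| + |B|)
    jaccard = inter / union                      # |A∩B| / |A∪B|
    return dice, jaccard
```

    The two metrics are monotonically related (Jaccard = Dice / (2 − Dice)), so a threshold on one implies a threshold on the other.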

    2. Sample size used for the test set and the data provenance

    • Sample Size (Test Set): An "enriched regionally balanced image set of 795 images" was used for the clinical evaluation.
    • Data Provenance:
      • Country of Origin: Not explicitly stated for each image, but geographically diverse, described "with respect to the United States" and including specific regions (Northwest, Northeast, South, West, Midwest).
      • Retrospective/Prospective: The study is described as "retrospective" due to "non-patient-contact nature."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: Four board-certified dentists.
    • Qualifications of Experts: Board-certified dentists, each with "a minimum of five years practice experience," serving as ground-truth readers.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Adjudication Method: Consensus based on agreement among at least three of the four expert readers (a 3-of-4 consensus method).
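    The 3-of-4 consensus rule amounts to a simple vote count over the readers' independent annotations. A minimal sketch, with `adjudicate` and its finding identifiers being hypothetical names for illustration:

```python
from collections import Counter

def adjudicate(findings_by_reader, min_agree=3):
    """Consensus ground truth: keep a finding only if at least
    `min_agree` of the readers independently marked it present.

    findings_by_reader: one set of finding IDs per reader.
    """
    counts = Counter(f for reader in findings_by_reader for f in reader)
    return {finding for finding, votes in counts.items() if votes >= min_agree}
```

    With four readers and `min_agree=3`, a finding annotated by only one or two readers is excluded from the reference standard.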

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improved with AI versus without AI assistance

    • Yes, a fully-crossed MRMC study was done.
    • Effect Size of Improvement (AI-aided vs. unaided reading):
      • Periapical Radiolucency:
        • Lesion level wAFROC difference: 0.0705 (95% CI: 0.04–0.10)
        • Image level wAFROC difference: 0.0715 (95% CI: 0.07–0.07)
        • Lesion-level sensitivity gain: 0.2045 (95% CI: 0.17–0.24)
      • Caries:
        • Lesion level wAFROC difference: 0.0306 (95% CI: 0.00–0.06)
        • Image level wAFROC difference: 0.0176 (95% CI: 0.02–0.02)
        • Lesion-level sensitivity gain: 0.1169 (95% CI: 0.08–0.15)
      • Impacted Teeth:
        • Lesion level wAFROC difference: 0.0093 (95% CI: 0.00–0.02)
        • Image level wAFROC difference: 0.0715 (95% CI: 0.07–0.07)
        • Lesion-level sensitivity gain: 0.0192 (95% CI: 0.01–0.03)

    6. If standalone performance (i.e., algorithm only, without human-in-the-loop) was evaluated

    • Yes, a standalone clinical study was done. The results are discussed in the "Standalone Testing" section, demonstrating the algorithm's performance independent of human readers.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Expert consensus. Specifically, "consensus ground truth established by expert dental radiologists" using agreement among the four board-certified dentists.

    8. The sample size for the training set

    • The document does not provide the sample size for the training set. It only describes the test set size.

    9. How the ground truth for the training set was established

    • The document does not specify how the ground truth for the training set was established. It only details the ground truth establishment process for the test set.