K Number
K242522
Manufacturer
Date Cleared
2025-01-16

(146 days)

Product Code
Regulation Number
892.2070
Panel
RA
Reference & Predicate Devices
Intended Use

Second Opinion® CC is a computer aided detection ("CADe") software to aid dentists in the detection of caries by drawing bounding polygons to highlight the suspected region of interest.

It is designed to aid dental health professionals to review bitewing and periapical radiographs of permanent teeth in patients 19 years of age or older as a second reader.

Device Description

Second Opinion CC (Caries Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of caries on bitewing and periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.

Second Opinion CC consists of three parts:

  • Application Programming Interface ("API")
  • Machine Learning Modules ("ML Modules")
  • Client User Interface ("Client")

The processing sequence for an image is as follows:

  1. Images are sent for processing via the API.
  2. The API routes images to the ML modules.
  3. The ML modules produce detection output.
  4. The UI renders the detection output.

The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.

Second Opinion CC uses machine learning to detect caries. Images received by the ML modules are processed to yield detections, which are represented as metadata. The final output is made accessible to the API, which sends it to the UI for visualization. Detected carious lesions are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected lesions that may require clinical review. The clinician can toggle the overlays to highlight a potential condition for viewing, and can edit the detections as needed to align with their diagnosis.
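The API-to-ML-to-UI flow described above can be sketched in a few lines. This is an illustrative model only, assuming hypothetical names (`Detection`, `AnalysisResult`, `run_pipeline`, `fake_ml_module`) and a hypothetical metadata shape; the submission does not describe the actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One suspected carious lesion, as metadata the API relays to the UI."""
    tooth: str                       # tooth identifier (format is an assumption)
    polygon: list[tuple[int, int]]   # contour vertices in image pixel coordinates
    confidence: float                # model score in [0, 1]

@dataclass
class AnalysisResult:
    image_id: str
    detections: list[Detection] = field(default_factory=list)

def run_pipeline(image_id: str, image_bytes: bytes, ml_module) -> AnalysisResult:
    """API role: route the image to an ML module, collect detection metadata."""
    detections = ml_module(image_bytes)          # ML modules produce detections
    return AnalysisResult(image_id, detections)  # metadata passed on for rendering

# A stand-in ML module that "detects" one lesion with a triangular contour.
def fake_ml_module(image_bytes: bytes) -> list[Detection]:
    return [Detection(tooth="46", polygon=[(10, 10), (30, 12), (20, 28)],
                      confidence=0.91)]

result = run_pipeline("bitewing-001", b"\x00" * 16, fake_ml_module)
```

The UI layer would then draw each `polygon` as an editable overlay, matching the toggle-and-edit workflow described above.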

AI/ML Overview

The following summarizes the acceptance criteria and the study demonstrating that the device meets them, based on the provided FDA 510(k) summary for Pearl Inc.'s "Second Opinion CC" device:

1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are implied by the non-inferiority study design. The primary performance metric was the Weighted Alternative Free-Response Receiver Operating Characteristic (wAFROC) Figure of Merit (FOM). The acceptance criterion for the Dice coefficient was explicitly stated.

| Metric | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| **Primary Endpoint** | | |
| wAFROC-FOM difference (Second Opinion CC vs. Second Opinion) | Lower bound of 95% CI for difference > -0.05 (non-inferiority to Second Opinion) | 0.26 (95% CI: 0.22, 0.31); lower bound (0.22) exceeds -0.05, demonstrating non-inferiority |
| **Secondary Endpoints / Other Metrics** | | |
| wAFROC-FOM for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported as a measure of efficacy | 0.81 (95% CI: 0.77, 0.85) |
| HR-ROC-AUC for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported as supporting non-inferiority | 0.88 (95% CI: 0.85, 0.91) |
| Lesion-level sensitivity for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported | 90% (95% CI: 87%, 94%) |
| Average false positives per image for Second Opinion CC | Not explicitly stated as an acceptance criterion; reported | 1.34 (95% CI: 1.20, 1.48) |
| Dice coefficient for true positives | Least squares (LS) mean (95% CI) > 0.70 (pre-defined, clinically justified acceptance criterion) | LS mean = 0.73 (95% CI: 0.71, 0.75); lower bound (0.71) exceeds 0.70 |
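The primary-endpoint decision rule reduces to a simple comparison: non-inferiority is declared when the lower bound of the 95% CI for the FOM difference (subject device minus predicate) exceeds the pre-specified margin of -0.05. A minimal sketch of that check, using the values reported above (the function name `non_inferior` is an assumption):

```python
def non_inferior(ci_lower: float, margin: float = -0.05) -> bool:
    """Non-inferiority holds when the lower bound of the 95% CI for the
    wAFROC-FOM difference lies strictly above the margin."""
    return ci_lower > margin

# Reported result: difference 0.26, 95% CI (0.22, 0.31)
print(non_inferior(0.22))   # lower bound 0.22 > -0.05 → True
```

Note that the reported CI not only clears the margin but lies entirely above zero, so the data would also support a superiority claim on this metric, though the study was designed and analyzed as a non-inferiority comparison.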

2. Sample Size Used for the Test Set and Data Provenance

  • Test Set Sample Size: 500 images
  • Data Provenance: The dataset is characterized by a diverse distribution, including:
    • Geographical Regions (within the US): Northwest (15.2%), Southwest (17.8%), South (24.6%), East (22.6%), Midwest (19.6%), and 1 unknown origin.
    • Gender Distribution: Females (19.0%), Males (25.0%), Other genders (7.6%), and Unknown gender (48.4%).
    • Age: 12-18 (1.8%), 18-75 (46.6%), 75+ (2.6%), and Unknown age (49.0%).
    • Imaging Devices: Various models from Carestream-Trophy, DEXIS, and KaVo Dental Technologies, along with unknown devices.
    • Image Types: 249 periapical radiographs (49.8%) and 251 bitewing radiographs (50.2%).
  • Retrospective/Prospective: The document does not explicitly state whether the data was collected retrospectively or prospectively. However, the diverse and "characterized" distribution of various demographic and technical factors, along with the specific mention of "a diverse distribution of radiographs," often suggests a retrospective collection from existing databases for a test set.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Number of Experts: Four expert readers.
  • Qualifications of Experts: The document refers to them as "expert readers" with no further specific details on their qualifications (e.g., years of experience, board certification, specialty).

4. Adjudication Method for the Test Set

  • Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers (3/4 or 4/4 agreement).
    • "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
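The vote rule quoted above can be expressed directly. This sketch covers only the at-least-three-of-four threshold; how overlapping reader markings were matched into a single candidate finding is not described in the summary, and the function name `consensus_label` is an assumption:

```python
def consensus_label(reader_marks: list[bool], threshold: int = 3) -> bool:
    """A candidate finding enters the ground truth only when at least
    `threshold` of the four expert readers marked it."""
    return sum(reader_marks) >= threshold

print(consensus_label([True, True, True, False]))   # 3/4 agree → True (in GT)
print(consensus_label([True, True, False, False]))  # 2/4 agree → False (excluded)
```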

5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

  • MRMC Study: No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done for the subject device (Second Opinion CC) in this submission.
    • The document explicitly states: "Pearl demonstrated the benefits of the device through a non-inferiority standalone clinical study."
    • It also clarifies: "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
    • It mentions that the original clearance (K210365) for the predicate device (Second Opinion) was "based on standalone and MRMC studies," but this current submission for Second Opinion CC did not include one.

6. If a Standalone (i.e. algorithm only without human-in-the loop performance) was done

  • Standalone Study: Yes.
    • "Clinical evaluation of Second Opinion CC was performed to validate the efficacy of the system in detecting potential caries lesions using polygons instead of bounding boxes on intraoral radiographs."
    • "Second Opinion CC was clinically tested as a standalone device in comparison to the predicate device, Second Opinion, using a non-inferiority study."
    • The results for each image were analyzed for "Non-Lesion Localization (NL)" and "Lesion Localization (LL)" directly by the algorithm's output.

7. The Type of Ground Truth Used

  • Type of Ground Truth: Expert consensus.
    • "The ground truth (GT) was established using the consensus approach based on agreement among at least three out of four expert readers."
    • Each GT expert independently marked areas using the smallest possible polygonal contour to encompass the entire region identified.
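Agreement between a device polygon and a ground-truth polygon was scored with the Dice coefficient (Section 1). For two regions A and B, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on rasterized regions represented as sets of pixel coordinates (the representation is an assumption; the study's exact rasterization is not described):

```python
def dice(mask_a: set[tuple[int, int]], mask_b: set[tuple[int, int]]) -> float:
    """Dice similarity between two regions given as sets of pixel coordinates:
    2|A ∩ B| / (|A| + |B|). Two empty regions are treated as identical."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

a = {(x, y) for x in range(4) for y in range(4)}      # 4x4 square, 16 px
b = {(x, y) for x in range(2, 6) for y in range(4)}   # shifted square, 8 px overlap
print(dice(a, b))  # 2*8 / (16+16) = 0.5
```

A per-lesion score of 1.0 means the contours coincide exactly; the study's criterion required the LS mean Dice over true positives to exceed 0.70.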

8. The Sample Size for the Training Set

  • The document does not provide the sample size for the training set. It only describes the test set.

9. How the Ground Truth for the Training Set was Established

  • The document does not describe how the ground truth for the training set was established. It only details the ground truth establishment for the test set.

§ 892.2070 Medical image analyzer.

(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Design verification and validation must include:
(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.
(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.
(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.
(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the intended reading protocol.
(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.
(vii) Device operating instructions.
(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.