
510(k) Data Aggregation

    K Number
    K242437
    Device Name
    Smile Dx®
    Manufacturer
    Date Cleared
    2025-05-14

    (271 days)

    Product Code
    Regulation Number
    892.2070
    Reference & Predicate Devices
    Reference Devices:

    K210365, K210187

    Intended Use

    Smile Dx® is a computer-assisted detection (CADe) software designed to aid dentists in the review of digital files of bitewing and periapical radiographs of permanent teeth. It is intended to aid in the detection and segmentation of suspected dental findings which include: caries, periapical radiolucencies (PARL), restorations, and dental anatomy.

    Smile Dx® is also intended to aid dentists in the measurement (in millimeter and percentage measurements) of mesial and distal bone levels associated with each tooth.

    The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Smile Dx® supports both digital and phosphor sensors.

    Device Description

    Smile Dx® is a computer-assisted detection (CADe) device indicated for use by licensed dentists as an aid in their assessment of bitewing and periapical radiographs of secondary dentition in adult patients. Smile Dx® utilizes machine learning to produce annotations for the following findings:

    • Caries
    • Periapical radiolucencies
    • Bone level measurements (mesial and distal)
    • Normal anatomy (enamel, dentin, pulp, and bone)
    • Restorations
    AI/ML Overview

    The provided FDA 510(k) Clearance Letter for Smile Dx® outlines the device's acceptance criteria and the studies conducted to prove it meets those criteria.

    Acceptance Criteria and Device Performance

    The acceptance criteria are implicitly defined by the performance metrics reported in the "Performance Testing" section. The device's performance is reported in terms of various metrics for both standalone and human-in-the-loop (MRMC) evaluations.

    Here's a table summarizing the reported device performance against the implied acceptance criteria:

    Table 1: Acceptance Criteria and Reported Device Performance

| Feature / Metric | Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- | --- |
| Standalone testing: | | |
| Caries detection | High Dice, sensitivity | Dice: 0.74 [0.72, 0.76]; sensitivity (overall): 88.3% [83.5%, 92.6%] |
| Periapical radiolucency (PARL) detection | High Dice, sensitivity | Dice: 0.77 [0.74, 0.80]; sensitivity: 86.1% [80.2%, 91.9%] |
| Bone level detection (bitewing) | High sensitivity and specificity; low MAE | Sensitivity: 95.5% [94.3%, 96.7%]; specificity: 94.0% [91.1%, 96.6%]; MAE: 0.30 mm [0.29 mm, 0.32 mm] |
| Bone level detection (periapical) | High sensitivity and specificity; low MAE (percentage) | Sensitivity: 87.3% [85.4%, 89.2%]; specificity: 92.1% [89.9%, 94.1%]; MAE: 2.6% [2.4%, 2.8%] |
| Normal anatomy detection | High Dice, sensitivity, specificity | Dice: 0.84 [0.83, 0.85]; sensitivity (pixel-level): 86.1% [85.4%, 86.8%]; sensitivity (contour-level): 95.2% [94.5%, 96.0%]; specificity (contour-level): 93.5% [91.6%, 95.8%] |
| MRMC clinical evaluation (reader improvement): | | |
| Caries detection (wAFROC Δθ) | Statistically significant improvement | +0.127 [0.081, 0.172] |
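The Dice, sensitivity, and MAE figures in the table are standard metrics; a minimal sketch of how each is computed on toy data (not the submission's code):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def sensitivity(pred, truth):
    """TP / (TP + FN): fraction of ground-truth positive pixels detected."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn)

def mean_absolute_error(pred_vals, truth_vals):
    """MAE of bone-level measurements, in whatever unit is given (e.g. mm)."""
    return sum(abs(p - t) for p, t in zip(pred_vals, truth_vals)) / len(pred_vals)

# Toy flattened masks: 1 = finding pixel, 0 = background
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 0]
print(dice_coefficient(pred, truth))               # 2*3/(3+4) ≈ 0.857
print(sensitivity(pred, truth))                    # 3/4 = 0.75
print(mean_absolute_error([3.5, 4.0], [3.0, 4.5])) # 0.5
```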

    K Number
    K242600
    Manufacturer
    Date Cleared
    2025-04-11

    (224 days)

    Product Code
    Regulation Number
    892.2070
    Reference & Predicate Devices
    Reference Devices:

    K210365, K231678

    Intended Use

    Second Opinion PC is a computer aided detection ("CADe") software to aid dentists in the detection of periapical radiolucencies by drawing bounding polygons to highlight the suspected region of interest.

    It is designed to aid dental health professionals to review periapical radiographs of permanent teeth in patients 12 years of age or older as a second reader.

    Device Description

    Second Opinion PC (Periapical Radiolucency Contouring) is a radiological, automated, computer-assisted detection (CADe) software intended to aid in the detection of periapical radiolucencies on periapical radiographs using polygonal contours. The device is not intended as a replacement for a complete dentist's review or their clinical judgment which considers other relevant information from the image, patient history, or actual in vivo clinical assessment.

    Second Opinion PC consists of three parts:

    • Application Programming Interface ("API")
    • Machine Learning Modules ("ML Modules")
    • Client User Interface ("Client")

    The processing sequence for an image is as follows:

    1. Images are sent for processing via the API
    2. The API routes images to the ML modules
    3. The ML modules produce detection output
    4. The UI renders the detection output

    The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.

    Second Opinion PC uses machine learning to detect periapical radiolucencies. Images received by the ML modules are processed, yielding detections represented as metadata. The final output is made accessible to the API for sending to the UI for visualization. Detected periapical radiolucencies are displayed as polygonal overlays atop the original radiograph, indicating to the practitioner which teeth contain detected periapical radiolucencies that may require clinical review. The clinician can toggle the overlays on the image to highlight a potential condition for viewing.
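The four-step processing sequence above can be sketched as a minimal pipeline. All class and method names here are hypothetical; the summary does not disclose the actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    tooth: int
    polygon: list        # (x, y) vertices outlining the suspected radiolucency
    confidence: float

class MLModule:
    """Stand-in for the ML modules: image bytes in, detection metadata out."""
    def detect(self, image: bytes) -> list:
        # A real module would run inference here; we return a fixed example.
        return [Detection(tooth=19, polygon=[(10, 12), (18, 12), (14, 20)],
                          confidence=0.91)]

class API:
    """Conduit between the client UI and the ML modules (steps 1-3)."""
    def __init__(self, module: MLModule):
        self.module = module
    def process(self, image: bytes) -> list:
        return self.module.detect(image)   # route the image, return metadata

def render(detections):
    """Step 4: the client overlays each polygon on the radiograph."""
    return [f"tooth {d.tooth}: {len(d.polygon)}-point contour "
            f"({d.confidence:.0%} confidence)" for d in detections]

api = API(MLModule())
print(render(api.process(b"<radiograph bytes>")))
```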

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance Study

    The Pear Inc. "Second Opinion Periapical Radiolucency Contours" (Second Opinion PC) device aims to aid dentists in detecting periapical radiolucencies using polygonal contours, functioning as a second reader. The device's performance was evaluated through a standalone clinical study demonstrating non-inferiority to its predicate device, which used bounding boxes.

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission document primarily focuses on demonstrating non-inferiority to the predicate device rather than explicitly stating pre-defined acceptance criteria with specific thresholds for "passing." However, the implicit acceptance criteria are that the device is non-inferior to its predicate (Second Opinion K210365) in detecting periapical radiolucencies when using polygonal contours.

| Acceptance Criterion (Implicit) | Reported Device Performance (Second Opinion PC) |
| --- | --- |
| Non-inferiority in periapical radiolucency detection accuracy compared to the predicate device (Second Opinion, K210365) using bounding boxes | wAFROC-FOM estimated difference: 0.15 (95% CI: 0.10, 0.21) vs. Second Opinion; the lower bound of the 95% CI (0.10) exceeded -0.05, demonstrating non-inferiority at the 5% significance level |
| Overall detection accuracy (wAFROC-FOM) | 0.85 (95% CI: 0.81, 0.89) |
| Overall detection accuracy (HR-ROC-AUC) | 0.93 (95% CI: 0.90, 0.96) |
| Lesion-level sensitivity | 77% (95% CI: 69%, 84%) |
| Average false positives per image | 0.28 (95% CI: 0.23, 0.33) |
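The non-inferiority argument reduces to checking that the entire 95% CI of the wAFROC-FOM difference lies above the -0.05 margin. A sketch using the reported values (the helper function is illustrative, not from the submission):

```python
def non_inferior(ci_lower: float, margin: float = -0.05) -> bool:
    """Non-inferiority holds if the whole CI sits above the margin,
    i.e. the CI's lower bound exceeds it."""
    return ci_lower > margin

# Reported wAFROC-FOM difference: 0.15, 95% CI (0.10, 0.21)
print(non_inferior(0.10))  # True: 0.10 > -0.05, non-inferiority demonstrated
```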

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 500 unique unannotated periapical radiographs.
    • Data Provenance: The dataset is characterized by a representative distribution across:
      • Geographical Regions (within the United States):
        • Northwest: 116 radiographs (23.2%)
        • Southwest: 46 radiographs (9.2%)
        • South: 141 radiographs (28.2%)
        • East: 84 radiographs (16.8%)
        • Midwest: 113 radiographs (22.6%)
      • Patient Cohorts (Age Distribution):
        • 12-18 years: 4 radiographs (0.8%)
        • 18-75 years: 209 radiographs (41.8%)
        • 75+ years: 8 radiographs (1.6%)
        • Unknown age: 279 radiographs (55.8%)
      • Imaging Devices: A variety of devices were used, including Carestream-Trophy (RVG6100, RVG5200, RVG6200), DEXIS (DEXIS, DEXIS Platinum, KaVo Dental Technologies DEXIS Titanium), Kodak-Trophy KodakRVG6100, XDR EV71JU213, and unknown devices.
    • Retrospective or Prospective: Not explicitly stated, but the description of "representative distribution" and diverse origins suggests a retrospective collection of existing images.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Four expert readers.
    • Qualifications of Experts: Not explicitly stated beyond "expert readers."

    4. Adjudication Method for the Test Set

    • Adjudication Method: Consensus approach based on agreement among at least three out of four expert readers (3+1 adjudication).
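The 3-of-4 consensus rule described above can be expressed in a few lines; this is a sketch in which each reader's mark for a single candidate lesion is modeled as a boolean:

```python
def consensus(reader_marks, threshold=3):
    """A candidate lesion enters the ground truth only if at least
    `threshold` of the readers independently marked it."""
    return sum(reader_marks) >= threshold

print(consensus([True, True, True, False]))   # True: 3 of 4 readers agree
print(consensus([True, True, False, False]))  # False: only 2 of 4 agree
```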

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study was not performed for the subject device (Second Opinion PC).
    • Effect Size of Human Readers with AI vs. without AI: Not applicable for this specific study of Second Opinion PC. The predicate device (Second Opinion K210365) did undergo MRMC studies, demonstrating statistically significant improvement in aided reader performance for that device. The current study focuses on the standalone non-inferiority of Second Opinion PC compared to its predicate.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Was a standalone study done? Yes, a standalone clinical study was performed. The study compared the performance of Second Opinion PC (polygonal localization) directly with Second Opinion (bounding box localization) in detecting periapical radiolucencies.
    • Metrics: wAFROC-FOM and HR-ROC-AUC were used.
    • Key Finding: Second Opinion PC was found to be non-inferior to Second Opinion.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus. The ground truth (GT) was established by the consensus of at least three out of four expert readers who independently marked periapical radiolucencies using the smallest possible polygonal contour.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: Not explicitly mentioned in the provided text. The document focuses on the clinical validation (test set).

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not explicitly mentioned in the provided text. It is implied that the device was "developed using machine learning techniques" from "open-source models using supervised machine learning," which typically requires a labeled training set, but specifics on its establishment are absent.

    K Number
    K240003
    Manufacturer
    Date Cleared
    2024-08-30

    (241 days)

    Product Code
    Regulation Number
    892.2070
    Reference & Predicate Devices
    Reference Devices:

    K210365, K230144

    Intended Use

    VELMENI for DENTISTS (V4D) is a concurrent-read, computer-assisted detection software intended to assist dentists in the clinical detection of dental caries, fillings/restorations, fixed prostheses, and implants in digital bitewing, periapical, and panoramic radiographs of permanent teeth in patients 15 years of age or older. This device provides additional information for dentists in examining radiographs of patients' teeth. This device is not intended as a replacement for a complete examination by the dentist or their clinical judgment that considers other relevant information from the image, patient history, or actual in vivo clinical assessment. Final diagnoses and patient treatment plans are the responsibility of the dentist.

    Device Description

    The V4D software medical device comprises the following key components:

    • Web Application Interface delivers front-end capabilities and is the point of interaction between the device and the user.
    • Machine Learning (ML) Engine delivers V4D's core ML capabilities through the radiograph type classifier, condition detection module, tooth numbering module, and merging module.
    • Backend API allows interaction between all the components, as defined in this section, in order to fulfill the user's requests on the web application interface.
    • Queue receives and stores messages from the Backend API to send to the AI-Worker.
    • AI-Worker accepts radiograph analysis requests from the Backend API via the Queue, passes grayscale radiographs to the ML Engine in the supported formats (JPEG and PNG), and returns the ML analysis results to the Backend API.
    • Database and File Storage store critical information related to the application, including user data, patient profiles, analysis results, radiographs, and associated data.

    The following non-medical interfaces are also available with VELMENI for DENTISTS (V4D):

    • VELMENI BRIDGE (VB) acts as a conduit enabling data and information exchange between the Backend API and third-party software such as patient management or imaging software.
    • Rejection Review (RR) module captures the ML-detected conditions rejected by dental professionals to aid in future product development and to be evaluated in accordance with VELMENI's post-market surveillance procedure.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for "Velmeni for Dentists (V4D)":

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present "acceptance criteria" in a tabular format with predefined thresholds. Instead, it reports the performance metrics from both standalone and clinical (MRMC) studies. The acceptance criteria are implicitly met if the reported performance demonstrates safety, effectiveness, and substantial equivalence to the predicate device.

    Implicit Acceptance Criteria & Reported Device Performance:

| Metric / Feature | Acceptance Criteria (Implicit) | Reported Device Performance (VELMENI for DENTISTS (V4D)) |
| --- | --- | --- |
| Standalone performance | Demonstrate objective performance (sensitivity, specificity, Dice coefficient) for the indicated features across supported image types. | Lesion-level sensitivity: caries 72.8% (bitewing), 70.6% (periapical), 68.3% (panoramic); fixed prosthesis 92.1%, 81.0%, 74.5%; implant 81.1%, 94.5%, 79.6%; restoration 88.1%, 76.8%, 72.6%. Mean false positives per image: caries 0.24-0.33; fixed prosthesis 0.01-0.06; implant 0.00-0.01; restoration 0.10-0.62. Mean Dice score: caries 77.07-82.77%; fixed prosthesis 91.47-97.09%; implant 88.67-95.47%; restoration 81.49-90.45%. |
| Clinical performance (MRMC) | Demonstrate that human readers (dentists) improve their diagnostic performance (e.g., sensitivity, wAFROC AUC) when assisted by the AI device, compared to working unassisted, without an unacceptable increase in false positives or decrease in specificity. | wAFROC AUC (aided vs. unaided): bitewing 0.848 vs. 0.794 (diff +0.054); periapical 0.814 vs. 0.721 (diff +0.093); panoramic 0.615 vs. 0.579 (diff +0.036). Significant improvements in lesion-level sensitivity, case-level sensitivity, and/or reductions in false positives per image (details in the study section below). The study states: "The V4D software demonstrated clear benefit for bitewing and periapical views in all features. The panoramic view demonstrated benefit..." |
| Safety & effectiveness | The device must be as safe and effective as the predicate device, with any differences in technological characteristics not raising new or different questions of safety or effectiveness. | "The results of the stand-alone and MRMC reader studies demonstrate that the performance of V4D is as safe, as effective, and performs equivalent to that of the predicate device, and VELMENI has demonstrated that the proposed device complies with applicable Special Controls for Medical Image Analyzers. Therefore, VELMENI for DENTISTS (V4D) can be found substantially equivalent to the predicate device." |
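The lesion-level sensitivity and false-positives-per-image figures above are simple ratios over per-image tallies. A sketch with hypothetical counts (not the submission's data):

```python
def lesion_level_metrics(per_image):
    """per_image: list of (true_lesions, detected_true, false_detections)
    tuples, one per radiograph. Returns (lesion-level sensitivity,
    mean false positives per image)."""
    total_lesions = sum(t for t, _, _ in per_image)
    detected      = sum(d for _, d, _ in per_image)
    mean_fps      = sum(f for _, _, f in per_image) / len(per_image)
    return detected / total_lesions, mean_fps

# Hypothetical tallies for four bitewing images
sens, fps = lesion_level_metrics([(3, 2, 0), (1, 1, 1), (2, 2, 0), (2, 1, 0)])
print(f"sensitivity {sens:.1%}, {fps:.2f} FPs/image")  # sensitivity 75.0%, 0.25 FPs/image
```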

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Test Set Sample Sizes:

      • Standalone Performance:
        • 600 Bitewing images
        • 597 Periapical images
        • 600 Panoramic images
      • Clinical Performance (MRMC):
        • 600 Bitewing images (total caries 315)
        • 597 Periapical images (total caries 271)
        • 600 Panoramic images (total caries 853)
    • Data Provenance: The document states that "Subgroup analyses were performed among types of caries (primary and secondary caries; for caries-level sensitivity only), sex, age category, sensor, and study site." This suggests the data was collected from multiple study sites, implying a degree of diversity in the source of the images. However, the specific country of origin of the data is not explicitly stated. The study for initial data development seems to be centered around US licensed dentists and oral radiologists, suggesting US-based data collection, but this is not definitively stated for the entire dataset. The images were collected from various sensor manufacturers (Dexis, Dexis platinum, Kavo, Carestream, Planmeca). The document does not explicitly state whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts:
      • Ground truth for both standalone and clinical performance studies was established by three US licensed dentists.
      • Non-consensus labels were adjudicated by one oral radiologist.
    • Qualifications of Experts:
      • US licensed dentists.
      • Oral radiologist.
      • No further details on their experience (e.g., years of experience) are provided in this summary.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Consensus with Adjudication.
      • Ground truth was initially established by "consensus labels of three US licensed dentists."
      • "Non-consensus labels were adjudicated by an oral radiologist." This implies a "3+1" or similar method, where the three initial readers attempt to reach consensus, and any disagreements are resolved by a fourth, independent expert (the oral radiologist).

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    • Yes, an MRMC comparative effectiveness study was done. It was described as a "multi-reader fully crossed reader improvement study."

    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance:
      The effect size is presented as the difference in various metrics (wAFROC AUC, lesion-level sensitivity, case-level sensitivity) between aided and unaided modes.

      • wAFROC AUC:

        • Bitewing: +0.054 (Aided 0.848 vs Unaided 0.794)
        • Periapical: +0.093 (Aided 0.814 vs Unaided 0.721)
        • Panoramic: +0.036 (Aided 0.615 vs Unaided 0.579)
      • Lesion-Level Sensitivity Improvement (Aided vs. Unaided):

        • Bitewing:
          • Caries: +12.8% (80.3% vs 67.5%)
          • Fixed Prosthesis: +5.5% (95.7% vs 90.2%)
          • Implant: +32.0% (93.2% vs 61.3%)
          • Restoration: +16.7% (90.8% vs 74.1%)
        • Periapical:
          • Caries: +24.8% (73.4% vs 48.7%)
          • Fixed Prosthesis: +11.1% (91.1% vs 80.0%)
          • Implant: +16.4% (95.9% vs 79.5%)
          • Restoration: +10.3% (90.6% vs 80.3%)
        • Panoramic:
          • Caries: +6.5% (27.2% vs 15.1%)
          • Fixed Prosthesis: +8.2% (88.8% vs 80.5%)
          • Implant: +8.7% (88.3% vs 79.6%)
          • Restoration: +15.6% (73.0% vs 57.4%)
      • The study design also included measures of false positives per image (Mean FPs per Image) and case-level specificity to evaluate potential adverse effects of aid. While some specificities slightly decreased (e.g., Periapical Caries: -10.3%), the document generally concludes "The V4D software demonstrated clear benefit for bitewing and periapical views in all features. The panoramic view demonstrated benefit though the absolute benefit for caries sensitivity was smaller due to lower overall reader performance. In addition, for the panoramic view, there was a benefit in restoration sensitivity that was somewhat offset by a drop in image-level specificity."
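The effect sizes reported above are plain aided-minus-unaided differences; for example, the bitewing wAFROC AUC figure can be reproduced from the two reported values:

```python
def effect_size(aided: float, unaided: float) -> float:
    """MRMC effect size: aided reader performance minus unaided,
    rounded to the three decimals used in the summary."""
    return round(aided - unaided, 3)

# Bitewing wAFROC AUC from the summary: aided 0.848 vs. unaided 0.794
print(effect_size(0.848, 0.794))  # 0.054
```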

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone performance evaluation was done. It was conducted against the established ground truth. The results are reported in Tables 2 and 3 and include lesion-level sensitivity, case-level sensitivity, false positives per image, case-level specificity, and Dice coefficient for segmentation.

    7. The Type of Ground Truth Used

    • The ground truth used for both standalone and clinical studies was based on expert consensus with adjudication. Specifically, it was established by "consensus labels of three US licensed dentists, and nonconsensus labels were adjudicated by an oral radiologist."

    8. The Sample Size for the Training Set

    • The document does not provide the sample size for the training set. It only describes the test set and validation processes.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It focuses solely on the ground truth establishment for the test (evaluation) dataset. It's common for training data ground truth to be established through similar expert labeling processes, but this is not mentioned in the provided summary.
