Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K242437
    Device Name
    Smile Dx®
    Manufacturer
    Date Cleared
    2025-05-14

    (271 days)

    Product Code
    Regulation Number
    892.2070
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    K210365, K210187

    Intended Use

    Smile Dx® is a computer-assisted detection (CADe) software designed to aid dentists in the review of digital files of bitewing and periapical radiographs of permanent teeth. It is intended to aid in the detection and segmentation of suspected dental findings which include: caries, periapical radiolucencies (PARL), restorations, and dental anatomy.

    Smile Dx® is also intended to aid dentists in the measurement (in millimeters and as percentages) of mesial and distal bone levels associated with each tooth.

    The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Smile Dx® supports both digital and phosphor sensors.

    Device Description

    Smile Dx® is a computer-assisted detection (CADe) device indicated for use by licensed dentists as an aid in their assessment of bitewing and periapical radiographs of secondary dentition in adult patients. Smile Dx® utilizes machine learning to produce annotations for the following findings:

    • Caries
    • Periapical radiolucencies
    • Bone level measurements (mesial and distal)
    • Normal anatomy (enamel, dentin, pulp, and bone)
    • Restorations
    AI/ML Overview

    The provided FDA 510(k) Clearance Letter for Smile Dx® outlines the device's acceptance criteria and the studies conducted to prove it meets those criteria.

    Acceptance Criteria and Device Performance

    The acceptance criteria are implicitly defined by the performance metrics reported in the "Performance Testing" section. The device's performance is reported in terms of various metrics for both standalone and human-in-the-loop (MRMC) evaluations.

    Here's a table summarizing the reported device performance against the implied acceptance criteria:

    Table 1: Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | **Standalone Testing** | | |
    | Caries Detection | High Dice, Sensitivity | Dice: 0.74 [0.72, 0.76]; Sensitivity (overall): 88.3% [83.5%, 92.6%] |
    | Periapical Radiolucency (PARL) Detection | High Dice, Sensitivity | Dice: 0.77 [0.74, 0.80]; Sensitivity: 86.1% [80.2%, 91.9%] |
    | Bone Level Detection (Bitewing) | High Sensitivity, Specificity; Low MAE | Sensitivity: 95.5% [94.3%, 96.7%]; Specificity: 94.0% [91.1%, 96.6%]; MAE: 0.30 mm [0.29 mm, 0.32 mm] |
    | Bone Level Detection (Periapical) | High Sensitivity, Specificity; Low MAE (percentage) | Sensitivity: 87.3% [85.4%, 89.2%]; Specificity: 92.1% [89.9%, 94.1%]; MAE: 2.6% [2.4%, 2.8%] |
    | Normal Anatomy Detection | High Dice, Sensitivity, Specificity | Dice: 0.84 [0.83, 0.85]; Sensitivity (pixel-level): 86.1% [85.4%, 86.8%]; Sensitivity (contour-level): 95.2% [94.5%, 96.0%]; Specificity (contour-level): 93.5% [91.6%, 95.8%] |
    | Restorations Detection | High Dice, Sensitivity, Specificity | Dice: 0.87 [0.85, 0.90]; Sensitivity (pixel-level): 83.1% [80.3%, 86.4%]; Sensitivity (contour-level): 90.9% [88.2%, 93.9%]; Specificity (contour-level): 99.6% [99.3%, 99.8%] |
    | **MRMC Clinical Evaluation - Reader Improvement** | | |
    | Caries Detection (wAFROC Δθ) | Statistically significant improvement | +0.127 [0.081, 0.172] (p < 0.001) |
    | PARL Detection (wAFROC Δθ) | Statistically significant improvement | +0.098 [0.061, 0.135] (p < 0.001) |
    | Caries Detection (Sensitivity Improvement) | Increased sensitivity with device assistance | 19.6% [12.8%, 26.4%] increase (from 64.3% to 83.9%) |
    | PARL Detection (Sensitivity Improvement) | Increased sensitivity with device assistance | 19.1% [13.6%, 24.7%] increase (from 70.7% to 89.8%) |
    | Caries Detection (Specificity Improvement) | Maintained or improved specificity with device assistance | 16.7% [13.5%, 19.9%] increase (from 73.6% to 90.2%) |
    | PARL Detection (Specificity Improvement) | Maintained or improved specificity with device assistance | 4.7% [3.0%, 6.4%] increase (from 92.6% to 97.3%) |
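The Dice scores in the table above are the standard similarity measure between a predicted and a reference segmentation mask. A minimal NumPy sketch of the computation (the example masks are illustrative, not the study's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: two overlapping 4x4 masks
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True
print(round(dice_score(pred, gt), 2))  # → 0.8
```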

    Study Details for Device Performance Proof:

    1. Sample Sizes for the Test Set and Data Provenance

    • Standalone Testing:
      • Caries and Periapical Radiolucency Detection: 867 cases.
      • Bone Level Detection and Bone Loss Measurement: 352 cases.
      • Normal Anatomy and Restorations: 200 cases.
    • MRMC Clinical Evaluation: 352 cases.
    • Data Provenance: All test sets were collected from "multiple U.S. sites." The data is retrospective, as it's used in a "retrospective study" for the MRMC evaluation. Sub-group analysis also included "imaging hardware" and "patient demographics (i.e., age, sex, race)," indicating diversity in data.

    2. Number of Experts and Qualifications for Ground Truth

    • Standalone Testing (Implicit): Not explicitly stated how ground truth for standalone testing was established, but it is likely derived from expert consensus, similar to the MRMC study.
    • MRMC Clinical Evaluation: Ground truth was established by the "consensus labels of four US licensed dentists."

    3. Adjudication Method for the Test Set

    • MRMC Clinical Evaluation: The ground truth for the MRMC study was established by the "consensus labels of four US licensed dentists." This implies a form of consensus adjudication, likely where all four experts reviewed and reached agreement on the findings. The specific method (e.g., majority vote, 2+1, 3+1) is not explicitly detailed beyond "consensus labels." For standalone testing, the adjudication method for ground truth is not specified.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance:
      • Caries Detection:
        • wAFROC Δθ: +0.127 [0.081, 0.172] (p < 0.001)
        • Sensitivity Improvement: 19.6% increase (from 64.3% without device to 83.9% with device).
        • Specificity Improvement: 16.7% increase (from 73.6% without device to 90.2% with device).
      • Periapical Radiolucency (PARL) Detection:
        • wAFROC Δθ: +0.098 [0.061, 0.135] (p < 0.001)
        • Sensitivity Improvement: 19.1% increase (from 70.7% without device to 89.8% with device).
        • Specificity Improvement: 4.7% increase (from 92.6% without device to 97.3% with device).
    • The study design was a "fully-crossed, multiple-reader multiple-case (MRMC) evaluation method" with "at least 13 US licensed dentists (Smile Dx® had 14 readers)." Half of the data set contained unannotated images, and the second half contained radiographs that had been processed through the CADe device. Radiographs were presented to readers in alternating groups throughout two different sessions, separated by a washout period.

    5. Standalone Performance Study (Algorithm Only)

    • Yes, a standalone performance study was done.
    • It evaluated the algorithm's performance for:
      • Caries and Periapical Radiolucency Detection (Dice, Sensitivity)
      • Bone Level Detection and Bone Loss Measurement (Sensitivity, Specificity, Mean Absolute Error)
      • Normal Anatomy and Restorations Detection (Dice, Pixel-level Sensitivity, Contour-level Sensitivity, Contour-level Specificity)
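The standalone metrics listed above reduce to simple formulas over true/false detections and paired measurements. A minimal Python sketch (the counts and measurements are illustrative, not the study's data):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true findings that were detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of negatives correctly left unmarked."""
    return tn / (tn + fp)

def mean_absolute_error(predicted, reference) -> float:
    """MAE between paired measurements, e.g. bone levels in mm."""
    assert len(predicted) == len(reference)
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)

# Illustrative values only
print(round(sensitivity(tp=191, fn=9), 3))   # → 0.955
print(round(specificity(tn=94, fp=6), 3))    # → 0.94
print(round(mean_absolute_error([3.1, 2.8, 4.0], [3.0, 3.0, 4.2]), 2))  # → 0.17
```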

    6. Type of Ground Truth Used

    • Explicitly for MRMC Study: "Consensus labels of four US licensed dentists." This indicates expert consensus was used for ground truth for the human-in-the-loop evaluation and likely for the standalone evaluation's ground truth as well. There is no mention of pathology or outcomes data.

    7. Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions the test set sizes.

    8. How Ground Truth for Training Set Was Established

    • The document does not explicitly state how ground truth for the training set was established. It mentions the model "utilizes machine learning to produce annotations" and "training data" is used, but provides no details on its annotation process. It's highly probable that expert annotation was also used for the training data, similar to the test set, but this is not confirmed in the provided text.

    K Number
    K223296
    Manufacturer
    Date Cleared
    2023-02-06

    (103 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices:

    K210187

    Intended Use

    Videa Perio Assist is a radiological semi-automated image processing software device intended to aid dental professionals in the measurements and visualization of mesial and distal bone levels associated with each tooth from bitewing and periapical radiographs. Measurements are made available as linear distances or relative percentages.

    It should not be used in lieu of a full patient evaluation or relied upon solely to make or confirm a diagnosis. The system is to be used by trained professionals including, but not limited to, dentists and dental hygienists.

    Device Description

    Videa Perio Assist (VPA) software is a cloud-based AI-powered medical device for the automatic measurement of tooth interproximal alveolar bone level in dental radiographs. The device itself is available as an API (Application Programming Interface) behind a firewalled network. The device returns 1) a series of points with connecting lines measuring the mesial and distal alveolar bone levels associated with each tooth 2) this distance expressed in millimeters and/or as a percentage of the root length.

    Videa Perio Assist is accessed by the trained professional through their image viewer. From within the image viewer, the user can upload a radiograph to Videa Perio Assist and then review the results. The device outputs a line connecting the detected points from which the interproximal bone level is calculated.

    The device output will show all applicable measurements from one radiograph regardless of the number of teeth present. If no teeth are present the device outputs a clear indication that there are no identifiable teeth to calculate the interproximal bone level.

    The intended users of Videa Perio Assist are trained professionals such as dentists and dental hygienists.

    The intended patients of Videa Perio Assist are patients 12 years and above with permanent dentition undergoing routine dental visits or suspected of having interproximal bone level concerns. Videa Perio Assist may only be used with patients with permanent dentition present in the radiograph.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided FDA 510(k) summary for Videa Perio Assist:

    1. Table of Acceptance Criteria & Reported Device Performance

    Videa Perio Assist underwent two primary types of testing: Bench Testing (focused on algorithm precision/recall) and Clinical Testing (focused on algorithm sensitivity, specificity, and accuracy for clinical measurements).

    Bench Testing Acceptance Criteria & Performance (Per Tooth Landmark Detection)

    | Metric | Acceptance Criteria (Overall) | VPA Performance (Bitewing) | VPA Performance (Periapical, Overall) | VPA Performance (Periapical, CEJ-ABL Subgroup) |
    |---|---|---|---|---|
    | Recall | > 82% | 94.4% | 91.9% | N/A (not reported for this subgroup) |
    | Precision | > 82% | 84.3% | N/A (not reported overall for periapical) | 79.1% (did not meet criterion) |

    Note: For the periapical CEJ-ABL subgroup, precision was 79.1% and therefore did not meet the > 82% acceptance criterion; the document attributes this to the difficulty of estimating obscured points on overlapping teeth.
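Per-landmark recall and precision as reported above follow the standard definitions, applied after detections have been matched to ground-truth landmarks (the matching rule, e.g. a distance threshold, is not stated in the summary and is an assumption here). A sketch with illustrative counts:

```python
def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth landmarks that were detected."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of detected landmarks that match a ground-truth landmark."""
    return tp / (tp + fp)

# Illustrative counts only, not the study's data
print(round(recall(tp=944, fn=56), 3))      # → 0.944
print(round(precision(tp=843, fp=157), 3))  # → 0.843
```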

    Clinical Testing Acceptance Criteria & Performance (Per Interproximal Bone Level Measurement)

    | Metric | Acceptance Criteria (Overall) | Bitewing | Periapical (All) | Periapical (CEJ→ABL) | Periapical (CEJ→RT) | Periapical (ABL→RT) |
    |---|---|---|---|---|---|---|
    | Sensitivity | > 82% | 92.8% (met) | 88.3% (met) | Met | Met | Met |
    | Specificity | > 81% | 89.4% (met) | 87.0% (met) | Did not meet | Met | Met |
    | Mean Absolute Error | < 1.5 mm | Met | Met | Met | Met | Met |

    Note: The document explicitly states that specificity for the periapical CEJ-ABL subgroup "did not meet acceptance criteria," for the same reason as the bench-study precision shortfall: difficulty with obscured points.

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Bench Testing (Algorithm Standalone Performance):
      • Sample Size: 996 radiographs and 16,131 landmarks.
      • Data Provenance: Not explicitly stated regarding country of origin, but generally, for FDA submissions, data should reflect the US population or be justifiable for generalizability to the US. The document refers to "US licensed dentists" for clinical testing, implying a US context. The data was collected across two phases (retrospective or prospective is not specified, but the context generally suggests retrospective collection for developing and testing an algorithm).
    • Clinical Testing (Human-in-the-loop/Algorithm Assistance Effectiveness):
      • Sample Size: 189 radiographs and over 2,350 lines (measurements).
      • Data Provenance: Not explicitly stated regarding country of origin, though "US licensed dentists" and "US licensed periodontists" are mentioned, suggesting US data. The study was conducted in "two phases." The type of study (retrospective vs. prospective) is not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Bench Testing: "Ground truth labeled across two phases." The specific number or qualifications of the individuals performing this ground truth labeling is not explicitly stated in the provided text.
    • Clinical Testing:
      • Initial Labeling: "US licensed dentists labeled data across two phases." The number of dentists is not specified.
      • Adjudication/Reference Standard Establishment: "two US licensed periodontists adjudicated those labels to establish a reference standard for the study." The specific qualifications (e.g., years of experience) for these periodontists are not detailed beyond being "US licensed periodontists."

    4. Adjudication Method for the Test Set

    • Bench Testing: Adjudication method not explicitly described, only that ground truthing occurred across two phases.
    • Clinical Testing: The ground truth was established by "two US licensed periodontists adjudicat[ing] those labels." This implies a consensus process between the two periodontists, or one reviewing the other's work, but the specific consensus or tie-breaking rule (e.g., 2+1, 3+1) is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study was NOT mentioned.
    • The study primarily focused on the standalone performance of the algorithm against a human-established ground truth, not on how human readers' performance improved with AI assistance. The "Clinical Testing" section describes measuring the algorithm's performance against a reference standard, not a comparative study of human performance with and without the device.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance

    • Yes, standalone performance was explicitly evaluated. This is what the "Bench Testing" section describes.
      • The Videa Perio Assist output (lines and measurements) was scored directly against the ground-truthed landmarks and measurements.

    7. Type of Ground Truth Used

    • Expert Consensus/Expert Labeling:
      • For Bench Testing, ground truth was established by "ground truth labeled across two phases." This implies expert review and labeling of landmarks.
      • For Clinical Testing, the reference standard (ground truth) was established by "two US licensed periodontists adjudicat[ing] those labels" initially provided by "US licensed dentists." This is an expert consensus or adjudicated expert labeling ground truth.
    • Not Pathology or Outcomes Data.

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions that the "artificial intelligence algorithm was trained with that patient population" (referring to permanent dentition patients).
    • However, it does refer to "Bench testing has sensor manufacturer and patient age subgroup analysis for generalizability in a similar method as described in the clinical study generalizability section below. The sensor manufacturer and patient age did not have any outliers in the bench study." This suggests that the training data and evaluation focused on ensuring generalizability across these factors, but the specific size is missing.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It can be inferred that it would follow a similar expert labeling process as the test set (e.g., by dental professionals), but no specifics are provided.

    K Number
    K212519
    Manufacturer
    Date Cleared
    2022-05-10

    (273 days)

    Product Code
    Regulation Number
    892.2070
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices:

    K210187

    Intended Use

    The Overjet Caries Assist (OCA) is a radiological, automated, concurrent-read, computer-assisted detection software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device provides additional information for the dentist to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete dentist's review or their clinical judgment that takes into account other relevant information from the image, patient history, and actual in vivo clinical assessment.

    Device Description

    Overjet Caries Assist (OCA) is a radiological automated concurrent read computer-assisted detection (CAD) software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device provides additional information for the clinician to use in their diagnosis of a tooth surface suspected of being carious. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

    OCA is a software-only device which operates in three layers - a Network Layer, a Presentation Layer, and a Decision Layer (as shown in the data flow diagram below). Images are pulled in from a clinic/dental office, and the Machine Learning model creates predictions in the Decision Layer and results are pushed to the dashboard, which are in the Presentation Layer.

    The Machine Learning System within the Decision Layer processes bitewing radiographs and annotates suspected carious lesions. It is comprised of four modules:

    • Image Classifier module - This module evaluates the incoming radiograph and classifies the image type as either a bitewing or a periapical radiograph. This classification supports the data flow of the incoming radiograph. Any non-radiographs, such as patient charting information or other non-bitewing/non-periapical images, are classified as "junk" and not processed. OCA shares the Image Classifier and Tooth Number modules with the Overjet Dental Assist product cleared under K210187.
    • Tooth Number Assignment module - This module analyzes the processed image, determines which tooth numbers are present, and provides a pixel-wise segmentation mask for each tooth number.
    • Caries module - This module outputs a pixel-wise segmentation mask of all carious lesions using an ensemble of 3 U-Net-based models. The shape and location of every carious lesion is contained in this mask as the carious lesion predictions.
    • Post Processing module - The overlap between the tooth masks from the Tooth Number Assignment module and the carious lesions from the Caries module is used to assign each carious lesion to a specific tooth. The Image Post Processor module then annotates the original radiograph with the carious lesion predictions.
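The Caries module's ensemble of three U-Net models must combine per-model outputs into a single segmentation mask. The summary does not state the combination rule, so the probability-averaging-and-threshold approach below is only one common choice; all names and values are illustrative:

```python
import numpy as np

def ensemble_mask(prob_maps: list, threshold: float = 0.5) -> np.ndarray:
    """Combine per-model probability maps into one binary mask by averaging.

    prob_maps: list of same-shaped arrays of per-pixel lesion probabilities.
    """
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    return mean_prob >= threshold

# Three hypothetical 2x2 per-pixel probability maps
m1 = np.array([[0.9, 0.2], [0.1, 0.6]])
m2 = np.array([[0.8, 0.4], [0.2, 0.7]])
m3 = np.array([[0.7, 0.3], [0.0, 0.8]])
print(ensemble_mask([m1, m2, m3]).astype(int))
```

Averaging smooths disagreement between models; a per-pixel majority vote over thresholded masks would be another reasonable rule.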
    AI/ML Overview

    Acceptance Criteria and Device Performance for Overjet Caries Assist

    The Overjet Caries Assist (OCA) is a radiological, automated, concurrent read, computer-assisted detection software intended to aid in the detection and segmentation of caries on bitewing radiographs. The device's performance was evaluated through standalone testing of the AI algorithm and a clinical reader improvement study.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Measure | Acceptance Criteria (Predicate Device Performance) | Reported Device Performance (Overjet Caries Assist) |
    |---|---|---|
    | **Reader Improvement Study** | | |
    | Increase in dentist's sensitivity with AI assistance | Approximately 20% increase for the predicate; a > 15% increase in dentist's sensitivity was set as the OCA acceptance criterion | Overall reader sensitivity improved from 57.9% to 76.2% (+18.3 percentage points, satisfying the > 15% criterion); primary caries: 60.5% to 79.4% (+18.9 pp); secondary caries: 49.8% to 63.0% (+13.2 pp) |
    | Specificity with AI assistance | Not explicitly defined as an improvement criterion for the predicate, but a key measure | Overall reader specificity decreased slightly from 99.3% to 98.4% (less than 1 pp); deemed acceptable by the applicant because the sensitivity benefit outweighs the decrease |
    | AFROC score (assisted) | No explicit AFROC criterion for the predicate, but improved diagnostic accuracy is implicit | AUC increased from 0.593 (unassisted) to 0.649 (assisted), an increase of 0.057 (statistically significant, p < 0.001) |
    | **Standalone Performance (AI Algorithm Only)** | | |
    | Standalone sensitivity | Not directly comparable to the predicate's standalone AI performance | Overall: 72.0% (95% CI: 62.9%, 81.1%); primary caries: 74.4% (64.4%, 84.4%); secondary caries: 62.5% (46.6%, 78.4%) |
    | Standalone specificity | Not directly comparable to the predicate's standalone AI performance | Overall: 98.1% (95% CI: 97.7%, 98.5%) |
    | Lesion segmentation (Dice score) | Not explicitly provided for the predicate device | Mean Dice for true positives: primary caries 0.69 (0.66, 0.72); secondary caries 0.75 (0.71, 0.79) |

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size for Test Set: 352 bitewing radiographs (104 containing caries / 248 without caries).
    • Data Provenance: Not explicitly stated in the provided text (e.g., country of origin). However, given the context of U.S. FDA clearance and the use of US-licensed dentists, it is likely that the data is either from the US or representative of populations seen in the US. The type of data is retrospective, as existing radiographs were used.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three US-licensed dentists for initial consensus, and one Dental Radiologist for adjudication of non-consensus labels.
    • Qualifications of Experts: All experts were US-licensed dentists. The adjudicating expert was specifically a Dental Radiologist. No further details on years of experience were provided.

    4. Adjudication Method for the Test Set

    The adjudication method used was a "3+1" approach. Ground truth was initially established by the consensus labels of three US-licensed dentists. Any non-consensus labels were then adjudicated by a Dental Radiologist.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Yes, an MRMC comparative effectiveness study was conducted.

    • Effect Size of Human Readers Improvement with AI vs. without AI assistance:
      • Sensitivity: Overall reader sensitivity improved by 18.3 percentage points (from 57.9% unassisted to 76.2% assisted).
        • For primary caries, sensitivity improved by 18.9 percentage points (60.5% unassisted to 79.4% assisted).
        • For secondary caries, sensitivity improved by 13.2 percentage points (49.8% unassisted to 63.0% assisted).
      • Specificity: Overall reader specificity decreased slightly by 0.9 percentage points (from 99.3% unassisted to 98.4% assisted).
      • AFROC AUC: The average AUC for all readers increased by 0.057 (from 0.593 unassisted to 0.649 assisted). This increase was statistically significant (p < 0.001).
      • Average Dice Scores for Segmentation:
        • Primary caries: Mean Dice scores improved from 0.67 unassisted to 0.69 assisted.
        • Secondary caries: Mean Dice scores improved from 0.65 unassisted to 0.74 assisted. (Note: These segmentation improvements were not statistically significant).

    6. Standalone (Algorithm Only) Performance

    Yes, standalone performance (algorithm only without human-in-the-loop) was conducted.

    • Overall standalone sensitivity: 72.0% (95% CI: 62.9%, 81.1%)
    • Overall standalone specificity: 98.1% (95% CI: 97.7%, 98.5%)
    • Lesion Segmentation (Dice Score):
      • Primary caries: Mean Dice score of 0.69
      • Secondary caries: Mean Dice score of 0.75
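Confidence intervals like the 95% CIs reported above are commonly computed with the Wilson score interval for a binomial proportion; the summary does not state which method was used, so this is an illustrative sketch with made-up counts:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Wilson score confidence interval for a binomial proportion (z=1.96 → 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative: 72 detections out of 100 positive surfaces
lo, hi = wilson_ci(72, 100)
print(f"{lo:.3f}, {hi:.3f}")  # → 0.625, 0.799
```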

    7. Type of Ground Truth Used

    The ground truth used was expert consensus complemented by expert adjudication. Specifically, a consensus of three US-licensed dentists, with non-consensus cases adjudicated by a Dental Radiologist.

    8. Sample Size for the Training Set

    The sample size for the training set is not provided in the excerpt. The document only mentions "training data" in the context of the algorithm's capability to learn during its operation, but not a specific size for its initial training.

    9. How the Ground Truth for the Training Set was Established

    The method for establishing ground truth for the training set is not explicitly detailed in the provided text. It generally states that the algorithm "has been trained," but does not provide information on how the ground truth for that training was established.
